Lecture 21: 5.6 Rank and Nullity


Wei-Ta Chu
2008/12/5

Rank and Nullity

Definition
The common dimension of the row space and column space of a matrix A is called the rank (秩) of A and is denoted by rank(A); the dimension of the nullspace of A is called the nullity (零核維數) of A and is denoted by nullity(A).

Example (Rank and Nullity)

Find the rank and nullity of the matrix

A = \begin{bmatrix} -1 & 2 & 0 & 4 & 5 & -3 \\ 3 & -7 & 2 & 0 & 1 & 4 \\ 2 & -5 & 2 & 4 & 6 & 1 \\ 4 & -9 & 2 & -4 & -4 & 7 \end{bmatrix}

Solution: The reduced row-echelon form of A is

\begin{bmatrix} 1 & 0 & -4 & -28 & -37 & 13 \\ 0 & 1 & -2 & -12 & -16 & 5 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}

Since there are two nonzero rows (two leading 1's), the row space and column space are both two-dimensional, so rank(A) = 2.

To find the nullity of A, we must find the dimension of the solution space of the linear system Ax = 0. The corresponding system of equations is

x1 - 4x3 - 28x4 - 37x5 + 13x6 = 0
x2 - 2x3 - 12x4 - 16x5 + 5x6 = 0

It follows that the general solution of the system is

x1 = 4r + 28s + 37t - 13u,  x2 = 2r + 12s + 16t - 5u,  x3 = r,  x4 = s,  x5 = t,  x6 = u

or, in vector form,

\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \end{bmatrix} = r \begin{bmatrix} 4 \\ 2 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} + s \begin{bmatrix} 28 \\ 12 \\ 0 \\ 1 \\ 0 \\ 0 \end{bmatrix} + t \begin{bmatrix} 37 \\ 16 \\ 0 \\ 0 \\ 1 \\ 0 \end{bmatrix} + u \begin{bmatrix} -13 \\ -5 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}

The four vectors on the right form a basis for the solution space, so nullity(A) = 4.

Theorems

Theorem 5.6.2
If A is any matrix, then rank(A) = rank(A^T).
Proof: rank(A) = dim(row space of A) = dim(column space of A^T) = rank(A^T).

Theorem 5.6.3 (Dimension Theorem for Matrices)
If A is a matrix with n columns, then rank(A) + nullity(A) = n.

Proof of Theorem 5.6.3
Since A has n columns, the system Ax = 0 has n unknowns. These fall into two categories: the leading variables and the free variables. The number of leading 1's in the reduced row-echelon form of A is the rank of A.

The number of free variables is equal to the nullity of A. This is so because the nullity of A is the dimension of the solution space of Ax = 0, which is the same as the number of parameters in the general solution, which is the same as the number of free variables. Thus

rank(A) + nullity(A) = n

Theorem 5.6.4
If A is an m×n matrix, then:
rank(A) = the number of leading variables in the solution of Ax = 0.
nullity(A) = the number of parameters in the general solution of Ax = 0.

Example (Sum of Rank and Nullity)
The matrix A of the previous example has 6 columns, so rank(A) + nullity(A) = 6. This is consistent with that example, where we showed that rank(A) = 2 and nullity(A) = 4.

Example
Find the number of parameters in the general solution of Ax = 0 if A is a 5×7 matrix of rank 3.
Solution: nullity(A) = n - rank(A) = 7 - 3 = 4. Thus, there are four parameters.
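The computations above can be checked mechanically. Below is a minimal sketch (not part of the original slides) using SymPy, whose exact rational arithmetic suits row reduction; the matrix entries follow the reconstruction above.

```python
from sympy import Matrix

# The example matrix A (4 rows, 6 columns)
A = Matrix([
    [-1,  2, 0,  4,  5, -3],
    [ 3, -7, 2,  0,  1,  4],
    [ 2, -5, 2,  4,  6,  1],
    [ 4, -9, 2, -4, -4,  7],
])

R, pivot_cols = A.rref()          # reduced row-echelon form and pivot columns
print(R)                          # two nonzero rows (two leading 1's)
print(A.rank())                   # 2
print(len(A.nullspace()))         # 4: size of a basis of the solution space of Ax = 0
print(A.rank() + len(A.nullspace()) == A.cols)   # Dimension Theorem 5.6.3: True
```

The four vectors returned by `A.nullspace()` match, up to ordering, the four column vectors in the vector form of the general solution above.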
Dimensions of Fundamental Spaces

Suppose that A is an m×n matrix of rank r. Then A^T is an n×m matrix of rank r by Theorem 5.6.2, and nullity(A) = n - r, nullity(A^T) = m - r by Theorem 5.6.3.

Fundamental Space     Dimension
Row space of A        r
Column space of A     r
Nullspace of A        n - r
Nullspace of A^T      m - r

Maximum Value for Rank

If A is an m×n matrix, then the row vectors lie in R^n and the column vectors lie in R^m, so the row space of A is at most n-dimensional and the column space is at most m-dimensional. Since the row space and column space have the same dimension (the rank of A), we conclude that if m ≠ n, then the rank of A is at most the smaller of m and n. That is,

rank(A) ≤ min(m, n)

Example
If A is a 7×4 matrix, then the rank of A is at most 4 and, consequently, the seven row vectors must be linearly dependent. If A is a 4×7 matrix, then again the rank of A is at most 4 and, consequently, the seven column vectors must be linearly dependent.

Theorem 5.6.5 (The Consistency Theorem)
If Ax = b is a linear system of m equations in n unknowns, then the following are equivalent.
(a) Ax = b is consistent.
(b) b is in the column space of A.
(c) The coefficient matrix A and the augmented matrix [A | b] have the same rank.

Proof of Theorem 5.6.5

(b) → (c): If b is in the column space of A, then the column spaces of A and [A | b] are actually the same, from which it follows that these two matrices have the same rank. The column spaces of A and [A | b] can be expressed as span{c1, c2, …, cn} and span{c1, c2, …, cn, b}. If b is in the column space of A, then each vector in the set {c1, c2, …, cn, b} is a linear combination of the vectors in {c1, c2, …, cn}. Thus, by Theorem 5.2.4, the column spaces of A and [A | b] are the same.

(c) → (b): Assume that A and [A | b] have the same rank r. By Theorem 5.4.6a, there is some subset of the column vectors of A that forms a basis for the column space of A; suppose those column vectors are c′1, c′2, …, c′r. These r basis vectors also belong to the r-dimensional column space of [A | b]; hence they also form a basis for the column space of [A | b] by Theorem 5.4.6a. This means that b is expressible as a linear combination of c′1, c′2, …, c′r, and consequently b lies in the column space of A.

Example
We see from the third row that the system is inconsistent. However, it is also because of this row that the reduced row-echelon form of the augmented matrix has fewer zero rows than the reduced row-echelon form of the coefficient matrix. This forces the coefficient matrix and the augmented matrix for the system to have different ranks.
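The rank test in part (c) gives a mechanical consistency check. Here is a minimal sketch comparing rank(A) with rank([A | b]); the system and right-hand sides are assumed examples, not from the original slides.

```python
from sympy import Matrix

# 3 equations in 2 unknowns: x + y = b1, x - y = b2, 2x = b3
A = Matrix([[1,  1],
            [1, -1],
            [2,  0]])

b_in  = Matrix([3, 1, 4])   # lies in the column space of A (x = 2, y = 1)
b_out = Matrix([3, 1, 5])   # does not lie in the column space of A

for b in (b_in, b_out):
    same_rank = A.rank() == A.row_join(b).rank()   # rank(A) vs rank([A | b])
    print(same_rank)        # True (consistent), then False (inconsistent)
```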
Theorem 5.6.6
If Ax = b is a linear system of m equations in n unknowns, then the following are equivalent.
(a) Ax = b is consistent for every m×1 matrix b.
(b) The column vectors of A span R^m.
(c) rank(A) = m.

Proof of Theorem 5.6.6

(a) ↔ (b): The system Ax = b can be expressed as

x1 c1 + x2 c2 + ⋯ + xn cn = b

from which we conclude that Ax = b is consistent for every matrix b if and only if every such b is expressible as a linear combination of the column vectors c1, c2, …, cn, or, equivalently, if and only if these column vectors span R^m.

(a) → (c): From the assumption that Ax = b is consistent for every matrix b, and from parts (a) and (b) of the Consistency Theorem (5.6.5), it follows that every vector b in R^m lies in the column space of A; that is, the column space of A is all of R^m. Thus rank(A) = dim(R^m) = m.

(c) → (a): From the assumption that rank(A) = m, it follows that the column space of A is a subspace of R^m of dimension m and hence must be all of R^m by Theorem 5.4.7. It now follows from parts (a) and (b) of the Consistency Theorem (5.6.5) that Ax = b is consistent for every vector b in R^m, since every such b is in the column space of A.

Overdetermined System

A linear system with more equations than unknowns is called an overdetermined linear system (超定線性方程組). If Ax = b is an overdetermined linear system of m equations in n unknowns (so that m > n), then the column vectors of A cannot span R^m. Thus, the overdetermined linear system Ax = b cannot be consistent for every possible b.

Example
The linear system

x1 - 2x2 = b1
x1 - x2 = b2
x1 + x2 = b3
x1 + 2x2 = b4
x1 + 3x2 = b5

is overdetermined, so it cannot be consistent for all possible values of b1, b2, b3, b4, and b5.
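To see this concretely, the sketch below checks the coefficient matrix of the example system: its rank is 2, so its two columns cannot span R^5, and some right-hand sides are inconsistent while others happen to work. The particular vectors b below are assumed illustrations, not values from the slides.

```python
from sympy import Matrix

# Coefficient matrix of the overdetermined example: 5 equations, 2 unknowns
A = Matrix([[1, -2],
            [1, -1],
            [1,  1],
            [1,  2],
            [1,  3]])
print(A.rank())                   # 2 < 5, so the columns cannot span R^5

b_ok  = Matrix([1, 1, 1, 1, 1])   # consistent: solved by x1 = 1, x2 = 0
b_bad = Matrix([0, 0, 0, 0, 1])   # inconsistent: first two equations force x1 = x2 = 0

for b in (b_ok, b_bad):
    print(A.rank() == A.row_join(b).rank())   # True, then False
```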