
AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES

JOEL A. TROPP

Abstract. We present an elementary proof that the spectral radius of a matrix $A$ may be obtained using the formula
\[ \rho(A) = \lim_{n \to \infty} \|A^n\|^{1/n}, \]
where $\|\cdot\|$ represents any matrix norm.

Date: 30 November 2001.

1. Introduction

It is a well-known fact from the theory of Banach algebras that the spectral radius of any element $A$ is given by the formula
\[ \rho(A) = \lim_{n \to \infty} \|A^n\|^{1/n}. \tag{1.1} \]
For a matrix, the spectrum is just the collection of eigenvalues, so this formula yields a technique for estimating the top eigenvalue. The proof of Equation 1.1 is beautiful but advanced. See, for example, Rudin's treatment in his Functional Analysis. It turns out that elementary techniques suffice to develop the formula for matrices.

2. Preliminaries

For completeness, we shall briefly introduce the major concepts required in the proof. It is expected that the reader is already familiar with these ideas.

2.1. Norms. A norm is a mapping $\|\cdot\|$ from a vector space $X$ into the nonnegative real numbers $\mathbb{R}^+$ which has three properties:
(1) $\|x\| = 0$ if and only if $x = 0$;
(2) $\|\alpha x\| = |\alpha| \, \|x\|$ for any scalar $\alpha$ and vector $x$; and
(3) $\|x + y\| \le \|x\| + \|y\|$ for any vectors $x$ and $y$.
The most fundamental example of a norm is the Euclidean norm $\|\cdot\|_2$, which corresponds to the standard topology on $\mathbb{R}^n$. It is defined by
\[ \|x\|_2 = \|(x_1, x_2, \dots, x_n)\|_2 = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}. \]
One particular norm we shall consider is the $\ell^\infty$ norm, which is defined for $x$ in $\mathbb{R}^n$ by the formula
\[ \|x\|_\infty = \|(x_1, x_2, \dots, x_n)\|_\infty = \max_i |x_i|. \]
For a matrix $A$, define the infinity norm as
\[ \|A\|_\infty = \max_i \sum_j |A_{ij}|. \]
This norm is consistent with itself and with the $\ell^\infty$ vector norm. That is,
\[ \|AB\|_\infty \le \|A\|_\infty \|B\|_\infty \quad\text{and}\quad \|Ax\|_\infty \le \|A\|_\infty \|x\|_\infty, \]
where $A$ and $B$ are matrices and $x$ is a vector.
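To make these definitions concrete, here is a minimal NumPy sketch (not part of the original note; the function names are illustrative) that computes the $\ell^\infty$ vector norm and the infinity matrix norm and spot-checks the two consistency inequalities on random inputs:

```python
import numpy as np

def vec_inf_norm(x):
    """l-infinity vector norm: the largest absolute entry."""
    return np.max(np.abs(x))

def mat_inf_norm(A):
    """Infinity matrix norm: the largest absolute row sum."""
    return np.max(np.sum(np.abs(A), axis=1))

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
x = rng.standard_normal(4)

# Consistency checks: ||AB|| <= ||A|| ||B||  and  ||Ax|| <= ||A|| ||x||.
assert mat_inf_norm(A @ B) <= mat_inf_norm(A) * mat_inf_norm(B) + 1e-12
assert vec_inf_norm(A @ x) <= mat_inf_norm(A) * vec_inf_norm(x) + 1e-12
```

NumPy's built-in `np.linalg.norm(A, np.inf)` computes the same maximum-absolute-row-sum norm.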
Two norms $\|\cdot\|$ and $\|\cdot\|_\star$ on a vector space $X$ are said to be equivalent if there are positive constants $\underline{C}$ and $\overline{C}$ such that
\[ \underline{C}\,\|x\| \le \|x\|_\star \le \overline{C}\,\|x\| \]
for every vector $x$. For finite-dimensional spaces, we have the following powerful result.

Theorem 2.1. All norms on a finite-dimensional vector space are equivalent.

Proof. We shall demonstrate that any norm $\|\cdot\|$ on $\mathbb{R}^n$ is equivalent to the Euclidean norm. Let $\{e_i\}$ be the canonical basis for $\mathbb{R}^n$, so any vector has an expression as $x = \sum x_i e_i$. First, let us check that $\|\cdot\|$ is continuous with respect to the Euclidean norm. For all pairs of vectors $x$ and $y$,
\[
\|x - y\| = \Big\| \sum (x_i - y_i) e_i \Big\|
\le \sum |x_i - y_i| \, \|e_i\|
\le \Big\{ \sum (x_i - y_i)^2 \Big\}^{1/2} \Big\{ \sum \|e_i\|^2 \Big\}^{1/2}
= M \, \|x - y\|_2,
\]
where $M = \{\sum_i \|e_i\|^2\}^{1/2}$. In other words, when two vectors are nearby with respect to the Euclidean norm, they are also nearby with respect to any other norm. Notice that the Cauchy--Schwarz inequality for real numbers has played a starring role at this stage.

Now, consider the unit sphere with respect to the Euclidean norm, $S = \{x \in \mathbb{R}^n : \|x\|_2 = 1\}$. This set is evidently closed and bounded in the Euclidean topology, so the Heine--Borel theorem shows that it is compact. Therefore, the continuous function $\|\cdot\|$ attains maximum and minimum values on $S$, say $\overline{C}$ and $\underline{C}$. That is,
\[ \underline{C}\,\|x\|_2 \le \|x\| \le \overline{C}\,\|x\|_2 \]
for any $x$ with unit Euclidean norm. But every vector $y$ can be expressed as $y = \alpha x$ for some $x$ on the Euclidean unit sphere. If we multiply the foregoing inequality by $|\alpha|$ and draw the scalar into the norms, we reach
\[ \underline{C}\,\|y\|_2 \le \|y\| \le \overline{C}\,\|y\|_2 \]
for any vector $y$.

It remains to check that the constants $\underline{C}$ and $\overline{C}$ are positive. They are clearly nonnegative since $\|\cdot\|$ is nonnegative, and $\underline{C} \le \overline{C}$ by definition. Assume that $\underline{C} = 0$, which implies the existence of a point $x$ on the Euclidean unit sphere for which $\|x\| = 0$. But then $x = 0$, a contradiction. □
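As an aside (not part of the original note), the equivalence constants are easy to observe numerically. For instance, on $\mathbb{R}^n$ one has $\|x\|_2 \le \|x\|_1 \le \sqrt{n}\,\|x\|_2$, the upper bound being exactly the Cauchy--Schwarz step used above. A quick experiment with random vectors, assuming NumPy, stays inside that window:

```python
import numpy as np

# Empirical check of the equivalence ||x||_2 <= ||x||_1 <= sqrt(n) ||x||_2.
n = 5
rng = np.random.default_rng(1)
ratios = [np.linalg.norm(x, 1) / np.linalg.norm(x, 2)
          for x in rng.standard_normal((10_000, n))]

print(f"min ratio = {min(ratios):.3f}   (theory: >= 1)")
print(f"max ratio = {max(ratios):.3f}   (theory: <= sqrt(n) = {np.sqrt(n):.3f})")
```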
2.2. The spectrum of a matrix. For an $n$-dimensional matrix $A$, consider the equation
\[ Ax = \lambda x, \tag{2.1} \]
where $x$ is a nonzero vector and $\lambda$ is a complex number. Numbers $\lambda$ which satisfy Equation 2.1 are called eigenvalues, and the corresponding $x$ are called eigenvectors. Nonzero vector solutions to this equation exist if and only if
\[ \det(A - \lambda I) = 0, \tag{2.2} \]
where $I$ is the identity matrix. The left-hand side of Equation 2.2 is called the characteristic polynomial of $A$ because it is a polynomial in $\lambda$ of degree $n$ whose roots are identical with the eigenvalues of $A$.

The algebraic multiplicity of an eigenvalue $\lambda$ is the multiplicity of $\lambda$ as a root of the characteristic polynomial. Meanwhile, the geometric multiplicity of $\lambda$ is the number of linearly independent eigenvectors corresponding to this eigenvalue. The geometric multiplicity of an eigenvalue never exceeds the algebraic multiplicity. Now, the collection of eigenvalues of a matrix, along with their geometric and algebraic multiplicities, completely determines the eigenstructure of the matrix. It turns out that all matrices with the same eigenstructure are similar to each other. That is, if $A$ and $B$ have the same eigenstructure, there exists a nonsingular matrix $S$ such that $S^{-1}AS = B$.

We call the set of all eigenvalues of a matrix $A$ its spectrum, which is written as $\sigma(A)$. The spectral radius $\rho(A)$ is defined by
\[ \rho(A) = \sup\{|\lambda| : \lambda \in \sigma(A)\}. \]
In other words, the spectral radius measures the largest magnitude attained by any eigenvalue.
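For a concrete instance (not from the original note; NumPy assumed), the spectrum of an upper-triangular matrix can be read off its diagonal and confirmed numerically, along with the spectral radius:

```python
import numpy as np

# Upper-triangular example: sigma(A) = {2, -3, 0.5}, so rho(A) = 3.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, -3.0, 1.0],
              [0.0, 0.0, 0.5]])

spectrum = np.linalg.eigvals(A)
rho = np.max(np.abs(spectrum))

print("sigma(A) =", spectrum)   # the eigenvalues, in some order
print("rho(A)   =", rho)        # 3.0
```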
2.3. Jordan canonical form. We say that a matrix is in Jordan canonical form if it is block-diagonal and each block has the form
\[
\begin{pmatrix}
\lambda & 1      &        &         &   \\
        & \lambda & 1     &         &   \\
        &        & \ddots & \ddots  &   \\
        &        &        & \lambda & 1 \\
        &        &        &         & \lambda
\end{pmatrix}_{d \times d}.
\]
It can be shown that the lone eigenvalue of this Jordan block is $\lambda$. Moreover, the geometric multiplicity of $\lambda$ is exactly one, and the algebraic multiplicity of $\lambda$ is exactly $d$, the block size. The eigenvalues of a block-diagonal matrix are simply the eigenvalues of its blocks, with the algebraic and geometric multiplicities of identical eigenvalues summed across the blocks. Therefore, a block-diagonal matrix composed of Jordan blocks has its eigenstructure laid bare. Using the foregoing facts, it is easy to construct a matrix in Jordan canonical form which has any eigenstructure whatsoever. Therefore, every matrix is similar to a matrix in Jordan canonical form.

Define the choose function $\binom{n}{k}$ according to the following convention:
\[
\binom{n}{k} =
\begin{cases}
\dfrac{n!}{k!\,(n-k)!} & \text{when } k = 0, \dots, n, \\[1ex]
0 & \text{otherwise.}
\end{cases}
\]

Lemma 2.1. If $J$ is a Jordan block with eigenvalue $\lambda$, then the components of its $n$th power satisfy
\[ (J^n)_{ij} = \binom{n}{j-i} \lambda^{n-j+i}. \tag{2.3} \]

Proof. For $n = 1$, it is straightforward to verify that Equation 2.3 yields
\[
J^1 =
\begin{pmatrix}
\lambda & 1      &        &         &   \\
        & \lambda & 1     &         &   \\
        &        & \ddots & \ddots  &   \\
        &        &        & \lambda & 1 \\
        &        &        &         & \lambda
\end{pmatrix}_{d \times d}.
\]
Now, for arbitrary indices $i$ and $j$, use induction to calculate that
\[
\begin{aligned}
\big(J^{n+1}\big)_{ij} &= \sum_{k=1}^{d} (J^n)_{ik} \, J_{kj} \\
&= \sum_{k=1}^{d} \left[ \binom{n}{k-i} \lambda^{n-k+i} \right] \left[ \binom{1}{j-k} \lambda^{1-j+k} \right] \\
&= \binom{n}{j-i} \lambda^{(n-j+i)+1} + \binom{n}{j-(i+1)} \lambda^{n-j+(i+1)} \\
&= \binom{n+1}{j-i} \lambda^{(n+1)-j+i},
\end{aligned}
\]
as advertised. (Only the terms with $k = j$ and $k = j-1$ survive the sum, and the final equality is Pascal's rule.) □
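Equation 2.3 is easy to sanity-check numerically. The sketch below (not part of the original note; NumPy and Python's `math.comb` assumed) compares a direct matrix power of a Jordan block with the binomial formula:

```python
import numpy as np
from math import comb

def jordan_block(lam, d):
    """d x d Jordan block: lam on the diagonal, ones on the superdiagonal."""
    return lam * np.eye(d) + np.diag(np.ones(d - 1), k=1)

lam, d, n = 0.9, 4, 7
J = jordan_block(lam, d)

# Lemma 2.1 with 0-based indices: (J^n)_ij = C(n, j-i) * lam^(n-(j-i)),
# and the entry vanishes whenever j - i lies outside 0..n.
closed_form = np.array([[comb(n, j - i) * lam ** (n - (j - i)) if 0 <= j - i <= n else 0.0
                         for j in range(d)]
                        for i in range(d)])

assert np.allclose(np.linalg.matrix_power(J, n), closed_form)
```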
3. The spectral radius formula

First, we prove the following special case.

Theorem 3.1. For any matrix $A$, the spectral radius formula holds for the infinity matrix norm:
\[ \|A^n\|_\infty^{1/n} \longrightarrow \rho(A). \]

Proof. Throughout this argument, we shall denote the $\ell^\infty$ vector and matrix norms by $\|\cdot\|$.

Let $S$ be a similarity transform such that $S^{-1}AS$ has Jordan form:
\[
J = S^{-1}AS =
\begin{pmatrix}
J_1 &        &     \\
    & \ddots &     \\
    &        & J_s
\end{pmatrix}.
\]
Using the consistency of $\|\cdot\|$, we develop the following bounds:
\[
\|A^n\|^{1/n} = \|S J^n S^{-1}\|^{1/n} \le \big( \|S\|\,\|S^{-1}\| \big)^{1/n} \, \|J^n\|^{1/n},
\]
and, since $\|J^n\| = \|S^{-1}(S J^n S^{-1})S\| \le \|S^{-1}\|\,\|A^n\|\,\|S\|$,
\[
\|A^n\|^{1/n} \ge \big( \|S\|\,\|S^{-1}\| \big)^{-1/n} \, \|J^n\|^{1/n}.
\]
In each inequality, the first factor on the right-hand side tends toward one as $n$ approaches infinity. Therefore, we need only investigate the behavior of $\|J^n\|^{1/n}$.

Now, the matrix $J$ is block-diagonal, so its powers are also block-diagonal with the blocks exponentiated individually:
\[
J^n =
\begin{pmatrix}
J_1^n &        &       \\
      & \ddots &       \\
      &        & J_s^n
\end{pmatrix}.
\]
Since we are using the infinity norm,
\[ \|J^n\| = \max_k \{ \|J_k^n\| \}. \]
The $n$th root is monotonic, so we may draw it inside the maximum to obtain
\[ \|J^n\|^{1/n} = \max_k \big\{ \|J_k^n\|^{1/n} \big\}. \]
What is the norm of an exponentiated Jordan block? Recall the fact that the infinity norm of a matrix equals the greatest absolute row sum, and apply it to the explicit form of $J_k^n$ provided in Lemma 2.1:
\[
\begin{aligned}
\|J_k^n\| &= \sum_{j=1}^{d_k} \big| (J_k^n)_{1j} \big| && \text{(the first row gives the largest absolute row sum)} \\
&= \sum_{j=1}^{d_k} \binom{n}{j-1} |\lambda_k|^{n-j+1} \\
&= |\lambda_k|^n \, |\lambda_k|^{1-d_k} \sum_{j=1}^{d_k} \binom{n}{j-1} |\lambda_k|^{d_k - j},
\end{aligned}
\]
where $\lambda_k$ is the eigenvalue of block $J_k$ and $d_k$ is the block size. Bound the choose function above and below with $1 \le \binom{n}{j-1} \le n^{d_k}$ (valid once $n \ge d_k$), and write $M_k = |\lambda_k|^{1-d_k} \sum_j |\lambda_k|^{d_k - j}$ to obtain the relation
\[ M_k \, |\lambda_k|^n \le \|J_k^n\| \le M_k \, n^{d_k} \, |\lambda_k|^n. \]
Extracting the $n$th root and taking the limit as $n$ approaches infinity (both $M_k^{1/n}$ and $(n^{d_k})^{1/n}$ tend to one), we reach
\[ \lim_{n\to\infty} \|J_k^n\|^{1/n} = |\lambda_k|. \]
A careful reader will notice that the foregoing argument does not apply to a Jordan block $J_k$ with a zero eigenvalue. But such a matrix is nilpotent: placing a large exponent on $J_k$ yields the zero matrix. The norm of a zero matrix is zero, so we have
\[ \lim_{n\to\infty} \|J_k^n\|^{1/n} = 0. \]
Combining these facts, we conclude that
\[ \lim_{n\to\infty} \|J^n\|^{1/n} = \max_k \{ |\lambda_k| \} = \rho(J). \]
The spectrum of the Jordan form $J$ is identical with the spectrum of $A$, which completes the proof. □
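The factor $n^{d_k}$ in the upper bound hints that the convergence can be quite slow when a Jordan block is large. A brief numerical sketch (not from the original note; NumPy assumed) shows $\|J^n\|_\infty^{1/n}$ creeping toward $|\lambda|$ for a single block:

```python
import numpy as np

lam, d = 0.5, 5
J = lam * np.eye(d) + np.diag(np.ones(d - 1), k=1)   # 5 x 5 Jordan block

# ||J^n||_inf^(1/n) approaches |lambda| = 0.5, but slowly, because of the
# polynomial factor hidden in the binomial coefficients.  (Much larger n
# would underflow double precision when |lambda| < 1.)
for n in (10, 100, 1000):
    Jn = np.linalg.matrix_power(J, n)
    estimate = np.max(np.sum(np.abs(Jn), axis=1)) ** (1.0 / n)
    print(f"n = {n:5d}   ||J^n||^(1/n) = {estimate:.4f}")
```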
It is quite easy to bootstrap the general result from this special case.

Corollary 3.1. The spectral radius formula holds for any matrix and any norm:
\[ \|A^n\|^{1/n} \longrightarrow \rho(A). \]

Proof. Theorem 2.1 on the equivalence of norms yields the inequality
\[ \underline{C}\,\|A^n\|_\infty \le \|A^n\| \le \overline{C}\,\|A^n\|_\infty \]
for positive numbers $\underline{C}$ and $\overline{C}$. Extract the $n$th root of this inequality, and take the limit. The root drives the constants toward one, which leaves the relation
\[ \lim_{n\to\infty} \|A^n\|_\infty^{1/n} \le \lim_{n\to\infty} \|A^n\|^{1/n} \le \lim_{n\to\infty} \|A^n\|_\infty^{1/n}. \]
Apply Theorem 3.1 to the upper and lower bounds to reach
\[ \lim_{n\to\infty} \|A^n\|^{1/n} = \rho(A). \]
□
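To see the corollary in action, the following sketch (not part of the original note; NumPy assumed) normalizes a random matrix so that $\rho(A) = 1$ and then watches $\|A^n\|^{1/n}$ approach $1$ for three different norms:

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.standard_normal((6, 6))
A /= np.max(np.abs(np.linalg.eigvals(A)))        # rescale so that rho(A) = 1

norms = {
    "infinity":  lambda M: np.linalg.norm(M, np.inf),   # max absolute row sum
    "one":       lambda M: np.linalg.norm(M, 1),        # max absolute column sum
    "Frobenius": lambda M: np.linalg.norm(M, "fro"),
}

# Every estimate should drift toward rho(A) = 1 as n grows.
for n in (10, 100, 1000):
    An = np.linalg.matrix_power(A, n)
    estimates = {name: f(An) ** (1.0 / n) for name, f in norms.items()}
    print(n, {name: round(val, 4) for name, val in estimates.items()})
```

The three estimates differ at any finite $n$ but share the same limit, exactly as the equivalence-of-norms argument predicts.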