Supplementary Notes on Linear Algebra


Mariusz Wodzicki

May 3, 2015

1 Vector spaces

1.1 Coordinatization of a vector space

1.1.1 Given a basis $B = \{b_1, \dots, b_n\}$ in a vector space $V$, any vector $v \in V$ can be represented as a linear combination

$$v = \beta_1 b_1 + \cdots + \beta_n b_n \qquad (1)$$

and this representation is unique, i.e., there is only one sequence of coefficients $\beta_1, \dots, \beta_n$ for which (1) holds.

1.1.2 The correspondence between vectors in $V$ and the coefficients in the expansion (1) defines $n$ real-valued functions on $V$,

$$b_i^* \colon V \to \mathbb{R}, \qquad b_i^*(v) := \beta_i, \qquad (i = 1, \dots, n). \qquad (2)$$

1.1.3 If

$$v' = \beta_1' b_1 + \cdots + \beta_n' b_n \qquad (3)$$

is another vector, then

$$v + v' = (\beta_1 + \beta_1') b_1 + \cdots + (\beta_n + \beta_n') b_n,$$

which shows that

$$b_i^*(v + v') = \beta_i + \beta_i' = b_i^*(v) + b_i^*(v'), \qquad (i = 1, \dots, n).$$

Similarly, for any number $a$, one has

$$av = (a\beta_1) b_1 + \cdots + (a\beta_n) b_n,$$

which shows that

$$b_i^*(av) = a\, b_i^*(v), \qquad (i = 1, \dots, n).$$

In other words, each function $b_i^* \colon V \to \mathbb{R}$ is a linear transformation from the vector space $V$ to the one-dimensional vector space $\mathbb{R}$, and the correspondence

$$v \longmapsto [v]_B := \begin{bmatrix} \beta_1 \\ \vdots \\ \beta_n \end{bmatrix} \qquad (4)$$

is a linear transformation from $V$ to the $n$-dimensional vector space of column vectors $\mathbb{R}^n$.

1.1.4 The coordinatization isomorphism of $V$ with $\mathbb{R}^n$

The kernel of (4) is $\{0\}$ since the vectors $b_1, \dots, b_n$ are linearly independent. The range of (4) is the whole of $\mathbb{R}^n$ since the vectors $b_1, \dots, b_n$ span $V$. Thus, the correspondence $v \mapsto [v]_B$ identifies $V$ with the vector space $\mathbb{R}^n$. We shall refer to (4) as the coordinatization of the vector space $V$ in the basis $B$.

1.2 The dual space $V^\vee$

1.2.1 Linear transformations $V \to \mathbb{R}$ are referred to as (linear) functionals on $V$ (they are also called linear forms on $V$). Linear functionals form a vector space of their own, which is called the dual of $V$. We will denote it $V^\vee$ (pronounced "V dual" or "V check").
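The coordinatization map $v \mapsto [v]_B$ can be checked numerically. In the sketch below (Python with NumPy; the particular basis matrix $M$ and the test vectors are hypothetical, chosen only for illustration), the basis vectors are the columns of $M$, so the coefficients $\beta_i$ are found by solving $Mx = v$; the assertions verify the linearity established in 1.1.3 and the isomorphism property of 1.1.4:

```python
import numpy as np

# Columns of M are the basis vectors b_1, b_2, b_3 of R^3 (illustrative choice).
M = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

def coords(v):
    """Return [v]_B, i.e. the coefficients beta_i with v = sum_i beta_i b_i."""
    return np.linalg.solve(M, v)

v = np.array([2.0, 3.0, 1.0])
w = np.array([1.0, -1.0, 4.0])

# Linearity of the coordinatization map (1.1.3):
assert np.allclose(coords(v + w), coords(v) + coords(w))
assert np.allclose(coords(2.5 * v), 2.5 * coords(v))

# Reconstruction: M @ [v]_B recovers v, so the map is injective and onto.
assert np.allclose(M @ coords(v), v)
```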
1.2.2 An example: the trace of a matrix

The trace of an $n \times n$ matrix $A = (a_{ij})$ is the sum of the diagonal entries,

$$\operatorname{tr} A := a_{11} + \cdots + a_{nn}. \qquad (5)$$

The correspondence $A \mapsto \operatorname{tr} A$ is a linear functional on the vector space $\mathrm{Mat}_n(\mathbb{R})$ of $n \times n$ matrices.

Exercise 1. Calculate both $\operatorname{tr} AB$ and $\operatorname{tr} BA$ and show that

$$\operatorname{tr} AB = \operatorname{tr} BA \qquad (6)$$

where

$$A = \begin{bmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \dots & a_{mn} \end{bmatrix} \qquad (7)$$

denotes an arbitrary $m \times n$ matrix and

$$B = \begin{bmatrix} b_{11} & \dots & b_{1m} \\ \vdots & & \vdots \\ b_{n1} & \dots & b_{nm} \end{bmatrix} \qquad (8)$$

denotes an arbitrary $n \times m$ matrix.

1.2.3 An example: the dual of the space of $m \times n$ matrices

For any $n \times m$ matrix (8), let us consider the linear functional on the space of $m \times n$ matrices:

$$f_B \colon A \longmapsto \operatorname{tr} AB \qquad (A \in \mathrm{Mat}_{mn}(\mathbb{R})). \qquad (9)$$

Exercise 2. Calculate $f_B(B^t)$, where $B^t$ denotes the transpose of $B$, and show that it vanishes if and only if $B = 0$. Deduce that $f_B = 0$ if and only if $B = 0$.

1.2.4 The correspondence

$$f \colon \mathrm{Mat}_{nm}(\mathbb{R}) \longrightarrow \mathrm{Mat}_{mn}(\mathbb{R})^\vee, \qquad B \longmapsto f_B, \qquad (10)$$

is a natural linear transformation from the space of $n \times m$ matrices into the dual of the space of $m \times n$ matrices. In view of Exercise 2 it is injective. By considering bases in $V$, in the next sections we will show that the dimension of $V^\vee$ equals the dimension of $V$ if the latter is finite. In particular, this will imply that $\dim \mathrm{Mat}_{mn}(\mathbb{R})^\vee = \dim \mathrm{Mat}_{mn}(\mathbb{R})$. Since the transposition of matrices,

$$A \longmapsto A^t \qquad (A \in \mathrm{Mat}_{mn}(\mathbb{R})),$$

is an isomorphism of vector spaces, it will follow that

$$\dim \mathrm{Mat}_{nm}(\mathbb{R}) = \dim \mathrm{Mat}_{mn}(\mathbb{R})^\vee.$$

A corollary of this is that the dual space $\mathrm{Mat}_{mn}(\mathbb{R})^\vee$ is naturally identified with the vector space of $n \times m$ matrices, via the identification (10).

1.2.5 The duality between the spaces of row and column vectors

In particular, the space of column vectors $\mathbb{R}^n = \mathrm{Mat}_{n1}(\mathbb{R})$ is naturally identified with the dual of the space of row vectors $\mathrm{Mat}_{1n}(\mathbb{R})$ and, vice versa, the space of row vectors $\mathrm{Mat}_{1n}(\mathbb{R})$ is naturally identified with the dual of the space of column vectors $\mathbb{R}^n = \mathrm{Mat}_{n1}(\mathbb{R})$.
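Exercises 1 and 2 can be spot-checked numerically for one pair of matrices. A minimal sketch (Python with NumPy; the sizes $m = 2$, $n = 3$ and the random seed are arbitrary illustrative choices, not part of the exercises themselves):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))   # an arbitrary m x n matrix (m=2, n=3)
B = rng.standard_normal((3, 2))   # an arbitrary n x m matrix

# Exercise 1: tr AB = tr BA, even though AB is 2x2 while BA is 3x3.
assert np.isclose(np.trace(A @ B), np.trace(B @ A))

# Exercise 2: f_B(B^t) = tr(B^t B) is the sum of squares of the entries of B,
# so it vanishes if and only if B = 0.
assert np.isclose(np.trace(B.T @ B), np.sum(B**2))
assert np.trace(B.T @ B) > 0      # here B != 0, so f_B(B^t) > 0
```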
1.2.6 The coordinate functionals

The coordinatization isomorphism of $V$ with $\mathbb{R}^n$ is made up of $n$ coordinate functionals, cf. (2). They span $V^\vee$. Indeed, given a linear functional $f \colon V \to \mathbb{R}$, let

$$a_1 := f(b_1), \quad \dots, \quad a_n := f(b_n).$$

Then, for any vector $v \in V$, one has

$$\begin{aligned} f(v) &= f(\beta_1 b_1 + \cdots + \beta_n b_n) \\ &= \beta_1 f(b_1) + \cdots + \beta_n f(b_n) \\ &= \beta_1 a_1 + \cdots + \beta_n a_n = a_1 \beta_1 + \cdots + a_n \beta_n \\ &= a_1 b_1^*(v) + \cdots + a_n b_n^*(v) \\ &= \big(a_1 b_1^* + \cdots + a_n b_n^*\big)(v), \end{aligned}$$

which shows that the linear functional $f$ is a linear combination of the functionals $b_1^*, \dots, b_n^*$:

$$f = a_1 b_1^* + \cdots + a_n b_n^*.$$

1.2.7 The coordinate functionals $b_1^*, \dots, b_n^*$ are linearly independent. Indeed, if a linear combination

$$a_1 b_1^* + \cdots + a_n b_n^*$$

is the zero functional, then its values on $v = b_1, \dots, b_n$ are all zero. But those values are $a_1, \dots, a_n$, since

$$b_i^*(b_j) = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j. \end{cases} \qquad (11)$$

1.2.8 The dual basis $B^*$

Thus, $B^* := \{b_1^*, \dots, b_n^*\}$ forms a basis of the dual space. Note that

$$\dim V^\vee = \dim V.$$

1.3 Scalar products

1.3.1 Bilinear pairings

A function of two vector arguments

$$\langle\,,\,\rangle \colon V \times V \to \mathbb{R} \qquad (12)$$

is said to be a bilinear pairing on $V$ if it is a linear functional in each argument. (Bilinear pairings are also called bilinear forms on $V$.)

1.3.2 We say that the bilinear pairing is nondegenerate if, for any nonzero vector $v \in V$, there exists $v' \in V$ such that $\langle v, v' \rangle \neq 0$.

1.3.3 We say that the bilinear pairing is symmetric if, for any vectors $v, v' \in V$, one has $\langle v', v \rangle = \langle v, v' \rangle$.

1.3.4 Orthogonality

We say that vectors $v$ and $v'$ are orthogonal if $\langle v, v' \rangle = 0$. We denote this fact by $v \perp v'$.

1.3.5 If $X$ is a subset of $V$, the set of vectors orthogonal to every element of $X$ is denoted

$$X^\perp := \{ v \in V \mid v \perp x \text{ for all } x \in X \}. \qquad (13)$$

Exercise 3. Show that $X^\perp$ is a vector subspace of $V$ and

$$X \subseteq X^{\perp\perp}. \qquad (14)$$

1.3.6 We say that the bilinear pairing is positively defined if, for any vector $v \in V$, one has $\langle v, v \rangle \geq 0$.

Theorem 1.1 (The Cauchy–Schwarz Inequality). Let $\langle\,,\,\rangle$ be a positively defined symmetric bilinear pairing on a vector space $V$.
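The dual basis of 1.2.6–1.2.8 has a concrete matrix form: if the basis vectors $b_j$ are the columns of an invertible matrix $M$, then the coordinate functionals $b_i^*$ are the rows of $M^{-1}$, and relation (11) is exactly the identity $M^{-1} M = I$. A sketch under that assumption (the numeric basis and the functional $f$ are hypothetical examples):

```python
import numpy as np

# Basis of R^3 as the columns of M (illustrative choice).
M = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# The coordinate functionals b_i^* are the rows of M^{-1}:
# applying row i to v yields beta_i, and (11) becomes M^{-1} M = I.
dual = np.linalg.inv(M)
assert np.allclose(dual @ M, np.eye(3))   # b_i^*(b_j) = delta_ij

# Any functional f (a row vector acting by f @ v) expands as
# f = a_1 b_1^* + ... + a_n b_n^* with a_i = f(b_i), as in 1.2.6:
f = np.array([2.0, -1.0, 3.0])
a = f @ M                                  # a_i = f(b_i), column i of M is b_i
assert np.allclose(a @ dual, f)            # f recovered from the dual basis
```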
Then, for any vectors $v, v' \in V$, one has the following inequality:

$$\langle v, v' \rangle^2 \leq \langle v, v \rangle \langle v', v' \rangle. \qquad (15)$$

1.3.7 We shall demonstrate (15) by considering the second-degree polynomial

$$p(t) := \langle tv + v', tv + v' \rangle = \langle v, v \rangle t^2 + \big(\langle v, v' \rangle + \langle v', v \rangle\big) t + \langle v', v' \rangle = at^2 + bt + c,$$

where $a = \langle v, v \rangle$, $b = 2\langle v, v' \rangle$ and $c = \langle v', v' \rangle$. In view of the hypothesis, $p(t) \geq 0$ for every real number $t$. This is equivalent to the inequality $b^2 \leq 4ac$, which yields inequality (15).

1.3.8 An immediate corollary of the Cauchy–Schwarz Inequality is that a symmetric bilinear pairing is nondegenerate and positively defined if and only if $\langle v, v \rangle > 0$ for every nonzero vector in $V$.

1.3.9 Scalar products

A nondegenerate, positively defined, symmetric bilinear pairing on $V$ is called a scalar product.

Exercise 4. Show that a set of nonzero vectors $\{v_1, \dots, v_n\}$, mutually orthogonal with respect to some scalar product on $V$, is linearly independent. (Hint: for a linear combination representing the zero vector, $a_1 v_1 + \cdots + a_n v_n = 0$, calculate the scalar product of both sides with each $v_i$.)

1.3.10 The associated norm

For any scalar product, the functional

$$v \longmapsto \|v\| := \sqrt{\langle v, v \rangle} \qquad (16)$$

is called the associated norm. Using the norm notation, we can rewrite the Cauchy–Schwarz Inequality as

$$|\langle v, v' \rangle| \leq \|v\| \, \|v'\|. \qquad (17)$$

1.3.11 The Triangle Inequality

Note that

$$\|v + v'\|^2 = \|v\|^2 + 2\langle v, v' \rangle + \|v'\|^2,$$

while

$$\big(\|v\| + \|v'\|\big)^2 = \|v\|^2 + 2\|v\|\,\|v'\| + \|v'\|^2.$$

In view of the Cauchy–Schwarz Inequality, the second expression is not less than the first. Equivalently,

$$\|v + v'\| \leq \|v\| + \|v'\|, \qquad (18)$$

for any pair of vectors $v$ and $v'$ in $V$. This is known as the Triangle Inequality.

1.3.12 The associated norm also satisfies the following two conditions:

$$\|av\| = |a| \, \|v\|, \qquad (19)$$

for any real number $a$ and any vector $v \in V$, and

$$\|v\| > 0 \qquad (20)$$

for any nonzero vector $v \in V$.

1.3.13 Norms on a vector space

Any function $V \to [0, \infty)$ that satisfies the Triangle Inequality (18) and conditions (19) and (20) is called a norm on $V$.
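Both the Cauchy–Schwarz Inequality (15) and the Triangle Inequality (18) can be tested numerically for a scalar product of the form $\langle v, w \rangle = v^t G w$ with $G$ symmetric positive definite. The sketch below is illustrative: the construction $G = S^t S$ (which guarantees symmetry and positive definiteness whenever $S$ is invertible) and the random test vectors are assumptions made for the example, not part of the notes:

```python
import numpy as np

# Scalar product <v, w> = v^T G w, with G = S^T S symmetric positive definite.
rng = np.random.default_rng(1)
S = rng.standard_normal((3, 3)) + 3 * np.eye(3)   # illustrative invertible S
G = S.T @ S

def sp(v, w):
    return v @ G @ w

def norm(v):
    return np.sqrt(sp(v, v))          # the associated norm (16)

v = rng.standard_normal(3)
w = rng.standard_normal(3)

# (15): <v, w>^2 <= <v, v><w, w>; small tolerance for floating point.
assert sp(v, w)**2 <= sp(v, v) * sp(w, w) + 1e-9

# (18): ||v + w|| <= ||v|| + ||w||.
assert norm(v + w) <= norm(v) + norm(w) + 1e-9
```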
1.3.14 Polarization Formula

In terms of the associated norm, the scalar product is expressed by means of the identity

$$\langle v, v' \rangle = \tfrac{1}{2}\big(\|v + v'\|^2 - \|v\|^2 - \|v'\|^2\big), \qquad (21)$$

known as the Polarization Formula. If a norm $\|\,\|$ on a vector space $V$ is associated with a scalar product, then the right-hand side of (21) must depend on $v$ linearly. If it does not, then that norm is not associated with a scalar product.

1.3.15 Quadratic forms

A function $q \colon V \to \mathbb{R}$ is called a quadratic form if the pairing assigning the number

$$\langle v, v' \rangle := \tfrac{1}{2}\big(q(v + v') - q(v) - q(v')\big) \qquad (22)$$

to a pair of vectors $v$ and $v'$ in $V$ is bilinear.
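For the standard dot product on $\mathbb{R}^3$, the Polarization Formula (21) can be confirmed directly; conversely, for the $1$-norm the right-hand side of (21) fails to be additive in $v$, so that norm is not associated with any scalar product. A sketch (the specific test vectors are an arbitrary illustrative choice):

```python
import numpy as np

v = np.array([1.0, 2.0, -1.0])
w = np.array([0.5, -3.0, 2.0])

# (21) for the standard dot product: polarization recovers <v, w>.
rhs = 0.5 * (np.dot(v + w, v + w) - np.dot(v, v) - np.dot(w, w))
assert np.isclose(np.dot(v, w), rhs)

# The right-hand side of (21) as a function of two vectors and a norm:
def pol(x, y, norm):
    return 0.5 * (norm(x + y)**2 - norm(x)**2 - norm(y)**2)

# For the 1-norm, pol is NOT additive in its first argument
# (pol(e1+e2, y) != pol(e1, y) + pol(e2, y) for y = e1 - e2),
# so the 1-norm cannot come from a scalar product.
n1 = lambda x: np.abs(x).sum()
e1, e2 = np.eye(3)[0], np.eye(3)[1]
y = e1 - e2
assert not np.isclose(pol(e1 + e2, y, n1), pol(e1, y, n1) + pol(e2, y, n1))
```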