Linear Algebra


SOME NECESSARY LINEAR ALGEBRA

1. Bilinear forms

Let $V$ be a vector space over a field $k$. A bilinear form $h(\,\cdot\,,\cdot\,)$ is a function $V \times V \to k$ that is linear in both arguments. For any choice of a basis $v_1, \dots, v_n$ of $V$ we can write a matrix $H = (h_{ij})$ for the form $h$ in this basis, by setting $h_{ij} = h(v_i, v_j)$. Then for two vectors $u = \sum a_i v_i$ and $w = \sum b_j v_j$ we have
$$h(u, w) = h\Big(\sum_i a_i v_i, \sum_j b_j v_j\Big) = \sum_{i,j} a_i b_j\, h(v_i, v_j) = \sum_{i,j} h_{ij}\, a_i b_j.$$

If two bases $\{v_i\}$ and $\{v_i'\}$ are related by a linear transformation $L$, that is, $v_i' = L v_i$, and $L$ has matrix $A$ in the basis $\{v_i\}$, then the matrices $H$ and $H'$ of a bilinear form $h$ in the bases $\{v_i\}$ and $\{v_i'\}$ respectively are related by $H' = A^T H A$.

We call a bilinear form:
• degenerate if there is a nonzero $u$ such that $h(u, v) = 0$ for all $v$, or a nonzero $v$ such that $h(u, v) = 0$ for all $u$; we call a form non-degenerate otherwise;
• symmetric if $h(u, v) = h(v, u)$;
• antisymmetric if $h(u, v) = -h(v, u)$;
• a symplectic form if it is antisymmetric and non-degenerate.

Every bilinear form can be presented as a sum of a symmetric and an antisymmetric form: $h(u, v) = h^{\mathrm{sym}}(u, v) + h^{\mathrm{asym}}(u, v)$. This presentation exists and is unique because necessarily $h^{\mathrm{sym}}(u, v) = \tfrac{1}{2}\big(h(u, v) + h(v, u)\big)$ and $h^{\mathrm{asym}}(u, v) = \tfrac{1}{2}\big(h(u, v) - h(v, u)\big)$.

To every bilinear form $h(\,\cdot\,,\cdot\,)$ we can associate a quadratic form $q : V \to k$ by setting $q(v) := h(v, v)$.

Lemma 1. The form $q$ is identically zero on $V$ iff $h$ is antisymmetric.

Proof. Since $h$ is bilinear, for all $u, v$ we have $h(u + v, u + v) = h(u, u) + h(u, v) + h(v, u) + h(v, v)$. Then $h(u + v, u + v) = h(u, u) = h(v, v) = 0$ implies $h(u, v) + h(v, u) = 0$, so $h(u, v) = -h(v, u)$ for all $u, v$. Conversely, if $h$ is antisymmetric, then $h(v, v) = -h(v, v)$, so $2\,h(v, v) = 0$ and hence $q(v) = 0$ (here, as in the formulas above, we use that $2$ is invertible in $k$).

If $q(v) = h(v, v)$, the symmetric part $h^{\mathrm{sym}}(u, v)$ can be recovered from $q$ via the polarization formula
$$h^{\mathrm{sym}}(u, v) = \tfrac{1}{2}\big(q(u + v) - q(u) - q(v)\big).$$

If the form $h(\,\cdot\,,\cdot\,)$ is either symmetric or antisymmetric, $h(u, v) = 0$ implies $h(v, u) = 0$, so we can say that $u$ and $v$ are orthogonal iff $h(u, v) = h(v, u) = 0$.

Theorem 1. Any symmetric bilinear form is diagonalizable, i.e. for any symmetric bilinear form $h$ there is a basis in which the matrix $H$ is diagonal.

Proof. All we need is to find an orthogonal basis, i.e. a basis $v_1, \dots, v_n$ with $v_i \perp v_j$ for all $i \neq j$. Let us do that by induction on the dimension of $V$. If $\dim V = 1$, pick any vector as a basis vector. If $\dim V > 1$, find a vector $v$ such that $h(v, v) \neq 0$. If there is no such vector, then by Lemma 1 the form $h$ is antisymmetric; but since by assumption $h$ is symmetric, it means that $h$ is identically zero and we are done. Now we can choose this vector $v$ as the first basis vector $v_1$. The space $\langle v_1 \rangle^{\perp}$ of all vectors $v$ such that $h(v_1, v) = 0$ is the zero space of a non-trivial linear function, hence it has dimension $n - 1$. By induction, we can find an orthogonal basis $v_2, \dots, v_n$ there. Then $v_1, v_2, \dots, v_n$ is an orthogonal basis of $V$.

Corollary 1. If the field $k$ is algebraically closed, then for any non-degenerate symmetric form $h$ there is a basis where $H$ is the identity matrix.

Proof. Let $v_1, \dots, v_n$ be an orthogonal basis of $V$ with respect to the form $h$. Denote $h(v_i, v_i)$ by $h_i$. Since the form is non-degenerate, $h_i \neq 0$ for all $i$. Let $\xi_1, \dots, \xi_n$ be numbers such that $\xi_i^2 = h_i$ (these exist since $k$ is algebraically closed). Then $v_1/\xi_1, \dots, v_n/\xi_n$ is the required basis.

If we choose a basis in $V$, thus establishing a system of coordinates on it, a quadratic form becomes a homogeneous quadratic polynomial in these coordinates.
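The following short NumPy sketch (not part of the original notes; the matrix $H$, the change-of-basis matrix $A$, and the test vectors are invented for illustration) checks the coordinate formula for $h$, the change-of-basis rule $H' = A^T H A$, the symmetric/antisymmetric decomposition, and the polarization identity over the reals.

```python
import numpy as np

# Gram matrix H of a bilinear form h in some basis v_1, v_2, v_3 of V;
# the numbers are made up purely for the demonstration.
H = np.array([[2., 1., 0.],
              [3., 0., 1.],
              [0., 1., 4.]])

def h(u, w):
    """h(u, w) = sum_{i,j} H[i, j] * u[i] * w[j] in the chosen coordinates."""
    return u @ H @ w

# Symmetric / antisymmetric decomposition of the form: H = H_sym + H_asym.
H_sym = (H + H.T) / 2
H_asym = (H - H.T) / 2
assert np.allclose(H, H_sym + H_asym)

# Change of basis: if the new basis vectors, written in the old coordinates,
# are the columns of A, then the Gram matrix in the new basis is A^T H A.
A = np.array([[1., 1., 0.],
              [0., 1., 0.],
              [0., 0., 1.]])
H_prime = A.T @ H @ A

# Polarization: q(v) = h(v, v) recovers the symmetric part of h,
# h_sym(u, w) = (q(u + w) - q(u) - q(w)) / 2.
q = lambda v: h(v, v)
u = np.array([1., 2., 0.])
w = np.array([0., 1., 3.])
assert np.isclose((q(u + w) - q(u) - q(w)) / 2, u @ H_sym @ w)
```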
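The inductive argument of Theorem 1 can also be carried out numerically over the reals. The sketch below is our own illustration (the function name and the Gram matrix are invented, and no attention is paid to numerical conditioning): it looks for a vector $v$ with $h(v, v) \neq 0$, passes to $\langle v \rangle^{\perp}$, and repeats; when every current basis vector is isotropic it uses that $h(b_i + b_j, b_i + b_j) = 2\,h(b_i, b_j)$ for a symmetric form.

```python
import numpy as np

def congruence_diagonalize(H, tol=1e-12):
    """Return A with A.T @ H @ A diagonal, for a symmetric Gram matrix H,
    following the inductive argument of Theorem 1 (a rough sketch)."""
    H = np.asarray(H, dtype=float)
    n = H.shape[0]
    basis = [np.eye(n)[:, i] for i in range(n)]   # basis of the remaining subspace
    chosen = []
    while basis:
        # look for v in the span of `basis` with h(v, v) != 0
        v = None
        for i, b in enumerate(basis):
            if abs(b @ H @ b) > tol:
                v, drop = b, i
                break
        if v is None:
            for i in range(len(basis)):
                for j in range(i + 1, len(basis)):
                    if abs(basis[i] @ H @ basis[j]) > tol:
                        v, drop = basis[i] + basis[j], i   # h(v, v) = 2 h(b_i, b_j) != 0
                        break
                if v is not None:
                    break
        if v is None:                 # h vanishes identically on the remaining subspace
            chosen.extend(basis)
            break
        chosen.append(v)
        # pass to <v>^perp: make the other basis vectors h-orthogonal to v
        basis = [b - (v @ H @ b) / (v @ H @ v) * v
                 for k, b in enumerate(basis) if k != drop]
    return np.column_stack(chosen)

H = np.array([[0., 1., 2.],
              [1., 0., 0.],
              [2., 0., 3.]])          # made-up symmetric Gram matrix
A = congruence_diagonalize(H)
D = A.T @ H @ A
assert np.allclose(D, np.diag(np.diag(D)))
```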
Corollary 2. Any two hypersurfaces in $\mathbb{P}^n$ given by non-degenerate quadratic forms are isomorphic (assuming the base field is algebraically closed). In particular, any two smooth quadratic curves in $\mathbb{P}^2$ are isomorphic.

2. Tensor product and dual spaces

2.1. The definition of tensor product. Let $V$, $W$ be two vector spaces. Define the tensor product $V \otimes W$ as the quotient of the space freely generated by all expressions of the form $v \otimes w$, where $v \in V$, $w \in W$, by the subspace generated by all expressions of the form $(av + bu) \otimes w - a(v \otimes w) - b(u \otimes w)$ and $v \otimes (au + bw) - a(v \otimes u) - b(v \otimes w)$.

Lemma 2. If $\{v_1, \dots, v_n\}$ is a basis for $V$ and $\{w_1, \dots, w_m\}$ is a basis for $W$, then $\{v_i \otimes w_j\}$ for all $i, j$ is a basis for $V \otimes W$. In particular, if $V$ and $W$ are finite dimensional, so is $V \otimes W$, and $\dim V \otimes W = \dim V \cdot \dim W$.

Proof. Any element of $V \otimes W$ is (the class of) a finite linear combination of elements of the form $v \otimes w$, and every $v \otimes w$ can be presented as a linear combination of the $v_i \otimes w_j$: if $v = \sum a_i v_i$ and $w = \sum b_j w_j$, then $v \otimes w = \sum a_i b_j\, v_i \otimes w_j$ by the relations that define the tensor product. Therefore the elements $v_i \otimes w_j$ span $V \otimes W$. On the other hand, every expression of the form $(av + bu) \otimes w - a(v \otimes w) - b(u \otimes w)$ or $v \otimes (au + bw) - a(v \otimes u) - b(v \otimes w)$ produces all zero coefficients when written as a linear combination of the $v_i \otimes w_j$ as above, so no $\sum a_{ij}\, v_i \otimes w_j$ with not all $a_{ij}$ zero can be a linear combination of these expressions; hence the $\{v_i \otimes w_j\}$ are linearly independent.

Tensoring by a given vector space $U$ is a functor from the category of vector spaces to itself, meaning that not only can we produce a vector space $U \otimes V$ for every vector space $V$, but we can also produce a map (denoted $\mathrm{id} \otimes L$) between $U \otimes V$ and $U \otimes W$ for every map $L : V \to W$. This map takes every $u \otimes v$ to $u \otimes L(v)$.

The space $V \otimes W$ does not consist only of vectors of the form $v \otimes w$, $v \in V$, $w \in W$, but it is spanned by such vectors, and we will be using this a lot in the subsequent proofs.

2.2. The definition of the dual space. Let $V$ be a vector space. The dual vector space $V^{\vee}$ consists of all linear functions $V \to k$. If $v_1, \dots, v_n$ is a basis for $V$, then $v_1^{\vee}, \dots, v_n^{\vee}$ is a basis for $V^{\vee}$, where $v_i^{\vee}$ is defined by $v_i^{\vee}(v_i) = 1$ and $v_i^{\vee}(v_j) = 0$ for $i \neq j$. Thus $\dim V^{\vee} = \dim V$.

For each linear transformation $L : V \to W$ we get a linear transformation $L^{\vee} : W^{\vee} \to V^{\vee}$ (another common notation for this map is $L^{*}$) by setting $(L^{\vee}\varphi)(v) = \varphi(Lv)$, where $\varphi \in W^{\vee}$. Note that for the composition $V \xrightarrow{L} W \xrightarrow{M} U$ the dual picture is $U^{\vee} \xrightarrow{M^{\vee}} W^{\vee} \xrightarrow{L^{\vee}} V^{\vee}$, so $(ML)^{\vee} = L^{\vee} M^{\vee}$. If we fix bases in $V$ and $W$ and take their dual bases in $V^{\vee}$ and $W^{\vee}$, the matrix of $L^{\vee}$ is the transpose of the matrix of $L$.

Note that $V \otimes W \cong W \otimes V$ (the isomorphism sends $\sum v_i \otimes w_i$ to $\sum w_i \otimes v_i$), and $(V \otimes W)^{\vee} \cong V^{\vee} \otimes W^{\vee}$ (for any linear functions $\varphi$ on $V$ and $\psi$ on $W$, the function $\sum v_i \otimes w_i \mapsto \sum \varphi(v_i)\,\psi(w_i)$ is linear on $V \otimes W$).
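In coordinates, pure tensors, the functorial map $\mathrm{id} \otimes L$, and the dual map $L^{\vee}$ all have familiar matrix descriptions: the Kronecker product and the transpose. The NumPy sketch below (our own illustration, with invented dimensions and random matrices, not from the notes) only checks these identities numerically.

```python
import numpy as np

rng = np.random.default_rng(1)
dim_u, dim_v, dim_w = 2, 3, 2   # illustrative dimensions for U, V, W

# Coordinates of a pure tensor u ⊗ v: the Kronecker product of the coordinate
# vectors, consistent with the basis {u_i ⊗ v_j} and dim(U ⊗ V) = dim U * dim V.
u = rng.standard_normal(dim_u)
v = rng.standard_normal(dim_v)
assert np.kron(u, v).shape == (dim_u * dim_v,)

# Functoriality of U ⊗ (-): a map L : V -> W induces id ⊗ L : U ⊗ V -> U ⊗ W,
# whose matrix is kron(I, L), and it sends u ⊗ v to u ⊗ L(v).
L = rng.standard_normal((dim_w, dim_v))
assert np.allclose(np.kron(np.eye(dim_u), L) @ np.kron(u, v),
                   np.kron(u, L @ v))

# The dual map L^∨ : W^∨ -> V^∨ has the transposed matrix in the dual bases:
# for a functional phi on W (coefficients in the dual basis), (L^∨ phi)(v) = phi(L v).
phi = rng.standard_normal(dim_w)
assert np.isclose((L.T @ phi) @ v, phi @ (L @ v))
```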
2.3. Tensors and Hom's. For every two vector spaces define the space $\mathrm{Hom}(V, W)$ as the vector space of all linear transformations $V \to W$. We have the following canonical maps:

(1) $\mathrm{Hom}(U \otimes V, W) \to \mathrm{Hom}(U, \mathrm{Hom}(V, W))$ given by $\varphi \mapsto [u \mapsto [v \mapsto \varphi(u \otimes v)]]$, where $\varphi : U \otimes V \to W$, and
(2) $\mathrm{Hom}(U, V) \otimes W \to \mathrm{Hom}(U, V \otimes W)$ given by $\varphi \otimes w \mapsto [u \mapsto \varphi(u) \otimes w]$ (extended to all of $\mathrm{Hom}(U, V) \otimes W$ by linearity).

When $U$, $V$, and $W$ are finite dimensional, these maps are isomorphisms.

If we fix a space $U$, the assignment $V \mapsto \mathrm{Hom}(U, V)$ is functorial, meaning that every linear map $L : V \to W$ induces a map $\mathrm{Hom}(U, V) \to \mathrm{Hom}(U, W)$ (it takes a map $U \to V$ and composes it with $L$, producing a map $U \to W$). The assignment $V \mapsto \mathrm{Hom}(V, U)$ is also functorial, but the functor is contravariant, meaning that for a linear map $L : V \to W$ we get a map $\mathrm{Hom}(W, U) \to \mathrm{Hom}(V, U)$ (again, it takes a map $W \to U$ and composes it with $L$, but this time the composition is in the other order). The maps above are transformations of functors (they commute with applying the corresponding functors to maps between the arguments) when we view these spaces as functors of $U$, $V$, or $W$.

Lemma 3. We have $k \otimes V \cong V$ and $\mathrm{Hom}(k, V) \cong V$.
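In coordinates, the canonical map (1) is just "currying" a matrix. The sketch below is our own illustration (the dimensions and the map $\varphi$ are made up): it represents $\varphi : U \otimes V \to W$ as a $(\dim W) \times (\dim U \cdot \dim V)$ matrix, with the basis of $U \otimes V$ ordered as in the Kronecker product, and checks that reshaping implements $u \mapsto [v \mapsto \varphi(u \otimes v)]$.

```python
import numpy as np

dim_u, dim_v, dim_w = 2, 3, 4          # illustrative dimensions for U, V, W
rng = np.random.default_rng(0)

# A linear map phi : U ⊗ V -> W as a (dim W) x (dim U * dim V) matrix, with the
# basis of U ⊗ V ordered as u_i ⊗ v_j  <->  index i * dim_v + j (matches np.kron).
phi = rng.standard_normal((dim_w, dim_u * dim_v))

def curry(phi):
    """Hom(U ⊗ V, W) -> Hom(U, Hom(V, W)): send phi to u |-> [v |-> phi(u ⊗ v)]."""
    T = phi.reshape(dim_w, dim_u, dim_v)
    return lambda u: np.einsum('wuv,u->wv', T, u)   # a (dim W) x (dim V) matrix

# Check: curry(phi)(u) applied to v equals phi(u ⊗ v).
u = rng.standard_normal(dim_u)
v = rng.standard_normal(dim_v)
assert np.allclose(curry(phi)(u) @ v, phi @ np.kron(u, v))
```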