Vector and Dyadic Algebra


Chapter 1  Vector and Dyadic Algebra

1-1 Representations of Vector Functions

A vector function has both magnitude and direction. The vector functions that we encounter in many physical problems are, in general, functions of space and time. In the first five chapters we discuss only their characteristics as functions of the spatial variables; functions of space and time are covered in Chapter 6, which deals with moving surfaces and moving contours.

A vector function is denoted by F. Geometrically, it is represented by a line with an arrow in a three-dimensional space. The length of the line corresponds to the magnitude of the function, and the direction of the line represents the direction of the vector function. The convenience of using vectors to represent physical quantities is illustrated by a simple example shown in Fig. 1-1, which describes the motion of a mass particle in a vacuum (frictionless air) against a constant gravitational force. The particle is thrown into space with an initial velocity v₀, making an angle θ₀ with respect to the horizon. During its flight, the velocity of the particle changes both its magnitude and its direction, as shown by v₁, v₂, and so on, at subsequent locations. The gravitational force that acts on the particle is assumed to be constant, and it is represented by F in the figure. A constant vector function means that both the magnitude and the direction of the function are constant, being independent of the spatial variables, x and z in this case.

Figure 1-1 Trajectory of a mass particle in a gravitational field, showing the velocity v and the constant force vector F at different locations; the trajectory is z = x(1 − x/x₀) tan θ₀ with range x₀ = (v₀²/g) sin 2θ₀, where g is the gravitational constant.

The rule for the addition of two vectors a and b is shown geometrically in Fig. 1-2a, b, or c. Algebraically, it is written in the same form as the addition of
two numbers or of two scalar functions, that is,

c = a + b.  (1.1)

Figure 1-2 Addition of vectors, a + b = c.

The subtraction of vector b from vector a is written in the form

d = a − b.  (1.2)

Now −b is a vector that has the same magnitude as b but the opposite direction; (1.2) can then be considered as the addition of a and (−b). Geometrically, the meaning of (1.2) is shown in Fig. 1-3.

Figure 1-3 Subtraction of vectors, a − b = d.

The sum and the difference of two vectors obey the commutative rule, that is,

a + b = b + a  (1.3)

and

a − b = −b + a.  (1.4)

These rules can be generalized to any number of vectors.

The rule of the addition of vectors suggests that any vector can be considered as being made of basic components associated with a proper coordinate system. The most convenient system to use is the Cartesian, or rectangular, coordinate system; more specifically, a right-handed rectangular system in which, when x̂ turns toward ŷ, a right-handed screw advances in the z direction. The spatial variables in this system are commonly denoted by x, y, z. A vector that has a magnitude equal to unity and points in the positive x direction is called the unit vector in the x direction and is denoted by x̂. Similarly, we have ŷ and ẑ. In such a system, a vector function F that, in general, is a function of position can be written in the form

F = F_x x̂ + F_y ŷ + F_z ẑ.  (1.5)

The three scalar functions F_x, F_y, F_z are called the components of F in the directions of x̂, ŷ, and ẑ, respectively, while F_x x̂, F_y ŷ, and F_z ẑ are called the vector components of F. The geometrical representation of F is shown in Fig. 1-4. Note that F_x, F_y, and F_z can be either positive or negative; in Fig. 1-4, F_x and F_z are positive, but F_y is negative.

Figure 1-4 Components of a vector in a Cartesian system.
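As an illustrative sketch (not part of the text), the component representation (1.5) and the addition rules (1.1)-(1.4) can be checked numerically; the helper names below are our own:

```python
# Represent a vector by its Cartesian components (F_x, F_y, F_z), as in (1.5).

def add(a, b):
    """Vector sum c = a + b, Eq. (1.1): components add term by term."""
    return tuple(ai + bi for ai, bi in zip(a, b))

def sub(a, b):
    """Vector difference d = a - b, Eq. (1.2): the addition of a and (-b)."""
    return tuple(ai - bi for ai, bi in zip(a, b))

a = (1.0, 2.0, 3.0)
b = (4.0, -1.0, 0.5)

c = add(a, b)   # (5.0, 1.0, 3.5)
d = sub(a, b)   # (-3.0, 3.0, 2.5)

# Commutative rules (1.3) and (1.4):
assert add(a, b) == add(b, a)
assert sub(a, b) == add(tuple(-bi for bi in b), a)
```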
In addition to the representation by (1.5), it is sometimes desirable to express F in terms of its magnitude, denoted by |F|, and its direction cosines, that is,

F = |F| (cos α x̂ + cos β ŷ + cos γ ẑ).  (1.6)

Here α, β, and γ are the angles F makes, respectively, with x̂, ŷ, and ẑ, as shown in Fig. 1-4. It is obvious from the geometry of that figure that

|F| = (F_x² + F_y² + F_z²)^(1/2)  (1.7)

and

cos α = F_x/|F|,  cos β = F_y/|F|,  cos γ = F_z/|F|.  (1.8)

Furthermore, we have the relation

cos² α + cos² β + cos² γ = 1.  (1.9)

In view of (1.9), only two of the direction-cosine angles are independent. From the previous discussion we observe that, in general, we need three parameters to specify a vector function. The three parameters could be F_x, F_y, and F_z, or |F| and two of the direction-cosine angles. Representations such as (1.5) and (1.6) can be extended to other orthogonal coordinate systems, which will be discussed in a later chapter.

1-2 Products and Identities

The scalar product of two vectors a and b is denoted by a · b, and it is defined by

a · b = |a| |b| cos θ,  (1.10)

where θ is the angle between a and b, as shown in Fig. 1-5. Because of the notation used for such a product, it is sometimes called the dot product. By applying (1.10) to three orthogonal unit vectors û₁, û₂, û₃, one finds

ûᵢ · ûⱼ = { 1, i = j
          { 0, i ≠ j,      i, j = 1, 2, 3.  (1.11)

The value of a · b can also be expressed in terms of the components of a and b in any orthogonal system. Let the system under consideration be the rectangular system, and let c = a − b; then, by the law of cosines,

|c|² = |a − b|² = |a|² + |b|² − 2 |a| |b| cos θ.

Hence

a · b = |a| |b| cos θ = (|a|² + |b|² − |a − b|²)/2
      = [a_x² + a_y² + a_z² + b_x² + b_y² + b_z² − (a_x − b_x)² − (a_y − b_y)² − (a_z − b_z)²]/2
      = a_x b_x + a_y b_y + a_z b_z.  (1.12)

Figure 1-5 Scalar product of two vectors, a · b = |a| |b| cos θ.
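A brief numerical check of (1.7)-(1.9) and of the component form (1.12) of the scalar product; this sketch and its function names are illustrative, not the book's:

```python
import math

def magnitude(F):
    """|F| = (F_x^2 + F_y^2 + F_z^2)^(1/2), Eq. (1.7)."""
    return math.sqrt(sum(Fi * Fi for Fi in F))

def direction_cosines(F):
    """cos(alpha) = F_x/|F|, cos(beta) = F_y/|F|, cos(gamma) = F_z/|F|, Eq. (1.8)."""
    m = magnitude(F)
    return tuple(Fi / m for Fi in F)

def dot(a, b):
    """a . b = a_x b_x + a_y b_y + a_z b_z, Eq. (1.12)."""
    return sum(ai * bi for ai, bi in zip(a, b))

F = (3.0, -4.0, 12.0)          # |F| = 13
cosines = direction_cosines(F)

# Eq. (1.9): the squared direction cosines sum to unity.
assert abs(sum(ci * ci for ci in cosines) - 1.0) < 1e-12

# Consistency with the definition (1.10): for b = a we have theta = 0,
# so a . a = |a|^2.
assert abs(dot(F, F) - magnitude(F) ** 2) < 1e-9
```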
By equating (1.10) and (1.12), one finds

cos θ = (a_x b_x + a_y b_y + a_z b_z)/(|a| |b|)  (1.13)
      = cos α_a cos α_b + cos β_a cos β_b + cos γ_a cos γ_b,

a relationship well known in analytic geometry. Equation (1.12) can be used to prove the validity of the distributive law for scalar products, namely,

(a + b) · c = a · c + b · c.  (1.14)

According to (1.12), we have

(a + b) · c = (a_x + b_x) c_x + (a_y + b_y) c_y + (a_z + b_z) c_z
            = (a_x c_x + a_y c_y + a_z c_z) + (b_x c_x + b_y c_y + b_z c_z)
            = a · c + b · c.

Once we have proved the distributive law for the scalar product, (1.12) can be verified by taking the sum of the scalar products of the individual terms of a and b.

The vector product of two vector functions a and b, denoted by a × b, is defined by

a × b = |a| |b| sin θ û_c,  (1.15)

where θ denotes the angle between a and b, measured from a to b, and û_c denotes a unit vector perpendicular to both a and b, pointed in the advancing direction of a right-hand screw when we turn from a to b. Figure 1-6 shows the relative position of û_c with respect to a and b. Because of the notation used for the vector product, it is sometimes called the cross product, in contrast to the dot product or the scalar product.

Figure 1-6 Vector product of two vectors, a × b = |a| |b| sin θ û_c; û_c ⊥ a, û_c ⊥ b.

For three orthogonal unit vectors in a right-handed system, we have û₁ × û₂ = û₃, û₂ × û₃ = û₁, and û₃ × û₁ = û₂. It is obvious that ûᵢ × ûᵢ = 0, i = 1, 2, 3. From the definition of the vector product in (1.15), one finds

b × a = −a × b.  (1.16)

The value of a × b as described by (1.15) can also be expressed in terms of the components of a and b in a rectangular coordinate system. If we let a × b = v = v_x x̂ + v_y ŷ + v_z ẑ, which is perpendicular to both a and b, then

a · v = a_x v_x + a_y v_y + a_z v_z = 0,  (1.17)

b · v = b_x v_x + b_y v_y + b_z v_z = 0.
(1.18)

Solving (1.17) and (1.18) for v_x/v_z and v_y/v_z, we obtain

v_x/v_z = (a_y b_z − a_z b_y)/(a_x b_y − a_y b_x),   v_y/v_z = (a_z b_x − a_x b_z)/(a_x b_y − a_y b_x).

Thus

v_x/(a_y b_z − a_z b_y) = v_y/(a_z b_x − a_x b_z) = v_z/(a_x b_y − a_y b_x).

Let the common ratio of these quantities be denoted by c, which can be determined by considering the case with a = x̂, b = ŷ; then v = a × b = ẑ. Hence, from the last ratio, we find c = 1 because v_z = 1 and a_x = b_y = 1, while a_y = b_x = 0. The three components of v, therefore, are given by

v_x = a_y b_z − a_z b_y
v_y = a_z b_x − a_x b_z   (1.19)
v_z = a_x b_y − a_y b_x,

which can be assembled in determinant form as

    | x̂   ŷ   ẑ  |
v = | a_x a_y a_z |.
    | b_x b_y b_z |
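The component formula (1.19) lends itself to a quick numerical check; the sketch below (our own illustration, not from the text) also verifies the perpendicularity conditions (1.17) and (1.18) and the anticommutativity (1.16):

```python
def cross(a, b):
    """a x b by Eq. (1.19)."""
    ax, ay, az = a
    bx, by, bz = b
    return (ay * bz - az * by,   # v_x
            az * bx - ax * bz,   # v_y
            ax * by - ay * bx)   # v_z

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

a = (1.0, 2.0, 3.0)
b = (4.0, 5.0, 6.0)
v = cross(a, b)   # (-3.0, 6.0, -3.0)

# v is perpendicular to both a and b, Eqs. (1.17) and (1.18):
assert dot(a, v) == 0 and dot(b, v) == 0

# b x a = -(a x b), Eq. (1.16):
assert cross(b, a) == tuple(-vi for vi in v)

# The unit-vector case used above to fix the common ratio c = 1:
assert cross((1, 0, 0), (0, 1, 0)) == (0, 0, 1)
```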