A Some Basic Rules of Tensor Calculus
The tensor calculus is a powerful tool for the description of the fundamentals in continuum mechanics and the derivation of the governing equations for applied problems. In general, there are two possibilities for the representation of tensors and tensorial equations:
– the direct (symbolic) notation and
– the index (component) notation
The direct notation operates with scalars, vectors and tensors as physical objects defined in the three-dimensional space. A vector (first rank tensor) a is considered as a directed line segment rather than a triple of numbers (coordinates). A second rank tensor A is any finite sum of ordered vector pairs A = a ⊗ b + ... + c ⊗ d. The scalars, vectors and tensors are handled as invariant objects, i.e. independent of the choice of the coordinate system. This is the reason for the use of the direct notation in the modern literature of mechanics and rheology, e.g. [29, 32, 49, 123, 131, 199, 246, 313, 334] among others. The index notation deals with components or coordinates of vectors and tensors. For a selected basis, e.g. g_i, i = 1, 2, 3, one can write
a = a^i g_i,   A = (a^i b^j + ... + c^i d^j) g_i ⊗ g_j
Here Einstein's summation convention is used: within one expression a twice repeated index is summed from 1 to 3, e.g.
a^k g_k ≡ ∑_{k=1}^3 a^k g_k,   A^{ik} b_k ≡ ∑_{k=1}^3 A^{ik} b_k
In the above examples k is a so-called dummy index. Within the index notation the basic operations with tensors are defined with respect to their coordinates, e.g. the sum of two vectors is computed as the sum of their coordinates c^i = a^i + b^i. The introduced basis remains in the background. It must be remembered that a change of the coordinate system leads to a change of the components of tensors. In this work we prefer the direct tensor notation over the index one. When solving applied problems the tensor equations can be “translated” into the language of matrices for a specified coordinate system. The purpose of this Appendix is to give a brief guide to the notations and rules of the tensor calculus applied throughout this work. For more comprehensive overviews on tensor calculus we recommend [54, 96, 123, 191, 199, 311, 334]. The calculus of matrices is presented in [40, 111, 340], for example. Section A.1 provides a brief overview of basic algebraic operations with vectors and second rank tensors. Several rules from tensor analysis are summarized in Sect. A.2. Basic sets of invariants for different groups of symmetry transformations are presented in Sect. A.3, where a novel approach to find the functional basis is discussed.
A.1 Basic Operations of Tensor Algebra
A.1.1 Polar and Axial Vectors
A vector in the three-dimensional Euclidean space is defined as a directed line segment with specified magnitude (scalar) and direction. The magnitude (the length) of a vector a is denoted by |a|. Two vectors a and b are equal if they have the same direction and the same magnitude. The zero vector 0 has a magnitude equal to zero. In mechanics two types of vectors can be introduced. The vectors of the first type are directed line segments. These vectors are associated with translations in the three-dimensional space. Examples of polar vectors include the force, the displacement, the velocity, the acceleration, the momentum, etc. The second type is used to characterize spinor motions and related quantities, i.e. the moment, the angular velocity, the angular momentum, etc. Figure A.1a shows the so-called spin vector a∗ which represents a rotation about the given axis. The direction of rotation is specified by the circular arrow and the “magnitude” of rotation is the corresponding length. For the given spin vector a∗ the directed line segment a is introduced according to the following rules [334]:
1. the vector a is placed on the axis of the spin vector,
2. the magnitude of a is equal to the magnitude of a∗,
Figure A.1 Axial vectors. a Spin vector, b axial vector in the right-screw oriented reference frame, c axial vector in the left-screw oriented reference frame
3. the vector a is directed according to the right-handed screw, Fig. A.1b, or the left-handed screw, Fig. A.1c.
The selection of one of the two cases in 3. corresponds to the convention of orientation of the reference frame [334] (it should not be confused with the right- or left-handed triples of vectors or coordinate systems). The directed line segment is called a polar vector if it does not change by changing the orientation of the reference frame. The vector is called axial if it changes its sign by changing the orientation of the reference frame. The above definitions are valid for scalars and tensors of any rank too. The axial vectors (and tensors) are widely used in rigid body dynamics, e.g. [333], in the theories of rods, plates and shells, e.g. [25], in the asymmetric theory of elasticity, e.g. [231], as well as in the dynamics of micro-polar media, e.g. [108]. When dealing with polar and axial vectors it should be remembered that they have different physical meanings. Therefore, a sum of a polar and an axial vector makes no sense.
A.1.2 Operations with Vectors
Addition. For a given pair of vectors a and b of the same type the sum c = a + b is defined according to one of the rules in Fig. A.2. The sum has the following properties
– a + b = b + a (commutativity),
– (a + b) + c = a + (b + c) (associativity),
– a + 0 = a
Multiplication by a Scalar. For any vector a and for any scalar α a vector b = αa is defined in such a way that
– |b| = |α||a|,
– for α > 0 the direction of b coincides with that of a,
– for α < 0 the direction of b is opposite to that of a.
For α = 0 the product yields the zero vector, i.e. 0 = 0a. It is easy to verify that
α(a + b)= αa + αb, (α + β)a = αa + βa
Figure A.2 Addition of two vectors. a Parallelogram rule, b triangle rule
Figure A.3 Scalar product of two vectors. a Angles between two vectors, b unit vector and projection
Scalar (Dot) Product of Two Vectors. For any pair of vectors a and b a scalar α is defined by
α = a · b = |a||b| cos ϕ,
where ϕ is the angle between the vectors a and b. As ϕ one can use any of the two angles between the vectors, Fig. A.3a. The properties of the scalar product are
– a · b = b · a (commutativity),
– a · (b + c) = a · b + a · c (distributivity)
Two nonzero vectors are said to be orthogonal if their scalar product is zero. The unit vector directed along the vector a is defined by (see Fig. A.3b)
n_a = a / |a|
The projection of the vector b onto the vector a is the vector (b · n_a) n_a, Fig. A.3b. The length of the projection is |b| |cos ϕ|.
Vector (Cross) Product of Two Vectors. For the ordered pair of vectors a and b the vector c = a × b is defined in the two following steps [334]:
– the spin vector c∗ is defined in such a way that
  • the axis is orthogonal to the plane spanned on a and b, Fig. A.4a,
  • the circular arrow shows the direction of the “shortest” rotation from a to b, Fig. A.4b,
  • the length is |a||b| sin ϕ, where ϕ is the angle of the “shortest” rotation from a to b,
– from the resulting spin vector the directed line segment c is constructed according to one of the rules listed in Subsect. A.1.1.
The properties of the vector product are
a × b = −b × a,   a × (b + c) = a × b + a × c
The type of the vector c = a × b can be established for the known types of the vectors a and b, [334]. If a and b are polar vectors the result of the cross product
Figure A.4 Vector product of two vectors. a Plane spanned on two vectors, b spin vector, c axial vector in the right-screw oriented reference frame
will be an axial vector. An example is the moment of momentum for a mass point m defined by r × (mv), where r is the position of the mass point and v is the velocity of the mass point. The next example is the formula for the distribution of velocities in a rigid body v = ω × r. Here the cross product of the axial vector ω (angular velocity) with the polar vector r (position vector) results in the polar vector v.
The mixed product of three vectors a, b and c is defined by (a × b) · c. The result is a scalar. For the mixed product the following identities are valid
a · (b × c) = b · (c × a) = c · (a × b)   (A.1.1)
If the cross product is applied twice, the first operation must be set in parentheses, e.g., a × (b × c). The result of this operation is a vector. The following relation can be applied
a × (b × c) = b(a · c) − c(a · b)   (A.1.2)
By use of (A.1.1) and (A.1.2) one can calculate
(a × b) · (c × d) = a · [b × (c × d)]
                  = a · [c(b · d) − d(b · c)]   (A.1.3)
                  = (a · c)(b · d) − (a · d)(b · c)

A.1.3 Bases
Any triple of linearly independent vectors e_1, e_2, e_3 is called a basis. A triple of vectors e_i is linearly independent if and only if e_1 · (e_2 × e_3) ≠ 0.
For a given basis e_i any vector a can be represented as follows
a = a^1 e_1 + a^2 e_2 + a^3 e_3 ≡ a^i e_i
The numbers a^i are called the coordinates of the vector a for the basis e_i. In order to compute the coordinates the dual (reciprocal) basis e^k is introduced in such a way that
e^k · e_i = δ^k_i = 1 for k = i,   0 for k ≠ i
δ^k_i is the Kronecker symbol. The coordinates a^i can be found by
e^i · a = a · e^i = a^m e_m · e^i = a^m δ^i_m = a^i
For the selected basis e_i the dual basis can be found from
e^1 = (e_2 × e_3) / [(e_1 × e_2) · e_3],   e^2 = (e_3 × e_1) / [(e_1 × e_2) · e_3],   e^3 = (e_1 × e_2) / [(e_1 × e_2) · e_3]   (A.1.4)
By use of the dual basis a vector a can be represented as follows
a = a_1 e^1 + a_2 e^2 + a_3 e^3 ≡ a_i e^i,   a_m = a · e_m,   a^m ≠ a_m
In the special case of orthonormal vectors e_i, i.e. |e_i| = 1 and e_i · e_k = 0 for i ≠ k, it follows from (A.1.4) that e^k = e_k and consequently a^k = a_k.

A.1.4 Operations with Second Rank Tensors
A second rank tensor is a finite sum of ordered vector pairs A = a ⊗ b + ... + c ⊗ d [334]. One ordered pair of vectors is called a dyad. The symbol ⊗ is called the dyadic (tensor) product of two vectors. A single dyad or a sum of two dyads are special cases of the second rank tensor. Any finite sum of more than three dyads can be reduced to a sum of three dyads. For example, let
A = ∑_{i=1}^n a_(i) ⊗ b_(i)
be a second rank tensor. Introducing a basis e_k the vectors a_(i) can be represented by a_(i) = a^k_(i) e_k, where a^k_(i) are the coordinates of the vectors a_(i). Now we may write
A = ∑_{i=1}^n a^k_(i) e_k ⊗ b_(i) = e_k ⊗ ∑_{i=1}^n a^k_(i) b_(i) = e_k ⊗ d^k,   d^k ≡ ∑_{i=1}^n a^k_(i) b_(i)
Addition. The sum of two tensors is defined as the sum of the corresponding dyads. The sum has the properties of associativity and commutativity. In addition, for a dyad the following operations are introduced
a ⊗ (b + c) = a ⊗ b + a ⊗ c,   (a + b) ⊗ c = a ⊗ c + b ⊗ c
Multiplication by a Scalar. This operation is introduced first for one dyad. For any scalar α and any dyad a ⊗ b
α(a ⊗ b) = (αa) ⊗ b = a ⊗ (αb),
(α + β) a ⊗ b = α a ⊗ b + β a ⊗ b   (A.1.5)
By setting α = 0 in the first equation of (A.1.5) the zero dyad can be defined, i.e. 0(a ⊗ b) = 0 ⊗ b = a ⊗ 0. The above operations can be generalized for any finite sum of dyads, i.e. for second rank tensors.
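The reduction of a sum of n dyads to three dyads can be illustrated numerically; a minimal numpy sketch, with the Cartesian basis playing the role of e_k and all names chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# a second rank tensor as a sum of n = 5 dyads a_(i) ⊗ b_(i)
n = 5
a = rng.standard_normal((n, 3))
b = rng.standard_normal((n, 3))
A = sum(np.outer(a[i], b[i]) for i in range(n))

# reduction to three dyads e_k ⊗ d^k with d^k = sum_i a^k_(i) b_(i)
e = np.eye(3)                                       # orthonormal basis e_k
d = np.array([sum(a[i, k] * b[i] for i in range(n)) for k in range(3)])
A3 = sum(np.outer(e[k], d[k]) for k in range(3))

assert np.allclose(A, A3)   # the same tensor, now from only three dyads
```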
Inner Dot Product. For any two second rank tensors A and B the inner dot product is specified by A · B. The rule and the result of this operation can be explained in the special case of two dyads, i.e. by setting A = a ⊗ b and B = c ⊗ d
A · B = (a ⊗ b) · (c ⊗ d) = (b · c) a ⊗ d = α a ⊗ d,   α ≡ b · c
The result of this operation is a second rank tensor. Note that A · B ≠ B · A. This can again be verified for two dyads. The operation can be generalized for two second rank tensors as follows
A · B = ∑_{i=1}^3 a_(i) ⊗ b_(i) · ∑_{k=1}^3 c_(k) ⊗ d_(k) = ∑_{i=1}^3 ∑_{k=1}^3 (b_(i) · c_(k)) a_(i) ⊗ d_(k)
Transpose of a Second Rank Tensor. The transpose of a second rank tensor A is constructed by the following rule
A^T = (∑_{i=1}^3 a_(i) ⊗ b_(i))^T = ∑_{i=1}^3 b_(i) ⊗ a_(i)
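These rules can be checked with matrices representing the tensors in a Cartesian basis, where the inner dot product becomes the matrix product and the dyad a ⊗ b becomes the outer product (a numpy sketch; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c, d = rng.standard_normal((4, 3))

# inner dot product of two dyads: (a ⊗ b) · (c ⊗ d) = (b · c) a ⊗ d
AB = np.outer(a, b) @ np.outer(c, d)
assert np.allclose(AB, (b @ c) * np.outer(a, d))

# the inner dot product is not commutative: A · B ≠ B · A in general
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
assert not np.allclose(A @ B, B @ A)

# transpose of a dyad: (a ⊗ b)^T = b ⊗ a
assert np.allclose(np.outer(a, b).T, np.outer(b, a))
```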
Double Inner Dot Product. For any two second rank tensors A and B the double inner dot product is specified by A ·· B. The result of this operation is a scalar. This operation can be explained for two dyads as follows
A ·· B = (a ⊗ b) ·· (c ⊗ d) = (b · c)(a · d)
By analogy to the inner dot product one can generalize this operation for two second rank tensors. It can be verified that A ·· B = B ·· A for second rank tensors A and B. For a second rank tensor A and for a dyad a ⊗ b
A ·· a ⊗ b = b · A · a   (A.1.6)
A scalar product of two second rank tensors A and B is defined by
α = A ·· B^T
One can verify that
A ·· B^T = B^T ·· A = B ·· A^T
Dot Products of a Second Rank Tensor and a Vector. The right dot product of a second rank tensor A and a vector c is defined by
A · c = (∑_{i=1}^3 a_(i) ⊗ b_(i)) · c = ∑_{i=1}^3 (b_(i) · c) a_(i)
For a single dyad this operation is
(a ⊗ b) · c = a(b · c)
The left dot product is defined by
c · A = c · (∑_{i=1}^3 a_(i) ⊗ b_(i)) = ∑_{i=1}^3 (c · a_(i)) b_(i)
The results of these operations are vectors. One can verify that
A · c ≠ c · A,   A · c = c · A^T
Cross Products of a Second Rank Tensor and a Vector. The right cross product of a second rank tensor A and a vector c is defined by
A × c = (∑_{i=1}^3 a_(i) ⊗ b_(i)) × c = ∑_{i=1}^3 a_(i) ⊗ (b_(i) × c)
The left cross product is defined by
c × A = c × (∑_{i=1}^3 a_(i) ⊗ b_(i)) = ∑_{i=1}^3 (c × a_(i)) ⊗ b_(i)
The results of these operations are second rank tensors. It can be shown that
A × c = −[c × A^T]^T
Trace. The trace of a second rank tensor is defined by
tr A = tr(∑_{i=1}^3 a_(i) ⊗ b_(i)) = ∑_{i=1}^3 a_(i) · b_(i)
By taking the trace of a second rank tensor the dyadic product is replaced by the dot product. It can be shown that
tr A = tr A^T,   tr(A · B) = tr(B · A) = tr(A^T · B^T) = A ·· B
Symmetric Tensors. A second rank tensor is said to be symmetric if it satisfies the following equality
A = A^T
An alternative definition of a symmetric tensor can be given as follows. A second rank tensor is said to be symmetric if for any vector c ≠ 0 the following equality is valid
c · A = A · c
An important example of a symmetric tensor is the unit or identity tensor I, which is defined in such a way that for any vector c
c · I = I · c = c
The representations of the identity tensor are
I = e_k ⊗ e^k = e^k ⊗ e_k
for any basis e_k and e^k, e_k · e^m = δ^m_k. For three orthonormal vectors m, n and p
I = n ⊗ n + m ⊗ m + p ⊗ p
A symmetric second rank tensor P satisfying the condition P · P = P is called a projector. Examples of projectors are
m ⊗ m,   n ⊗ n + p ⊗ p = I − m ⊗ m,
where m, n and p are orthonormal vectors. The result of the dot product of the tensor m ⊗ m with any vector a is the projection of the vector a onto the line spanned on the vector m, i.e. m ⊗ m · a = (a · m)m. The result of (n ⊗ n + p ⊗ p) · a is the projection of the vector a onto the plane spanned on the vectors n and p.
Skew-symmetric Tensors. A second rank tensor is said to be skew-symmetric if it satisfies the following equality
A = −A^T
or if for any vector c
c · A = −A · c
Any skew-symmetric tensor A can be represented by
A = a × I = I × a
The vector a is called the associated vector. Any second rank tensor can be uniquely decomposed into the symmetric and skew-symmetric parts
A = 1/2 (A + A^T) + 1/2 (A − A^T) = A_1 + A_2,
A_1 = 1/2 (A + A^T),   A_1 = A_1^T,
A_2 = 1/2 (A − A^T),   A_2 = −A_2^T
Vector Invariant. The vector invariant or “Gibbsian Cross” of a second rank tensor A is defined by
A_× = (∑_{i=1}^3 a_(i) ⊗ b_(i))_× = ∑_{i=1}^3 a_(i) × b_(i)
The result of this operation is a vector. The vector invariant of a symmetric tensor is the zero vector. The following identities can be verified
(a × I)_× = −2a,   a × I × b = b ⊗ a − (a · b)I
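With the tensor a × I represented by the usual skew-symmetric matrix of the cross-product operator, the identities above can be verified numerically (a numpy sketch; the helpers `cross_matrix` and `vector_invariant` are introduced here for illustration and are not part of the text):

```python
import numpy as np

def cross_matrix(a):
    # matrix of the skew tensor a × I, i.e. cross_matrix(a) @ b = a × b
    return np.array([[0., -a[2], a[1]],
                     [a[2], 0., -a[0]],
                     [-a[1], a[0], 0.]])

def vector_invariant(A):
    # Gibbsian cross A_x: for A = sum a_i ⊗ b_i it returns sum a_i × b_i
    return np.array([A[1, 2] - A[2, 1],
                     A[2, 0] - A[0, 2],
                     A[0, 1] - A[1, 0]])

rng = np.random.default_rng(2)
a, b = rng.standard_normal((2, 3))

# the vector invariant of a dyad is the cross product: (a ⊗ b)_x = a × b
assert np.allclose(vector_invariant(np.outer(a, b)), np.cross(a, b))

# (a × I)_x = -2a
assert np.allclose(vector_invariant(cross_matrix(a)), -2 * a)

# a × I × b = b ⊗ a - (a · b) I
assert np.allclose(cross_matrix(a) @ cross_matrix(b),
                   np.outer(b, a) - (a @ b) * np.eye(3))
```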
Linear Transformations of Vectors. A vector-valued function of a vector argument f(a) is called linear if f(α_1 a_1 + α_2 a_2) = α_1 f(a_1) + α_2 f(a_2) for any two vectors a_1 and a_2 and any two scalars α_1 and α_2. It can be shown that any linear vector-valued function can be represented by f(a) = A · a, where A is a second rank tensor. In many textbooks, e.g. [32, 293], a second rank tensor A is defined to be a linear transformation of the vector space into itself.
Determinant and Inverse of a Second Rank Tensor. Let a, b and c be arbitrary linearly independent vectors. The determinant of a second rank tensor A is defined by
det A = (A · a) · [(A · b) × (A · c)] / [a · (b × c)]
The following identities can be verified
det(A^T) = det(A),   det(A · B) = det(A) det(B)
The inverse of a second rank tensor, A^{-1}, is introduced as the solution of the following equation
A^{-1} · A = A · A^{-1} = I
A is invertible if and only if det A ≠ 0. A tensor A with det A = 0 is called singular. Examples of singular tensors are projectors.
Cayley-Hamilton Theorem. Any second rank tensor satisfies the following equation
A^3 − J_1(A)A^2 + J_2(A)A − J_3(A)I = 0,   (A.1.7)
where A^2 = A · A, A^3 = A · A · A and
J_1(A) = tr A,   J_2(A) = 1/2 [(tr A)^2 − tr A^2],
J_3(A) = det A = 1/6 (tr A)^3 − 1/2 tr A tr A^2 + 1/3 tr A^3   (A.1.8)
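The principal invariants (A.1.8) and the Cayley-Hamilton identity (A.1.7) are easy to check numerically for a random tensor (a numpy sketch; the function name is illustrative):

```python
import numpy as np

def principal_invariants(A):
    # J1, J2, J3 according to (A.1.8)
    J1 = np.trace(A)
    J2 = 0.5 * (np.trace(A)**2 - np.trace(A @ A))
    J3 = (np.trace(A)**3 / 6.0
          - 0.5 * np.trace(A) * np.trace(A @ A)
          + np.trace(A @ A @ A) / 3.0)
    return J1, J2, J3

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))
J1, J2, J3 = principal_invariants(A)

# J3 coincides with the determinant
assert np.isclose(J3, np.linalg.det(A))

# Cayley-Hamilton: A^3 - J1 A^2 + J2 A - J3 I = 0
R = A @ A @ A - J1 * (A @ A) + J2 * A - J3 * np.eye(3)
assert np.allclose(R, np.zeros((3, 3)))
```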
The scalar-valued functions J_i(A) are called the principal invariants of the tensor A.
Coordinates of Second Rank Tensors. Let e_i be a basis and e^k the dual basis. Any two vectors a and b can be represented as follows
a = a^i e_i = a_j e^j,   b = b^l e_l = b_m e^m
A dyad a ⊗ b has the following representations
a ⊗ b = a^i b^j e_i ⊗ e_j = a^i b_j e_i ⊗ e^j = a_i b_j e^i ⊗ e^j = a_i b^j e^i ⊗ e_j
For the representation of a second rank tensor A one of the following four bases can be used
e_i ⊗ e_j,   e^i ⊗ e^j,   e^i ⊗ e_j,   e_i ⊗ e^j
With these bases one can write
A = A^{ij} e_i ⊗ e_j = A_{ij} e^i ⊗ e^j = A^i_{∗j} e_i ⊗ e^j = A_i^{∗j} e^i ⊗ e_j
For a selected basis the coordinates of a second rank tensor can be computed as follows
A^{ij} = e^i · A · e^j,   A_{ij} = e_i · A · e_j,
A^i_{∗j} = e^i · A · e_j,   A_i^{∗j} = e_i · A · e^j
Principal Values and Directions of Symmetric Second Rank Tensors. Consider the dot product of a second rank tensor A and a unit vector n. The resulting vector a = A · n differs in general from n both by the length and the direction. However, one can find those unit vectors n for which A · n is collinear with n, i.e. only the length of n is changed. Such vectors can be found from the equation
A · n = λn   or   (A − λI) · n = 0   (A.1.9)
The unit vector n is called a principal vector and the scalar λ a principal value of the tensor A. Let A be a symmetric tensor. In this case the principal values are real numbers and there exist at least three mutually orthogonal principal vectors. The principal values can be found as the roots of the characteristic polynomial
det(A − λI) = −λ^3 + J_1(A)λ^2 − J_2(A)λ + J_3(A) = 0
The principal values are denoted by λ_I, λ_II, λ_III. For known principal values and principal directions the second rank tensor can be represented as follows (spectral representation)
A = λ_I n_I ⊗ n_I + λ_II n_II ⊗ n_II + λ_III n_III ⊗ n_III
Orthogonal Tensors. A second rank tensor Q is said to be orthogonal if it satisfies the equation Q^T · Q = I. If Q operates on a vector its length remains unchanged, i.e. let b = Q · a, then
|b|^2 = b · b = a · Q^T · Q · a = a · a = |a|^2
Furthermore, an orthogonal tensor does not change the scalar product of two arbitrary vectors. For two vectors a and b as well as a′ = Q · a and b′ = Q · b one can calculate
a′ · b′ = a · Q^T · Q · b = a · b
From the definition of the orthogonal tensor it follows that
Q^T = Q^{-1},   Q^T · Q = Q · Q^T = I,
det(Q · Q^T) = (det Q)^2 = det I = 1   ⇒   det Q = ±1
Orthogonal tensors with det Q = 1 are called proper orthogonal or rotation tensors. The rotation tensors are widely used in rigid body dynamics, e.g. [333], and in the theories of rods, plates and shells, e.g. [25, 32]. Any orthogonal tensor is either a rotation tensor or the composition of a rotation (proper orthogonal tensor) and the tensor −I. Let P be a rotation tensor, det P = 1; then an orthogonal tensor Q with det Q = −1 can be composed by
Q = (−I) · P = P · (−I),   det Q = det(−I) det P = −1
For any two orthogonal tensors Q_1 and Q_2 the composition Q_3 = Q_1 · Q_2 is an orthogonal tensor, too. This property is used in the theory of symmetry and symmetry groups, e.g. [232, 331]. Two important examples of orthogonal tensors are
• the rotation tensor about a fixed axis
Q(ψm) = m ⊗ m + cos ψ(I − m ⊗ m) + sin ψ m × I,   −π < ψ < π,   det Q = 1,
where the unit vector m represents the axis and ψ is the angle of rotation,
• the reflection tensor
Q = I − 2n ⊗ n,   det Q = −1,
where the unit vector n represents a normal to the mirror plane.
One can prove the following identities [334]
(Q · a) × (Q · b) = (det Q) Q · (a × b),
Q · (a × Q^T) = Q · (a × I) · Q^T = (det Q) (Q · a) × I   (A.1.10)
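The rotation and reflection tensors, together with the first identity of (A.1.10), can be checked numerically (a numpy sketch; the helpers follow the formulas above and their names are illustrative):

```python
import numpy as np

def cross_matrix(a):
    # matrix of a × I, so that cross_matrix(a) @ b = a × b
    return np.array([[0., -a[2], a[1]],
                     [a[2], 0., -a[0]],
                     [-a[1], a[0], 0.]])

def rotation(psi, m):
    # Q(psi m) = m ⊗ m + cos psi (I - m ⊗ m) + sin psi m × I
    m = m / np.linalg.norm(m)
    M = np.outer(m, m)
    return M + np.cos(psi) * (np.eye(3) - M) + np.sin(psi) * cross_matrix(m)

rng = np.random.default_rng(4)
m = rng.standard_normal(3)
Q = rotation(0.7, m)
assert np.allclose(Q.T @ Q, np.eye(3))        # orthogonality
assert np.isclose(np.linalg.det(Q), 1.0)      # proper orthogonal

n = rng.standard_normal(3)
n /= np.linalg.norm(n)
P = np.eye(3) - 2 * np.outer(n, n)            # reflection tensor
assert np.isclose(np.linalg.det(P), -1.0)

# (Q · a) × (Q · b) = det(Q) Q · (a × b) for both tensors
a, b = rng.standard_normal((2, 3))
for T in (Q, P):
    lhs = np.cross(T @ a, T @ b)
    rhs = np.linalg.det(T) * (T @ np.cross(a, b))
    assert np.allclose(lhs, rhs)
```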
A.2 Elements of Tensor Analysis
A.2.1 Coordinate Systems
The vector r characterizing the position of a point P can be represented by use of the Cartesian coordinates x^i as follows, Fig. A.5,
r(x^1, x^2, x^3) = x^1 e_1 + x^2 e_2 + x^3 e_3 = x^i e_i
Instead of the coordinates x^i one can introduce any triple of curvilinear coordinates q^1, q^2, q^3 by means of the one-to-one transformations
x^k = x^k(q^1, q^2, q^3)   ⇔   q^k = q^k(x^1, x^2, x^3)
It is assumed that the above transformations are continuous and continuously differentiable as many times as necessary, and that the Jacobians
det(∂x^k/∂q^i) ≠ 0,   det(∂q^i/∂x^k) ≠ 0
Figure A.5 Cartesian and curvilinear coordinates
must be valid. With these assumptions the position vector can be considered as a function of the curvilinear coordinates q^i, i.e. r = r(q^1, q^2, q^3). Surfaces q^1 = const, q^2 = const, and q^3 = const, Fig. A.5, are called coordinate surfaces. For given fixed values q^2 = q^2_∗ and q^3 = q^3_∗ a curve can be obtained along which only q^1 varies. This curve is called the q^1-coordinate line, Fig. A.5. Analogously, one can obtain the q^2- and q^3-coordinate lines. The partial derivatives of the position vector with respect to the selected coordinates
r_1 = ∂r/∂q^1,   r_2 = ∂r/∂q^2,   r_3 = ∂r/∂q^3,   r_1 · (r_2 × r_3) ≠ 0
define the tangential vectors to the coordinate lines in a point P, Fig. A.5. The vectors r_i are used as the local basis in the point P. By use of (A.1.4) the dual basis r^k can be introduced. The vector dr connecting the point P with a point P′ in the differential neighborhood of P is defined by
dr = ∂r/∂q^1 dq^1 + ∂r/∂q^2 dq^2 + ∂r/∂q^3 dq^3 = r_k dq^k
The square of the arc length of the line element in the differential neighborhood of P is calculated by
ds^2 = dr · dr = (r_i dq^i) · (r_k dq^k) = g_{ik} dq^i dq^k,
where g_{ik} ≡ r_i · r_k are the so-called covariant components of the metric tensor. With g_{ik} one can represent the basis vectors r_i by the dual basis vectors r^k as follows
r_i = (r_i · r_k) r^k = g_{ik} r^k
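These relations can be made concrete for cylindrical coordinates q^1 = r, q^2 = ϕ, q^3 = z (a numpy sketch evaluated at one sample point; the dual basis is computed from (A.1.4) and all variable names are illustrative):

```python
import numpy as np

# cylindrical coordinates q1 = r, q2 = phi, q3 = z at a sample point
q1, q2, q3 = 2.0, 0.6, 1.5

# tangential (local) basis vectors r_i = ∂r/∂q^i
r1 = np.array([np.cos(q2), np.sin(q2), 0.0])
r2 = np.array([-q1 * np.sin(q2), q1 * np.cos(q2), 0.0])
r3 = np.array([0.0, 0.0, 1.0])

# covariant metric components g_ik = r_i · r_k
g = np.array([[ri @ rk for rk in (r1, r2, r3)] for ri in (r1, r2, r3)])
assert np.allclose(g, np.diag([1.0, q1**2, 1.0]))

# dual basis from (A.1.4)
V = r1 @ np.cross(r2, r3)              # mixed product r1 · (r2 × r3)
d1 = np.cross(r2, r3) / V
d2 = np.cross(r3, r1) / V
d3 = np.cross(r1, r2) / V

# duality r^i · r_k = δ^i_k and the relation r_i = g_ik r^k
D = np.array([d1, d2, d3])
R = np.array([r1, r2, r3])
assert np.allclose(D @ R.T, np.eye(3))
assert np.allclose(R, g @ D)
```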
Similarly,
r^i = (r^i · r^k) r_k = g^{ik} r_k,   g^{ik} ≡ r^i · r^k,
where g^{ik} are termed the contravariant components of the metric tensor. For the selected bases r_i and r^k the second rank unit tensor has the following representations
I = r_i ⊗ r^i = r_i ⊗ g^{ik} r_k = g^{ik} r_i ⊗ r_k = g_{ik} r^i ⊗ r^k = r^i ⊗ r_i

A.2.2 The Hamilton (Nabla) Operator
A scalar field is a function which assigns a scalar to each spatial point P of the domain of definition. Let us consider a scalar field ϕ(r) = ϕ(q^1, q^2, q^3). The total differential of ϕ by moving from a point P to a point P′ in the differential neighborhood is
dϕ = ∂ϕ/∂q^1 dq^1 + ∂ϕ/∂q^2 dq^2 + ∂ϕ/∂q^3 dq^3 = ∂ϕ/∂q^k dq^k
Taking into account that dq^k = dr · r^k,
dϕ = dr · r^k ∂ϕ/∂q^k = dr · ∇ϕ
The vector ∇ϕ is called the gradient of the scalar field ϕ, and the invariant operator ∇ (the Hamilton or nabla operator) is defined by
∇ = r^k ∂/∂q^k
For a vector field a(r) one may write
da = (dr · r^k) ∂a/∂q^k = dr · (r^k ⊗ ∂a/∂q^k) = dr · ∇ ⊗ a = (∇ ⊗ a)^T · dr,
∇ ⊗ a = r^k ⊗ ∂a/∂q^k
The gradient of a vector field is a second rank tensor. The operation ∇ can be applied to tensors of any rank. For vectors and tensors the following additional operations are defined
div a ≡ ∇ · a = r^k · ∂a/∂q^k
rot a ≡ ∇ × a = r^k × ∂a/∂q^k
The following identities can be verified
∇ ⊗ r = r^k ⊗ ∂r/∂q^k = r^k ⊗ r_k = I,   ∇ · r = 3
For a scalar α, a vector a and a second rank tensor A the following identities are valid
∇ ⊗ (αa) = r^k ⊗ ∂(αa)/∂q^k = (r^k ∂α/∂q^k) ⊗ a + α r^k ⊗ ∂a/∂q^k = (∇α) ⊗ a + α∇ ⊗ a,   (A.2.1)
∇ · (A · a) = r^k · ∂(A · a)/∂q^k = r^k · ∂A/∂q^k · a + r^k · A · ∂a/∂q^k
            = (∇ · A) · a + A ·· (∂a/∂q^k ⊗ r^k)
            = (∇ · A) · a + A ·· (∇ ⊗ a)^T   (A.2.2)
Here the identity (A.1.6) is used. For a second rank tensor A and a position vector r one can prove the following identity
∇ · (A × r) = r^k · ∂(A × r)/∂q^k = r^k · ∂A/∂q^k × r + r^k · A × ∂r/∂q^k
            = (∇ · A) × r + r^k · A × r_k = (∇ · A) × r − A_×   (A.2.3)
Here we used the definition of the vector invariant as follows
A_× = (r_k ⊗ r^k · A)_× = r_k × (r^k · A) = −r^k · A × r_k

A.2.3 Integral Theorems
Let ϕ(r), a(r) and A(r) be scalar, vector and second rank tensor fields. Let V be the volume of a bounded domain with a regular surface A(V) and n be the outer unit normal to the surface at r. The integral theorems can be summarized as follows
– Gradient Theorems
∫_V ∇ϕ dV = ∫_{A(V)} nϕ dA,
∫_V ∇ ⊗ a dV = ∫_{A(V)} n ⊗ a dA,
∫_V ∇ ⊗ A dV = ∫_{A(V)} n ⊗ A dA
– Divergence Theorems
∫_V ∇ · a dV = ∫_{A(V)} n · a dA,
∫_V ∇ · A dV = ∫_{A(V)} n · A dA
– Curl Theorems
∫_V ∇ × a dV = ∫_{A(V)} n × a dA,
∫_V ∇ × A dV = ∫_{A(V)} n × A dA
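For a linear vector field a(r) = A · r with constant A on the unit cube, the divergence theorem can be checked exactly: ∇ · a = tr A, and since n · a is linear in r its integral over a flat face equals the value at the face centroid times the face area (a numpy sketch with illustrative data):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((3, 3))          # constant tensor, a(r) = A · r

# volume integral of div a = tr A over the unit cube [0,1]^3 (volume 1)
volume_integral = np.trace(A)

# surface integral of n · a over the six faces; n · A · r is linear in r,
# so each face integrates to (n · A · centroid) * area, with area = 1
surface_integral = 0.0
for axis in range(3):
    for side in (0.0, 1.0):
        n = np.zeros(3)
        n[axis] = 1.0 if side == 1.0 else -1.0
        centroid = np.full(3, 0.5)
        centroid[axis] = side
        surface_integral += n @ A @ centroid

assert np.isclose(volume_integral, surface_integral)
```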
A.2.4 Scalar-Valued Functions of Vectors and Second Rank Tensors
Let ψ be a scalar-valued function of a vector a and a second rank tensor A, i.e. ψ = ψ(a, A). Introducing a basis e_i the function ψ can be represented as follows
ψ(a, A) = ψ(a^i e_i, A^{ij} e_i ⊗ e_j) = ψ(a^i, A^{ij})
The partial derivatives of ψ with respect to a and A are defined according to the following rule
dψ = ∂ψ/∂a^i da^i + ∂ψ/∂A^{ij} dA^{ij}
   = da · e^i ∂ψ/∂a^i + dA ·· e^j ⊗ e^i ∂ψ/∂A^{ij}   (A.2.4)
In the coordinate-free form the above rule can be rewritten as follows
dψ = da · ∂ψ/∂a + dA ·· (∂ψ/∂A)^T = da · ψ,a + dA ·· (ψ,A)^T   (A.2.5)
with
ψ,a ≡ ∂ψ/∂a = ∂ψ/∂a^i e^i,   ψ,A ≡ ∂ψ/∂A = ∂ψ/∂A^{ij} e^i ⊗ e^j
One can verify that ψ,a and ψ,A are independent of the choice of the basis. One may prove the following formulae for the derivatives of the principal invariants of a second rank tensor A
J_1(A),A = I,   J_1(A^2),A = 2A^T,   J_1(A^3),A = 3A^{2T},
J_2(A),A = J_1(A)I − A^T,   (A.2.6)
J_3(A),A = A^{2T} − J_1(A)A^T + J_2(A)I = J_3(A)(A^T)^{-1}
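The derivative rules (A.2.6) can be verified against central finite differences in a Cartesian basis, where ψ,A has the components ∂ψ/∂A_ij (a numpy sketch; the helper name and the step size h are arbitrary choices):

```python
import numpy as np

def num_derivative(psi, A, h=1e-6):
    # central finite differences of a scalar function psi with respect to A_ij
    D = np.zeros_like(A)
    for i in range(3):
        for j in range(3):
            E = np.zeros_like(A)
            E[i, j] = h
            D[i, j] = (psi(A + E) - psi(A - E)) / (2 * h)
    return D

rng = np.random.default_rng(6)
A = rng.standard_normal((3, 3))
J1 = np.trace(A)
I = np.eye(3)

# J1,A = I
assert np.allclose(num_derivative(np.trace, A), I, atol=1e-4)

# J2,A = J1 I - A^T
J2 = lambda X: 0.5 * (np.trace(X)**2 - np.trace(X @ X))
assert np.allclose(num_derivative(J2, A), J1 * I - A.T, atol=1e-4)

# J3,A = J3 (A^T)^{-1}
J3 = np.linalg.det
assert np.allclose(num_derivative(J3, A), J3(A) * np.linalg.inv(A.T), atol=1e-4)
```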
A.3 Orthogonal Transformations and Orthogonal Invariants
An application of the theory of tensor functions is to find a basic set of scalar invariants for a given group of symmetry transformations, such that each invariant relative to the same group is expressible as a single-valued function of the basic set. The basic set of invariants is called a functional basis. To obtain a compact representation for invariants, it is required that the functional basis is irreducible in the sense that removing any one invariant from the basis will imply that a complete representation for all the invariants is no longer possible. Such a problem arises in the formulation of constitutive equations for a given group of material symmetries. For example, the strain energy density of an elastic non-polar material is a scalar-valued function of the second rank symmetric strain tensor. In the theory of the Cosserat continuum two strain measures are introduced, where the first strain measure is a polar tensor while the second one is an axial tensor, e.g. [108]. The strain energy density of a thin elastic shell is a function of two second rank tensors and one vector, e.g. [25]. In all cases the problem is to find a minimum set of functionally independent invariants for the considered tensorial arguments. For the theory of tensor functions we refer to [71]. Representations of tensor functions are reviewed in [280, 330]. An orthogonal transformation of a scalar α, a vector a and a second rank tensor A is defined by [25, 332]
α′ ≡ (det Q)^ζ α,   a′ ≡ (det Q)^ζ Q · a,   A′ ≡ (det Q)^ζ Q · A · Q^T,   (A.3.1)
where Q is an orthogonal tensor, i.e. Q · Q^T = I, det Q = ±1, I is the second rank unit tensor, ζ = 0 for absolute (polar) scalars, vectors and tensors and ζ = 1 for axial ones. An example of an axial scalar is the mixed product of three polar vectors, i.e. α = a · (b × c). A typical example of an axial vector is the cross product of two polar vectors, i.e. c = a × b. An example of a second rank axial tensor is the skew-symmetric tensor W = a × I, where a is a polar vector. Consider a group of orthogonal transformations S (e.g., the material symmetry transformations) characterized by a set of orthogonal tensors Q. A scalar-valued function of a second rank tensor f = f(A) is called an orthogonal invariant under the group S if
∀ Q ∈ S :   f(A′) = (det Q)^η f(A),   (A.3.2)
where η = 0 if values of f are absolute scalars and η = 1 if values of f are axial scalars.
Any second rank tensor B can be decomposed into a symmetric and a skew-symmetric part, i.e. B = A + a × I, where A is the symmetric tensor and a is the associated vector. Therefore f(B) = f(A, a). If B is a polar (axial) tensor, then a is an axial (polar) vector. For a set of second rank tensors and vectors the definition of an orthogonal invariant (A.3.2) can be generalized as follows
∀ Q ∈ S :   f(A′_1, A′_2, ..., A′_n, a′_1, a′_2, ..., a′_k)
          = (det Q)^η f(A_1, A_2, ..., A_n, a_1, a_2, ..., a_k),   A_i = A_i^T   (A.3.3)
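Definition (A.3.1) can be illustrated numerically: under a reflection (det Q = −1) a polar vector transforms as Q · a, while the cross product of two polar vectors picks up the factor det Q, as an axial vector must (a numpy sketch with illustrative data):

```python
import numpy as np

rng = np.random.default_rng(7)
a, b, c = rng.standard_normal((3, 3))

n = rng.standard_normal(3)
n /= np.linalg.norm(n)
Q = np.eye(3) - 2 * np.outer(n, n)          # reflection tensor, det Q = -1
detQ = np.linalg.det(Q)
assert np.isclose(detQ, -1.0)

# c = a × b is axial: (Q·a) × (Q·b) = det(Q) Q · (a × b)
assert np.allclose(np.cross(Q @ a, Q @ b), detQ * (Q @ np.cross(a, b)))

# α = a · (b × c) is an axial scalar: it changes sign under the reflection
alpha = a @ np.cross(b, c)
alpha_t = (Q @ a) @ np.cross(Q @ b, Q @ c)
assert np.isclose(alpha_t, detQ * alpha)

# |a| is an absolute scalar: it is invariant
assert np.isclose(np.linalg.norm(Q @ a), np.linalg.norm(a))
```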
A.3.1 Invariants for the Full Orthogonal Group
In [335] orthogonal invariants for different sets of second rank tensors and vectors with respect to the full orthogonal group are presented. It is shown that orthogonal invariants are integrals of a generic partial differential equation (basic equations for invariants). Let us present the two following examples
– Orthogonal invariants of a symmetric second rank tensor A are
I_k = tr A^k,   k = 1, 2, 3
Instead of I_k it is possible to use the principal invariants J_k defined by (A.1.8).
– Orthogonal invariants of a symmetric second rank tensor A and a vector a are
I_k = tr A^k,   k = 1, 2, 3,   I_4 = a · a,   I_5 = a · A · a,
I_6 = a · A^2 · a,   I_7 = a · A^2 · (a × A · a)   (A.3.4)
In the above set of invariants only 6 are functionally independent. The relation between the invariants (the so-called syzygy, [71]) can be formulated as follows
        | I_4   I_5           I_6         |
I_7^2 = | I_5   I_6           a · A^3 · a |   (A.3.5)
        | I_6   a · A^3 · a   a · A^4 · a |
where a · A^3 · a and a · A^4 · a can be expressed by I_l, l = 1, ..., 6, applying the Cayley-Hamilton theorem (A.1.7). The set of invariants for a symmetric second rank tensor A and a vector a can be applied for a non-symmetric second rank tensor B since it can be represented by B = A + a × I, A = A^T.

A.3.2 Invariants for the Transverse Isotropy Group
Transverse isotropy is an important type of symmetry transformation due to a variety of applications. Transverse isotropy is usually assumed in constitutive modeling of fiber reinforced materials, e.g. [21], fiber suspensions, e.g. [22], directionally solidified alloys, e.g. [213], deep drawing sheets, e.g. [50, 57] and piezoelectric materials, e.g. [285]. The invariants and generating sets for tensor-valued functions with respect to different cases of transverse isotropy are discussed in [79, 328] (see also relevant references therein). In what follows we analyze the problem of a functional basis within the theory of linear first order partial differential equations rather than the algebra of polynomials. We develop the idea proposed in [335] for the invariants with respect to the full orthogonal group to the case of transverse isotropy. The invariants will be found as integrals of the generic partial differential equations. Although a functional basis formed by these invariants does not include any redundant element, functional relations between them may exist. It may therefore be useful to find out simple forms of such relations. We show that the proposed approach may supply results in a direct, natural manner.
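Before specializing to transverse isotropy, the syzygy (A.3.5) relating the invariants (A.3.4) of the full orthogonal group can be checked numerically, since I_7^2 is the Gram determinant of the vectors a, A · a, A^2 · a (a numpy sketch with a random symmetric tensor and vector):

```python
import numpy as np

rng = np.random.default_rng(8)
B = rng.standard_normal((3, 3))
A = 0.5 * (B + B.T)                      # symmetric second rank tensor
a = rng.standard_normal(3)

A2, A3, A4 = A @ A, A @ A @ A, A @ A @ A @ A
I4, I5, I6 = a @ a, a @ A @ a, a @ A2 @ a
I7 = a @ A2 @ np.cross(a, A @ a)

# syzygy (A.3.5): I7^2 equals the Gram determinant of a, A·a, A^2·a
G = np.array([[I4, I5, I6],
              [I5, I6, a @ A3 @ a],
              [I6, a @ A3 @ a, a @ A4 @ a]])
assert np.isclose(I7**2, np.linalg.det(G))
```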
Invariants for a Single Second Rank Symmetric Tensor. Consider the proper orthogonal tensor which represents a rotation about a fixed axis, i.e.
$$\mathbf{Q}(\varphi\mathbf{m}) = \mathbf{m}\otimes\mathbf{m} + \cos\varphi\,(\mathbf{I} - \mathbf{m}\otimes\mathbf{m}) + \sin\varphi\,\mathbf{m}\times\mathbf{I},\qquad \det\mathbf{Q}(\varphi\mathbf{m}) = 1, \tag{A.3.6}$$
where $\mathbf{m}$ is assumed to be a constant unit vector (the axis of rotation) and $\varphi$ denotes the angle of rotation about $\mathbf{m}$. The symmetry transformation defined by this tensor corresponds to transverse isotropy, whereby five different cases are possible, e.g. [299, 331]. Let us find scalar-valued functions of a second rank symmetric tensor $\mathbf{A}$ satisfying the condition
T T f (A′(ϕ))= f (Q(ϕm) A Q (ϕm)) = f (A), A′(ϕ) Q(ϕm) A Q (ϕm) · · ≡ · · (A.3.7) Equation (A.3.7) must be valid for any angle of rotation ϕ. In (A.3.7) only the left- hand side depends on ϕ. Therefore its derivative with respect to ϕ can be set to zero, i.e. d f dAA ∂ f T = ′ = 0 (A.3.8) dϕ dϕ ·· ∂A ′ The derivative of A′ with respect to ϕ can be calculated by the following rules
$$\mathrm{d}\mathbf{A}'(\varphi) = \mathrm{d}\mathbf{Q}(\varphi\mathbf{m})\cdot\mathbf{A}\cdot\mathbf{Q}^T(\varphi\mathbf{m}) + \mathbf{Q}(\varphi\mathbf{m})\cdot\mathbf{A}\cdot\mathrm{d}\mathbf{Q}^T(\varphi\mathbf{m}),$$
$$\mathrm{d}\mathbf{Q}(\varphi\mathbf{m}) = \mathbf{m}\times\mathbf{Q}(\varphi\mathbf{m})\,\mathrm{d}\varphi \quad\Rightarrow\quad \mathrm{d}\mathbf{Q}^T(\varphi\mathbf{m}) = -\mathbf{Q}^T(\varphi\mathbf{m})\times\mathbf{m}\,\mathrm{d}\varphi \tag{A.3.9}$$
By inserting the above equations into (A.3.8) we obtain
$$(\mathbf{m}\times\mathbf{A} - \mathbf{A}\times\mathbf{m})\cdot\cdot\,\frac{\partial f}{\partial \mathbf{A}} = 0 \tag{A.3.10}$$
Equation (A.3.10) is classified in [92] as a linear homogeneous first order partial differential equation. The characteristic system of (A.3.10) is
$$\frac{\mathrm{d}\mathbf{A}}{\mathrm{d}s} = \mathbf{m}\times\mathbf{A} - \mathbf{A}\times\mathbf{m} \tag{A.3.11}$$
Any system of $n$ linear ordinary differential equations has not more than $n - 1$ functionally independent integrals [92]. By introducing a basis $\mathbf{e}_i$ the tensor $\mathbf{A}$ can be written in the form $\mathbf{A} = A^{ij}\mathbf{e}_i\otimes\mathbf{e}_j$, and (A.3.11) is a system of six ordinary differential equations with respect to the coordinates $A^{ij}$. The five integrals of (A.3.11) may be written down as follows
$$g_i(\mathbf{A}) = c_i,\qquad i = 1, 2, \dots, 5,$$
where $c_i$ are integration constants. Any function of the five integrals $g_i$ is a solution of the partial differential equation (A.3.10). Therefore the five integrals $g_i$ represent the invariants of the symmetric tensor $\mathbf{A}$ with respect to the symmetry transformation (A.3.6). The solutions of (A.3.11) are
$$\mathbf{A}^k(s) = \mathbf{Q}(s\mathbf{m})\cdot\mathbf{A}_0^k\cdot\mathbf{Q}^T(s\mathbf{m}),\qquad k = 1, 2, 3, \tag{A.3.12}$$
where $\mathbf{A}_0$ is the initial condition. In order to find the integrals, the variable $s$ must be eliminated from (A.3.12). Taking into account the following identities
$$\operatorname{tr}(\mathbf{Q}\cdot\mathbf{A}^k\cdot\mathbf{Q}^T) = \operatorname{tr}(\mathbf{Q}^T\cdot\mathbf{Q}\cdot\mathbf{A}^k) = \operatorname{tr}\mathbf{A}^k,\qquad \mathbf{m}\cdot\mathbf{Q}(s\mathbf{m}) = \mathbf{m},$$
$$(\mathbf{Q}\cdot\mathbf{a})\times(\mathbf{Q}\cdot\mathbf{b}) = (\det\mathbf{Q})\,\mathbf{Q}\cdot(\mathbf{a}\times\mathbf{b}) \tag{A.3.13}$$
and using the notation $\mathbf{Q}_m \equiv \mathbf{Q}(s\mathbf{m})$, the integrals can be found as follows
$$\operatorname{tr}(\mathbf{A}^k) = \operatorname{tr}(\mathbf{A}_0^k),\qquad k = 1, 2, 3,$$
$$\mathbf{m}\cdot\mathbf{A}^l\cdot\mathbf{m} = \mathbf{m}\cdot\mathbf{Q}_m\cdot\mathbf{A}_0^l\cdot\mathbf{Q}_m^T\cdot\mathbf{m} = \mathbf{m}\cdot\mathbf{A}_0^l\cdot\mathbf{m},\qquad l = 1, 2,$$
$$\mathbf{m}\cdot\mathbf{A}^2\cdot(\mathbf{m}\times\mathbf{A}\cdot\mathbf{m}) = \mathbf{m}\cdot\mathbf{A}_0^2\cdot\mathbf{Q}_m^T\cdot\big[(\mathbf{Q}_m\cdot\mathbf{m})\times(\mathbf{Q}_m\cdot\mathbf{A}_0\cdot\mathbf{m})\big] = \mathbf{m}\cdot\mathbf{A}_0^2\cdot(\mathbf{m}\times\mathbf{A}_0\cdot\mathbf{m}) \tag{A.3.14}$$
As a result we can formulate the six invariants of the tensor $\mathbf{A}$ with respect to the symmetry transformation (A.3.6) as follows
$$I_k = \operatorname{tr}(\mathbf{A}^k),\quad k = 1, 2, 3,\qquad I_4 = \mathbf{m}\cdot\mathbf{A}\cdot\mathbf{m},$$
$$I_5 = \mathbf{m}\cdot\mathbf{A}^2\cdot\mathbf{m},\qquad I_6 = \mathbf{m}\cdot\mathbf{A}^2\cdot(\mathbf{m}\times\mathbf{A}\cdot\mathbf{m}) \tag{A.3.15}$$
The invariants with respect to various symmetry transformations are discussed in [79]. For the case of transverse isotropy, six invariants are derived in [79] by the use of another approach. In this sense our result coincides with the result given in [79]. However, from our derivation it follows that only five of the invariants listed in (A.3.15) are functionally independent. Taking into account that $I_6$ is the mixed product of the vectors $\mathbf{m}$, $\mathbf{A}\cdot\mathbf{m}$ and $\mathbf{A}^2\cdot\mathbf{m}$, the relation between the invariants can be written down as follows
$$I_6^2 = \det\begin{bmatrix} 1 & I_4 & I_5 \\ I_4 & I_5 & \mathbf{m}\cdot\mathbf{A}^3\cdot\mathbf{m} \\ I_5 & \mathbf{m}\cdot\mathbf{A}^3\cdot\mathbf{m} & \mathbf{m}\cdot\mathbf{A}^4\cdot\mathbf{m} \end{bmatrix} \tag{A.3.16}$$
One can verify that $\mathbf{m}\cdot\mathbf{A}^3\cdot\mathbf{m}$ and $\mathbf{m}\cdot\mathbf{A}^4\cdot\mathbf{m}$ are transversely isotropic invariants, too. However, applying the Cayley-Hamilton theorem (A.1.7) they can be uniquely expressed by $I_1, I_2, \dots, I_5$ in the following way [54]
$$\mathbf{m}\cdot\mathbf{A}^3\cdot\mathbf{m} = J_1 I_5 + J_2 I_4 + J_3,$$
$$\mathbf{m}\cdot\mathbf{A}^4\cdot\mathbf{m} = (J_1^2 + J_2)I_5 + (J_1 J_2 + J_3)I_4 + J_1 J_3,$$
where $J_1$, $J_2$ and $J_3$ are the principal invariants of $\mathbf{A}$ defined by (A.1.8). Let us note that the invariant $I_6$ cannot be dropped. In order to verify this, it is enough to consider two different tensors
$$\mathbf{A}\qquad\text{and}\qquad \mathbf{B} = \mathbf{Q}_n\cdot\mathbf{A}\cdot\mathbf{Q}_n^T,$$
where
$$\mathbf{Q}_n \equiv \mathbf{Q}(\pi\mathbf{n}) = 2\mathbf{n}\otimes\mathbf{n} - \mathbf{I},\qquad \mathbf{n}\cdot\mathbf{n} = 1,\quad \mathbf{n}\cdot\mathbf{m} = 0,\quad \det\mathbf{Q}_n = 1$$
One can prove that the tensors $\mathbf{A}$ and $\mathbf{B}$ have the same invariants $I_1, I_2, \dots, I_5$. Taking into account that $\mathbf{m}\cdot\mathbf{Q}_n = -\mathbf{m}$ and applying the last identity in (A.3.13) we may write
$$I_6(\mathbf{B}) = \mathbf{m}\cdot\mathbf{B}^2\cdot(\mathbf{m}\times\mathbf{B}\cdot\mathbf{m}) = \mathbf{m}\cdot\mathbf{A}^2\cdot\mathbf{Q}_n^T\cdot\big[\mathbf{m}\times(\mathbf{Q}_n\cdot\mathbf{A}\cdot\mathbf{m})\big] = -\mathbf{m}\cdot\mathbf{A}^2\cdot(\mathbf{m}\times\mathbf{A}\cdot\mathbf{m}) = -I_6(\mathbf{A})$$
We observe that the only difference between the two considered tensors is the sign of $I_6$. Therefore, the triples of vectors $\mathbf{m}$, $\mathbf{A}\cdot\mathbf{m}$, $\mathbf{A}^2\cdot\mathbf{m}$ and $\mathbf{m}$, $\mathbf{B}\cdot\mathbf{m}$, $\mathbf{B}^2\cdot\mathbf{m}$ have different orientations and cannot be combined by a rotation. It should be noted that the functional relation (A.3.16) in no way implies that the invariant $I_6$ is "dependent" and hence "redundant", i.e. that it should be removed from the basis (A.3.15). In fact, the relation (A.3.16) determines the magnitude but not the sign of $I_6$.
To describe yielding and failure of oriented solids a dyad $\mathbf{M} = \mathbf{v}\otimes\mathbf{v}$ has been used in [53, 75], where the vector $\mathbf{v}$ specifies a privileged direction. A plastic potential is assumed to be an isotropic function of the symmetric Cauchy stress tensor and the tensor generator $\mathbf{M}$. Applying the representation of isotropic functions, an integrity basis including ten invariants was found. In the special case $\mathbf{v} = \mathbf{m}$ the number of invariants reduces to the five $I_1, I_2, \dots, I_5$ defined by (A.3.15). Further details of this approach and applications in continuum mechanics are given in [59, 71]. However, the problem statement of finding an integrity basis of a symmetric tensor $\mathbf{A}$ and a dyad $\mathbf{M}$, i.e. of finding scalar-valued functions $f(\mathbf{A}, \mathbf{M})$ satisfying the condition
$$f(\mathbf{Q}\cdot\mathbf{A}\cdot\mathbf{Q}^T,\ \mathbf{Q}\cdot\mathbf{M}\cdot\mathbf{Q}^T) = (\det\mathbf{Q})^{\eta} f(\mathbf{A}, \mathbf{M}),\qquad \forall\,\mathbf{Q},\quad \mathbf{Q}\cdot\mathbf{Q}^T = \mathbf{I},\quad \det\mathbf{Q} = \pm 1 \tag{A.3.17}$$
essentially differs from the problem statement (A.3.7). In order to show this we take into account that the symmetry group of a dyad $\mathbf{M}$, i.e. the set of orthogonal solutions of the equation $\mathbf{Q}\cdot\mathbf{M}\cdot\mathbf{Q}^T = \mathbf{M}$, includes the following elements
$$\mathbf{Q}_{1,2} = \pm\mathbf{I},$$
$$\mathbf{Q}_3 = \mathbf{Q}(\varphi\mathbf{m}),\qquad \mathbf{m} = \frac{\mathbf{v}}{|\mathbf{v}|}, \tag{A.3.18}$$
$$\mathbf{Q}_4 = \mathbf{Q}(\pi\mathbf{n}) = 2\mathbf{n}\otimes\mathbf{n} - \mathbf{I},\qquad \mathbf{n}\cdot\mathbf{n} = 1,\quad \mathbf{n}\cdot\mathbf{v} = 0,$$
where $\mathbf{Q}(\varphi\mathbf{m})$ is defined by (A.3.6). The solutions of the problem (A.3.17) are automatically the solutions of the following problem
$$f(\mathbf{Q}_i\cdot\mathbf{A}\cdot\mathbf{Q}_i^T,\ \mathbf{M}) = (\det\mathbf{Q}_i)^{\eta} f(\mathbf{A}, \mathbf{M}),\qquad i = 1, 2, 3, 4,$$
i.e. the problem of finding the invariants of $\mathbf{A}$ relative to the symmetry group (A.3.18). However, (A.3.18) includes many more symmetry elements compared to the problem statement (A.3.7). An alternative set of transversely isotropic invariants can be formulated by the use of the following decomposition
$$\mathbf{A} = \alpha\,\mathbf{m}\otimes\mathbf{m} + \beta\,(\mathbf{I} - \mathbf{m}\otimes\mathbf{m}) + \mathbf{A}_{pD} + \mathbf{t}\otimes\mathbf{m} + \mathbf{m}\otimes\mathbf{t}, \tag{A.3.19}$$
where $\alpha$, $\beta$, $\mathbf{A}_{pD}$ and $\mathbf{t}$ are projections of $\mathbf{A}$. With the projectors $\mathbf{P}_1 = \mathbf{m}\otimes\mathbf{m}$ and $\mathbf{P}_2 = \mathbf{I} - \mathbf{m}\otimes\mathbf{m}$ we may write
$$\alpha = \mathbf{m}\cdot\mathbf{A}\cdot\mathbf{m} = \operatorname{tr}(\mathbf{A}\cdot\mathbf{P}_1),$$
$$\beta = \frac{1}{2}(\operatorname{tr}\mathbf{A} - \mathbf{m}\cdot\mathbf{A}\cdot\mathbf{m}) = \frac{1}{2}\operatorname{tr}(\mathbf{A}\cdot\mathbf{P}_2), \tag{A.3.20}$$
$$\mathbf{A}_{pD} = \mathbf{P}_2\cdot\mathbf{A}\cdot\mathbf{P}_2 - \beta\,\mathbf{P}_2,$$
$$\mathbf{t} = \mathbf{m}\cdot\mathbf{A}\cdot\mathbf{P}_2$$
The decomposition (A.3.19) is the analogue of the following representation of a vector $\mathbf{a}$
$$\mathbf{a} = \mathbf{I}\cdot\mathbf{a} = \mathbf{m}\otimes\mathbf{m}\cdot\mathbf{a} + (\mathbf{I} - \mathbf{m}\otimes\mathbf{m})\cdot\mathbf{a} = \psi\,\mathbf{m} + \boldsymbol{\tau},\qquad \psi = \mathbf{a}\cdot\mathbf{m},\quad \boldsymbol{\tau} = \mathbf{P}_2\cdot\mathbf{a} \tag{A.3.21}$$
Decompositions of the type (A.3.19) are applied in [68, 79]. The projections introduced in (A.3.20) have the following properties
$$\operatorname{tr}(\mathbf{A}_{pD}) = 0,\qquad \mathbf{A}_{pD}\cdot\mathbf{m} = \mathbf{m}\cdot\mathbf{A}_{pD} = \mathbf{0},\qquad \mathbf{t}\cdot\mathbf{m} = 0 \tag{A.3.22}$$
With (A.3.19) and (A.3.22) the tensor equation (A.3.11) can be transformed into the following system of equations
$$\frac{\mathrm{d}\alpha}{\mathrm{d}s} = 0,\qquad \frac{\mathrm{d}\beta}{\mathrm{d}s} = 0,\qquad \frac{\mathrm{d}\mathbf{A}_{pD}}{\mathrm{d}s} = \mathbf{m}\times\mathbf{A}_{pD} - \mathbf{A}_{pD}\times\mathbf{m},\qquad \frac{\mathrm{d}\mathbf{t}}{\mathrm{d}s} = \mathbf{m}\times\mathbf{t} \tag{A.3.23}$$
From the first two equations we observe that $\alpha$ and $\beta$ are transversely isotropic invariants. The third equation can be transformed into one scalar and one vector equation as follows
$$\frac{\mathrm{d}\mathbf{A}_{pD}}{\mathrm{d}s}\cdot\cdot\,\mathbf{A}_{pD} = 0 \quad\Rightarrow\quad \frac{\mathrm{d}(\mathbf{A}_{pD}\cdot\cdot\,\mathbf{A}_{pD})}{\mathrm{d}s} = 0,\qquad \frac{\mathrm{d}\mathbf{b}}{\mathrm{d}s} = \mathbf{m}\times\mathbf{b}$$
with $\mathbf{b} \equiv \mathbf{A}_{pD}\cdot\mathbf{t}$. We observe that $\operatorname{tr}(\mathbf{A}_{pD}^2) = \mathbf{A}_{pD}\cdot\cdot\,\mathbf{A}_{pD}$ is a transversely isotropic invariant, too. Finally, we have to find the integrals of the following system
$$\frac{\mathrm{d}\mathbf{t}}{\mathrm{d}s} = \mathbf{m}\times\mathbf{t},\qquad \frac{\mathrm{d}\mathbf{b}}{\mathrm{d}s} = \mathbf{m}\times\mathbf{b} \tag{A.3.24}$$
The solutions of (A.3.24) are
$$\mathbf{t}(s) = \mathbf{Q}(s\mathbf{m})\cdot\mathbf{t}_0,\qquad \mathbf{b}(s) = \mathbf{Q}(s\mathbf{m})\cdot\mathbf{b}_0,$$
where $\mathbf{t}_0$ and $\mathbf{b}_0$ are the initial conditions. The vectors $\mathbf{t}$ and $\mathbf{b}$ belong to the plane of isotropy, i.e. $\mathbf{t}\cdot\mathbf{m} = 0$ and $\mathbf{b}\cdot\mathbf{m} = 0$. Therefore, one can verify the following integrals
$$\mathbf{t}\cdot\mathbf{t} = \mathbf{t}_0\cdot\mathbf{t}_0,\qquad \mathbf{b}\cdot\mathbf{b} = \mathbf{b}_0\cdot\mathbf{b}_0,\qquad \mathbf{t}\cdot\mathbf{b} = \mathbf{t}_0\cdot\mathbf{b}_0,\qquad (\mathbf{t}\times\mathbf{b})\cdot\mathbf{m} = (\mathbf{t}_0\times\mathbf{b}_0)\cdot\mathbf{m} \tag{A.3.25}$$
We found seven integrals, but only five of them are functionally independent. In order to formulate the relation between the integrals we compute
$$\mathbf{b}\cdot\mathbf{b} = \mathbf{t}\cdot\mathbf{A}_{pD}^2\cdot\mathbf{t},\qquad \mathbf{t}\cdot\mathbf{b} = \mathbf{t}\cdot\mathbf{A}_{pD}\cdot\mathbf{t}$$
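The projections (A.3.20), their properties (A.3.22), and the relations just given for $\mathbf{b}\cdot\mathbf{b}$ and $\mathbf{t}\cdot\mathbf{b}$ can be checked with a short numerical sketch (NumPy; the random seed and variable names are illustrative assumptions, not from the text):

```python
import numpy as np

rng = np.random.default_rng(2)                   # arbitrary seed
A = rng.normal(size=(3, 3))
A = A + A.T                                      # random symmetric tensor
m = rng.normal(size=3)
m /= np.linalg.norm(m)                           # unit vector m

P1 = np.outer(m, m)                              # P1 = m⊗m
P2 = np.eye(3) - P1                              # P2 = I − m⊗m

alpha = m @ A @ m                                # projections (A.3.20)
beta = 0.5 * (np.trace(A) - alpha)
ApD = P2 @ A @ P2 - beta * P2
t = m @ A @ P2

# reassemble A according to the decomposition (A.3.19)
A_rebuilt = alpha * P1 + beta * P2 + ApD + np.outer(t, m) + np.outer(m, t)

# properties (A.3.22) and the relations for b = ApD·t
b = ApD @ t
trace_ApD = np.trace(ApD)                        # should vanish
bb_check = b @ b - t @ ApD @ ApD @ t             # should vanish
tb_check = t @ b - t @ ApD @ t                   # should vanish
```

The reassembly works because $\mathbf{A} = (\mathbf{P}_1 + \mathbf{P}_2)\cdot\mathbf{A}\cdot(\mathbf{P}_1 + \mathbf{P}_2)$ and the four projections collect exactly the four resulting terms.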
For any plane tensor $\mathbf{A}_p$ satisfying the equations $\mathbf{A}_p\cdot\mathbf{m} = \mathbf{m}\cdot\mathbf{A}_p = \mathbf{0}$, the Cayley-Hamilton theorem can be formulated as follows, see e.g. [71]
$$\mathbf{A}_p^2 - (\operatorname{tr}\mathbf{A}_p)\,\mathbf{A}_p + \frac{1}{2}\Big[(\operatorname{tr}\mathbf{A}_p)^2 - \operatorname{tr}(\mathbf{A}_p^2)\Big](\mathbf{I} - \mathbf{m}\otimes\mathbf{m}) = \mathbf{0}$$
Since $\operatorname{tr}\mathbf{A}_{pD} = 0$ we have
$$2\mathbf{A}_{pD}^2 = \operatorname{tr}(\mathbf{A}_{pD}^2)(\mathbf{I} - \mathbf{m}\otimes\mathbf{m}),\qquad \mathbf{t}\cdot\mathbf{A}_{pD}^2\cdot\mathbf{t} = \frac{1}{2}\operatorname{tr}(\mathbf{A}_{pD}^2)(\mathbf{t}\cdot\mathbf{t})$$
Because $\operatorname{tr}(\mathbf{A}_{pD}^2)$ and $\mathbf{t}\cdot\mathbf{t}$ are already defined, the invariant $\mathbf{b}\cdot\mathbf{b}$ can be omitted. The vector $\mathbf{t}\times\mathbf{b}$ is directed along the axis $\mathbf{m}$. Therefore
$$\mathbf{t}\times\mathbf{b} = \gamma\,\mathbf{m},\qquad \gamma = (\mathbf{t}\times\mathbf{b})\cdot\mathbf{m},$$
$$\gamma^2 = (\mathbf{t}\times\mathbf{b})\cdot(\mathbf{t}\times\mathbf{b}) = (\mathbf{t}\cdot\mathbf{t})(\mathbf{b}\cdot\mathbf{b}) - (\mathbf{t}\cdot\mathbf{b})^2$$
Now we can summarize the six invariants and one relation between them as follows
$$\bar{I}_1 = \alpha,\qquad \bar{I}_2 = \beta,\qquad \bar{I}_3 = \frac{1}{2}\operatorname{tr}(\mathbf{A}_{pD}^2),\qquad \bar{I}_4 = \mathbf{t}\cdot\mathbf{t} = \mathbf{t}\cdot\mathbf{A}\cdot\mathbf{m},$$
$$\bar{I}_5 = \mathbf{t}\cdot\mathbf{A}_{pD}\cdot\mathbf{t},\qquad \bar{I}_6 = (\mathbf{t}\times\mathbf{A}_{pD}\cdot\mathbf{t})\cdot\mathbf{m}, \tag{A.3.26}$$
$$\bar{I}_6^2 = \bar{I}_4^2\,\bar{I}_3 - \bar{I}_5^2$$
Let us assume that the symmetry transformation $\mathbf{Q}_n \equiv \mathbf{Q}(\pi\mathbf{n})$ belongs to the symmetry group of transverse isotropy, as was done in [71, 59]. In this case $f(\mathbf{A}') = f(\mathbf{Q}_n\cdot\mathbf{A}\cdot\mathbf{Q}_n^T) = f(\mathbf{A})$ must be valid. With $\mathbf{Q}_n\cdot\mathbf{m} = -\mathbf{m}$ we can write
$$\alpha' = \alpha,\qquad \beta' = \beta,\qquad \mathbf{A}'_{pD} = \mathbf{Q}_n\cdot\mathbf{A}_{pD}\cdot\mathbf{Q}_n^T,\qquad \mathbf{t}' = -\mathbf{Q}_n\cdot\mathbf{t}$$
Therefore in (A.3.26) $\bar{I}'_k = \bar{I}_k$, $k = 1, 2, \dots, 5$, and
$$\bar{I}'_6 = (\mathbf{t}'\times\mathbf{A}'_{pD}\cdot\mathbf{t}')\cdot\mathbf{m} = \big[(\mathbf{Q}_n\cdot\mathbf{t})\times(\mathbf{Q}_n\cdot\mathbf{A}_{pD}\cdot\mathbf{t})\big]\cdot\mathbf{m} = (\mathbf{t}\times\mathbf{A}_{pD}\cdot\mathbf{t})\cdot\mathbf{Q}_n\cdot\mathbf{m} = -(\mathbf{t}\times\mathbf{A}_{pD}\cdot\mathbf{t})\cdot\mathbf{m} = -\bar{I}_6$$
Consequently
$$f(\mathbf{A}') = f(\bar{I}'_1, \bar{I}'_2, \dots, \bar{I}'_5, \bar{I}'_6) = f(\bar{I}_1, \bar{I}_2, \dots, \bar{I}_5, -\bar{I}_6)$$
$$\Rightarrow\quad f(\mathbf{A}) = f(\bar{I}_1, \bar{I}_2, \dots, \bar{I}_5, \bar{I}_6^2),$$
and $\bar{I}_6^2$ can be omitted due to the last relation in (A.3.26).
Invariants for a Set of Vectors and Second Rank Tensors. Setting $\mathbf{Q} = \mathbf{Q}(\varphi\mathbf{m})$ in (A.3.3) and taking the derivative of (A.3.3) with respect to $\varphi$ results in the following generic partial differential equation
$$\sum_{i=1}^{n} \frac{\partial f}{\partial \mathbf{A}_i}\cdot\cdot\,(\mathbf{m}\times\mathbf{A}_i - \mathbf{A}_i\times\mathbf{m}) + \sum_{j=1}^{k} \frac{\partial f}{\partial \mathbf{a}_j}\cdot(\mathbf{m}\times\mathbf{a}_j) = 0 \tag{A.3.27}$$
The characteristic system of (A.3.27) is
$$\frac{\mathrm{d}\mathbf{A}_i}{\mathrm{d}s} = \mathbf{m}\times\mathbf{A}_i - \mathbf{A}_i\times\mathbf{m},\qquad i = 1, 2, \dots, n,$$
$$\frac{\mathrm{d}\mathbf{a}_j}{\mathrm{d}s} = \mathbf{m}\times\mathbf{a}_j,\qquad j = 1, 2, \dots, k \tag{A.3.28}$$
The above system is a system of $N$ ordinary differential equations, where $N = 6n + 3k$ is the total number of coordinates of $\mathbf{A}_i$ and $\mathbf{a}_j$ for a selected basis. The system (A.3.28) has not more than $N - 1$ functionally independent integrals. Therefore we can formulate:
Theorem A.3.1. A set of $n$ symmetric second rank tensors and $k$ vectors with $N = 6n + 3k$ independent coordinates for a given basis has not more than $N - 1$ functionally independent invariants for $N > 1$ and one invariant for $N = 1$ with respect to the symmetry transformation $\mathbf{Q}(\varphi\mathbf{m})$.
In essence, the proof of this theorem is given within the theory of linear first order partial differential equations [92]. As an example let us consider the set of a symmetric second rank tensor $\mathbf{A}$ and a vector $\mathbf{a}$. This set has eight independent invariants. For a visual perception it is useful to keep in mind that the considered set is equivalent to
$$\mathbf{A},\qquad \mathbf{a},\qquad \mathbf{A}\cdot\mathbf{a},\qquad \mathbf{A}^2\cdot\mathbf{a}$$
Therefore it is necessary to find the list of invariants whose fixation determines this set as a rigid whole. The generic equation (A.3.27) takes the form
$$\frac{\partial f}{\partial \mathbf{A}}\cdot\cdot\,(\mathbf{m}\times\mathbf{A} - \mathbf{A}\times\mathbf{m}) + \frac{\partial f}{\partial \mathbf{a}}\cdot(\mathbf{m}\times\mathbf{a}) = 0 \tag{A.3.29}$$
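Equation (A.3.29) states that $f$ is constant along the characteristic directions. As a plausibility check (a numerical sketch, not part of the derivation; NumPy, the finite-difference step, seed and helper names are all assumptions), one can verify that candidate invariants such as $\mathbf{m}\cdot\mathbf{A}\cdot\mathbf{m}$, $\mathbf{m}\cdot\mathbf{a}$ and $\mathbf{a}\cdot\mathbf{A}\cdot\mathbf{a}$ annihilate the left-hand side of (A.3.29):

```python
import numpy as np

rng = np.random.default_rng(3)                    # arbitrary seed
A = rng.normal(size=(3, 3))
A = A + A.T                                       # symmetric tensor A
a = rng.normal(size=3)                            # vector a
m = rng.normal(size=3)
m /= np.linalg.norm(m)                            # unit vector m

# W @ x equals m × x; hence m×A = W·A and A×m = A·W
W = np.array([[0.0, -m[2], m[1]],
              [m[2], 0.0, -m[0]],
              [-m[1], m[0], 0.0]])
GA = W @ A - A @ W                                # m×A − A×m
ga = np.cross(m, a)                               # m×a

def lhs(f, eps=1e-6):
    """Central-difference directional derivative of f along (GA, ga),
    i.e. the left-hand side of (A.3.29)."""
    return (f(A + eps * GA, a + eps * ga) - f(A - eps * GA, a - eps * ga)) / (2 * eps)

res_I8 = lhs(lambda A_, a_: m @ A_ @ m)           # I8 = m·A·m
res_I9 = lhs(lambda A_, a_: m @ a_)               # I9 = m·a
res_I5 = lhs(lambda A_, a_: a_ @ A_ @ a_)         # I5 = a·A·a
```

The directional derivative along $(\mathbf{m}\times\mathbf{A} - \mathbf{A}\times\mathbf{m},\ \mathbf{m}\times\mathbf{a})$ is exactly $\mathrm{d}f/\mathrm{d}\varphi$ at $\varphi = 0$, so it vanishes for every transversely isotropic invariant.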
The characteristic system of (A.3.29) is
$$\frac{\mathrm{d}\mathbf{A}}{\mathrm{d}s} = \mathbf{m}\times\mathbf{A} - \mathbf{A}\times\mathbf{m},\qquad \frac{\mathrm{d}\mathbf{a}}{\mathrm{d}s} = \mathbf{m}\times\mathbf{a} \tag{A.3.30}$$
This system of ninth order has eight independent integrals. Six of them are invariants of $\mathbf{A}$ and $\mathbf{a}$ with respect to the full orthogonal group. They fix the considered set as a rigid whole. The orthogonal invariants are defined by Eqs (A.3.4) and (A.3.5). Let us note that the invariant $I_7$ in (A.3.4) cannot be ignored. To verify this it is enough to consider two different sets $\mathbf{A}$, $\mathbf{a}$ and $\mathbf{B} = \mathbf{Q}_p\cdot\mathbf{A}\cdot\mathbf{Q}_p^T$, $\mathbf{a}$, where $\mathbf{Q}_p = \mathbf{I} - 2\mathbf{p}\otimes\mathbf{p}$, $\mathbf{p}\cdot\mathbf{p} = 1$, $\mathbf{p}\cdot\mathbf{a} = 0$. One can prove that the invariants $I_1, I_2, \dots, I_6$ are the same for these two sets. The only difference is the invariant $I_7$, i.e.
$$\mathbf{a}\cdot\mathbf{B}^2\cdot(\mathbf{a}\times\mathbf{B}\cdot\mathbf{a}) = -\mathbf{a}\cdot\mathbf{A}^2\cdot(\mathbf{a}\times\mathbf{A}\cdot\mathbf{a})$$
Therefore the triples of vectors $\mathbf{a}$, $\mathbf{A}\cdot\mathbf{a}$, $\mathbf{A}^2\cdot\mathbf{a}$ and $\mathbf{a}$, $\mathbf{B}\cdot\mathbf{a}$, $\mathbf{B}^2\cdot\mathbf{a}$ have different orientations and cannot be combined by a rotation. In order to fix the considered set with respect to the unit vector $\mathbf{m}$ it is enough to fix the next two invariants
$$I_8 = \mathbf{m}\cdot\mathbf{A}\cdot\mathbf{m},\qquad I_9 = \mathbf{m}\cdot\mathbf{a} \tag{A.3.31}$$
The eight independent transversely isotropic invariants are given by (A.3.4), (A.3.5) and (A.3.31).
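Both statements above admit a direct numerical check (a NumPy sketch; the helper `rotation`, seed, and angle are illustrative assumptions): rotating $\mathbf{A}$ and $\mathbf{a}$ about $\mathbf{m}$ leaves the nine invariants (A.3.4), (A.3.31) unchanged, while the transformation $\mathbf{Q}_p$ with $\mathbf{p}\cdot\mathbf{a} = 0$ preserves $I_1, \dots, I_6$ and flips only the sign of $I_7$:

```python
import numpy as np

def rotation(phi, m):
    """Q(phi m) = m⊗m + cos(phi)(I − m⊗m) + sin(phi) m×I, Eq. (A.3.6)."""
    W = np.array([[0.0, -m[2], m[1]],
                  [m[2], 0.0, -m[0]],
                  [-m[1], m[0], 0.0]])           # W @ x equals m × x
    P1 = np.outer(m, m)
    return P1 + np.cos(phi) * (np.eye(3) - P1) + np.sin(phi) * W

def invariants(A, a, m):
    """The nine invariants (A.3.4) and (A.3.31)."""
    A2 = A @ A
    return np.array([np.trace(A), np.trace(A2), np.trace(A2 @ A),
                     a @ a, a @ A @ a, a @ A2 @ a,
                     a @ A2 @ np.cross(a, A @ a),      # I7
                     m @ A @ m, m @ a])                # I8, I9

rng = np.random.default_rng(4)
A = rng.normal(size=(3, 3)); A = A + A.T
a = rng.normal(size=3)
m = rng.normal(size=3); m /= np.linalg.norm(m)

inv_A = invariants(A, a, m)

# rotation about m leaves all nine invariants unchanged (Q·m = m)
Q = rotation(1.3, m)
rot_err = np.abs(invariants(Q @ A @ Q.T, Q @ a, m) - inv_A).max()

# Q_p = I − 2 p⊗p with p·a = 0 preserves I1,...,I6 and flips the sign of I7
p = np.cross(a, rng.normal(size=3)); p /= np.linalg.norm(p)  # some unit p with p·a = 0
Qp = np.eye(3) - 2.0 * np.outer(p, p)
inv_B = invariants(Qp @ A @ Qp.T, a, m)
same_err = np.abs(inv_B[:6] - inv_A[:6]).max()
I7_flip = inv_B[6] + inv_A[6]                    # should vanish
```

Note that $I_8$ is not compared for the reflected set, since only the invariants (A.3.4) are claimed to coincide there.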
A.3.3 Invariants for the Orthotropic Symmetry Group
Consider the orthogonal tensors $\mathbf{Q}_1 = \mathbf{I} - 2\mathbf{n}_1\otimes\mathbf{n}_1$ and $\mathbf{Q}_2 = \mathbf{I} - 2\mathbf{n}_2\otimes\mathbf{n}_2$, $\det\mathbf{Q}_1 = \det\mathbf{Q}_2 = -1$. These tensors represent mirror reflections, whereby the unit orthogonal vectors $\pm\mathbf{n}_1$ and $\pm\mathbf{n}_2$ are the normal directions to the mirror planes. The above tensors are symmetry elements of the orthotropic symmetry group. The invariants must be found from
$$f(\mathbf{Q}_1\cdot\mathbf{A}\cdot\mathbf{Q}_1^T) = f(\mathbf{Q}_2\cdot\mathbf{A}\cdot\mathbf{Q}_2^T) = f(\mathbf{A})$$
Consequently,
$$f(\mathbf{Q}_1\cdot\mathbf{Q}_2\cdot\mathbf{A}\cdot\mathbf{Q}_2^T\cdot\mathbf{Q}_1^T) = f(\mathbf{Q}_1\cdot\mathbf{A}\cdot\mathbf{Q}_1^T) = f(\mathbf{Q}_2\cdot\mathbf{A}\cdot\mathbf{Q}_2^T) = f(\mathbf{A}),$$
and the tensor $\mathbf{Q}_3 = \mathbf{Q}_1\cdot\mathbf{Q}_2 = 2\mathbf{n}_3\otimes\mathbf{n}_3 - \mathbf{I}$ belongs to the symmetry group, where the unit vector $\mathbf{n}_3$ is orthogonal to $\mathbf{n}_1$ and $\mathbf{n}_2$. Taking into account that $\mathbf{Q}_i\cdot\mathbf{n}_i = -\mathbf{n}_i$ (no summation convention), $\mathbf{Q}_i\cdot\mathbf{n}_j = \mathbf{n}_j$ for $i \neq j$, and using the notation $\mathbf{A}'_i = \mathbf{Q}_i\cdot\mathbf{A}\cdot\mathbf{Q}_i^T$, we can write
$$\operatorname{tr}(\mathbf{A}'^k_i) = \operatorname{tr}(\mathbf{A}^k),\qquad k = 1, 2, 3,\quad i = 1, 2, 3,$$
$$\mathbf{n}_i\cdot\mathbf{A}'_i\cdot\mathbf{n}_i = \mathbf{n}_i\cdot\mathbf{Q}_i\cdot\mathbf{A}\cdot\mathbf{Q}_i^T\cdot\mathbf{n}_i = \mathbf{n}_i\cdot\mathbf{A}\cdot\mathbf{n}_i,\qquad i = 1, 2, 3, \tag{A.3.32}$$
$$\mathbf{n}_i\cdot\mathbf{A}'^2_i\cdot\mathbf{n}_i = \mathbf{n}_i\cdot\mathbf{Q}_i\cdot\mathbf{A}^2\cdot\mathbf{Q}_i^T\cdot\mathbf{n}_i = \mathbf{n}_i\cdot\mathbf{A}^2\cdot\mathbf{n}_i,\qquad i = 1, 2, 3$$
The above set includes nine scalars. The number of independent scalars is seven due to the obvious relations
$$\operatorname{tr}(\mathbf{A}^k) = \mathbf{n}_1\cdot\mathbf{A}^k\cdot\mathbf{n}_1 + \mathbf{n}_2\cdot\mathbf{A}^k\cdot\mathbf{n}_2 + \mathbf{n}_3\cdot\mathbf{A}^k\cdot\mathbf{n}_3,\qquad k = 1, 2, 3$$
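A minimal numerical sketch of the orthotropic case (NumPy; the random orthonormal triad and seed are illustrative assumptions): the reflections $\mathbf{Q}_1$, $\mathbf{Q}_2$ leave the nine scalars of (A.3.32) unchanged, and the three trace relations above hold for any orthonormal triad $\mathbf{n}_1$, $\mathbf{n}_2$, $\mathbf{n}_3$:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(3, 3)); A = A + A.T          # random symmetric tensor

# an orthonormal triad n1, n2, n3 (here via QR of a random matrix)
n, _ = np.linalg.qr(rng.normal(size=(3, 3)))
n1, n2, n3 = n[:, 0], n[:, 1], n[:, 2]

Q1 = np.eye(3) - 2.0 * np.outer(n1, n1)           # mirror reflections
Q2 = np.eye(3) - 2.0 * np.outer(n2, n2)

def ortho_invariants(A):
    """The nine scalars of (A.3.32)."""
    A2 = A @ A
    return np.array([np.trace(A), np.trace(A2), np.trace(A2 @ A)]
                    + [v @ A @ v for v in (n1, n2, n3)]
                    + [v @ A2 @ v for v in (n1, n2, n3)])

inv_err = max(np.abs(ortho_invariants(Q @ A @ Q.T) - ortho_invariants(A)).max()
              for Q in (Q1, Q2))

# the trace relations reduce the nine scalars to seven independent ones
trace_err = max(abs(np.trace(np.linalg.matrix_power(A, k))
                    - sum(v @ np.linalg.matrix_power(A, k) @ v for v in (n1, n2, n3)))
                for k in (1, 2, 3))
```

The trace relations are simply $\operatorname{tr}(\mathbf{A}^k)$ evaluated in the orthonormal basis $\mathbf{n}_1$, $\mathbf{n}_2$, $\mathbf{n}_3$, which is why three of the nine scalars are redundant.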