Exterior Algebra → Differential Forms

Faraad M Armwood (North Dakota State)
September 4, 2016

Dual Space

Let $V, W$ be finite-dimensional real vector spaces and let $\mathrm{Hom}(V, W) = \{f \mid f : V \to W \text{ is linear}\}$. The dual vector space $V^\vee$ of $V$ is $\mathrm{Hom}(V, \mathbb{R})$. The elements of $V^\vee$ are called 1-covectors, or covectors, on $V$.

If $e_1, \dots, e_n$ is a basis for $V$, then every $v \in V$ can be written $v = \sum_k v^k e_k$. Define $\alpha^i$ by $\alpha^i(e_j) = \delta^i_j$; then $\alpha^i(v) = v^i$, where $v = (v^1, \dots, v^n)$. Each $\alpha^i : V \to \mathbb{R}$ is linear, so $\alpha^i \in V^\vee$. It follows that $\{\alpha^i : i = 1, \dots, n\}$ is linearly independent and spans $V^\vee$, i.e. $\dim(V) = \dim(V^\vee)$.

Example

Let $V = \mathbb{R}^2$ and let $x^1, x^2$ be the standard coordinates on $\mathbb{R}^2$, i.e. if $p = (p_1, p_2)$ then $x^i(p) = p_i$. Let $e_1 = (1, 0)^T$, $e_2 = (0, 1)^T$ denote the standard basis; then $x^i(e_j) = \delta^i_j$. This example and the above demonstrate that for a finite-dimensional vector space, the dual basis is determined by the coordinates of a point in the standard basis.

Multilinear Functions

Let $V^k = V \times \cdots \times V$ be the $k$-fold product of a real vector space $V$. We say $f : V^k \to \mathbb{R}$ is $k$-linear if it is linear in each of its $k$ arguments, i.e.

$$f(\dots, av + bw, \dots) = a\,f(\dots, v, \dots) + b\,f(\dots, w, \dots).$$

A $k$-linear function on $V$ is also called a $k$-tensor on $V$. We denote by $L_k(V)$ the set of all $k$-tensors on $V$. If $f$ is a $k$-tensor, we also say that $f$ has degree $k$.

Example

Let $v, w \in T_p\mathbb{R}^2$ and let $f = \langle\cdot,\cdot\rangle : T_p\mathbb{R}^2 \times T_p\mathbb{R}^2 \to \mathbb{R}$ be defined by

$$\langle v, w \rangle = \sum_k v^k w^k;$$

then $f$ is a 2-linear, or bilinear, function on $\mathbb{R}^2$.
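The dual-basis facts above are easy to check numerically. Below is a minimal sketch in Python; the names `alpha` and `e` are ours, not from the slides:

```python
# Dual basis on R^2: alpha^i reads off the i-th coordinate, so alpha^i(e_j) = delta^i_j.
def alpha(i):
    return lambda v: v[i]

e = [(1.0, 0.0), (0.0, 1.0)]  # standard basis e_1, e_2

# Kronecker-delta check: alpha^i(e_j) = 1 if i == j, else 0
for i in range(2):
    for j in range(2):
        assert alpha(i)(e[j]) == (1.0 if i == j else 0.0)

# Each covector is linear: alpha^0(a*v + b*w) = a*alpha^0(v) + b*alpha^0(w)
a, b = 2.0, -3.0
v, w = (1.0, 4.0), (5.0, -2.0)
av_bw = tuple(a * vi + b * wi for vi, wi in zip(v, w))
assert alpha(0)(av_bw) == a * alpha(0)(v) + b * alpha(0)(w)
print("dual-basis checks passed")
```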
Here $T_p\mathbb{R}^2$ denotes the tangent space to $\mathbb{R}^2$ at $p$, i.e. $v, w$ are understood to be vectors.

Symmetric and Alternating Tensors

Let $f \in L_k(V)$. We say $f$ is symmetric if

$$f(v_{\sigma(1)}, \dots, v_{\sigma(k)}) = f(v_1, \dots, v_k) \quad \forall \sigma \in S_k.$$

We say $f$ is alternating if

$$f(v_{\sigma(1)}, \dots, v_{\sigma(k)}) = \mathrm{sgn}(\sigma)\, f(v_1, \dots, v_k) \quad \forall \sigma \in S_k.$$

Example

(1) The inner product defined in the last example is symmetric on $T_p\mathbb{R}^2$, and likewise in its extension to $T_p\mathbb{R}^n$, because multiplication in $\mathbb{R}$ is commutative.
(2) The simple function $f(x, y) = x + y$ is also symmetric.
(3) The cross product $\vec v \times \vec w$ on $\mathbb{R}^3$ is alternating.

Symmetrizing and Alternating Operators

Let $f \in L_k(V)$ and $\sigma \in S_k$. We can define an action of a permutation on $f$ by $(\sigma f)(v_1, \dots, v_k) := f(v_{\sigma(1)}, \dots, v_{\sigma(k)})$. We now demonstrate how to get a symmetric and an alternating function from $f$:

$$Sf = \sum_{\sigma \in S_k} \sigma f \quad \text{(symmetric)}, \qquad Af = \sum_{\sigma \in S_k} (\mathrm{sgn}\,\sigma)\, \sigma f \quad \text{(alternating)}.$$

Example

Let $f \in L_3(V)$ and suppose $v_1, v_2, v_3 \in V$; then

$$Af(v_1, v_2, v_3) = \sum_{\sigma \in S_3} (\mathrm{sgn}\,\sigma)\, \sigma f.$$

Recall that $S_3 = \{(1), (12), (13), (23), (123), (132)\}$; to determine the signs of the 3-cycles, observe that $(123) = (13)(12)$ and $(132) = (12)(13)$, so both are even. Hence

$$Af = f(v_1,v_2,v_3) - f(v_2,v_1,v_3) - f(v_3,v_2,v_1) - f(v_1,v_3,v_2) + f(v_2,v_3,v_1) + f(v_3,v_1,v_2),$$

and applying $(12)$,

$$(12)Af = f(v_2,v_1,v_3) - f(v_1,v_2,v_3) - f(v_3,v_1,v_2) - f(v_2,v_3,v_1) + f(v_1,v_3,v_2) + f(v_3,v_2,v_1) = -Af,$$

so $Af$ is indeed alternating.

Tensor Product

Let $f \in L_k(V)$ and $g \in L_l(V)$; we define their tensor product by

$$(f \otimes g)(v_1, \dots, v_{k+l}) = f(v_1, \dots, v_k)\, g(v_{k+1}, \dots, v_{k+l}) \in L_{k+l}(V).$$

The operation above is associative, i.e. $(f \otimes g) \otimes h = f \otimes (g \otimes h)$.
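The alternating operator $A$ can be implemented directly from its defining sum; here is a short sketch (the helper names `sign` and `alternate` are ours). For the sample 3-tensor below, $Af$ turns out to be the determinant of the matrix with rows $u, v, w$, which makes the alternating property easy to see:

```python
from itertools import permutations

def sign(perm):
    # sgn(sigma) via counting inversions
    inv = sum(1 for i in range(len(perm))
                for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def alternate(f, k):
    """Af(v_1,...,v_k) = sum over sigma in S_k of sgn(sigma) * f(v_sigma(1),...,v_sigma(k))."""
    def Af(*vs):
        return sum(sign(p) * f(*(vs[i] for i in p)) for p in permutations(range(k)))
    return Af

# A sample (non-alternating) 3-tensor on R^3: f(u, v, w) = u^1 * v^2 * w^3
f = lambda u, v, w: u[0] * v[1] * w[2]
Af = alternate(f, 3)

u, v, w = (1, 2, 3), (4, 5, 6), (7, 8, 10)
# Swapping any two arguments flips the sign: Af is alternating.
assert Af(u, v, w) == -Af(v, u, w) == -Af(u, w, v)
print("Af(u, v, w) =", Af(u, v, w))  # equals det of the matrix with rows u, v, w
```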
Example

Let $e_1, \dots, e_n$ be a basis for $V$ and $f = \langle\cdot,\cdot\rangle : V \times V \to \mathbb{R}$. Define $\langle e_i, e_j \rangle = g_{ij}$ and write $v = \sum_i v^i e_i$, $w = \sum_j w^j e_j$. By the previous remarks, $\alpha^i(v) = v^i$ and $\alpha^j(w) = w^j$, and so

$$\langle v, w \rangle = \sum_{i,j} g_{ij}\, (\alpha^i \otimes \alpha^j)(v, w) \;\Rightarrow\; \langle\cdot,\cdot\rangle = \sum_{i,j} g_{ij}\, \alpha^i \otimes \alpha^j.$$

The Wedge Product I

Let $f \in A_k(V)$ and $g \in A_l(V)$; then we define their exterior product, or wedge product, to be

$$f \wedge g = \frac{1}{k!\,l!}\, A(f \otimes g),$$

or explicitly,

$$(f \wedge g)(v) = \frac{1}{k!\,l!} \sum_{\sigma \in S_{k+l}} (\mathrm{sgn}\,\sigma)\, f(v_{\sigma(1)}, \dots, v_{\sigma(k)})\, g(v_{\sigma(k+1)}, \dots, v_{\sigma(k+l)}),$$

where $v = (v_1, \dots, v_{k+l})$ and dividing by $k!\,l!$ compensates for the repetition in the sum.

Discussion

(1) If $\sigma(k + j) = k + j$ for all $j = 1, \dots, l$, then $\sigma$ permutes only the first $k$ slots; since $f$ is alternating, $(\mathrm{sgn}\,\sigma)\,\sigma f = f$ while $\sigma g = g$, so each such $\sigma$ contributes the same term $fg$, and there are $k!$ of them. Similarly, if $\tau(j) = j$ for all $j = 1, \dots, k$, you get $fg$ repeated $l!$ times. Now convince yourself that there are no other repetitions in the sum.

(2) Let $f \in A_0(V)$ and $g \in A_l(V)$; then $f$ is a constant function, say $c \in \mathbb{R}$, and

$$(c \wedge g)(v_1, \dots, v_l) = \frac{1}{l!} \sum_{\sigma \in S_l} (\mathrm{sgn}\,\sigma)^2\, c\, g(v_1, \dots, v_l) = c\,g(v_1, \dots, v_l),$$

i.e. $c \wedge g = cg$.

The Wedge Product II

Another way to compensate for the repeated terms in the sum is to arrange that $\sigma(1) < \cdots < \sigma(k)$ and $\sigma(k+1) < \cdots < \sigma(k+l)$. If $\sigma \in S_{k+l}$ is such a permutation, we say that $\sigma$ is a $(k, l)$-shuffle. Therefore we have

$$(f \wedge g)(v) = \sum_{(k,l)\text{-shuffles } \sigma} (\mathrm{sgn}\,\sigma)\, f(v_{\sigma(1)}, \dots, v_{\sigma(k)})\, g(v_{\sigma(k+1)}, \dots, v_{\sigma(k+l)}).$$

The two definitions agree: each $(k, l)$-shuffle is the unique representative of a coset of $S_k \times S_l$ in $S_{k+l}$, and by the discussion above, the $k!\,l!$ permutations in that coset all contribute the same term to the full sum.
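The agreement of the two wedge-product formulas can be verified by brute force. A sketch (our names, our sample tensors): both definitions are evaluated for $f \in A_1(\mathbb{R}^3)$, $g \in A_2(\mathbb{R}^3)$ on the same triple of vectors.

```python
from itertools import permutations
from math import factorial

def sign(perm):
    inv = sum(1 for i in range(len(perm))
                for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def wedge_full(f, g, k, l):
    # (f ^ g)(v) = 1/(k! l!) * sum over ALL sigma in S_{k+l}
    def h(*vs):
        total = sum(sign(p) * f(*(vs[i] for i in p[:k])) * g(*(vs[i] for i in p[k:]))
                    for p in permutations(range(k + l)))
        return total / (factorial(k) * factorial(l))
    return h

def wedge_shuffles(f, g, k, l):
    # Same sum restricted to (k, l)-shuffles, with no 1/(k! l!) factor
    def h(*vs):
        shuffles = [p for p in permutations(range(k + l))
                    if list(p[:k]) == sorted(p[:k]) and list(p[k:]) == sorted(p[k:])]
        return sum(sign(p) * f(*(vs[i] for i in p[:k])) * g(*(vs[i] for i in p[k:]))
                   for p in shuffles)
    return h

# f in A_1 and g in A_2 on R^3 (g is alternating: a 2x2 minor)
f = lambda u: u[0]
g = lambda u, v: u[1] * v[2] - u[2] * v[1]

vs = [(1, 2, 3), (4, 5, 6), (7, 8, 10)]
assert wedge_full(f, g, 1, 2)(*vs) == wedge_shuffles(f, g, 1, 2)(*vs)
print("both definitions give", wedge_shuffles(f, g, 1, 2)(*vs))
```

Note that $S_3$ contains three $(1,2)$-shuffles here, one per choice of $\sigma(1)$, matching the coset count $3!/(1!\,2!) = 3$.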
Example

To see how the two definitions match up, take $f \in A_1(V)$ and $g \in A_2(V)$; the permutation group in question is $S_3$, and the $(1, 2)$-shuffles are the three permutations $(1,2,3)$, $(2,1,3)$, $(3,1,2)$ — choose $\sigma(1)$ freely, and the remaining two values must appear in ascending order. Now compute $f \wedge g$ by the original definition and check that the $1!\,2! = 2$ permutations in each coset collapse onto these three terms.

Properties of the Wedge Product

(1) If $f \in A_k(V)$ and $g \in A_l(V)$ then $f \wedge g = (-1)^{kl}\, g \wedge f$.
(2) If $f \in A_{2k+1}(V)$ (odd degree) then $f \wedge f = 0$.
(3) $(f \wedge g) \wedge h = f \wedge (g \wedge h)$ for all $f, g, h \in A_*(V)$.
(4) If $\alpha^i \in L_1(V)$ and $v_i \in V$, then $(\alpha^1 \wedge \cdots \wedge \alpha^k)(v_1, \dots, v_k) = \det[\alpha^i(v_j)]$.

Basis for k-Covectors

Lemma: Let $e_1, \dots, e_n$ be a basis for a vector space $V$ and $\alpha^1, \dots, \alpha^n$ its dual basis in $V^\vee$. If $I = (1 \le i_1 < \cdots < i_k \le n)$ and $J = (1 \le j_1 < \cdots < j_k \le n)$ are strictly ascending multi-indices of length $k$, then

$$\alpha^I(e_J) = \delta^I_J.$$

Proposition: The alternating $k$-linear functions $\alpha^I$, $I = (i_1 < \cdots < i_k)$, form a basis for the space $A_k(V)$ of alternating $k$-linear functions on $V$, i.e. if $f \in A_k(V)$ then

$$f = \sum_I a_I\, \alpha^I.$$

Corollary: If $k > \dim(V)$ then $A_k(V) = 0$.
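Property (4), the determinant formula for a wedge of 1-covectors, also admits a direct numerical check. A sketch under our own naming, using the dual basis $\alpha^i(v) = v^i$ on $\mathbb{R}^3$:

```python
from itertools import permutations
from math import prod

def sign(perm):
    inv = sum(1 for i in range(len(perm))
                for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def wedge_covectors(alphas):
    # (a^1 ^ ... ^ a^k)(v_1,...,v_k) = sum over sigma of sgn(sigma) * prod_i a^i(v_sigma(i))
    k = len(alphas)
    def h(*vs):
        return sum(sign(p) * prod(alphas[i](vs[p[i]]) for i in range(k))
                   for p in permutations(range(k)))
    return h

def det3(m):
    # explicit 3x3 determinant for the comparison
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

alphas = [lambda v, i=i: v[i] for i in range(3)]  # dual basis: alpha^i(v) = v^i
vs = [(2, 0, 1), (1, 3, 0), (0, 1, 4)]

lhs = wedge_covectors(alphas)(*vs)
rhs = det3([[alphas[i](vs[j]) for j in range(3)] for i in range(3)])
assert lhs == rhs
print("wedge of dual covectors equals det:", lhs)
```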