
Lecture 2: Differentiation and Linear Algebra

Joan Lasenby
[with thanks to: Chris Doran & Anthony Lasenby (book), Hugo Hadfield & Eivind Eide (code), Leo Dorst (book)... and of course, David Hestenes]
Signal Processing Group, Engineering Department, Cambridge, UK and Trinity College, Cambridge
[email protected], www-sigproc.eng.cam.ac.uk/~sjl
July 2018

Contents: GA Course I, Session 2

- The vector derivative and examples of its use.
- Multivector differentiation: examples.
- Linear algebra.
- Geometric algebras with non-Euclidean metrics.

The contents follow the notation and ordering of Geometric Algebra for Physicists [C.J.L. Doran and A.N. Lasenby] and the corresponding course the book was based on.

The Vector Derivative

A vector x can be represented in terms of coordinates in two ways:

    $x = x^k e_k$   or   $x = x_k e^k$   (summation implied),

depending on whether we expand in terms of a given frame $\{e_k\}$ or its reciprocal $\{e^k\}$. The coefficients in these two frames are therefore given by

    $x^k = e^k \cdot x$   and   $x_k = e_k \cdot x$

Now define the following derivative operator, which we call the vector derivative:

    $\nabla = \sum_k e^k \frac{\partial}{\partial x^k} \equiv e^k \frac{\partial}{\partial x^k}$

...this is clearly a vector!

The Vector Derivative, cont...

    $\nabla = \sum_k e^k \frac{\partial}{\partial x^k}$

This is a definition so far, but we will now see how this form arises. Suppose we have a function acting on vectors, F(x). Using standard definitions of rates of change, we can define the directional derivative of F, evaluated at x, in the direction of a vector a as

    $\lim_{\epsilon \to 0} \frac{F(x + \epsilon a) - F(x)}{\epsilon}$

The Vector Derivative cont...

Now suppose we want the directional derivative in the direction of one of our frame vectors, say $e_1$; this is given by

    $\lim_{\epsilon \to 0} \frac{F((x^1 + \epsilon)e_1 + x^2 e_2 + x^3 e_3) - F(x^1 e_1 + x^2 e_2 + x^3 e_3)}{\epsilon}$

which we recognise as

    $\frac{\partial F(x)}{\partial x^1}$

i.e. the derivative with respect to the first coordinate, keeping the second and third coordinates constant.

The Vector Derivative cont...

So, if we wish to define a gradient operator $\nabla$ such that $(a \cdot \nabla)F(x)$ gives the directional derivative of F in the a direction, we clearly need:

    $e_i \cdot \nabla = \frac{\partial}{\partial x^i}$   for $i = 1, 2, 3$

...which, since $e_i \cdot e^j \frac{\partial}{\partial x^j} = \frac{\partial}{\partial x^i}$, gives us the previous form of the vector derivative:

    $\nabla = \sum_k e^k \frac{\partial}{\partial x^k}$

The Vector Derivative cont...

It follows now that if we dot $\nabla$ with a, we get the directional derivative in the a direction:

    $a \cdot \nabla\, F(x) = \lim_{\epsilon \to 0} \frac{F(x + \epsilon a) - F(x)}{\epsilon}$

The definition of $\nabla$ is in fact independent of the choice of frame.

Operating on Scalar and Vector Fields

Operating on:

- A scalar field $\phi(x)$: it gives $\nabla \phi$, which is the gradient.
- A vector field $J(x)$: it gives $\nabla J$. This is a geometric product; the scalar part gives the divergence and the bivector part gives the curl:

    $\nabla J = \nabla \cdot J + \nabla \wedge J$

Very important in electromagnetism.
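As a quick numerical check of the split $\nabla J = \nabla \cdot J + \nabla \wedge J$, here is a minimal sketch (not part of the original slides), assuming the Python clifford package is available for G(3). The example field J, the evaluation point and the step size are arbitrary choices for illustration; the vector derivative is approximated by central differences, and the divergence and curl are read off as the grade-0 and grade-2 parts of $\nabla J$.

# Minimal numerical sketch (not from the slides) of del J = del.J + del^J,
# assuming the `clifford` package for G(3). The field J is an arbitrary example.
import numpy as np
from clifford.g3 import blades

e1, e2, e3 = blades['e1'], blades['e2'], blades['e3']
frame = [e1, e2, e3]          # orthonormal, so the reciprocal frame is the same

def J(x):
    """An example vector field J(x) = x1*x2 e1 + x3 e2 + x1**2 e3 (made up)."""
    x1, x2, x3 = x
    return x1 * x2 * e1 + x3 * e2 + x1**2 * e3

def vector_derivative(F, x, h=1e-6):
    """del F = sum_k e^k dF/dx^k, approximated by central differences."""
    out = 0
    for k in range(3):
        dx = np.zeros(3)
        dx[k] = h
        out = out + frame[k] * (F(x + dx) - F(x - dx)) / (2 * h)
    return out

x0 = np.array([1.0, 2.0, 3.0])
dJ = vector_derivative(J, x0)
print("del.J (scalar part):  ", dJ(0))   # divergence
print("del^J (bivector part):", dJ(2))   # curl, expressed as a bivector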
The Multivector Derivative

Recall our definition of the directional derivative in the a direction:

    $a \cdot \nabla\, F(x) = \lim_{\epsilon \to 0} \frac{F(x + \epsilon a) - F(x)}{\epsilon}$

We now want to generalise this idea to enable us to find the derivative of F(X) in the A 'direction' – where X is a general mixed-grade multivector (so F(X) is a general multivector-valued function of X). Let us use $*$ to denote taking the scalar part, i.e. $P * Q \equiv \langle PQ \rangle$. Then, provided A has the same grades as X, it makes sense to define:

    $A * \partial_X F(X) = \lim_{t \to 0} \frac{F(X + tA) - F(X)}{t}$

The Multivector Derivative cont...

Let $\{e_J\}$ be a basis for X – i.e. if X is a bivector, then $\{e_J\}$ will be the basis bivectors. With the definition on the previous slide, $e_J * \partial_X$ is therefore the partial derivative in the $e_J$ direction, giving

    $\partial_X \equiv \sum_J e^J (e_J * \partial_X)$

[since $e_J * \partial_X \equiv e_J * \{ e^I (e_I * \partial_X) \}$]. Key to using these definitions of multivector differentiation are several important results:

The Multivector Derivative cont...

If $P_X(B)$ is the projection of B onto the grades of X (i.e. $P_X(B) \equiv e^J \langle e_J B \rangle$), then our first important result is

    $\partial_X \langle XB \rangle = P_X(B)$

We can see this by going back to our definitions:

    $e_J * \partial_X \langle XB \rangle = \lim_{t \to 0} \frac{\langle (X + t e_J)B \rangle - \langle XB \rangle}{t} = \lim_{t \to 0} \frac{\langle XB \rangle + t\langle e_J B \rangle - \langle XB \rangle}{t} = \lim_{t \to 0} \frac{t\langle e_J B \rangle}{t} = \langle e_J B \rangle$

Therefore giving us

    $\partial_X \langle XB \rangle = e^J (e_J * \partial_X) \langle XB \rangle = e^J \langle e_J B \rangle \equiv P_X(B)$

Other Key Results

Some other useful results are listed here (proofs are similar to that on the previous slide and are left as exercises):

    $\partial_X \langle XB \rangle = P_X(B)$
    $\partial_X \langle \tilde{X}B \rangle = P_X(\tilde{B})$
    $\partial_{\tilde{X}} \langle \tilde{X}B \rangle = P_{\tilde{X}}(B) = P_X(B)$
    $\partial_y \langle M y^{-1} \rangle = -y^{-1} P_y(M)\, y^{-1}$

X, B, M, y all general multivectors.

A Simple Example

Suppose we wish to fit a set of points $\{X_i\}$ to a plane F – where the $X_i$ and F are conformal representations (a vector and a 4-vector respectively). One possible way forward is to find the plane that minimises the sum of the squared perpendicular distances of the points from the plane.

[Figure: points $X_1, X_2, X_3$ and their reflections $X_1', X_2', X_3'$ in the plane F]

Plane fitting example, cont...

Recall that FXF is the reflection of X in F, so that $-X \cdot (FXF)$ is the distance between the point and the plane. Thus we could take as our cost function:

    $S = -\sum_i X_i \cdot (F X_i F)$

Now use the result $\partial_X \langle XB \rangle = P_X(B)$ to differentiate this expression wrt F:

    $\partial_F S = -\sum_i \partial_F \langle X_i F X_i F \rangle = -\sum_i \left( \dot{\partial}_F \langle X_i \dot{F} X_i F \rangle + \dot{\partial}_F \langle X_i F X_i \dot{F} \rangle \right) = -2 \sum_i P_F(X_i F X_i) = -2 \sum_i X_i F X_i$

$\Longrightarrow$ solve (via linear algebra techniques) $\sum_i X_i F X_i = 0$.

Differentiation cont...

Of course we can extend these ideas to other geometric fitting problems and also to those without closed-form solutions, using gradient information to find solutions.

Another example is differentiating wrt rotors or bivectors. Suppose we wished to create a Kalman-filter-like system which tracked bivectors (not simply their components in some basis) – this might involve evaluating expressions such as

    $\partial_{B_n} \sum_{i=1}^{L} \langle v_n^i R_n u_{n-1}^i \tilde{R}_n \rangle$

where $R_n = e^{-B_n}$ and the $u$'s and $v$'s are vectors.

Differentiation cont...

Using just the standard results given, and a page of algebra later (but one only needs to do it once!), we find that

    $\partial_{B_n} \langle v_n R_n u_{n-1} \tilde{R}_n \rangle = -G(B_n) + \frac{1}{2|B_n|^2} \langle B_n G(B_n) \rangle\, \tilde{R}_n B_n R_n + \frac{\sin(|B_n|)}{2|B_n|} \left( \frac{B_n G(B_n) \tilde{R}_n B_n}{|B_n|^2} + G(B_n) \tilde{R}_n \right)$

where $G(B_n) = \frac{1}{2} [u_{n-1} \wedge (R_n v_n \tilde{R}_n)] R_n$.

Linear Algebra

A linear function f, mapping vectors to vectors, satisfies

    $f(\lambda a + \mu b) = \lambda f(a) + \mu f(b)$

We can now extend f to act on a blade of any grade by (outermorphism)

    $f(a_1 \wedge a_2 \wedge \ldots \wedge a_n) = f(a_1) \wedge f(a_2) \wedge \ldots \wedge f(a_n)$

Note that the resulting blade has the same grade as the original blade. Thus, an important property is that these extended linear functions are grade preserving, i.e.

    $f(A_r) = \langle f(A_r) \rangle_r$

Linear Algebra cont...

Matrices are also linear functions which map vectors to vectors. If F is the matrix corresponding to the linear function f, we obtain the elements of F via

    $F_{ij} = e_i \cdot f(e_j)$

where $\{e_i\}$ is the basis in which the vectors the matrix acts on are written.

As with matrix multiplication, where we obtain a third matrix (linear function) by combining two other matrices (linear functions), i.e. H = FG, we can also write

    $h(a) = f[g(a)] = fg(a)$

The product of linear functions is associative.
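To make these statements concrete, here is a small sketch (again not from the slides, and again assuming the Python clifford package for G(3)). The matrices F and G are arbitrary examples; the code recovers $F_{ij} = e_i \cdot f(e_j)$, applies f to a bivector via the outermorphism, and checks that $h(a) = f[g(a)]$ is the linear function whose matrix is FG.

# Sketch (not from the slides): the matrix of a linear function, its outermorphism
# action, and composition. Assumes the `clifford` package; F and G are arbitrary.
import numpy as np
from clifford.g3 import blades

e1, e2, e3 = blades['e1'], blades['e2'], blades['e3']
frame = [e1, e2, e3]

def linear_fn(M):
    """The linear function on vectors whose matrix in the {e_i} basis is M."""
    def f(a):
        coeffs = np.array([(ei | a).value[0] for ei in frame])   # a_i = e_i . a
        return sum(c * ei for c, ei in zip(M @ coeffs, frame))
    return f

F = np.array([[2., 1., 0.],
              [0., 3., 1.],
              [1., 0., 1.]])
G = np.array([[1., 0., 2.],
              [0., 1., 0.],
              [3., 1., 1.]])
f, g = linear_fn(F), linear_fn(G)

# F_ij = e_i . f(e_j) recovers the matrix we started from
F_rebuilt = np.array([[(frame[i] | f(frame[j])).value[0] for j in range(3)]
                      for i in range(3)])
print(np.allclose(F_rebuilt, F))              # True

# Outermorphism: f(a ^ b) is defined as f(a) ^ f(b); the result is still grade 2
a = 1*e1 + 2*e2 - 1*e3
b = 3*e1 - 1*e2 + 2*e3
print(f(a) ^ f(b))                            # a bivector: grade is preserved

# Composition: h(a) = f[g(a)] is the linear function with matrix FG
h = linear_fn(F @ G)
print(h(a) - f(g(a)))                         # zero (up to rounding)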
Linear Algebra

We now need to verify that

    $h(A) = f[g(A)] = fg(A)$

for any multivector A. First take a blade $a_1 \wedge a_2 \wedge \ldots \wedge a_r$ and note that

    $h(a_1 \wedge a_2 \wedge \ldots \wedge a_r) = fg(a_1) \wedge fg(a_2) \wedge \ldots \wedge fg(a_r) = f[g(a_1) \wedge g(a_2) \wedge \ldots \wedge g(a_r)] = f[g(a_1 \wedge a_2 \wedge \ldots \wedge a_r)]$

from which we get the first result.

The Determinant

Consider the action of a linear function f on an orthogonal basis in 3D: the unit cube $I = e_1 \wedge e_2 \wedge e_3$ is transformed to a parallelepiped $\mathbf{V}$,

    $\mathbf{V} = f(e_1) \wedge f(e_2) \wedge f(e_3) = f(I)$

The Determinant cont...

So, since f(I) is also a pseudoscalar, we see that if $V$ is the magnitude of $\mathbf{V}$, then

    $f(I) = V I$

Let us define the determinant of the linear function f as the volume scale factor V, so that

    $f(I) = \det(f)\, I$

This enables us to find the form of the determinant explicitly (in terms of partial derivatives between coordinate frames) very easily in any dimension.

A Key Result

As before, let h = fg; then

    $h(I) = \det(h)\, I = f(g(I)) = f(\det(g)\, I) = \det(g)\, f(I) = \det(g)\det(f)\, I$

So we have proved that

    $\det(fg) = \det(f)\det(g)$

A very easy proof!

The Adjoint/Transpose of a Linear Function

For a matrix F and its transpose $F^T$ we have (for any vectors a, b)

    $a^T F b = b^T F^T a$   (a scalar)

In GA we can write this in terms of linear functions as

    $a \cdot f(b) = \bar{f}(a) \cdot b$

This reverse linear function, $\bar{f}$, is called the adjoint.

The Adjoint cont...

It is not hard to show that the adjoint extends to blades in the expected way:

    $\bar{f}(a_1 \wedge a_2 \wedge \ldots \wedge a_n) = \bar{f}(a_1) \wedge \bar{f}(a_2) \wedge \ldots \wedge \bar{f}(a_n)$

See exercises to show that

    $a \cdot f(b \wedge c) = f[\bar{f}(a) \cdot (b \wedge c)]$

This can now be generalised to

    $A_r \cdot \bar{f}(B_s) = \bar{f}[f(A_r) \cdot B_s]$   for $r \le s$
    $f(A_r) \cdot B_s = f[A_r \cdot \bar{f}(B_s)]$   for $r \ge s$

The Inverse

    $A_r \cdot \bar{f}(B_s) = \bar{f}[f(A_r) \cdot B_s]$,   $r \le s$

Now put $B_s = I$ in this formula:

    $A_r \cdot \bar{f}(I) = A_r \cdot \det(f)\, I = \det(f)\,(A_r I) = \bar{f}[f(A_r) \cdot I] = \bar{f}[f(A_r) I]$

We can now write this as

    $A_r = \bar{f}[f(A_r) I]\, I^{-1} [\det(f)]^{-1}$
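The pseudoscalar definition of the determinant and the adjoint are easy to check numerically. Below is a minimal sketch (not from the slides), assuming the Python clifford package; the matrices M and N are arbitrary examples. It reads det(f) off from $f(I) = \det(f)\, I$, compares it with numpy's determinant, verifies $\det(fg) = \det(f)\det(g)$, and checks the adjoint property $a \cdot f(b) = \bar{f}(a) \cdot b$ using the transpose matrix for $\bar{f}$.

# Sketch (not from the slides): determinant from f(I) = det(f) I, the product
# rule det(fg) = det(f) det(g), and the adjoint. Assumes the `clifford` package;
# the matrices M and N are arbitrary examples.
import numpy as np
from clifford.g3 import blades

e1, e2, e3 = blades['e1'], blades['e2'], blades['e3']
frame = [e1, e2, e3]
I = e1 ^ e2 ^ e3                              # unit pseudoscalar of G(3)

def linear_fn(M):
    """The linear function on vectors whose matrix in the {e_i} basis is M."""
    def f(a):
        coeffs = np.array([(ei | a).value[0] for ei in frame])   # a_i = e_i . a
        return sum(c * ei for c, ei in zip(M @ coeffs, frame))
    return f

def det_ga(f):
    """det(f) as the pseudoscalar scale factor: f(I) = det(f) I."""
    fI = f(e1) ^ f(e2) ^ f(e3)                # f(I) via the outermorphism
    return (fI * ~I).value[0]                 # I ~I = 1, so this extracts det(f)

M = np.array([[2., 1., 0.],
              [0., 3., 1.],
              [1., 0., 1.]])
N = np.array([[1., 0., 2.],
              [0., 1., 0.],
              [3., 1., 1.]])
f, g = linear_fn(M), linear_fn(N)
fg = lambda a: f(g(a))                        # the composition h = fg on vectors

print(det_ga(f), np.linalg.det(M))            # the two determinants agree
print(det_ga(fg), det_ga(f) * det_ga(g))      # det(fg) = det(f) det(g)

# Adjoint: the matrix of fbar is the transpose of M, and a.f(b) = fbar(a).b
fbar = linear_fn(M.T)
a = 1*e1 + 2*e2 - 1*e3
b = 3*e1 - 1*e2 + 2*e3
print((a | f(b)).value[0], (fbar(a) | b).value[0])   # equal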