MAT 531
Geometry/Topology II
Introduction to Smooth Manifolds
Claude LeBrun Stony Brook University
April 9, 2020
Dual of a vector space: Let V be a real, finite-dimensional vector space. Then the dual vector space of V is defined to be

    V* := {linear maps V → R}.

Proposition. V* is a finite-dimensional vector space, too, and

    dim V* = dim V.

In particular, V* ≅ V as vector spaces. But there is no natural, preferred isomorphism! Let's not confuse one twin for the other!
In fact, ∃ a contravariant functor from the category

    ({vector spaces over R}, {R-linear maps})

to itself, given by V ⇝ V*. Namely, if A : V → W is any linear map, ∃ an adjoint map A* : W* → V*, defined by

    (A*ϕ)(v) = ϕ(A(v)).

That is, A*ϕ = ϕ ∘ A : V → R:

        A*
    V* ←—— W*

        A          ϕ
    V ———→ W ———→ R.
Example. Let V ≅ Rⁿ be the set of column vectors of length n:

    V = { [a^1, a^2, …, a^n]ᵗ }.

Then V* ≅ Rⁿ can be identified with the set of row vectors of length n,

    V* = { [b_1  b_2  ⋯  b_n] },

because any linear map V → R takes the form

    [a^1, …, a^n]ᵗ ↦ [b_1  b_2  ⋯  b_n] [a^1, …, a^n]ᵗ

for a unique row vector ~b.
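As a quick numerical sketch (a hypothetical numpy illustration, not part of the lecture): a covector on Rⁿ is a row vector, and applying it to a column vector is just matrix multiplication.

```python
import numpy as np

b = np.array([[1.0, 2.0, 3.0]])      # a row vector: an element of (R^3)*
a = np.array([[4.0], [5.0], [6.0]])  # a column vector: an element of R^3

# Applying the covector b to the vector a is a 1x3 times 3x1 product:
value = (b @ a).item()
print(value)  # 1*4 + 2*5 + 3*6 = 32.0
```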
Example. If V ≅ Rⁿ, W ≅ Rᵐ, and A : V → W is given by left-multiplication by the m × n matrix [A_{jk}],

    a ↦ A a,

then A* : W* → V* is given by right-multiplication:

    [b_1  b_2  ⋯  b_m] ↦ [b_1  b_2  ⋯  b_m] A.
Example. If we “artificially” identify column vectors with row vectors,

    [b_1  b_2  ⋯  b_m]ᵗ = the column vector with entries b_1, …, b_m,

then the action of A* becomes

    b ↦ Aᵗ b,

so the adjoint A* is sometimes called the transpose of A.
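A small numpy check of these identities (an illustrative sketch with arbitrary random matrices): right-multiplying a row vector by A agrees with ϕ(Av), and under the row ↔ column identification A* acts by the transpose.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))   # A : R^4 -> R^3
phi = rng.standard_normal(3)      # a covector on R^3 (row vector)
v = rng.standard_normal(4)        # a vector in R^4

lhs = (phi @ A) @ v   # (A* phi)(v): right-multiply the row vector by A
rhs = phi @ (A @ v)   # phi(A v)
assert np.isclose(lhs, rhs)

# Identifying rows with columns, A* acts by the transpose matrix:
assert np.allclose(phi @ A, A.T @ phi)
```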
This assignment V ⇝ V*, A ⇝ A* is functorial. The adjoint of the identity map I : V → V is the identity:

    I* = I :

    V* ←I*— V*
    V  —I→  V.

And adjoints reverse composition: if A : U → V and B : V → W, then

    (B ∘ A)* = A* ∘ B* :

    U* ←A*— V* ←B*— W*
    U  —A→  V  —B→  W.
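On matrices, where the adjoint is the transpose, the two functor laws can be checked numerically (a sketch with arbitrary random matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))  # A : R^3 -> R^4  (U -> V)
B = rng.standard_normal((5, 4))  # B : R^4 -> R^5  (V -> W)

# I* = I: transposing the identity matrix gives the identity back.
assert np.allclose(np.eye(3).T, np.eye(3))

# (B∘A)* = A*∘B*: the transpose of a product reverses the order.
assert np.allclose((B @ A).T, A.T @ B.T)
```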
So ∃ a contravariant functor from the category ({vector spaces over R}, {R-linear maps}) to itself, given by V ⇝ V*.

But V and V* are genuinely different, because this relationship reverses the direction of arrows!

By contrast, there is a natural isomorphism

    V** = V,

which we usually take for granted.

So V and V* are like mirror images. Or mirror twins!
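The matrix shadow of the natural isomorphism V** = V is that taking the adjoint (transpose) twice returns the original matrix — a minimal sketch:

```python
import numpy as np

A = np.arange(6.0).reshape(2, 3)
# Transposing twice recovers A, mirroring the natural isomorphism V** = V:
assert np.allclose(A.T.T, A)
```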
Key Example. If M is a smooth n-manifold, and p ∈ M, the tangent space TpM has a dual vector space

    Tp*M = Hom(TpM, R),

called the cotangent space of M at p.

Elements ϕ ∈ Tp*M are called covectors.

What does a covector look like?
[Figures: a tangent vector, drawn as an arrow in the tangent plane TpM at p; a cotangent vector ϕ ∈ Tp*M, drawn as its level hyperplane ϕ⁻¹(1) ⊂ TpM.]
Answer: a hyperplane in TpM that misses the origin — namely ϕ⁻¹(1). Zero covector ↔ ∅: the “hyperplane at infinity.”
Differential of a function: If f ∈ C∞(M), then

    (df)p : TpM → T_{f(p)}R = R.

So, thought of in this way, (df)p ∈ Tp*M. Simplifying notation, we’ll write

    (df)(p) ∈ Tp*M,

or even

    df ∈ Tp*M

when p is clear from context. Thus, for v ∈ TpM, we have

    (df)(v) := vf.
Key Example. If (x^1, …, x^n) are coordinates on a neighborhood of p ∈ Mⁿ, then

    dx^1, …, dx^n ∈ Tp*M

provide a basis for Tp*M. This is exactly the dual basis of the coordinate basis for TpM:

    dx^j(∂/∂x^k) = δ^j_k = { 1 if j = k,  0 if j ≠ k }.

In particular, df is a linear combination of the dx^j:

    df = (∂f/∂x^1) dx^1 + ⋯ + (∂f/∂x^n) dx^n.
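As a sketch (the function f below is a made-up example, not from the lecture), the two descriptions of df agree numerically on M = R²: the directional derivative (df)(v) = vf matches the coordinate formula df = (∂f/∂x^j) dx^j.

```python
import numpy as np

def f(x):  # a sample smooth function on R^2
    return x[0]**2 * np.sin(x[1])

p = np.array([1.0, 0.5])
v = np.array([2.0, -1.0])   # a tangent vector at p

# (df)_p(v) = v f = d/dt f(p + t v) at t = 0, via central differences:
h = 1e-6
numeric = (f(p + h*v) - f(p - h*v)) / (2*h)

# The coordinate formula df = (∂f/∂x^1) dx^1 + (∂f/∂x^2) dx^2,
# evaluated on v, pairs the gradient components with v's components:
grad = np.array([2*p[0]*np.sin(p[1]), p[0]**2 * np.cos(p[1])])
assert np.isclose(numeric, grad @ v, atol=1e-5)
```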
Let Ip ⊂ C∞(M) be the ideal

    Ip = {f ∈ C∞(M) | f(p) = 0},

and let Ip² be the ideal generated by products f₁f₂, where f₁, f₂ ∈ Ip.

Proposition. There is a natural isomorphism

    Ip/Ip² = Tp*M.

In fact, the isomorphism is

    Ip/Ip² ∋ [f] ↦ (df)p ∈ Tp*M.

Why? Taylor expansion allows one to prove that

    Ip² = {f ∈ C∞(M) | f(p) = 0, (df)p = 0}.
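For M = R and p = 0, this can be seen concretely (a hedged numeric sketch with made-up functions): sin x lies in I₀ but represents a nonzero class since its derivative at 0 is nonzero, while sin²x, a product of two elements of I₀, has vanishing derivative there and so lies in the kernel of [f] ↦ (df)₀.

```python
import numpy as np

h = 1e-6
d = lambda f: (f(h) - f(-h)) / (2*h)  # central-difference derivative at 0

f1 = lambda x: np.sin(x)     # in I_0; (df1)_0 = 1, so [f1] is nonzero in I_0/I_0^2
f2 = lambda x: np.sin(x)**2  # a product of two elements of I_0, hence in I_0^2

assert abs(d(f1) - 1.0) < 1e-6   # derivative of sin at 0 is 1
assert abs(d(f2)) < 1e-6         # derivative of sin^2 at 0 vanishes
```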
Cotangent Bundle.

    T*M = ⨿_{p∈M} Tp*M

can be made into a rank-n vector bundle over Mⁿ.
Let M be a smooth n-manifold. A smooth rank-k vector bundle over M is
• a smooth (n + k)-manifold E;
• a smooth map ϖ : E → M; and
• a real vector space structure on every “fiber” Ep := ϖ⁻¹(p);
satisfying the local triviality condition:
• each p ∈ M has a neighborhood U such that

    ϖ⁻¹(U) ≈ U × Rᵏ,

such that ϖ⁻¹(q) ≈ {q} × Rᵏ is an isomorphism of vector spaces ∀q ∈ U, and such that the diagram

    ϖ⁻¹(U) —≈→ U × Rᵏ
        ϖ ↘    ↙
            U

commutes.

[Figure: the total space E, mapped down to M by ϖ, with the fiber over a point of M drawn as a vertical line.]
Cotangent Bundle.

    T*M = ⨿_{p∈M} Tp*M

can be made into a rank-n vector bundle over Mⁿ.

Key observation. If (x^1, …, x^n) are coordinates on an open set U ⊂ M, then

    T*U ≅ U × Rⁿ

by using dx^1, …, dx^n as a basis for Tp*M, ∀p ∈ U.
Transition functions. Can build any smooth vector bundle as follows:

1. Cover M with open sets:

    M = ⋃_α Uα.

2. Choose a system of transition functions: C∞ maps

    ταβ : Uα ∩ Uβ → GL(k, R)

with

    ταβ = (τβα)⁻¹,    ταβ τβγ τγα = I.

Here GL(k, R) = {invertible real k × k matrices}.

3. Set

    E = [ ⋃_α (Uα × Rᵏ) ] / ∼ ,

where Uβ × Rᵏ ∋ (p, v) ∼ (p, ταβ(p)v) ∈ Uα × Rᵏ, ∀p ∈ Uα ∩ Uβ.
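A quick consistency sketch (arbitrary matrices at one point, not a specific bundle): any system of the form ταβ = gα gβ⁻¹ automatically satisfies both compatibility conditions.

```python
import numpy as np

rng = np.random.default_rng(2)
# At a fixed point p, build τ_αβ = g_α g_β^{-1} from three invertible matrices;
# the shift by 3I keeps each g_α safely invertible.
g = [rng.standard_normal((2, 2)) + 3*np.eye(2) for _ in range(3)]
tau = lambda a, b: g[a] @ np.linalg.inv(g[b])

assert np.allclose(tau(0, 1), np.linalg.inv(tau(1, 0)))           # τ_αβ = (τ_βα)^{-1}
assert np.allclose(tau(0, 1) @ tau(1, 2) @ tau(2, 0), np.eye(2))  # cocycle condition
```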
Transition Functions for T*M: If (x^1, …, x^n) and (y^1, …, y^n) are two smooth local coordinate systems, then, using the Einstein summation convention,

    dy^j = (∂y^j/∂x^k) dx^k.

Thus the transformation from components in the dy basis to components in the dx basis is the transpose of the Jacobian matrix J:

    Jᵗ,   where J is the matrix with (j, k) entry ∂y^j/∂x^k.
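A numerical sketch with a made-up second chart y = (x^1 + x^2, x^1 x^2) near p = (1, 2): covector components pick up Jᵗ, vector components pick up J, and the pairing between them is unchanged.

```python
import numpy as np

# Jacobian J = [∂y^j/∂x^k] of y = (x1 + x2, x1*x2) at p = (1, 2):
p = np.array([1.0, 2.0])
J = np.array([[1.0, 1.0],      # ∂y1/∂x1, ∂y1/∂x2
              [p[1], p[0]]])   # ∂y2/∂x1 = x2, ∂y2/∂x2 = x1

a = np.array([3.0, -1.0])  # components of a covector in the dy basis
b = J.T @ a                # its dx-basis components: b_k = a_j ∂y^j/∂x^k

v_x = np.array([0.5, 2.0]) # a tangent vector in the ∂/∂x basis
v_y = J @ v_x              # the same vector in the ∂/∂y basis

# The pairing <covector, vector> is chart-independent:
assert np.isclose(a @ v_y, b @ v_x)
```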
Thus, passing from the x-coordinate basis for T*M to the y-coordinate basis for T*M is accomplished by multiplying by the inverse transpose of the corresponding Jacobian matrix:

    (Jᵗ)⁻¹,   where J is the matrix with (j, k) entry ∂y^j/∂x^k.

By contrast, for the tangent bundle TM, passing from the x-coordinate basis to the y-coordinate basis would instead just be accomplished by the Jacobian matrix J itself.
Transition Functions for T*M: Thus, if

    ταβ : Uα ∩ Uβ → GL(n, R)   (C∞)

is the system of transition functions used to build TM from a coordinate atlas for Mⁿ, and if

    τ̃αβ : Uα ∩ Uβ → GL(n, R)   (C∞)

is the system of transition functions used to build T*M, then one system is obtained from the other by taking transpose-inverses:

    τ̃αβ = [ταβᵗ]⁻¹.
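A numeric sketch of why the transpose-inverse is forced (an arbitrary invertible matrix standing in for ταβ(p)): it is exactly the rule that preserves the pairing between tangent and cotangent components.

```python
import numpy as np

rng = np.random.default_rng(3)
tau = rng.standard_normal((3, 3)) + 3*np.eye(3)  # tangent transition matrix at p
tau_tilde = np.linalg.inv(tau.T)                 # cotangent transition: (τ^t)^{-1}

v = rng.standard_normal(3)     # tangent components in chart β
phi = rng.standard_normal(3)   # cotangent components in chart β

# Transforming both to chart α leaves the pairing φ(v) unchanged:
assert np.isclose((tau_tilde @ phi) @ (tau @ v), phi @ v)
```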