Kernel Methods: Simple Idea of Data Fitting


- Given $(x_i, y_i)$, $i = 1, \dots, n$, where each $x_i$ is of dimension $d$
- Find the best linear function $w$ (hyperplane) that fits the data
- Two scenarios: $y$ real (regression); $y \in \{-1, 1\}$ (classification)
- Two cases: $n > d$ — regression, least squares; $n < d$ — ridge regression
- For a new sample $x$, $\langle x, w \rangle$ gives the best fit (regression) or the best decision (classification)

Primal and Dual

- There are two ways to formulate the problem: primal and dual
- Both provide deep insight into the problem
- The primal formulation is more traditional
- The dual formulation leads to newer techniques in SVMs and kernel methods

Regression

$$w = \arg\min_w \sum_i \Big( y_i - w_0 - \sum_j x_{ij} w_j \Big)^2 = \arg\min_w (y - Xw)^T (y - Xw)$$

Setting the derivative to zero:

$$\frac{d}{dw} (y - Xw)^T (y - Xw) = 0 \;\Rightarrow\; X^T (y - Xw) = 0 \;\Rightarrow\; X^T X w = X^T y \;\Rightarrow\; w = (X^T X)^{-1} X^T y$$

so for a new sample $x$,

$$\hat{y} = \langle x, (X^T X)^{-1} X^T y \rangle,$$

where $w = [w_0, w_1, \dots, w_d]^T$, $x = [1, x_1, \dots, x_d]^T$, $y = [y_1, y_2, \dots, y_n]^T$, and $X$ is the $n \times d$ matrix whose rows are $x_1^T, \dots, x_n^T$.

Graphical Interpretation

$$\hat{y} = Xw = Hy = X (X^T X)^{-1} X^T y$$

- $X$ is an $n$ (sample size) by $d$ (data dimension) matrix; its columns are features such as FICA, income, etc.
- $w$ combines the columns of $X$ to best approximate $y$: combine features (FICA, income, ...) into decisions (loan)
- $H$ projects $y$ onto the space spanned by the columns of $X$: simplify the decisions to fit the features

Problem #1

- $n = d$: exact solution
- $n > d$: least squares (the most likely scenario)
- $n < d$: there are not enough constraints to determine the coefficients $w$ uniquely

Problem #2

- If different attributes are highly correlated (income and FICA), the columns of $X$ become dependent
- The coefficients are then poorly determined, with high variance; e.g., a large positive coefficient on one attribute can be canceled by a similarly large negative coefficient on its correlated cousin
- A size constraint on the coefficients is helpful
- Caveat: the constraint is problem dependent

Ridge Regression

Similar to regularization:

$$w^{ridge} = \arg\min_w \sum_i \Big( y_i - w_0 - \sum_j x_{ij} w_j \Big)^2 + \lambda \sum_j w_j^2 = \arg\min_w (y - Xw)^T (y - Xw) + \lambda w^T w$$

Setting the derivative to zero:

$$-X^T (y - Xw) + \lambda w = 0 \;\Rightarrow\; X^T y = (X^T X + \lambda I) w \;\Rightarrow\; w^{ridge} = (X^T X + \lambda I)^{-1} X^T y$$

$$\hat{y} = \langle x, (X^T X + \lambda I)^{-1} X^T y \rangle$$

Ugly Math

Write the SVD $X = U \Sigma V^T$. Then

$$\hat{y} = X w^{ridge} = X (X^T X + \lambda I)^{-1} X^T y = U \Sigma V^T (V \Sigma^T U^T U \Sigma V^T + \lambda I)^{-1} V \Sigma^T U^T y = U \Sigma (\Sigma^T \Sigma + \lambda I)^{-1} \Sigma^T U^T y = \sum_i u_i \frac{\sigma_i^2}{\sigma_i^2 + \lambda} u_i^T y$$

How to Decipher This

$$\hat{y} = \sum_i u_i \frac{\sigma_i^2}{\sigma_i^2 + \lambda} u_i^T y$$

- Red ($u_i$): the best estimate $\hat{y}$ is composed of the columns of $U$ ("basis" features; recall $U$ and $X$ have the same column space)
- Green ($\sigma_i^2 / (\sigma_i^2 + \lambda)$): how these basis columns are weighted
- Blue ($u_i^T y$): the projection of the target $y$ onto these columns
- Together: representing $y$ in a body-fitted coordinate system ($u_i$)

Sidebar

Recall that the trace (sum of the diagonal entries) of a matrix equals the sum of its eigenvalues. Proof: every matrix has a Jordan form (an upper triangular matrix) in which the eigenvalues appear on the diagonal, so its trace is the sum of the eigenvalues; the Jordan form results from a similarity transform $P A P^{-1}$, which changes neither the eigenvalues nor the trace:

$$Ax = \lambda x \;\Rightarrow\; PAx = \lambda Px \;\Rightarrow\; PAP^{-1}(Px) = \lambda (Px) \;\Rightarrow\; A_J y = \lambda y$$

Physical Interpretation

- The singular values of $X$ represent the spread of the data along the different body-fitted dimensions (orthonormal columns)
- To estimate $\hat{y} = \langle x, w^{ridge} \rangle$, regularization minimizes the contribution from the less spread-out dimensions
- Less spread-out dimensions usually have much larger variance (high-dimension eigenmodes); it is harder to estimate gradients reliably along them
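As a concrete check of the two forms above, here is a minimal numpy sketch (not part of the original slides; the toy data, sizes, and variable names are invented for illustration, and the intercept is omitted) that computes the ridge fit both from the normal equations and from the SVD shrinkage formula:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 50, 5, 0.1

X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# Primal ridge solution: w = (X^T X + lambda I)^{-1} X^T y
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
y_hat = X @ w_ridge

# Same fit via the SVD shrinkage formula:
# y_hat = sum_i u_i * sigma_i^2 / (sigma_i^2 + lambda) * (u_i^T y)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
y_hat_svd = U @ np.diag(s**2 / (s**2 + lam)) @ U.T @ y

print(np.allclose(y_hat, y_hat_svd))  # True: the two forms agree
```

Directions with small singular values are shrunk most strongly, which is exactly the "less spread-out dimensions contribute less" behavior described above.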
More Details

- The trace of $X (X^T X + \lambda I)^{-1} X^T$ is called the effective degrees of freedom
- It controls how many eigenmodes are actually used, or active:

$$df(\lambda) = d \text{ when } \lambda = 0, \qquad df(\lambda) \to 0 \text{ as } \lambda \to \infty$$

- Different methods are possible: a shrinking smoother scales the contributions, while a projection smoother either uses a contribution (1) or does not (0)

Dual Formulation

The weight vector can be expressed as a sum of the $n$ training feature vectors. For least squares,

$$w = (X^T X)^{-1} X^T y = X^T X (X^T X)^{-2} X^T y = X^T \alpha = \sum_i \alpha_i x_i$$

and for ridge regression, from $X^T y = X^T X w + \lambda w$,

$$\lambda w = X^T (y - Xw) \;\Rightarrow\; w = \frac{1}{\lambda} X^T (y - Xw) = X^T \alpha = \sum_i \alpha_i x_i, \qquad \alpha = \frac{1}{\lambda} (y - Xw)$$

Dual Formulation (cont.)

$$\alpha = \frac{1}{\lambda} (y - Xw) \;\Rightarrow\; \lambda \alpha = y - X X^T \alpha \;\Rightarrow\; (X X^T + \lambda I) \alpha = y \;\Rightarrow\; \alpha = (X X^T + \lambda I)^{-1} y = (G + \lambda I)^{-1} y$$

where $G = X X^T$ is the Gram matrix. For a new sample $x$,

$$g(x) = \langle w, x \rangle = \sum_i \alpha_i \langle x_i, x \rangle = y^T (X X^T + \lambda I)^{-1} \begin{bmatrix} \langle x_1, x \rangle \\ \langle x_2, x \rangle \\ \vdots \\ \langle x_n, x \rangle \end{bmatrix}$$

In More Details

Written out, the Gram matrix gives

$$g(x) = [y_1 \; y_2 \; \cdots \; y_n] \begin{bmatrix} x_1^T x_1 + \lambda & x_1^T x_2 & \cdots & x_1^T x_n \\ x_2^T x_1 & x_2^T x_2 + \lambda & \cdots & x_2^T x_n \\ \vdots & & \ddots & \vdots \\ x_n^T x_1 & x_n^T x_2 & \cdots & x_n^T x_n + \lambda \end{bmatrix}^{-1} \begin{bmatrix} x_1^T x \\ x_2^T x \\ \vdots \\ x_n^T x \end{bmatrix}$$

Observations

- Primal: $X^T X$ is $d \times d$. Training is slow for high feature dimension; evaluation is fast, $O(d)$:
  $$g(x) = \langle x, (X^T X + \lambda I)^{-1} X^T y \rangle$$
- Dual: only inner products are involved, and $X X^T$ is $n \times n$. Training is fast for high feature dimension; evaluation is slow, $O(nd)$ — $n$ inner products, each requiring $d$ multiplications:
  $$g(x) = y^T (X X^T + \lambda I)^{-1} [\langle x_1, x \rangle, \dots, \langle x_n, x \rangle]^T$$

Graphical Interpretation

$$g(x) = \langle w, x \rangle = \sum_i \alpha_i \langle x_i, x \rangle = y^T (X X^T + \lambda I)^{-1} [\langle x_1, x \rangle, \dots, \langle x_n, x \rangle]^T$$

where the $n \times n$ matrix has entries $\langle x_i, x_j \rangle + \lambda \delta_{ij}$.

One Extreme – Perfectly Uncorrelated

If the training samples are mutually orthogonal, $\langle x_i, x_j \rangle = 0$ for $i \neq j$, the Gram matrix is diagonal and

$$g(x) = y^T (X X^T + \lambda I)^{-1} [\langle x_1, x \rangle, \dots, \langle x_n, x \rangle]^T = \sum_i y_i \frac{\langle x_i, x \rangle}{\langle x_i, x_i \rangle + \lambda}$$

This is an orthogonal projection — no generalization.

General Case

$$\hat{y}^T_{1 \times n} = y^T (X X^T + \lambda I)^{-1} X X^T, \qquad X_{n \times d} = U_{n \times d} \Sigma_{d \times d} V^T_{d \times d}$$

$$\begin{aligned}
\hat{y}^T &= y^T (U \Sigma^2 U^T + \lambda I)^{-1} U \Sigma V^T X^T \\
&= y^T U (\Sigma^2 + \lambda I)^{-1} \Sigma V^T X^T \\
&= (U^T y)^T (\Sigma^2 + \lambda I)^{-1} \Sigma V^T V \Sigma^T U^T \\
&= (U^T y)^T (\Sigma^2 + \lambda I)^{-1} \Sigma^2 U^T
\end{aligned}$$

How to interpret this? Does this still make sense?
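The primal/dual equivalence developed above can be verified numerically. The following is a small sketch with invented toy data (numpy assumed; not from the slides), showing that both formulations produce the same prediction for a new sample:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, lam = 30, 4, 0.5
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
x_new = rng.standard_normal(d)

# Primal: w = (X^T X + lambda I)^{-1} X^T y, then g(x) = <x, w>  -- O(d) per query
w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
g_primal = x_new @ w

# Dual: alpha = (X X^T + lambda I)^{-1} y, then g(x) = sum_i alpha_i <x_i, x>  -- O(nd) per query
alpha = np.linalg.solve(X @ X.T + lam * np.eye(n), y)
g_dual = alpha @ (X @ x_new)  # the inner products <x_i, x_new>

print(np.isclose(g_primal, g_dual))  # True: both formulations give the same prediction
```

The choice between the two is the trade-off listed under Observations: a $d \times d$ versus an $n \times n$ system to solve, and $O(d)$ versus $O(nd)$ evaluation cost.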
Physical Meaning of SVD

Assume $n > d$, so $X$ is of rank at most $d$:

$$X_{n \times d} = U_{n \times d} \Sigma_{d \times d} V^T_{d \times d}$$

- $U$: the body (data)-fitted axes; $U^T$ is a projection from $n$-space to $d$-space
- $\Sigma$: the importance of the dimensions
- $V$: the representation of $X$ in the $d$-space

Interpretation

$$\hat{y} = \sum_i u_i \frac{\sigma_i^2}{\sigma_i^2 + \lambda} u_i^T y, \qquad \hat{y}^T = (U^T y)^T (\Sigma^2 + \lambda I)^{-1} \Sigma^2 U^T$$

In the new, uncorrelated space there are only $d$ training vectors and $d$ decisions:

- Red: the $d \times 1$ uncorrelated decision vector $U^T y$
- Green: the weighting of the significance of the components in the uncorrelated decision vector
- Blue: the transformed (uncorrelated) training samples
- Still the same interpretation: similarity measurement in a new space by the Gram matrix — inner products of the training samples with the new sample

First Important Concept

- The computation involves only inner products: for the training samples, in computing the Gram matrix; for a new sample, in computing the regression or classification result
- Similarity is measured in terms of angle, instead of distance

Second Important Concept

- Using angle or distance for similarity measurement does not by itself make the problem easier or harder: if you cannot separate the data, it does not matter which similarity measure you use
- Instead, "massage" the data: transform it into a higher (even infinite) dimensional space, where it becomes "more likely" to be linearly separable (caveat: the choice of kernel function is important)
- We cannot perform the inner product efficiently in such a space — but with the kernel trick we do not have to

Why?

There are a lot more "basis" features now: instead of the $n \times d$ matrix $X$ with columns such as FICA and income, we work with an $n \times D$ matrix ($D \gg d$) whose columns are features $\varphi_i(\text{FICA}, \text{income}, \dots)$.

Example – XOR

With $d = 2$ features $x_1, x_2$, XOR is not linearly separable; adding the product feature $x_1 x_2$ (so $D > d$) makes it linearly separable.

Example (Doesn't quite work yet)

$$\varphi: x = (x_1, x_2) \mapsto \varphi(x) = (x_1^2, x_2^2, x_1 + x_2) \in F = \mathbb{R}^3$$

- We need to keep the nice property of requiring only inner products in the computation (the dual formulation)
- But what happens if the feature dimension is very high, or even infinitely high?
- The inner product in the high (even infinitely high) dimensional feature space can be calculated without explicit mapping, through a kernel function

In More Details

$$g(x) = \langle w, x \rangle = \sum_i \alpha_i \langle x_i, x \rangle = y^T (X X^T + \lambda I)^{-1} [\langle x_1, x \rangle, \dots, \langle x_n, x \rangle]^T$$

In feature space the same formula applies with $x_i$ replaced by $\varphi(x_i)$: the Gram matrix entries become $\varphi(x_i)^T \varphi(x_j) + \lambda \delta_{ij}$ and the evaluation vector becomes $\varphi(x_i)^T \varphi(x)$.
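To make the kernel trick concrete, the sketch below (illustrative only; the degree-2 feature map, kernel, and toy data are choices made here, not taken from the slides) checks that an explicit quadratic feature map and the polynomial kernel $k(a, b) = (a^T b)^2$ produce the same Gram matrix, and then forms the kernel ridge prediction exactly as in the dual formula above:

```python
import numpy as np

rng = np.random.default_rng(2)
n, lam = 20, 0.1
X = rng.standard_normal((n, 2))
y = rng.standard_normal(n)
x_new = rng.standard_normal(2)

# Explicit degree-2 feature map: phi(x) = (x1^2, sqrt(2) x1 x2, x2^2)
def phi(x):
    return np.array([x[0]**2, np.sqrt(2) * x[0] * x[1], x[1]**2])

# Kernel function giving the same inner products without the explicit map
def k(a, b):
    return (a @ b) ** 2

Phi = np.array([phi(x) for x in X])
G_explicit = Phi @ Phi.T
G_kernel = np.array([[k(a, b) for b in X] for a in X])
print(np.allclose(G_explicit, G_kernel))  # True: <phi(a), phi(b)> = (a.b)^2

# Kernel ridge prediction: g(x) = y^T (G + lambda I)^{-1} [k(x_i, x)]
alpha = np.linalg.solve(G_kernel + lam * np.eye(n), y)
g_new = alpha @ np.array([k(xi, x_new) for xi in X])
print(g_new)
```

The cost depends only on evaluating $k$ on pairs of samples, which is why the same recipe extends to feature spaces that are too large, or infinite, to construct explicitly.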