
From Wikipedia, the free encyclopedia

Contents

1. 2 × 2 real matrices
2. Abelian group
3. Associative algebra
4. Bijection
5. Category (mathematics)
6. Complete bipartite graph
7. Complete graph
8. Complete metric space
9. Conjugacy class
10. Connected space
11. Connectivity (graph theory)
12. Continuous function
13. Continuous graph
14. Degree (graph theory)
15. Duality (mathematics)
16. Equivalence relation
17. Exponential random graph models
18. Factor graph
19. Forbidden graph characterization
20. Fundamental group
21. Geometric graph theory
22. Glossary of graph theory
23. Graph (mathematics)
24. Graph automorphism
25. Graph factorization
26. Graph homomorphism
27. Graph isomorphism
28. Graph labeling
29. Graph minor
30. Graph theory
31. Homotopy
32. Hypergraph
33. If and only if
34. Isomorphism
35. Isomorphism class
36. Loop (graph theory)
37. Matrix (mathematics)
38. Natural number
39. Network science
40. Ordinal number
41. Power set
42. Quantum graph
43. Quiver (mathematics)
44. Random graph
45. Ring (mathematics)
46. Robertson–Seymour theorem
47. Split-quaternion
48. Statistical model
49. Topological graph theory
50. Triangle graph
51. Triangle-free graph
52. Vertex (graph theory)
53. Wagner's theorem

Chapter 1

2 × 2 real matrices

In mathematics, the set of 2×2 real matrices is denoted by M(2, R). Two matrices p and q in M(2, R) have a sum p + q given by matrix addition. The product p q is given by matrix multiplication: each entry is the dot product of a row of p with the corresponding column of q. For

q = ( a  b
      c  d ),

let

q∗ = (  d  −b
       −c   a ).

Then q q∗ = q∗ q = (ad − bc) I, where I is the 2×2 identity matrix. The quantity ad − bc is called the determinant of q. When ad − bc ≠ 0, q is an invertible matrix, and then q⁻¹ = q∗/(ad − bc). The collection of all such invertible matrices constitutes the general linear group GL(2, R). In terms of abstract algebra, M(2, R) with the associated addition and multiplication operations forms a ring, and GL(2, R) is its group of units. M(2, R) is also a four-dimensional vector space, so it is considered an associative algebra. It is ring-isomorphic to the coquaternions, but has a different profile. The 2×2 real matrices are in one-to-one correspondence with the linear mappings of the two-dimensional Cartesian coordinate system into itself by the rule

( x )      ( a  b ) ( x )   ( ax + by )
( y )  ↦   ( c  d ) ( y ) = ( cx + dy ).
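A minimal sketch of the identities above in plain Python; the helper names (mat_mul, adjugate, det) are mine, not from the article.

```python
# Verify q q* = q* q = (ad - bc) I and q^-1 = q* / (ad - bc) for a sample q.

def mat_mul(p, q):
    """Multiply 2x2 matrices given as [[a, b], [c, d]]."""
    (a, b), (c, d) = p
    (e, f), (g, h) = q
    return [[a * e + b * g, a * f + b * h],
            [c * e + d * g, c * f + d * h]]

def adjugate(q):
    """q* = [[d, -b], [-c, a]] for q = [[a, b], [c, d]]."""
    (a, b), (c, d) = q
    return [[d, -b], [-c, a]]

def det(q):
    (a, b), (c, d) = q
    return a * d - b * c

q = [[2, 3], [1, 4]]                        # ad - bc = 5, so q is invertible
qs = adjugate(q)
assert mat_mul(q, qs) == [[5, 0], [0, 5]]   # q q* = (ad - bc) I
assert mat_mul(qs, q) == [[5, 0], [0, 5]]   # q* q = (ad - bc) I

# q^-1 = q* / (ad - bc), checked up to floating-point rounding:
inv = [[entry / det(q) for entry in row] for row in qs]
prod = mat_mul(q, inv)
assert all(abs(prod[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(2) for j in range(2))
```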

1.1 Profile

Within M(2, R), the multiples by real numbers of the identity matrix I may be considered a real line. This real line is the place where all commutative subrings come together: Let Pm = {xI + ym : x, y ∈ R}, where m² ∈ { −I, 0, I }. Then Pm is a commutative subring, and M(2, R) = ∪Pm, where the union is over all m such that m² ∈ { −I, 0, I }. To identify such m, first square the generic matrix:

( aa + bc   ab + bd )
( ac + cd   bc + dd ).


When a + d = 0, this square is a diagonal matrix. Thus one assumes d = −a when looking for m to form commutative subrings. When mm = −I, then bc = −1 − aa, an equation describing a hyperbolic paraboloid in the space of parameters (a, b, c). Such an m serves as an imaginary unit. In this case Pm is isomorphic to the field of (ordinary) complex numbers. When mm = +I, m is an involutory matrix. Then bc = +1 − aa, also giving a hyperbolic paraboloid. If a matrix is an idempotent matrix, it must lie in such a Pm and in this case Pm is isomorphic to the ring of split-complex numbers. The case of a nilpotent matrix, mm = 0, arises when only one of b or c is non-zero, and the commutative subring Pm is then a copy of the dual number plane. When M(2, R) is reconfigured with a change of basis, this profile changes to the profile of split-quaternions where the sets of square roots of I and −I take a symmetrical shape as hyperboloids.
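The trace-zero reduction above can be checked numerically. This sketch (plain Python, illustrative helper name) squares a matrix with a + d = 0 and confirms the result is a multiple of the identity, namely (aa + bc) I:

```python
# Square the generic matrix [[a, b], [c, d]] entrywise, as in the text.

def square(m):
    (a, b), (c, d) = m
    return [[a * a + b * c, a * b + b * d],
            [a * c + c * d, b * c + d * d]]

m = [[3, 2], [-5, -3]]                  # trace a + d = 0
assert square(m) == [[-1, 0], [0, -1]]  # m m = -I: m acts as an imaginary unit
```

With mm = −I, this particular m generates a subring Pm isomorphic to the ordinary complex numbers, matching the first case described above.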

1.2 Equi-areal mapping

Main article: Equiareal map

First transform one differential vector into another:

( du )   ( p  r ) ( dx )   ( p dx + r dy )
( dv ) = ( q  s ) ( dy ) = ( q dx + s dy ).

Areas are measured with density dx ∧ dy , a differential 2-form which involves the use of exterior algebra. The transformed density is

du ∧ dv = 0 + ps dx ∧ dy + qr dy ∧ dx + 0 = (ps − qr) dx ∧ dy = (det g) dx ∧ dy.

Thus the equi-areal mappings are identified with SL(2, R) = {g ∈ M(2, R) : det(g) = 1}, the special linear group. Given the profile above, every such g lies in a commutative subring Pm representing a type of complex plane according to the square of m. Since g g* = I, one of the following three alternatives occurs:

• mm = −I and g is on a circle of Euclidean rotations; or

• mm = I and g is on a hyperbola of squeeze mappings; or

• mm = 0 and g is on a line of shear mappings.

Writing about planar affine mappings, Rafael Artzy made a similar trichotomy of planar linear mappings in his book Linear Geometry (1965).
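The trichotomy above can be illustrated with one matrix of each type: a Euclidean rotation, a squeeze, and a shear. Each has determinant 1 and so preserves area. This is an illustrative sketch; the helper name and sample parameters are mine.

```python
import math

# det of a 2x2 matrix with rows (p, r) and (q, s), matching the text's layout.
def det(g):
    (p, r), (q, s) = g
    return p * s - q * r

theta, a, s = 0.7, 1.3, 2.5     # sample rotation angle, squeeze and shear parameters
rotation = [[math.cos(theta), -math.sin(theta)],
            [math.sin(theta),  math.cos(theta)]]
squeeze = [[a, 0.0],
           [0.0, 1.0 / a]]
shear = [[1.0, s],
         [0.0, 1.0]]

# All three lie in SL(2, R): determinant 1, hence equi-areal.
for g in (rotation, squeeze, shear):
    assert abs(det(g) - 1.0) < 1e-12
```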

1.3 Functions of 2 × 2 real matrices

The commutative subrings of M(2, R) determine the function theory; in particular the three types of subplanes have their own algebraic structures which set the value of algebraic expressions. Consideration of the square root function and the logarithm function serves to illustrate the constraints implied by the special properties of each type of subplane Pm described in the above profile. The concept of identity component of the group of units of Pm leads to the polar decomposition of elements of the group of units:

• If mm = −I, then z = ρ exp(θm).

• If mm = 0, then z = ρ exp(s m) or z = − ρ exp(s m).

• If mm = I, then z = ρ exp(a m) or z = −ρ exp(a m) or z = m ρ exp(a m) or z = −m ρ exp(a m).

In the first case exp(θ m) = cos(θ) + m sin(θ). In the case of the dual numbers, exp(s m) = 1 + s m. Finally, in the case of split-complex numbers there are four components in the group of units. The identity component is parameterized by ρ and exp(a m) = cosh a + m sinh a. Now √(ρ exp(a m)) = √ρ exp(a m/2), regardless of the subplane Pm, but the argument of the function must be taken from the identity component of its group of units. Half the plane is lost in the case of the dual number structure; three-quarters of the plane must be excluded in the case of the split-complex number structure. Similarly, if ρ exp(a m) is an element of the identity component of the group of units of a plane associated with 2×2 matrix m, then the logarithm function results in a value log ρ + a m. The domain of the logarithm function suffers the same constraints as does the square root function described above: half or three-quarters of Pm must be excluded in the cases mm = 0 or mm = I. Further function theory can be seen in the article complex functions for the C structure, or in the article motor variable for the split-complex structure.
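The formula exp(θ m) = cos(θ) + m sin(θ) can be checked by summing the matrix exponential power series directly for an m with mm = −I. A sketch with illustrative helper names; the series truncation is chosen for small inputs, not tuned for general use.

```python
import math

def mat_mul(p, q):
    (a, b), (c, d) = p
    (e, f), (g, h) = q
    return [[a * e + b * g, a * f + b * h],
            [c * e + d * g, c * f + d * h]]

def mat_exp(m, terms=30):
    """Truncated power series sum of m^k / k!; adequate for small entries."""
    result = [[1.0, 0.0], [0.0, 1.0]]
    power = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    for k in range(1, terms):
        power = mat_mul(power, m)
        fact *= k
        result = [[result[i][j] + power[i][j] / fact for j in range(2)]
                  for i in range(2)]
    return result

theta = 0.9
m = [[3.0, 2.0], [-5.0, -3.0]]        # trace 0 and aa + bc = -1, so m m = -I
tm = [[theta * entry for entry in row] for row in m]
I = [[1.0, 0.0], [0.0, 1.0]]
expected = [[math.cos(theta) * I[i][j] + math.sin(theta) * m[i][j]
             for j in range(2)] for i in range(2)]
got = mat_exp(tm)
assert all(abs(got[i][j] - expected[i][j]) < 1e-9
           for i in range(2) for j in range(2))
```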

1.4 2 × 2 real matrices as complex numbers

Every 2×2 real matrix can be interpreted as one of three types of (generalized[1]) complex numbers: standard complex numbers, dual numbers, and split-complex numbers. Above, the algebra of 2×2 matrices is profiled as a union of complex planes, all sharing the same real axis. These planes are presented as commutative subrings Pm. We can determine to which complex plane a given 2×2 matrix belongs as follows and classify which kind of complex number that plane represents. Consider the 2×2 matrix

z = ( a  b
      c  d ).

We seek the complex plane Pm containing z. As noted above, the square of the matrix z is diagonal when a + d = 0. The matrix z must be expressed as the sum of a multiple of the identity matrix I and a matrix in the hyperplane a + d = 0. Projecting z alternately onto these subspaces of R⁴ yields

z = xI + n,   x = (a + d)/2,   n = z − xI.

Furthermore,

n² = pI, where p = (a − d)²/4 + bc.

Now z is one of three types of complex number:

• If p < 0, then z is an ordinary complex number: let q = 1/√(−p) and m = qn. Then m² = −I and z = xI + m√(−p).

• If p = 0, then z is a dual number:

z = xI + n

• If p > 0, then z is a split-complex number: let q = 1/√p and m = qn. Then m² = +I and z = xI + m√p.

Similarly, a 2×2 matrix can also be expressed in polar coordinates, with the caveat that there are two connected components of the group of units in the dual number plane, and four components in the split-complex number plane.
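The classification just described is mechanical. A sketch in plain Python; the function and label names are mine, not from the article.

```python
# Split z into x I + n, compute p = (a - d)^2/4 + bc, and read off the type.

def classify(z):
    (a, b), (c, d) = z
    x = (a + d) / 2                    # projection onto the real line of multiples of I
    n = [[a - x, b], [c, d - x]]       # n = z - x I, the trace-zero part
    p = (a - d) ** 2 / 4 + b * c       # n n = p I
    if p < 0:
        kind = "complex"               # z = x I + m sqrt(-p), with m m = -I
    elif p == 0:
        kind = "dual"                  # z = x I + n, with n nilpotent
    else:
        kind = "split-complex"         # z = x I + m sqrt(p), with m m = +I
    return x, p, kind

assert classify([[0, -1], [1, 0]])[2] == "complex"        # rotation matrix
assert classify([[1, 1], [0, 1]])[2] == "dual"            # shear matrix
assert classify([[2, 0], [0, -1]])[2] == "split-complex"
```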

1.5 References

[1] Anthony A. Harkin & Joseph B. Harkin (2004) Geometry of Generalized Complex Numbers, Mathematics Magazine 77(2):118–29

• Rafael Artzy (1965) Linear Geometry, Chapter 2-6 “Subgroups of the Plane Affine Group over the Real Field”, p. 94, Addison-Wesley.

• Helmut Karzel & Gunter Kist (1985) “Kinematic Algebras and their Geometries”, in Rings and Geometry, R. Kaya, P. Plaumann, and K. Strambach, editors, pp. 437–509, esp. 449–50, D. Reidel ISBN 90-277-2112-2.

• Svetlana Katok (1992) Fuchsian groups, pp. 113ff, University of Chicago Press ISBN 0-226-42582-7.

• Garret Sobczyk (2012). “Chapter 2: Complex and Hyperbolic Numbers”. New Foundations in Mathematics: The Geometric Concept of Number. Birkhäuser. ISBN 978-0-8176-8384-9.

Chapter 2

Abelian group

For the group described by the archaic use of this term, see Symplectic group.

In abstract algebra, an abelian group, also called a commutative group, is a group in which the result of applying the group operation to two group elements does not depend on the order in which they are written (the axiom of commutativity). Abelian groups generalize the arithmetic of addition of integers. They are named after Niels Henrik Abel.[1] The concept of an abelian group is one of the first concepts encountered in undergraduate abstract algebra, with many other basic objects, such as a module and a vector space, being its refinements. The theory of abelian groups is generally simpler than that of their non-abelian counterparts, and finite abelian groups are very well understood. On the other hand, the theory of infinite abelian groups is an area of current research.

2.1 Definition

An abelian group is a set, A, together with an operation • that combines any two elements a and b to form another element denoted a • b. The symbol • is a general placeholder for a concretely given operation. To qualify as an abelian group, the set and operation, (A, •), must satisfy five requirements known as the abelian group axioms:

Closure: For all a, b in A, the result of the operation a • b is also in A.
Associativity: For all a, b and c in A, the equation (a • b) • c = a • (b • c) holds.
Identity element: There exists an element e in A, such that for all elements a in A, the equation e • a = a • e = a holds.
Inverse element: For each a in A, there exists an element b in A such that a • b = b • a = e, where e is the identity element.
Commutativity: For all a, b in A, a • b = b • a.

More compactly, an abelian group is a commutative group. A group in which the group operation is not commutative is called a “non-abelian group” or “non-commutative group”.

2.2 Facts

2.2.1 Notation

See also: Additive group and Multiplicative group

There are two main notational conventions for abelian groups – additive and multiplicative.


Generally, the multiplicative notation is the usual notation for groups, while the additive notation is the usual notation for modules and rings. The additive notation may also be used to emphasize that a particular group is abelian, whenever both abelian and non-abelian groups are considered; some notable exceptions are near-rings and partially ordered groups, where an operation is written additively even when non-abelian.

2.2.2 Multiplication table

To verify that a finite group is abelian, a table (matrix) – known as a Cayley table – can be constructed in a similar fashion to a multiplication table. If the group is G = {g1 = e, g2, ..., gn} under the operation ⋅, the (i, j)th entry of this table contains the product gi ⋅ gj. The group is abelian if and only if this table is symmetric about the main diagonal. This is true since if the group is abelian, then gi ⋅ gj = gj ⋅ gi. This implies that the (i, j)th entry of the table equals the (j, i)th entry, thus the table is symmetric about the main diagonal.
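This symmetry check is easy to automate for small groups; a minimal sketch in Python (the helper names are illustrative, not from the article):

```python
def cayley_table(elements, op):
    """Build the Cayley table: entry (i, j) holds op(g_i, g_j)."""
    return [[op(a, b) for b in elements] for a in elements]

def is_abelian(elements, op):
    """A finite group is abelian iff its Cayley table is symmetric."""
    t = cayley_table(elements, op)
    n = len(elements)
    return all(t[i][j] == t[j][i] for i in range(n) for j in range(n))

# Z/5Z under addition modulo 5 is abelian:
print(is_abelian(range(5), lambda a, b: (a + b) % 5))  # True
```

For a non-abelian contrast, the same function applied to the six permutations of three symbols (the group S3, composed as functions) reports a non-symmetric table.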

2.3 Examples

• For the integers and the operation addition "+", denoted (Z, +), the operation + combines any two integers to form a third integer, addition is associative, zero is the additive identity, every integer n has an additive inverse, −n, and the addition operation is commutative since m + n = n + m for any two integers m and n.

• Every cyclic group G is abelian, because if x, y are in G, then x = a^m and y = a^n for some generator a, so xy = a^m a^n = a^(m+n) = a^(n+m) = a^n a^m = yx. Thus the integers, Z, form an abelian group under addition, as do the integers modulo n, Z/nZ.

• Every ring is an abelian group with respect to its addition operation. In a commutative ring the invertible elements, or units, form an abelian multiplicative group. In particular, the real numbers are an abelian group under addition, and the nonzero real numbers are an abelian group under multiplication.

• Every subgroup of an abelian group is normal, so each subgroup gives rise to a quotient group. Subgroups, quotients, and direct sums of abelian groups are again abelian.

• The concepts of abelian group and Z-module agree. More specifically, every Z-module is an abelian group with its operation of addition, and every abelian group is a module over the ring of integers Z in a unique way.

In general, matrices, even invertible matrices, do not form an abelian group under multiplication because matrix multiplication is generally not commutative. However, some groups of matrices are abelian groups under matrix multiplication – one example is the group of 2×2 rotation matrices.
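The claim about 2×2 rotation matrices can be checked numerically; a small sketch in Python using only the standard library (function names are illustrative):

```python
import math

def rot(theta):
    """2x2 rotation matrix by angle theta, as nested lists."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(a, b):
    """Product of two 2x2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(a, b, tol=1e-12):
    """Entrywise comparison up to floating-point tolerance."""
    return all(abs(a[i][j] - b[i][j]) < tol for i in range(2) for j in range(2))

# Rotations commute: R(a)R(b) = R(b)R(a) = R(a + b)
a, b = rot(0.7), rot(1.9)
print(close(matmul(a, b), matmul(b, a)))    # True
print(close(matmul(a, b), rot(0.7 + 1.9)))  # True
```

The underlying reason is the angle-addition identity: composing rotations adds their angles, and addition of angles is commutative.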

2.4 Historical remarks

Abelian groups were named after the Norwegian mathematician Niels Henrik Abel by Camille Jordan because Abel found that the commutativity of the Galois group of a polynomial implies that the roots of the polynomial can be calculated by using radicals. See Section 6.5 of Cox (2004) for more information on the historical background.

2.5 Properties

If n is a natural number and x is an element of an abelian group G written additively, then nx can be defined as x + x + ... + x (n summands) and (−n)x = −(nx). In this way, G becomes a module over the ring Z of integers. In fact, the modules over Z can be identified with the abelian groups.

Theorems about abelian groups (i.e. modules over the principal ideal domain Z) can often be generalized to theorems about modules over an arbitrary principal ideal domain. A typical example is the classification of finitely generated abelian groups, which is a specialization of the structure theorem for finitely generated modules over a principal ideal domain. In the case of finitely generated abelian groups, this theorem guarantees that an abelian group splits as a direct sum of a torsion group and a free abelian group. The former may be written as a direct sum of finitely many groups of the form Z/p^kZ for p prime, and the latter is a direct sum of finitely many copies of Z.

If f, g : G → H are two group homomorphisms between abelian groups, then their sum f + g, defined by (f + g)(x) = f(x) + g(x), is again a homomorphism. (This is not true if H is a non-abelian group.) The set Hom(G, H) of all group homomorphisms from G to H thus turns into an abelian group in its own right.

Somewhat akin to the dimension of vector spaces, every abelian group has a rank. It is defined as the cardinality of the largest set of linearly independent elements of the group. The integers and the rational numbers have rank one, as does every subgroup of the rationals.

2.6 Finite abelian groups

Cyclic groups of integers modulo n, Z/nZ, were among the first examples of groups. It turns out that an arbitrary finite abelian group is isomorphic to a direct sum of finite cyclic groups of prime power order, and these orders are uniquely determined, forming a complete system of invariants. The automorphism group of a finite abelian group can be described directly in terms of these invariants. The theory was first developed in the 1879 paper of Georg Frobenius and Ludwig Stickelberger and was later both simplified and generalized to finitely generated modules over a principal ideal domain, forming an important chapter of linear algebra.

2.6.1 Classification

The fundamental theorem of finite abelian groups states that every finite abelian group G can be expressed as the direct sum of cyclic subgroups of prime-power order. This is a special case of the fundamental theorem of finitely generated abelian groups when G has zero rank. The cyclic group Zmn of order mn is isomorphic to the direct sum of Zm and Zn if and only if m and n are coprime. It follows that any finite abelian group G is isomorphic to a direct sum of the form

Zk1 ⊕ Zk2 ⊕ ··· ⊕ Zku

in either of the following canonical ways:

• the numbers k1, ..., ku are powers of primes

• k1 divides k2, which divides k3, and so on up to ku.

For example, Z15 can be expressed as the direct sum of two cyclic subgroups of order 3 and 5: Z15 ≅ {0, 5, 10} ⊕ {0, 3, 6, 9, 12}. The same can be said for any abelian group of order 15, leading to the remarkable conclusion that all abelian groups of order 15 are isomorphic.

For another example, every abelian group of order 8 is isomorphic to either Z8 (the integers 0 to 7 under addition modulo 8), Z4 ⊕ Z2 (the odd integers 1 to 15 under multiplication modulo 16), or Z2 ⊕ Z2 ⊕ Z2. See also list of small groups for finite abelian groups of order 16 or less.
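The coprimality criterion above can be verified by brute force: Z_m ⊕ Z_n is cyclic of order mn exactly when the element (1, 1), whose order is lcm(m, n), generates the whole group. A sketch in Python (the helper name is illustrative):

```python
from math import gcd

def sum_is_cyclic(m, n):
    """True iff Z_m ⊕ Z_n is cyclic, i.e. isomorphic to Z_{mn}.
    It suffices to test whether (1, 1) generates all m*n elements."""
    x, seen = (0, 0), set()
    for _ in range(m * n):
        x = ((x[0] + 1) % m, (x[1] + 1) % n)
        seen.add(x)
    return len(seen) == m * n

# Matches the criterion gcd(m, n) == 1:
for m, n in [(3, 5), (2, 4), (4, 9)]:
    print((m, n), sum_is_cyclic(m, n), gcd(m, n) == 1)
```

The case (3, 5) reproduces the Z15 example above, while (2, 4) shows that Z2 ⊕ Z4 is not cyclic of order 8.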

2.6.2 Automorphisms

One can apply the fundamental theorem to count (and sometimes determine) the automorphisms of a given finite abelian group G. To do this, one uses the fact that if G splits as a direct sum H ⊕ K of subgroups of coprime order, then Aut(H ⊕ K) ≅ Aut(H) ⊕ Aut(K). Given this, the fundamental theorem shows that to compute the automorphism group of G it suffices to compute the automorphism groups of the Sylow p-subgroups separately (that is, all direct sums of cyclic subgroups, each with order a power of p). Fix a prime p and suppose the exponents ei of the cyclic factors of the Sylow p-subgroup are arranged in increasing order:

e1 ≤ e2 ≤ · · · ≤ en for some n > 0. One needs to find the automorphisms of

Zp^e1 ⊕ ··· ⊕ Zp^en. One special case is when n = 1, so that there is only one cyclic prime-power factor in the Sylow p-subgroup P. In this case the theory of automorphisms of a finite cyclic group can be used. Another special case is when n is arbitrary but ei = 1 for 1 ≤ i ≤ n. Here, one is considering P to be of the form

Zp ⊕ · · · ⊕ Zp, so elements of this subgroup can be viewed as comprising a vector space of dimension n over the finite field of p elements Fp. The automorphisms of this subgroup are therefore given by the invertible linear transformations, so

Aut(P) ≅ GL(n, Fp), where GL is the appropriate general linear group. This is easily shown to have order

|Aut(P)| = (p^n − 1)(p^n − p) ··· (p^n − p^(n−1)). In the most general case, where the ei and n are arbitrary, the automorphism group is more difficult to determine. It is known, however, that if one defines

dk = max{r | er = ek}

and

ck = min{r | er = ek},

then one has in particular dk ≥ k, ck ≤ k, and

|Aut(P)| = ∏_{k=1}^{n} (p^dk − p^(k−1)) · ∏_{j=1}^{n} (p^ej)^(n−dj) · ∏_{i=1}^{n} (p^(ei−1))^(n−ci+1).

One can check that this yields the orders in the previous examples as special cases (see Hillar & Rhea 2007).
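The order formula is straightforward to implement and to sanity-check against the special cases discussed above; a sketch in Python (function names are illustrative):

```python
def aut_order(p, exps):
    """Order of Aut(Z_{p^e1} ⊕ ... ⊕ Z_{p^en}) for exps = [e1 <= ... <= en],
    using the Hillar-Rhea formula."""
    n = len(exps)
    # d_k = max{r : e_r = e_k}, c_k = min{r : e_r = e_k}  (1-based indices)
    d = [max(r + 1 for r in range(n) if exps[r] == exps[k]) for k in range(n)]
    c = [min(r + 1 for r in range(n) if exps[r] == exps[k]) for k in range(n)]
    order = 1
    for k in range(n):
        order *= p ** d[k] - p ** k              # p^{d_k} - p^{k-1}
    for j in range(n):
        order *= (p ** exps[j]) ** (n - d[j])    # (p^{e_j})^{n-d_j}
    for i in range(n):
        order *= (p ** (exps[i] - 1)) ** (n - c[i] + 1)  # (p^{e_i-1})^{n-c_i+1}
    return order

def gl_order(p, n):
    """|GL(n, F_p)| = (p^n - 1)(p^n - p) ... (p^n - p^(n-1))."""
    order = 1
    for i in range(n):
        order *= p ** n - p ** i
    return order

# Elementary abelian case: Aut(Z_p ⊕ ... ⊕ Z_p) = GL(n, F_p)
print(aut_order(3, [1, 1, 1]) == gl_order(3, 3))  # True
```

The cyclic case n = 1 recovers the Euler totient: aut_order(p, [e]) equals p^(e−1)(p − 1), the order of the automorphism group of Z_{p^e}.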

2.7 Infinite abelian groups

The simplest infinite abelian group is the infinite cyclic group Z. Any finitely generated abelian group A is isomorphic to the direct sum of r copies of Z and a finite abelian group, which in turn is decomposable into a direct sum of finitely many cyclic groups of prime power orders. Even though the decomposition is not unique, the number r, called the rank of A, and the prime powers giving the orders of the finite cyclic summands are uniquely determined.

By contrast, the classification of general infinitely generated abelian groups is far from complete. Divisible groups, i.e. abelian groups A in which the equation nx = a admits a solution x ∈ A for any natural number n and element a of A, constitute one important class of infinite abelian groups that can be completely characterized. Every divisible group is isomorphic to a direct sum with summands isomorphic to Q and to Prüfer groups Qp/Zp for various prime numbers p, and the cardinality of the set of summands of each type is uniquely determined.[2] Moreover, if a divisible group A is a subgroup of an abelian group G, then A admits a direct complement: a subgroup C of G such that G = A ⊕ C. Thus divisible groups are injective modules in the category of abelian groups, and conversely, every injective abelian group is divisible (Baer’s criterion). An abelian group without non-zero divisible subgroups is called reduced.

Two important special classes of infinite abelian groups with diametrically opposite properties are torsion groups and torsion-free groups, exemplified by the groups Q/Z (periodic) and Q (torsion-free).

2.7.1 Torsion groups

An abelian group is called periodic or torsion if every element has finite order. A direct sum of finite cyclic groups is periodic. Although the converse statement is not true in general, some special cases are known. The first and second Prüfer theorems state that if A is a periodic group, and either it has bounded exponent, i.e. nA = 0 for some natural number n, or it is countable and the p-heights of the elements of A are finite for each p, then A is isomorphic to a direct sum of finite cyclic groups.[3] The cardinality of the set of direct summands isomorphic to Z/p^mZ in such a decomposition is an invariant of A. These theorems were later subsumed in the Kulikov criterion. In a different direction, Helmut Ulm found an extension of the second Prüfer theorem to countable abelian p-groups with elements of infinite height: those groups are completely classified by means of their Ulm invariants.

2.7.2 Torsion-free and mixed groups

An abelian group is called torsion-free if every non-zero element has infinite order. Several classes of torsion-free abelian groups have been studied extensively:

• Free abelian groups, i.e. arbitrary direct sums of Z

• Cotorsion and algebraically compact torsion-free groups such as the p-adic integers

• Slender groups

An abelian group that is neither periodic nor torsion-free is called mixed. If A is an abelian group and T(A) is its torsion subgroup then the factor group A/T(A) is torsion-free. However, in general the torsion subgroup is not a direct summand of A, so A is not isomorphic to T(A) ⊕ A/T(A). Thus the theory of mixed groups involves more than simply combining the results about periodic and torsion-free groups.

2.7.3 Invariants and classification

One of the most basic invariants of an infinite abelian group A is its rank: the cardinality of a maximal linearly independent subset of A. Abelian groups of rank 0 are precisely the periodic groups, while torsion-free abelian groups of rank 1 are necessarily subgroups of Q and can be completely described. More generally, a torsion-free abelian group of finite rank r is a subgroup of Q^r. On the other hand, the group of p-adic integers Zp is a torsion-free abelian group of infinite Z-rank, and the groups Zp^n for different n are non-isomorphic, so this invariant does not even fully capture properties of some familiar groups.

The classification theorems for finitely generated, divisible, countable periodic, and rank 1 torsion-free abelian groups explained above were all obtained before 1950 and form a foundation of the classification of more general infinite abelian groups. Important technical tools used in the classification of infinite abelian groups are pure and basic subgroups. The introduction of various invariants of torsion-free abelian groups has been one avenue of further progress. See the books by Irving Kaplansky, László Fuchs, Phillip Griffith, and David Arnold, as well as the proceedings of the conferences on Abelian Group Theory published in Lecture Notes in Mathematics, for more recent results.

2.7.4 Additive groups of rings

The additive group of a ring is an abelian group, but not all abelian groups are additive groups of rings (with nontrivial multiplication). Some important topics in this area of study are:

• Corner’s results on countable torsion-free groups

• Shelah’s work to remove cardinality restrictions.

2.8 Relation to other mathematical topics

Many large abelian groups possess a natural topology, which turns them into topological groups. The collection of all abelian groups, together with the homomorphisms between them, forms the category Ab, the prototype of an abelian category.

Nearly all well-known algebraic structures other than Boolean algebras are undecidable. Hence it is surprising that Tarski’s student Szmielew (1955) proved that the first-order theory of abelian groups, unlike its nonabelian counterpart, is decidable. This decidability, plus the fundamental theorem of finite abelian groups described above, highlight some of the successes in abelian group theory, but there are still many areas of current research:

• Amongst torsion-free abelian groups of finite rank, only the finitely generated case and the rank 1 case are well understood;

• There are many unsolved problems in the theory of infinite-rank torsion-free abelian groups;

• While countable torsion abelian groups are well understood through simple presentations and Ulm invariants, the case of countable mixed groups is much less mature.

• Many mild extensions of the first order theory of abelian groups are known to be undecidable.

• Finite abelian groups remain a topic of research in computational group theory.

Moreover, abelian groups of infinite order lead, quite surprisingly, to deep questions about the set theory commonly assumed to underlie all of mathematics. Take the Whitehead problem: are all Whitehead groups of infinite order also free abelian groups? In the 1970s, Saharon Shelah proved that the Whitehead problem is:

• Undecidable in ZFC (Zermelo–Fraenkel axioms), the conventional axiomatic set theory from which nearly all of present day mathematics can be derived. The Whitehead problem is also the first question in ordinary mathematics proved undecidable in ZFC;

• Undecidable even if ZFC is augmented by taking the generalized continuum hypothesis as an axiom;

• Positively answered if ZFC is augmented with the axiom of constructibility (see statements true in L).

2.9 A note on the typography

Among mathematical adjectives derived from the proper name of a mathematician, the word “abelian” is rare in that it is often spelled with a lowercase a, rather than an uppercase A, indicating how ubiquitous the concept is in modern mathematics.[4]

2.10 See also

• Abelianization

• Class field theory

• Commutator subgroup

• Dihedral group of order 6, the smallest non-Abelian group

• Elementary abelian group

• Pontryagin duality

• Pure injective module

• Pure projective module 2.11. NOTES 11

2.11 Notes

[1] Jacobson (2009), p. 41

[2] For example, Q/Z ≅ ⊕p Qp/Zp.

[3] The countability assumption in the second Prüfer theorem cannot be removed: the torsion subgroup of the direct product of the cyclic groups Z/p^mZ for all natural m is not a direct sum of cyclic groups.

[4] Abel Prize Awarded: The Mathematicians’ Nobel

2.12 References

• Cox, David (2004). Galois Theory. Wiley-Interscience. MR 2119052.

• Fuchs, László (1970). Infinite Abelian Groups. Pure and Applied Mathematics 36–I. Academic Press. MR 0255673.

• Fuchs, László (1973). Infinite Abelian Groups. Pure and Applied Mathematics. 36-II. Academic Press. MR 0349869.

• Griffith, Phillip A. (1970). Infinite Abelian group theory. Chicago Lectures in Mathematics. University of Chicago Press. ISBN 0-226-30870-7.

• Herstein, I. N. (1975). Topics in Algebra (2nd ed.). John Wiley & Sons. ISBN 0-471-02371-X.

• Hillar, Christopher; Rhea, Darren (2007). “Automorphisms of finite abelian groups”. American Mathematical Monthly 114 (10): 917–923. arXiv:math/0605185.

• Jacobson, Nathan (2009). Basic Algebra I (2nd ed.). Dover Publications. ISBN 978-0-486-47189-1.

• Szmielew, Wanda (1955). “Elementary properties of abelian groups”. Fundamenta Mathematicae 41: 203– 271.

2.13 External links

• Hazewinkel, Michiel, ed. (2001), “Abelian group”, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

Chapter 3

Associative algebra

This article is about a particular kind of algebra over a commutative ring. For other uses of the term “algebra”, see Algebra (disambiguation).

An associative algebra A is a ring (not necessarily unital) that has a compatible structure of a vector space over a certain field K or, more generally, of a module over a commutative ring R. Thus A is endowed with binary operations of addition and multiplication satisfying a number of axioms, including associativity of multiplication and distributivity, as well as compatible multiplication by elements of the field K or the ring R. For example, a ring of square matrices over a field K is an associative K-algebra. More generally, given a ring S with center C, S is an associative C-algebra.

In some areas of mathematics, associative algebras are typically assumed to have a multiplicative unit, denoted 1. To make this extra assumption clear, these associative algebras are called unital algebras. Additionally, some authors demand that all rings be unital; in this article, the word “ring” is intended to refer to potentially non-unital rings as well.

3.1 Formal definition

Let R be a fixed commutative ring. An associative R-algebra is an additive abelian group A which has the structure of both a ring and an R-module in such a way that ring multiplication is R-bilinear:

r · (xy) = (r · x)y = x(r · y) for all r ∈ R and x, y ∈ A. We say A is unital if it contains an element 1 such that

1x = x = x1 for all x ∈ A. Note that such an element 1 must be unique if it exists at all. If A itself is commutative (as a ring) then it is called a commutative R-algebra.

3.1.1 From R-modules

Starting with an R-module A, we get an associative R-algebra by equipping A with an R-bilinear mapping A × A → A such that x(yz) = (xy)z for all x, y, and z in A. This R-bilinear mapping then gives A the structure of a ring and an associative R-algebra. Every associative R-algebra arises this way.


Moreover, the algebra A built this way will be unital if and only if there exists an element 1 of A such that every element x of A satisfies 1x = x1 = x. This definition is equivalent to the statement that a unital associative R-algebra is a monoid in R-Mod (the monoidal category of R-modules).

3.1.2 From rings

Starting with a ring A, we get a unital associative R-algebra by providing a ring homomorphism η : R → A whose image lies in the center of A. The algebra A can then be thought of as an R-module by defining

r · x = η(r)x

for all r ∈ R and x ∈ A. If A is commutative then the center of A is equal to A, so that a commutative unital R-algebra can be defined simply as a homomorphism η : R → A of commutative rings.

3.2 Algebra homomorphisms

A homomorphism between two associative R-algebras is an R-linear ring homomorphism. Explicitly, ϕ : A1 → A2 is an associative algebra homomorphism if

ϕ(r · x) = r · ϕ(x)

ϕ(x + y) = ϕ(x) + ϕ(y)

ϕ(xy) = ϕ(x)ϕ(y)

For a homomorphism of unital associative R-algebras, we also demand that

ϕ(1) = 1

The class of all unital associative R-algebras together with algebra homomorphisms between them form a category, sometimes denoted R-Alg. The subcategory of commutative R-algebras can be characterized as the coslice category R/CRing where CRing is the category of commutative rings.

3.3 Examples

The most basic example is a ring itself; it is an algebra over its center or any subring lying in the center. In particular, any commutative ring is an algebra over any of its subrings. Other examples abound both from algebra and other fields of mathematics. Algebra

• Any (unital) ring A can be considered as a unital Z-algebra. The unique ring homomorphism from Z to A is determined by the fact that it must send 1 to the identity in A. Therefore rings and unital Z-algebras are equivalent concepts, in the same way that abelian groups and Z-modules are equivalent.

• Any ring of characteristic n is a (Z/nZ)-algebra in the same way.

• Given an R-module M, the endomorphism ring of M, denoted EndR(M), is an R-algebra by defining (r·φ)(x) = r·φ(x).

• Any ring of matrices with coefficients in a commutative ring R forms an R-algebra under matrix addition and multiplication. This coincides with the previous example when M is a finitely-generated, free R-module.

• The square n-by-n matrices with entries from the field K form a unital associative algebra over K. In particular, the 2 × 2 real matrices form an associative algebra useful in plane mapping.

• The complex numbers form a 2-dimensional unital associative algebra over the real numbers.

• The quaternions form a 4-dimensional unital associative algebra over the reals (but not an algebra over the complex numbers, since if complex numbers are treated as a subset of the quaternions, complex numbers and quaternions do not commute).

• The polynomials with real coefficients form a unital associative algebra over the reals.

• Every polynomial ring R[x1, ..., xn] is a commutative R-algebra. In fact, this is the free commutative R-algebra on the set {x1, ..., xn}.

• The free R-algebra on a set E is an algebra of polynomials with coefficients in R and noncommuting indeterminates taken from the set E.

• The tensor algebra of an R-module is naturally an R-algebra. The same is true for quotients such as the exterior and symmetric algebras. Categorically speaking, the functor which maps an R-module to its tensor algebra is left adjoint to the functor which sends an R-algebra to its underlying R-module (forgetting the ring structure).

• Given a commutative ring R and any ring A, the tensor product R⊗ZA can be given the structure of an R-algebra by defining r·(s⊗a) = (rs⊗a). The functor which sends A to R⊗ZA is left adjoint to the functor which sends an R-algebra to its underlying ring (forgetting the module structure).

Analysis

• Given any Banach space X, the continuous linear operators A : X → X form a unital associative algebra (using composition of operators as multiplication); this is a Banach algebra.

• Given any topological space X, the continuous real- or complex-valued functions on X form a real or complex unital associative algebra; here the functions are added and multiplied pointwise.

• An example of a non-unital associative algebra is given by the set of all functions f: R → R whose limit as x nears infinity is zero.

• The set of semimartingales defined on the filtered probability space (Ω, F, (Ft)t≥0, P) forms a ring under stochastic integration.

Geometry and combinatorics

• The Clifford algebras, which are useful in geometry and physics. • Incidence algebras of locally finite partially ordered sets are unital associative algebras considered in combinatorics.

3.4 Constructions

Subalgebras
A subalgebra of an R-algebra A is a subset of A which is both a subring and a submodule of A. That is, it must be closed under addition, ring multiplication, and scalar multiplication, and it must contain the identity element of A.

Quotient algebras
Let A be an R-algebra. Any ring-theoretic ideal I in A is automatically an R-module since r·x = (r1A)x. This gives the quotient ring A/I the structure of an R-module and, in fact, an R-algebra. It follows that any ring homomorphic image of A is also an R-algebra.

Direct products
The direct product of a family of R-algebras is the ring-theoretic direct product. This becomes an R-algebra with the obvious scalar multiplication.

Free products
One can form a free product of R-algebras in a manner similar to the free product of groups. The free product is the coproduct in the category of R-algebras.

Tensor products
The tensor product of two R-algebras is also an R-algebra in a natural way. See tensor product of algebras for more details.

3.5 Associativity and the multiplication mapping

Associativity was defined above by quantifying over all elements of A. It is possible to define associativity in a way that does not explicitly refer to elements. An algebra is defined as a vector space A with a bilinear map

M : A × A → A

(the multiplication map). An associative algebra is an algebra where the map M has the property

M ◦ (Id × M) = M ◦ (M × Id)

Here, the symbol ◦ refers to function composition, and Id : A → A is the identity map on A. To see the equivalence of the definitions, we need only understand that each side of the above equation is a function that takes three arguments. For example, the left-hand side acts as

(M ◦ (Id × M))(x, y, z) = M(x, M(y, z))

Similarly, a unital associative algebra can be defined as a vector space A endowed with a map M as above and, additionally, a linear map

η : K → A

(the unit map) which has the properties

M ◦ (Id × η) = s; M ◦ (η × Id) = t

Here, the unit map η takes an element k in K to the element k1 in A, where 1 is the unit element of A. The map t is just plain-old scalar multiplication: t : K × A → A, (k, a) ↦ ka; the map s is similar: s : A × K → A, (a, k) ↦ ka.

3.6 Coalgebras

Main article: Coalgebra

An associative unital algebra over K is given by a K-vector space A endowed with a bilinear map A×A→A having two inputs (multiplicator and multiplicand) and one output (product), as well as a morphism K→A identifying the scalar multiples of the multiplicative identity. If the bilinear map A×A→A is reinterpreted as a linear map (i.e., a morphism in the category of K-vector spaces) A⊗A→A (by the universal property of the tensor product), then we can view an associative unital algebra over K as a K-vector space A endowed with two morphisms (one of the form A⊗A→A and one of the form K→A) satisfying certain conditions which boil down to the algebra axioms. These two morphisms can be dualized using categorical duality by reversing all arrows in the commutative diagrams which describe the algebra axioms; this defines the structure of a coalgebra.

There is also an abstract notion of F-coalgebra. This is vaguely related to the notion of coalgebra discussed above.

3.7 Representations

Main article: Algebra representation

A representation of a unital algebra A is a unital algebra homomorphism ρ: A → End(V) from A to the endomorphism algebra of some vector space (or module) V. The property of ρ being a unital algebra homomorphism means that ρ preserves the multiplicative operation (that is, ρ(xy) = ρ(x)ρ(y) for all x and y in A), and that ρ sends the unity of A to the unity of End(V) (that is, to the identity endomorphism of V).

If A and B are two algebras, and ρ: A → End(V) and τ: B → End(W) are two representations, then it is easy to define a (canonical) representation A ⊗ B → End(V ⊗ W) of the tensor product algebra A ⊗ B on the vector space V ⊗ W. Note, however, that there is no natural way of defining a tensor product of two representations of a single associative algebra in such a way that the result is still a representation of that same algebra (not of its tensor product with itself), without somehow imposing additional conditions. Here, by tensor product of representations, the usual meaning is intended: the result should be a linear representation of the same algebra on the product vector space. Imposing such additional structure typically leads to the idea of a Hopf algebra or a Lie algebra, as demonstrated below.

3.7.1 Motivation for a Hopf algebra

Consider, for example, two representations σ : A → End(V) and τ : A → End(W). One might try to form a tensor product representation ρ : x ↦ σ(x) ⊗ τ(x) according to how it acts on the product vector space, so that

ρ(x)(v ⊗ w) = (σ(x)(v)) ⊗ (τ(x)(w)).

However, such a map would not be linear, since one would have

ρ(kx) = σ(kx) ⊗ τ(kx) = kσ(x) ⊗ kτ(x) = k^2 (σ(x) ⊗ τ(x)) = k^2 ρ(x)

for k ∈ K. One can rescue this attempt and restore linearity by imposing additional structure, by defining an algebra homomorphism Δ: A → A ⊗ A, and defining the tensor product representation as

ρ = (σ ⊗ τ) ◦ ∆.

Such a homomorphism Δ is called a comultiplication if it satisfies certain axioms. The resulting structure is called a bialgebra. To be consistent with the definitions of the associative algebra, the coalgebra must be co-associative, and, if the algebra is unital, then the co-algebra must be co-unital as well. A Hopf algebra is a bialgebra with an additional piece of structure (the so-called antipode), which allows one not only to define the tensor product of two representations, but also the Hom module of two representations (again, similarly to how it is done in the representation theory of groups).

3.7.2 Motivation for a Lie algebra

See also: Lie algebra representation

One can try to be more clever in defining a tensor product. Consider, for example,

x ↦ ρ(x) = σ(x) ⊗ IdW + IdV ⊗ τ(x), so that the action on the tensor product space is given by

ρ(x)(v ⊗ w) = (σ(x)v) ⊗ w + v ⊗ (τ(x)w)

This map is clearly linear in x, and so it does not have the problem of the earlier definition. However, it fails to preserve multiplication:

ρ(xy) = σ(x)σ(y) ⊗ IdW + IdV ⊗ τ(x)τ(y)

But, in general, this does not equal

ρ(x)ρ(y) = σ(x)σ(y) ⊗ IdW + σ(x) ⊗ τ(y) + σ(y) ⊗ τ(x) + IdV ⊗ τ(x)τ(y)

This shows that this definition of a tensor product is too naive. It can be used, however, to define the tensor product of two representations of a Lie algebra (rather than of an associative algebra).
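The failure of multiplicativity is easy to exhibit with concrete matrices, taking σ and τ to be the identity representation of the 2×2 real matrix algebra (an illustrative choice, not from the article); a sketch in Python with NumPy, using Kronecker products for ⊗:

```python
import numpy as np

# Elementary matrices E12 and E21 in the algebra of 2x2 real matrices
x = np.array([[0., 1.], [0., 0.]])
y = np.array([[0., 0.], [1., 0.]])
I = np.eye(2)

def rho(m):
    """The candidate map m ↦ m ⊗ Id + Id ⊗ m, realized via Kronecker products."""
    return np.kron(m, I) + np.kron(I, m)

# rho is linear in m, but it does not preserve multiplication:
print(np.allclose(rho(x @ y), rho(x) @ rho(y)))  # False

# The defect consists exactly of the cross terms σ(x)⊗τ(y) + σ(y)⊗τ(x):
defect = rho(x) @ rho(y) - rho(x @ y)
print(np.allclose(defect, np.kron(x, y) + np.kron(y, x)))  # True
```

Expanding rho(x)·rho(y) with the mixed-product property of the Kronecker product reproduces the four-term expression above, and the two middle terms are precisely what the derivation-style map fails to produce.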

3.8 See also

• Abstract algebra

• Algebraic structure

• Algebra over a field

3.9 References

• Bourbaki, N. (1989). Algebra I. Springer. ISBN 3-540-64243-9.

• James Byrnie Shaw (1907) A Synopsis of Linear Associative Algebra, link from Cornell University Historical Math Monographs.

• Ross Street (1998) Quantum Groups: an entrée to modern algebra, an overview of index-free notation.

Chapter 4

Bijection


A bijective function, f: X → Y, where set X is {1, 2, 3, 4} and set Y is {A, B, C, D}. For example, f(1) = D.

In mathematics, a bijection, bijective function or one-to-one correspondence is a function between the elements of two sets, where every element of one set is paired with exactly one element of the other set, and every element of the other set is paired with exactly one element of the first set. There are no unpaired elements. In mathematical terms, a bijective function f: X → Y is a one-to-one (injective) and onto (surjective) mapping of a set X to a set Y.

A bijection from the set X to the set Y has an inverse function from Y to X. If X and Y are finite sets, then the existence of a bijection means they have the same number of elements. For infinite sets the picture is more complicated, leading to the concept of cardinal number, a way to distinguish the various sizes of infinite sets.

A bijective function from a set to itself is also called a permutation. Bijective functions are essential to many areas of mathematics including the definitions of isomorphism, homeomorphism, diffeomorphism, permutation group, and projective map.

4.1 Definition

For more details on notation, see Function (mathematics) § Notation.

For a pairing between X and Y (where Y need not be different from X) to be a bijection, four properties must hold:

1. each element of X must be paired with at least one element of Y,

2. no element of X may be paired with more than one element of Y,

3. each element of Y must be paired with at least one element of X, and

4. no element of Y may be paired with more than one element of X.

Satisfying properties (1) and (2) means that a bijection is a function with domain X. It is more common to see properties (1) and (2) written as a single statement: Every element of X is paired with exactly one element of Y. Functions which satisfy property (3) are said to be "onto Y " and are called surjections (or surjective functions). Functions which satisfy property (4) are said to be "one-to-one functions" and are called injections (or injective functions).[1] With this terminology, a bijection is a function which is both a surjection and an injection, or, in other words, a function which is both "one-to-one" and "onto".
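For finite sets, the four properties can be checked mechanically. The following sketch (the function name and the representation of a pairing as a set of (x, y) tuples are our own choices, not from the article) tests each property directly:

```python
# A minimal sketch: test the four pairing properties for finite sets.

def is_bijection(pairs, X, Y):
    """True iff `pairs` (a set of (x, y) tuples) is a bijection between X and Y."""
    xs = [x for x, _ in pairs]
    ys = [y for _, y in pairs]
    prop1 = set(xs) == set(X)         # (1) each x paired with at least one y
    prop2 = len(xs) == len(set(xs))   # (2) no x paired with more than one y
    prop3 = set(ys) == set(Y)         # (3) each y paired with at least one x
    prop4 = len(ys) == len(set(ys))   # (4) no y paired with more than one x
    return prop1 and prop2 and prop3 and prop4

f = {(1, 'D'), (2, 'B'), (3, 'C'), (4, 'A')}   # the pairing from the figure
print(is_bijection(f, {1, 2, 3, 4}, {'A', 'B', 'C', 'D'}))      # True
print(is_bijection({(1, 'A'), (2, 'A')}, {1, 2}, {'A', 'B'}))   # False: not onto
```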

4.2 Examples

4.2.1 Batting line-up of a baseball team

Consider the batting line-up of a baseball team (or any list of all the players of any sports team). The set X will be the nine players on the team and the set Y will be the nine positions in the batting order (1st, 2nd, 3rd, etc.) The “pairing” is given by which player is in what position in this order. Property (1) is satisfied since each player is somewhere in the list. Property (2) is satisfied since no player bats in two (or more) positions in the order. Property (3) says that for each position in the order, there is some player batting in that position and property (4) states that two or more players are never batting in the same position in the list.

4.2.2 Seats and students of a classroom

In a classroom there are a certain number of seats. A bunch of students enter the room and the instructor asks them all to be seated. After a quick look around the room, the instructor declares that there is a bijection between the set of students and the set of seats, where each student is paired with the seat they are sitting in. What the instructor observed in order to reach this conclusion was that:

1. Every student was in a seat (there was no one standing),

2. No student was in more than one seat,

3. Every seat had someone sitting there (there were no empty seats), and

4. No seat had more than one student in it.

The instructor was able to conclude that there were just as many seats as there were students, without having to count either set.

4.3 More mathematical examples and some non-examples

• For any set X, the identity function 1X: X → X, 1X(x) = x, is bijective.

• The function f: R → R, f(x) = 2x + 1 is bijective, since for each y there is a unique x = (y − 1)/2 such that f(x) = y. More generally, any linear function over the reals, f: R → R, f(x) = ax + b (where a is non-zero), is a bijection: each real number y is obtained from (paired with) the real number x = (y − b)/a.

• The function f: R → (−π/2, π/2), given by f(x) = arctan(x), is bijective, since each real number x is paired with exactly one angle y in the interval (−π/2, π/2) such that tan(y) = x (that is, y = arctan(x)). If the codomain (−π/2, π/2) were enlarged to include an integer multiple of π/2, then this function would no longer be onto (surjective), since there is no real number that could be paired with that multiple of π/2 by the arctan function.

• The exponential function, g: R → R, g(x) = eˣ, is not bijective: for instance, there is no x in R such that g(x) = −1, showing that g is not onto (surjective). However, if the codomain is restricted to the positive real numbers R⁺ ≡ (0, +∞), then g becomes bijective; its inverse (see below) is the natural logarithm function ln.

• The function h: R → R⁺₀, h(x) = x², is not bijective: for instance, h(−1) = h(1) = 1, showing that h is not one-to-one (injective). However, if the domain is restricted to R⁺₀ ≡ [0, +∞), then h becomes bijective; its inverse is the positive square root function.
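The closed-form inverses in these examples can be spot-checked numerically. A small sketch (the sample points and tolerance are arbitrary choices of ours):

```python
import math
import random

f    = lambda x: 2 * x + 1       # f: R -> R
finv = lambda y: (y - 1) / 2     # its inverse, x = (y - 1)/2

# f(finv(y)) should recover y for any real y (up to floating-point rounding)
for _ in range(1000):
    y = random.uniform(-100, 100)
    assert math.isclose(f(finv(y)), y, abs_tol=1e-9)

# arctan is a bijection R -> (-pi/2, pi/2): tan undoes it on that interval
x = 3.7
assert math.isclose(math.tan(math.atan(x)), x)

# exp: R -> (0, +inf) is bijective, with inverse ln
assert math.isclose(math.log(math.exp(x)), x)
print("inverse checks passed")
```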

4.4 Inverses

A bijection f with domain X (“functionally” indicated by f: X → Y) also defines a relation starting in Y and going to X (by turning the arrows around). The process of “turning the arrows around” for an arbitrary function does not usually yield a function, but properties (3) and (4) of a bijection say that this inverse relation is a function with domain Y. Moreover, properties (1) and (2) then say that this inverse function is a surjection and an injection, that is, the inverse function exists and is also a bijection. Functions that have inverse functions are said to be invertible. A function is invertible if and only if it is a bijection. Stated in concise mathematical notation, a function f: X → Y is bijective if and only if it satisfies the condition

for every y in Y there is a unique x in X with y = f(x).

Continuing with the baseball batting line-up example, the function that is being defined takes as input the name of one of the players and outputs the position of that player in the batting order. Since this function is a bijection, it has an inverse function which takes as input a position in the batting order and outputs the player who will be batting in that position.
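For a finite bijection, "turning the arrows around" is literal: represent the function as a dictionary and swap keys with values. A sketch (the player names are invented purely for illustration):

```python
def invert(f):
    """Invert a finite bijection given as a dict; fail if f is not injective."""
    inv = {}
    for x, y in f.items():
        if y in inv:
            raise ValueError("not injective: two inputs map to %r" % (y,))
        inv[y] = x
    return inv

# Hypothetical batting line-up: player name -> position in the order
batting = {'Ichiro': 1, 'Jeter': 2, 'Ortiz': 3}
order = invert(batting)          # position -> player
print(order[2])                  # Jeter
assert invert(order) == batting  # inverting twice recovers the original
```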

4.5 Composition

The composition g ∘ f of two bijections f: X → Y and g: Y → Z is a bijection. The inverse of g ∘ f is (g ∘ f)⁻¹ = f⁻¹ ∘ g⁻¹.

Conversely, if the composition g ◦ f of two functions is bijective, we can only say that f is injective and g is surjective.
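The identity (g ∘ f)⁻¹ = f⁻¹ ∘ g⁻¹ can be verified directly on small finite bijections. In this sketch, f is the pairing from the figure above, while g and its codomain are made up for illustration:

```python
f = {1: 'D', 2: 'B', 3: 'C', 4: 'A'}           # bijection X -> Y
g = {'A': 'p', 'B': 'q', 'C': 'r', 'D': 's'}   # an assumed bijection Y -> Z

gof = {x: g[f[x]] for x in f}                  # the composition g ∘ f : X -> Z

inv = lambda d: {v: k for k, v in d.items()}   # invert a finite bijection

# f⁻¹ ∘ g⁻¹, computed pointwise on Z
finv_o_ginv = {z: inv(f)[inv(g)[z]] for z in gof.values()}

print(inv(gof) == finv_o_ginv)                 # True: (g ∘ f)⁻¹ = f⁻¹ ∘ g⁻¹
```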

4.6 Bijections and cardinality

If X and Y are finite sets, then there exists a bijection between the two sets X and Y if and only if X and Y have the same number of elements. Indeed, in axiomatic set theory, this is taken as the definition of “same number of elements” (equinumerosity), and generalising this definition to infinite sets leads to the concept of cardinal number, a way to distinguish the various sizes of infinite sets.


A bijection composed of an injection (left) and a surjection (right).

4.7 Properties

• A function f: R → R is bijective if and only if its graph meets every horizontal and vertical line exactly once.

• If X is a set, then the bijective functions from X to itself, together with the operation of functional composition (∘), form a group, the symmetric group of X, which is denoted variously by S(X), SX, or X! (X factorial).

• Bijections preserve cardinalities of sets: for a subset A of the domain with cardinality |A| and subset B of the codomain with cardinality |B|, one has the following equalities:

|f(A)| = |A| and |f⁻¹(B)| = |B|.

• If X and Y are finite sets with the same cardinality, and f: X → Y, then the following are equivalent:

1. f is a bijection.
2. f is a surjection.
3. f is an injection.

• For a finite set S, there is a bijection between the set of possible total orderings of the elements and the set of bijections from S to S. That is to say, the number of permutations of elements of S is the same as the number of total orderings of that set—namely, n!.
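The last property can be illustrated by brute force on a small set: for a four-element set, both the total orderings and the bijections from the set to itself number n! = 24.

```python
from itertools import permutations
from math import factorial

S = ['w', 'x', 'y', 'z']
orderings = set(permutations(S))                 # total orderings of S
bijections = set(permutations(range(len(S))))    # bijections S -> S, as index maps

print(len(orderings), len(bijections), factorial(len(S)))  # 24 24 24
```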

4.8 Bijections and category theory

Bijections are precisely the isomorphisms in the category Set of sets and set functions. However, the bijections are not always the isomorphisms for more complex categories. For example, in the category Grp of groups, the morphisms must be homomorphisms since they must preserve the group structure, so the isomorphisms are group isomorphisms which are bijective homomorphisms.

4.9 Generalization to partial functions

The notion of one-one correspondence generalizes to partial functions, where they are called partial bijections, although partial bijections are only required to be injective. The reason for this relaxation is that a (proper) partial function is already undefined for a portion of its domain; thus there is no compelling reason to constrain its inverse to be a total function, i.e. defined everywhere on its domain. The set of all partial bijections on a given base set is called the symmetric inverse semigroup.[2] Another way of defining the same notion is to say that a partial bijection from A to B is any relation R (which turns out to be a partial function) with the property that R is the graph of a bijection f:A′→B′, where A′ is a subset of A and likewise B′⊆B.[3] When the partial bijection is on the same set, it is sometimes called a one-to-one partial transformation.[4] An example is the Möbius transformation simply defined on the complex plane, rather than its completion to the extended complex plane.[5]

4.10 Contrast with


• Multivalued function

4.11 See also

• Injective function

• Surjective function

• Bijection, injection and surjection

• Symmetric group

• Bijective numeration

• Bijective proof

• Cardinality

• Category theory

• Ax–Grothendieck theorem

4.12 Notes

[1] There are names associated with properties (1) and (2) as well. A relation which satisfies property (1) is called a total relation and a relation satisfying (2) is called a single-valued relation.

[2] Christopher Hollings (16 July 2014). Mathematics across the Iron Curtain: A History of the Algebraic Theory of Semigroups. American Mathematical Society. p. 251. ISBN 978-1-4704-1493-1.

[3] Francis Borceux (1994). Handbook of Categorical Algebra: Volume 2, Categories and Structures. Cambridge University Press. p. 289. ISBN 978-0-521-44179-7.

[4] Pierre A. Grillet (1995). Semigroups: An Introduction to the Structure Theory. CRC Press. p. 228. ISBN 978-0-8247-9662-4.

[5] John Meakin (2007). “Groups and semigroups: connections and contrasts”. In C.M. Campbell, M.R. Quick, E.F. Robertson, G.C. Smith. Groups St Andrews 2005 Volume 2. Cambridge University Press. p. 367. ISBN 978-0-521-69470-4. preprint citing Lawson, M. V. (1998). “The Möbius Inverse Monoid”. Journal of Algebra 200 (2): 428. doi:10.1006/jabr.1997.7242.

4.13 References

This topic is a basic concept in set theory and can be found in any text which includes an introduction to set theory. Texts that deal with an introduction to writing proofs will include a section on set theory, so the topic may be found in any of these:

• Wolf (1998). Proof, Logic and Conjecture: A Mathematician’s Toolbox. Freeman.

• Sundstrom (2003). Mathematical Reasoning: Writing and Proof. Prentice-Hall.

• Smith; Eggen; St. Andre (2006). A Transition to Advanced Mathematics (6th Ed.). Thomson (Brooks/Cole).

• Schumacher (1996). Chapter Zero: Fundamental Notions of Abstract Mathematics. Addison-Wesley.

• O'Leary (2003). The Structure of Proof: With Logic and Set Theory. Prentice-Hall.

• Morash. Bridge to Abstract Mathematics. Random House.

• Maddox (2002). Mathematical Thinking and Writing. Harcourt/Academic Press.

• Lay (2001). Analysis with an Introduction to Proof. Prentice Hall.

• Gilbert; Vanstone (2005). An Introduction to Mathematical Thinking. Pearson Prentice-Hall.

• Fletcher; Patty. Foundations of Higher Mathematics. PWS-Kent.

• Iglewicz; Stoyle. An Introduction to Mathematical Reasoning. MacMillan.

• Devlin, Keith (2004). Sets, Functions, and Logic: An Introduction to Abstract Mathematics. Chapman & Hall/CRC Press.

• D'Angelo; West (2000). Mathematical Thinking: Problem Solving and Proofs. Prentice Hall.

• Cupillari. The Nuts and Bolts of Proofs. Wadsworth.

• Bond. Introduction to Abstract Mathematics. Brooks/Cole.

• Barnier; Feldman (2000). Introduction to Advanced Mathematics. Prentice Hall.

• Ash. A Primer of Abstract Mathematics. MAA.

4.14 External links

• Hazewinkel, Michiel, ed. (2001), “Bijection”, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

• Weisstein, Eric W., “Bijection”, MathWorld.

• Earliest Uses of Some of the Words of Mathematics: entry on Injection, Surjection and Bijection has the history of Injection and related terms.

Category (mathematics)

This is a category with a collection of objects A, B, C and collection of morphisms denoted f, g, g ∘ f, and the loops are the identity arrows. This category is typically denoted by a boldface 3.

In mathematics, a category is an algebraic structure that comprises “objects” that are linked by “arrows”. A category has two basic properties: the ability to compose the arrows associatively and the existence of an identity arrow for each object. A simple example is the category of sets, whose objects are sets and whose arrows are functions. On the other hand, any monoid can be understood as a special sort of category, and so can any preorder.

In general, the objects and arrows may be abstract entities of any kind, and the notion of category provides a fundamental and abstract way to describe mathematical entities and their relationships. This is the central idea of category theory, a branch of mathematics which seeks to generalize all of mathematics in terms of objects and arrows, independent of what the objects and arrows represent. Virtually every branch of modern mathematics can be described in terms of categories, and doing so often reveals deep insights and similarities between seemingly different areas of mathematics. For more extensive motivational background and historical notes, see category theory and the list of category theory topics.

Two categories are the same if they have the same collection of objects, the same collection of arrows, and the same associative method of composing any pair of arrows. Two categories may also be considered "equivalent" for purposes of category theory, even if they are not precisely the same.

Well-known categories are denoted by a short capitalized word or abbreviation in bold or italics: examples include Set, the category of sets and set functions; Ring, the category of rings and ring homomorphisms; and Top, the category of topological spaces and continuous maps. All of the preceding categories have the identity map as identity arrow and composition as the associative operation on arrows.

The classic and still much used text on category theory is Categories for the Working Mathematician by Saunders Mac Lane. Other references are given in the References below. The basic definitions in this article are contained within the first few chapters of any of these books.

5.1 Definition

There are many equivalent definitions of a category.[1] One commonly used definition is as follows. A category C consists of

• a class ob(C) of objects

• a class hom(C) of morphisms, or arrows, or maps, between the objects. Each morphism f has a unique source object a and target object b where a and b are in ob(C). We write f: a → b, and we say "f is a morphism from a to b". We write hom(a, b) (or homC(a, b) when there may be confusion about to which category hom(a, b) refers) to denote the hom-class of all morphisms from a to b. (Some authors write Mor(a, b) or simply C(a, b) instead.)

• for every three objects a, b and c, a binary operation hom(a, b) × hom(b, c) → hom(a, c) called composition of morphisms; the composition of f : a → b and g : b → c is written as g ∘ f or gf. (Some authors use “diagrammatic order”, writing f;g or fg.)

such that the following axioms hold:

• (associativity) if f : a → b, g : b → c and h : c → d then h ∘ (g ∘ f) = (h ∘ g) ∘ f, and

• (identity) for every object x, there exists a morphism 1x : x → x (some authors write idx) called the identity morphism for x, such that for every morphism f : a → x and every morphism g : x → b, we have 1x ∘ f = f and g ∘ 1x = g.

From these axioms, one can prove that there is exactly one identity morphism for every object. Some authors use a slight variation of the definition in which each object is identified with the corresponding identity morphism.
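The axioms can be verified exhaustively for a tiny category. The sketch below (helper names are our own) models the preordered set 1 ≤ 2 ≤ 3 as a category with one arrow (a, b) for each pair a ≤ b, and checks both the identity and associativity axioms:

```python
from itertools import product

# The chain 1 <= 2 <= 3 as a category: one morphism (a, b) whenever a <= b.
objects = [1, 2, 3]
morphisms = [(a, b) for a in objects for b in objects if a <= b]

def src(f): return f[0]
def tgt(f): return f[1]
def identity(x): return (x, x)

def compose(g, f):                 # g ∘ f, defined when tgt(f) == src(g)
    assert tgt(f) == src(g)
    return (src(f), tgt(g))

# identity axiom: 1_x ∘ f = f and g ∘ 1_x = g
for f in morphisms:
    assert compose(f, identity(src(f))) == f
    assert compose(identity(tgt(f)), f) == f

# associativity: h ∘ (g ∘ f) = (h ∘ g) ∘ f whenever the composites are defined
for f, g, h in product(morphisms, repeat=3):
    if tgt(f) == src(g) and tgt(g) == src(h):
        assert compose(h, compose(g, f)) == compose(compose(h, g), f)

print("category axioms verified for the chain 1 <= 2 <= 3")
```

The same template works for any finite preorder, which is the preorder-as-category example discussed under Examples.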

5.2 History

Category theory first appeared in a paper entitled “General Theory of Natural Equivalences”, written by Samuel Eilenberg and Saunders Mac Lane in 1945.

5.3 Small and large categories

A category C is called small if both ob(C) and hom(C) are actually sets and not proper classes, and large otherwise. A locally small category is a category such that for all objects a and b, the hom-class hom(a, b) is a set, called a homset. Many important categories in mathematics (such as the category of sets), although not small, are at least locally small.

5.4 Examples

The class of all sets together with all functions between sets, where composition is the usual function composition, forms a large category, Set. It is the most basic and the most commonly used category in mathematics. The category Rel consists of all sets, with binary relations as morphisms. Abstracting from relations instead of functions yields allegories, a special class of categories.

Any class can be viewed as a category whose only morphisms are the identity morphisms. Such categories are called discrete. For any given set I, the discrete category on I is the small category that has the elements of I as objects and only the identity morphisms as morphisms. Discrete categories are the simplest kind of category.

Any preordered set (P, ≤) forms a small category, where the objects are the members of P and the morphisms are arrows pointing from x to y when x ≤ y. Between any two objects there can be at most one morphism. The existence of identity morphisms and the composability of the morphisms are guaranteed by the reflexivity and the transitivity of the preorder. By the same argument, any partially ordered set and any equivalence relation can be seen as a small category. Any ordinal number can be seen as a category when viewed as an ordered set.

Any monoid (any algebraic structure with a single associative binary operation and an identity element) forms a small category with a single object x. (Here, x is any fixed set.) The morphisms from x to x are precisely the elements of the monoid, the identity morphism of x is the identity of the monoid, and the categorical composition of morphisms is given by the monoid operation. Several definitions and theorems about monoids may be generalized for categories.

Any group can be seen as a category with a single object in which every morphism is invertible (for every morphism f there is a morphism g that is both left and right inverse to f under composition) by viewing the group as acting on itself by left multiplication. A morphism which is invertible in this sense is called an isomorphism. A groupoid is a category in which every morphism is an isomorphism. Groupoids are generalizations of groups, group actions and equivalence relations.

Any directed graph generates a small category: the objects are the vertices of the graph, and the morphisms are the paths in the graph (augmented with loops as needed) where composition of morphisms is concatenation of paths. Such a category is called the free category generated by the graph.

The class of all preordered sets with monotonic functions as morphisms forms a category, Ord. It is a concrete category, i.e. a category obtained by adding some type of structure onto Set, and requiring that morphisms are functions that respect this added structure.

The class of all groups with group homomorphisms as morphisms and function composition as the composition operation forms a large category, Grp. Like Ord, Grp is a concrete category. The category Ab, consisting of all abelian groups and their group homomorphisms, is a full subcategory of Grp, and the prototype of an abelian category. Other examples of concrete categories are given by the following table. Fiber bundles with bundle maps between them form a concrete category.

The category Cat consists of all small categories, with functors between them as morphisms.

5.5 Construction of new categories

5.5.1 Dual category

Any category C can itself be considered as a new category in a different way: the objects are the same as those in the original category but the arrows are those of the original category reversed. This is called the dual or opposite category and is denoted Cᵒᵖ.

A directed graph.

5.5.2 Product categories

If C and D are categories, one can form the product category C × D: the objects are pairs consisting of one object from C and one from D, and the morphisms are also pairs, consisting of one morphism in C and one in D. Such pairs can be composed componentwise.

5.6 Types of morphisms

A morphism f : a → b is called

• a monomorphism (or monic) if fg1 = fg2 implies g1 = g2 for all morphisms g1, g2 : x → a.

• an epimorphism (or epic) if g1f = g2f implies g1 = g2 for all morphisms g1, g2 : b → x.

• a bimorphism if it is both a monomorphism and an epimorphism.

• a retraction if it has a right inverse, i.e. if there exists a morphism g : b → a with fg = 1b.

• a section if it has a left inverse, i.e. if there exists a morphism g : b → a with gf = 1a.

• an isomorphism if it has an inverse, i.e. if there exists a morphism g : b → a with fg = 1b and gf = 1a.

• an endomorphism if a = b. The class of endomorphisms of a is denoted end(a).

• an automorphism if f is both an endomorphism and an isomorphism. The class of automorphisms of a is denoted aut(a).

Every retraction is an epimorphism. Every section is a monomorphism. The following three statements are equivalent:

• f is a monomorphism and a retraction;

• f is an epimorphism and a section;

• f is an isomorphism.

Relations among morphisms (such as fg = h) can most conveniently be represented with commutative diagrams, where the objects are represented as points and the morphisms as arrows.
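In the category Set, the monomorphisms are exactly the injective functions, and for tiny finite sets this can be checked by brute force against every pair of test morphisms. A sketch (the helper names and the particular test sets are our own choices):

```python
from itertools import product

def functions(dom, cod):
    """All functions dom -> cod, each represented as a dict."""
    dom = list(dom)
    return [dict(zip(dom, images)) for images in product(cod, repeat=len(dom))]

def is_mono(f, A, T):
    """Test f : A -> B for monic-ness against every pair g1, g2 : T -> A."""
    for g1, g2 in product(functions(T, A), repeat=2):
        if g1 != g2 and all(f[g1[t]] == f[g2[t]] for t in T):
            return False               # f ∘ g1 = f ∘ g2 although g1 ≠ g2
    return True

A, T = [1, 2], [0, 1]
print(is_mono({1: 'a', 2: 'b'}, A, T))   # True:  injective, hence monic
print(is_mono({1: 'a', 2: 'a'}, A, T))   # False: not injective
```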

5.7 Types of categories

• In many categories, e.g. Ab or VectK, the hom-sets hom(a, b) are not just sets but actually abelian groups, and the composition of morphisms is compatible with these group structures; i.e. is bilinear. Such a category is called preadditive. If, furthermore, the category has all finite products and coproducts, it is called an additive category. If all morphisms have a kernel and a cokernel, and all epimorphisms are cokernels and all monomorphisms are kernels, then we speak of an abelian category. A typical example of an abelian category is the category of abelian groups.

• A category is called complete if all limits exist in it. The categories of sets, abelian groups and topological spaces are complete.

• A category is called cartesian closed if it has finite direct products and a morphism defined on a finite product can always be represented by a morphism defined on just one of the factors. Examples include Set and CPO, the category of complete partial orders with Scott-continuous functions.

• A topos is a certain type of cartesian closed category in which all of mathematics can be formulated (just like classically all of mathematics is formulated in the category of sets). A topos can also be used to represent a logical theory.

5.8 See also

• Enriched category

• Higher category theory

• Quantaloid

• Table of mathematical symbols

5.9 Notes

[1] Barr & Wells, Chapter 1.

5.10 References

• Adámek, Jiří; Herrlich, Horst; Strecker, George E. (1990), Abstract and Concrete Categories (PDF), John Wiley & Sons, ISBN 0-471-60922-6 (now free on-line edition, GNU FDL).

• Asperti, Andrea; Longo, Giuseppe (1991), Categories, Types and Structures (PDF), MIT Press, ISBN 0-262-01125-5.

• Awodey, Steve (2006), Category theory, Oxford logic guides 49, Oxford University Press, ISBN 978-0-19-856861-2.

• Barr, Michael; Wells, Charles (2005), Toposes, Triples and Theories, Reprints in Theory and Applications of Categories 12 (revised ed.), MR 2178101.

• Borceux, Francis (1994), “Handbook of Categorical Algebra”, Encyclopedia of Mathematics and its Applications, 50–52, Cambridge: Cambridge University Press, ISBN 0-521-06119-9.

• Hazewinkel, Michiel, ed. (2001), “Category”, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

• Herrlich, Horst; Strecker, George E. (2007), Category Theory, Heldermann Verlag.

• Jacobson, Nathan (2009), Basic algebra (2nd ed.), Dover, ISBN 978-0-486-47187-7.

• Lawvere, William; Schanuel, Steve (1997), Conceptual Mathematics: A First Introduction to Categories, Cambridge: Cambridge University Press, ISBN 0-521-47249-0.

• Mac Lane, Saunders (1998), Categories for the Working Mathematician, Graduate Texts in Mathematics 5 (2nd ed.), Springer-Verlag, ISBN 0-387-98403-8.

• Marquis, Jean-Pierre (2006), “Category Theory”, in Zalta, Edward N., Stanford Encyclopedia of Philosophy.

• Sica, Giandomenico (2006), What is category theory?, Advanced studies in mathematics and logic 3, Polimetrica, ISBN 978-88-7699-031-1.

• category in nLab

Complete bipartite graph

In the mathematical field of graph theory, a complete bipartite graph or biclique is a special kind of bipartite graph where every vertex of the first set is connected to every vertex of the second set.[1][2]

Graph theory itself is typically dated as beginning with Leonhard Euler's 1736 work on the Seven Bridges of Königsberg. However, drawings of complete bipartite graphs were already printed as early as 1669, in connection with an edition of the works of Ramon Llull edited by Athanasius Kircher.[3][4] Llull himself had made similar drawings of complete graphs three centuries earlier.[3]

6.1 Definition

A complete bipartite graph is a graph whose vertices can be partitioned into two subsets V1 and V2 such that no edge has both endpoints in the same subset, and every possible edge that could connect vertices in different subsets is part of the graph. That is, it is a bipartite graph (V1, V2, E) such that for every two vertices v1 ∈ V1 and v2 ∈ V2, v1v2 is an edge in E.[1][2] A complete bipartite graph with partitions of size |V1| = m and |V2| = n is denoted Km,n; every two graphs with the same notation are isomorphic.
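The definition translates directly into a construction of the edge set: connect every vertex of one part to every vertex of the other, giving m·n edges. A sketch (the vertex labelling is an arbitrary choice of ours):

```python
def complete_bipartite(m, n):
    """Edge list of K_{m,n}: part V1 = {0..m-1}, part V2 = {m..m+n-1}."""
    return [(u, v) for u in range(m) for v in range(m, m + n)]

E = complete_bipartite(3, 3)   # the utility graph K_{3,3}
print(len(E))                  # 9: every v1 in V1 meets every v2 in V2
```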

6.2 Examples

The star graphs S3, S4, S5 and S6.

• For any k, K₁,k is called a star.[2] All complete bipartite graphs which are trees are stars.

• The graph K₁,₃ is called a claw, and is used to define the claw-free graphs.[5]

• The graph K₃,₃ is called the utility graph. This usage comes from a standard mathematical puzzle in which three utilities must each be connected to three buildings; it is impossible to solve without crossings due to the nonplanarity of K₃,₃.[6]


The utility graph K3,3

6.3 Properties

• Given a bipartite graph, testing whether it contains a complete bipartite subgraph Ki,i for a parameter i is an NP-complete problem.[7]

• A planar graph cannot contain K₃,₃ as a minor; an outerplanar graph cannot contain K₃,₂ as a minor. (These are not sufficient conditions for planarity and outerplanarity, but they are necessary.) Conversely, every nonplanar graph contains either K₃,₃ or the complete graph K5 as a minor; this is Wagner’s theorem.[8]

• Every complete bipartite graph Kn,n is a Moore graph and a (n,4)-cage.[9]

• The complete bipartite graphs Kn,n and Kn,n₊₁ have the maximum possible number of edges among all triangle-free graphs with the same number of vertices; this is Mantel’s theorem. Mantel’s result was generalized to k-partite graphs and graphs that avoid larger cliques as subgraphs in Turán’s theorem, and these two complete bipartite graphs are examples of Turán graphs, the extremal graphs for this more general problem.[10]

• The complete bipartite graph Km,n has a vertex covering number of min{m,n} and an edge covering number of max{m,n}.

• The complete bipartite graph Km,n has a maximum independent set of size max{m,n}.

• The adjacency matrix of a complete bipartite graph Km,n has eigenvalues √(nm), −√(nm) and 0, with multiplicity 1, 1 and n+m−2 respectively.[11]

• The Laplacian matrix of a complete bipartite graph Km,n has eigenvalues n+m, n, m, and 0, with multiplicity 1, m−1, n−1 and 1 respectively.

• A complete bipartite graph Km,n has m^(n−1) n^(m−1) spanning trees.[12]

• A complete bipartite graph Km,n has a maximum matching of size min{m,n}.

• A complete bipartite graph Kn,n has a proper n-edge-coloring corresponding to a Latin square.[13]
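The spanning-tree count m^(n−1) n^(m−1) can be confirmed by brute force for a small case such as K₂,₃, where the formula gives 2² · 3¹ = 12. A sketch (helper names are our own; enumeration over edge subsets is only viable for tiny graphs):

```python
from itertools import combinations

def spanning_tree_count(vertices, edges):
    """Count spanning trees by enumerating (n-1)-edge subsets (tiny graphs only)."""
    n = len(vertices)
    count = 0
    for T in combinations(edges, n - 1):
        # union-find: T is a spanning tree iff adding its edges never closes a cycle
        parent = {v: v for v in vertices}
        def find(v):
            while parent[v] != v:
                v = parent[v]
            return v
        acyclic = True
        for u, v in T:
            ru, rv = find(u), find(v)
            if ru == rv:              # this edge would close a cycle
                acyclic = False
                break
            parent[ru] = rv
        count += acyclic
    return count

m, n = 2, 3
verts = list(range(m + n))
edges = [(u, v) for u in range(m) for v in range(m, m + n)]   # K_{2,3}
print(spanning_tree_count(verts, edges))   # 12
print(m ** (n - 1) * n ** (m - 1))         # 12, matching the formula
```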

6.4 See also

• Crown graph, a graph formed by removing a perfect matching from a complete bipartite graph

• Complete multipartite graph, a generalization of complete bipartite graphs to more than two sets of vertices

6.5 References

[1] Bondy, John Adrian; Murty, U. S. R. (1976), Graph Theory with Applications, North-Holland, p. 5, ISBN 0-444-19451-7.

[2] Diestel, Reinhard (2005), Graph Theory (3rd ed.), Springer, ISBN 3-540-26182-6. Electronic edition, page 17.

[3] Knuth, Donald E. (2013), “Two thousand years of combinatorics”, in Wilson, Robin; Watkins, John J., Combinatorics: Ancient and Modern, Oxford University Press, pp. 7–37.

[4] Read, Ronald C.; Wilson, Robin J. (1998), An Atlas of Graphs, Clarendon Press, p. ii, ISBN 9780198532897.

[5] Lovász, László; Plummer, Michael D. (2009), Matching theory, AMS Chelsea Publishing, Providence, RI, p. 109, ISBN 978-0-8218-4759-6, MR 2536865. Corrected reprint of the 1986 original.

[6] Gries, David; Schneider, Fred B. (1993), A Logical Approach to Discrete Math, Springer, p. 437, ISBN 9780387941158.

[7] Garey, Michael R.; Johnson, David S. (1979), "[GT24] Balanced complete bipartite subgraph”, Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman, p. 196, ISBN 0-7167-1045-5.

[8] Diestel, elect. ed. p. 105.

[9] Biggs, Norman (1993), Algebraic Graph Theory, Cambridge University Press, p. 181, ISBN 9780521458979.

[10] Bollobás, Béla (1998), Modern Graph Theory, Graduate Texts in Mathematics 184, Springer, p. 104, ISBN 9780387984889.

[11] Bollobás (1998), p. 266.

[12] Jungnickel, Dieter (2012), Graphs, Networks and Algorithms, Algorithms and Computation in Mathematics 5, Springer, p. 557, ISBN 9783642322785.

[13] Jensen, Tommy R.; Toft, Bjarne (2011), Graph Coloring Problems, Wiley Series in Discrete Mathematics and Optimization 39, John Wiley & Sons, p. 16, ISBN 9781118030745.

Complete graph

In the mathematical field of graph theory, a complete graph is a simple undirected graph in which every pair of distinct vertices is connected by a unique edge. A complete digraph is a directed graph in which every pair of distinct vertices is connected by a pair of unique edges (one in each direction).

Graph theory itself is typically dated as beginning with Leonhard Euler's 1736 work on the Seven Bridges of Königsberg. However, drawings of complete graphs, with their vertices placed on the points of a regular polygon, appeared already in the 13th century, in the work of Ramon Llull.[1] Such a drawing is sometimes referred to as a mystic rose.[2]

7.1 Properties

The complete graph on n vertices is denoted by Kn. Some sources claim that the letter K in this notation stands for the German word komplett,[3] but the German name for a complete graph, vollständiger Graph, does not contain the letter K, and other sources state that the notation honors the contributions of Kazimierz Kuratowski to graph theory.[4]

Kn has n(n − 1)/2 edges (a triangular number), and is a regular graph of degree n − 1. All complete graphs are their own maximal cliques. They are maximally connected as the only vertex cut which disconnects the graph is the complete set of vertices. The complement graph of a complete graph is an empty graph.

If the edges of a complete graph are each given an orientation, the resulting directed graph is called a tournament.

The number of matchings of the complete graphs are given by the telephone numbers

1, 1, 2, 4, 10, 26, 76, 232, 764, 2620, 9496, ... (sequence A000085 in OEIS).

These numbers give the largest possible value of the Hosoya index for an n-vertex graph.[5] The number of perfect matchings of the complete graph Kn (with n even) is given by the double factorial (n − 1)!!.[6] The crossing numbers up to K27 are known, with K28 requiring either 7233 or 7234 crossings. Further values are collected by the Rectilinear Crossing Number project.[7] Crossing numbers for K5 through K18 are

1, 3, 9, 19, 36, 62, 102, 153, 229, 324, 447, 603, 798, 1029, ... (sequence A014540 in OEIS).
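The telephone numbers satisfy the recurrence T(n) = T(n − 1) + (n − 1)·T(n − 2): a matching either leaves vertex n unmatched or pairs it with one of the other n − 1 vertices. A sketch checking this against the listed values, together with the double-factorial count of perfect matchings:

```python
def telephone(n):
    """T(n), the matchings of K_n, via T(n) = T(n-1) + (n-1)*T(n-2)."""
    a, b = 1, 1                    # T(0), T(1)
    for k in range(2, n + 1):
        a, b = b, b + (k - 1) * a
    return b if n >= 1 else 1

print([telephone(n) for n in range(9)])   # [1, 1, 2, 4, 10, 26, 76, 232, 764]

def perfect_matchings(n):
    """(n-1)!! perfect matchings of K_n, for even n."""
    assert n % 2 == 0
    result = 1
    for k in range(n - 1, 0, -2):  # (n-1)(n-3)...3*1
        result *= k
    return result

print(perfect_matchings(6))        # 5!! = 5*3*1 = 15
```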

7.2 Geometry and topology

A complete graph with n nodes represents the edges of an (n − 1)-simplex. Geometrically K3 forms the edge set of a triangle, K4 a tetrahedron, etc. The Császár polyhedron, a nonconvex polyhedron with the topology of a torus, has the complete graph K7 as its skeleton. Every neighborly polytope in four or more dimensions also has a complete skeleton.

K1 through K4 are all planar graphs. However, every planar drawing of a complete graph with five or more vertices must contain a crossing, and the nonplanar complete graph K5 plays a key role in the characterizations of planar


graphs: by Kuratowski’s theorem, a graph is planar if and only if it contains neither K5 nor the complete bipartite graph K₃,₃ as a subdivision, and by Wagner’s theorem the same result holds for graph minors in place of subdivisions.[8] As part of the Petersen family, K6 plays a similar role as one of the forbidden minors for linkless embedding. In other words, as Conway and Gordon[9] proved, every embedding of K6 into three-dimensional space is intrinsically linked, with at least one pair of linked triangles. Conway and Gordon also showed that any embedding of K7 contains a knotted Hamiltonian cycle.

7.3 Examples

Complete graphs on n vertices, for n between 1 and 12, are shown below along with the numbers of edges:

7.4 See also

• Complete bipartite graph

• Shield of the Trinity (traditional Christian symbol which is a tetrahedral graph)

7.5 References

[1] Knuth, Donald E. (2013), “Two thousand years of combinatorics”, in Wilson, Robin; Watkins, John J., Combinatorics: Ancient and Modern, Oxford University Press, pp. 7–37.

[2] Mystic Rose, nrich.maths.org, retrieved 23 January 2012.

[3] Gries, David; Schneider, Fred B. (1993), A Logical Approach to Discrete Math, Springer-Verlag, p. 436.

[4] Pirnot, Thomas L. (2000), Mathematics All Around, Addison Wesley, p. 154, ISBN 9780201308150.

[5] Tichy, Robert F.; Wagner, Stephan (2005), “Extremal problems for topological indices in combinatorial chemistry” (PDF), Journal of Computational Biology 12 (7): 1004–1013, doi:10.1089/cmb.2005.12.1004.

[6] Callan, David (2009), A combinatorial survey of identities for the double factorial, arXiv:0906.1317.

[7] Oswin Aichholzer. “Rectilinear Crossing Number project”.

[8] Robertson, Neil; Seymour, P. D.; Thomas, Robin (1993), “Linkless embeddings of graphs in 3-space”, Bulletin of the American Mathematical Society 28 (1): 84–89, arXiv:math/9301216, doi:10.1090/S0273-0979-1993-00335-5, MR 1164063.

[9] Conway, J. H.; Cameron Gordon (1983). “Knots and Links in Spatial Graphs”. J. Graph Th. 7 (4): 445–453. doi:10.1002/jgt.3190070410.

7.6 External links

• Weisstein, Eric W., “Complete Graph”, MathWorld.

Chapter 8

Complete metric space

“Cauchy completion” redirects here. For the use in category theory, see Karoubi envelope.

In mathematical analysis, a metric space M is called complete (or a Cauchy space) if every Cauchy sequence of points in M has a limit that is also in M or, alternatively, if every Cauchy sequence in M converges in M.

Intuitively, a space is complete if there are no “points missing” from it (inside or at the boundary). For instance, the set of rational numbers is not complete, because e.g. √2 is “missing” from it, even though one can construct a Cauchy sequence of rational numbers that converges to it. (See the examples below.) It is always possible to “fill all the holes”, leading to the completion of a given space, as explained below.

8.1 Examples

The space Q of rational numbers, with the standard metric given by the absolute value of the difference, is not complete. Consider for instance the sequence defined by x1 = 1 and xn+1 = xn/2 + 1/xn. This is a Cauchy sequence of rational numbers, but it does not converge towards any rational limit: if the sequence did have a limit x, then necessarily x² = 2, yet no rational number has this property. However, considered as a sequence of real numbers, it does converge to the irrational number √2.

The open interval (0,1), again with the absolute value metric, is not complete either. The sequence defined by xn = 1/n is Cauchy, but does not have a limit in the given space. However, the closed interval [0,1] is complete; for example the given sequence does have a limit in this interval, namely zero.

The space R of real numbers and the space C of complex numbers (with the metric given by the absolute value) are complete, and so is Rn, with the usual distance metric. In contrast, infinite-dimensional normed vector spaces may or may not be complete; those that are complete are Banach spaces. The space C[a, b] of continuous real-valued functions on a closed and bounded interval is a Banach space, and so a complete metric space, with respect to the supremum norm. However, the supremum norm does not give a norm on the space C(a, b) of continuous functions on (a, b), for it may contain unbounded functions. Instead, with the topology of compact convergence, C(a, b) can be given the structure of a Fréchet space: a locally convex topological vector space whose topology can be induced by a complete translation-invariant metric.

The space Qp of p-adic numbers is complete for any prime number p. This space completes Q with the p-adic metric in the same way that R completes Q with the usual metric.
If S is an arbitrary set, then the set SN of all sequences in S becomes a complete metric space if we define the distance between the sequences (xn) and (yn) to be 1/N, where N is the smallest index for which xN is distinct from yN, or 0 if there is no such index. This space is homeomorphic to the product of a countable number of copies of the discrete space S.
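The Cauchy sequence of rationals from the first example (x1 = 1, xn+1 = xn/2 + 1/xn) can be followed with exact rational arithmetic: every term is rational, yet the squares of the terms rapidly approach 2. A minimal sketch in Python:

```python
from fractions import Fraction

# x1 = 1, x_{n+1} = x_n/2 + 1/x_n, computed exactly in Q.
# Every iterate is a rational number, but x^2 tends to 2, so the
# sequence is Cauchy in Q without having a rational limit.
x = Fraction(1)
for _ in range(5):
    x = x / 2 + 1 / x

print(x)             # a rational number with a large denominator
print(float(x * x))  # extremely close to 2
```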


8.2 Some theorems

A metric space X is complete if and only if every decreasing sequence of non-empty closed subsets of X, with diameters tending to 0, has a non-empty intersection: if Fn is closed and non-empty, Fn+1 ⊂ Fn for every n, and diam(Fn) → 0, then there is a point x ∈ X common to all sets Fn.

Every compact metric space is complete, though complete spaces need not be compact. In fact, a metric space is compact if and only if it is complete and totally bounded. This is a generalization of the Heine–Borel theorem, which states that any closed and bounded subspace S of Rn is compact and therefore complete.[1]

A closed subspace of a complete space is complete.[2] Conversely, a complete subset of a metric space is closed.[3]

If X is a set and M is a complete metric space, then the set B(X, M) of all bounded functions f from X to M is a complete metric space. Here we define the distance in B(X, M) in terms of the distance in M with the supremum norm

d(f, g) ≡ sup {d[f(x), g(x)] : x ∈ X}

If X is a topological space and M is a complete metric space, then the set C(X, M) consisting of all continuous bounded functions f from X to M is a closed subspace of B(X, M) and hence also complete.

The Baire category theorem says that every complete metric space is a Baire space. That is, the union of countably many nowhere dense subsets of the space has empty interior.

The Banach fixed point theorem states that a contraction mapping on a complete metric space admits a unique fixed point. The fixed point theorem is often used to prove the inverse function theorem on complete metric spaces such as Banach spaces.
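The Banach fixed point theorem can be illustrated numerically: iterating a contraction from any starting point converges to its unique fixed point. A sketch using f(x) = cos x, which is a contraction on [0, 1] since |sin x| ≤ sin 1 < 1 there (the helper `fixed_point` is illustrative, not from the source):

```python
import math

# Fixed-point iteration for the contraction f(x) = cos x on [0, 1]:
# by the Banach fixed point theorem, the iterates x, f(x), f(f(x)), ...
# converge to the unique fixed point regardless of the starting value.
def fixed_point(f, x0, tol=1e-12, max_iter=10_000):
    x = x0
    for _ in range(max_iter):
        nxt = f(x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    raise RuntimeError("iteration did not converge")

p = fixed_point(math.cos, 0.5)
assert abs(math.cos(p) - p) < 1e-10  # p is (numerically) a fixed point
print(round(p, 6))                   # -> 0.739085
```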

The expansion constant of a metric space is the infimum of all constants µ such that whenever the family {B(xα, rα)} intersects pairwise, the intersection

∩α B(xα, µrα)

is nonempty. A metric space is complete if and only if its expansion constant is ≤ 2.[4]

8.3 Completion

For any metric space M, one can construct a complete metric space M′ (which is also denoted as M̄), which contains M as a dense subspace. It has the following universal property: if N is any complete metric space and f is any uniformly continuous function from M to N, then there exists a unique uniformly continuous function f′ from M′ to N which extends f. The space M′ is determined up to isometry by this property, and is called the completion of M.

The completion of M can be constructed as a set of equivalence classes of Cauchy sequences in M. For any two Cauchy sequences (xn)n and (yn)n in M, we may define their distance as

d(x, y) = limn d(xn, yn).

(This limit exists because the real numbers are complete.) This is only a pseudometric, not yet a metric, since two different Cauchy sequences may have the distance 0. But “having distance 0” is an equivalence relation on the set of all Cauchy sequences, and the set of equivalence classes is a metric space, the completion of M. The original space is embedded in this space via the identification of an element x of M with the equivalence class of sequences converging to x (i.e., the equivalence class containing the sequence with constant value x). This defines an isometry onto a dense subspace, as required. Notice, however, that this construction makes explicit use of the completeness of the real numbers, so completion of the rational numbers needs a slightly different treatment.

Cantor's construction of the real numbers is similar to the above construction; the real numbers are the completion of the rational numbers using the ordinary absolute value to measure distances. The additional subtlety to contend with is

that it is not logically permissible to use the completeness of the real numbers in their own construction. Nevertheless, equivalence classes of Cauchy sequences are defined as above, and the set of equivalence classes is easily shown to be a field that has the rational numbers as a subfield. This field is complete, admits a natural total ordering, and is the unique totally ordered complete field (up to isomorphism). It is defined as the field of real numbers (see also Construction of the real numbers for more details). One way to visualize this identification with the real numbers as usually viewed is that the equivalence class consisting of those Cauchy sequences of rational numbers that “ought” to have a given real limit is identified with that real number. The truncations of the decimal expansion give just one choice of Cauchy sequence in the relevant equivalence class.

For a prime p, the p-adic numbers arise by completing the rational numbers with respect to a different metric. If the earlier completion procedure is applied to a normed vector space, the result is a Banach space containing the original space as a dense subspace, and if it is applied to an inner product space, the result is a Hilbert space containing the original space as a dense subspace.

8.4 Topologically complete spaces

Note that completeness is a property of the metric and not of the topology, meaning that a complete metric space can be homeomorphic to a non-complete one. An example is given by the real numbers, which are complete but homeomorphic to the open interval (0,1), which is not complete. In topology one considers completely metrizable spaces, spaces for which there exists at least one complete metric inducing the given topology. Completely metrizable spaces can be characterized as those spaces that can be written as an intersection of countably many open subsets of some complete metric space. Since the conclusion of the Baire category theorem is purely topological, it applies to these spaces as well. Completely metrizable spaces are often called topologically complete. However, the latter term is somewhat arbitrary since metric is not the most general structure on a topological space for which one can talk about completeness (see the section Alternatives and generalizations). Indeed, some authors use the term topologically complete for a wider class of topological spaces, the completely uniformizable spaces.[5] A topological space homeomorphic to a separable complete metric space is called a Polish space.

8.5 Alternatives and generalizations

Since Cauchy sequences can also be defined in general topological groups, an alternative to relying on a metric structure for defining completeness and constructing the completion of a space is to use a group structure. This is most often seen in the context of topological vector spaces, but requires only the existence of a continuous “subtraction” operation. In this setting, the distance between two points x and y is gauged not by a real number ε via the metric d in the comparison d(x, y) < ε, but by an open neighbourhood N of 0 via subtraction in the comparison x − y ∈ N.

A common generalisation of these definitions can be found in the context of a uniform space, where an entourage is a set of all pairs of points that are at no more than a particular “distance” from each other.

It is also possible to replace Cauchy sequences in the definition of completeness by Cauchy nets or Cauchy filters. If every Cauchy net (or equivalently every Cauchy filter) has a limit in X, then X is called complete. One can furthermore construct a completion for an arbitrary uniform space similar to the completion of metric spaces. The most general situation in which Cauchy nets apply is Cauchy spaces; these too have a notion of completeness and completion just like uniform spaces.

8.6 See also

• Knaster–Tarski theorem

• Completion (ring theory)

8.7 Notes

[1] Introduction to Metric and Topological Spaces, Wilson A. Sutherland, ISBN 978-0-19-853161-6

[2] http://planetmath.org/encyclopedia/AClosedSubsetOfACompleteMetricSpaceIsComplete.html

[3] http://planetmath.org/encyclopedia/ACompleteSubspaceOfAMetricSpaceIsClosed.html

[4] B. Grünbaum, Some applications of expansion constants. Pacific J. Math. Volume 10, Number 1 (1960), 193–201.

[5] Kelley, Problem 6.L, p. 208

8.8 References

• Kelley, John L. (1975). General Topology. Springer. ISBN 0-387-90125-6.

• Kreyszig, Erwin, Introductory functional analysis with applications (Wiley, New York, 1978). ISBN 0-471-03729-X

• Lang, Serge, Real and Functional Analysis, ISBN 0-387-94001-4

• Meise, Reinhold; Vogt, Dietmar; translated by Ramanujan, M.S. (1997). Introduction to functional analysis. Oxford: Clarendon Press; New York: Oxford University Press. ISBN 0-19-851485-9.

Chapter 9

Conjugacy class

In mathematics, especially group theory, the elements of any group may be partitioned into conjugacy classes; members of the same conjugacy class share many properties, and study of conjugacy classes of non-abelian groups reveals many important features of their structure.[1][2] For an abelian group, each conjugacy class is a set containing one element (singleton set). Functions that are constant for members of the same conjugacy class are called class functions.

9.1 Definition

Suppose G is a group. Two elements a and b of G are called conjugate if there exists an element g in G with

gag−1 = b.

(In linear algebra, this is referred to as matrix similarity.) It can be easily shown that conjugacy is an equivalence relation and therefore partitions G into equivalence classes. (This means that every element of the group belongs to precisely one conjugacy class, and the classes Cl(a) and Cl(b) are equal if and only if a and b are conjugate, and disjoint otherwise.) The equivalence class that contains the element a in G is

Cl(a) = { b ∈ G | there exists g ∈ G with b = gag−1 }

and is called the conjugacy class of a. The class number of G is the number of distinct (nonequivalent) conjugacy classes. All elements belonging to the same conjugacy class have the same order. Conjugacy classes may be referred to by describing them, or more briefly by abbreviations such as “6A”, meaning “a certain conjugacy class of order 6 elements”, and “6B” would be a different conjugacy class of order 6 elements; the conjugacy class 1A is the conjugacy class of the identity. In some cases, conjugacy classes can be described in a uniform way – for example, in the symmetric group they can be described by cycle structure.

9.2 Examples

The symmetric group S3, consisting of all 6 permutations of three elements, has three conjugacy classes:

• no change (abc → abc)

• interchanging two (abc → acb, abc → bac, abc → cba)

• a cyclic permutation of all three (abc → bca, abc → cab)


These three classes also correspond to the classification of the isometries of an equilateral triangle.
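The class computation can be done by brute force: conjugate each element of S3 by every group element and collect the resulting sets. A sketch in Python, with permutations written as tuples p where p[i] is the image of i (the helper names are illustrative):

```python
from itertools import permutations

def compose(p, q):
    """(p ∘ q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

G = list(permutations(range(3)))  # all 6 elements of S3
classes = set()
for a in G:
    # Cl(a) = { g a g^{-1} : g in G }
    cl = frozenset(compose(compose(g, a), inverse(g)) for g in G)
    classes.add(cl)

print(sorted(len(c) for c in classes))  # -> [1, 2, 3]
```

The three class sizes 1, 3, 2 match the identity, the three transpositions, and the two 3-cycles listed above.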

The symmetric group S4, consisting of all 24 permutations of four elements, has five conjugacy classes, listed with their cycle structures and orders:

• (1)⁴: no change (1 element: { {1, 2, 3, 4} } )

• (2): interchanging two (6 elements: { {1, 2, 4, 3}, {1, 4, 3, 2}, {1, 3, 2, 4}, {4, 2, 3, 1}, {3, 2, 1, 4}, {2, 1, 3, 4} } )

• (3): a cyclic permutation of three (8 elements: { {1, 3, 4, 2}, {1, 4, 2, 3}, {3, 2, 4, 1}, {4, 2, 1, 3}, {4, 1, 3, 2}, {2, 4, 3, 1}, {3, 1, 2, 4}, {2, 3, 1, 4} } )

• (4): a cyclic permutation of all four (6 elements: { {2, 3, 4, 1}, {2, 4, 1, 3}, {3, 1, 4, 2}, {3, 4, 2, 1}, {4, 1, 2, 3}, {4, 3, 1, 2} } )

• (2)(2): interchanging two, and also the other two (3 elements: { {2, 1, 4, 3}, {4, 3, 2, 1}, {3, 4, 1, 2} } )

In general, the number of conjugacy classes in the symmetric group Sn is equal to the number of integer partitions of n. This is because each conjugacy class corresponds to exactly one partition of {1, 2, ..., n} into cycles, up to permutation of the elements of {1, 2, ..., n}.

The proper rotations of the cube, which can be characterized by permutations of the body diagonals, are also described by conjugation in S4. In general, the Euclidean group can be studied by conjugation of isometries in Euclidean space.
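The correspondence between conjugacy classes of Sn and integer partitions of n can be verified for small n by collecting cycle types. A sketch (the helpers `cycle_type` and `partitions` are illustrative, not from the source):

```python
from itertools import permutations

def cycle_type(p):
    """Sorted tuple of cycle lengths of a permutation given as a tuple
    p on range(n), where p[i] is the image of i."""
    n, seen, lengths = len(p), set(), []
    for i in range(n):
        if i not in seen:
            j, length = i, 0
            while j not in seen:
                seen.add(j)
                j = p[j]
                length += 1
            lengths.append(length)
    return tuple(sorted(lengths))

def partitions(n, max_part=None):
    """Number of integer partitions of n with parts at most max_part."""
    if max_part is None:
        max_part = n
    if n == 0:
        return 1
    return sum(partitions(n - k, k) for k in range(1, min(n, max_part) + 1))

# Conjugacy classes of S_n are exactly the cycle types.
for n in range(1, 6):
    types = {cycle_type(p) for p in permutations(range(n))}
    assert len(types) == partitions(n)
print(partitions(4))  # -> 5, matching the five classes of S4 above
```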

9.3 Properties

• The identity element is always in its own class, that is Cl(e) = {e}.

• If G is abelian, then gag−1 = a for all a and g in G; so Cl(a) = {a} for all a in G.

• If two elements a and b of G belong to the same conjugacy class (i.e., if they are conjugate), then they have the same order. More generally, every statement about a can be translated into a statement about b = gag−1, because the map φ(x) = gxg−1 is an automorphism of G.

• An element a of G lies in the center Z(G) of G if and only if its conjugacy class has only one element, a itself. More generally, if CG(a) denotes the centralizer of a in G, i.e., the subgroup consisting of all elements g such that ga = ag, then the index [G : CG(a)] is equal to the number of elements in the conjugacy class of a (by the orbit-stabilizer theorem).

• If a and b are conjugate, then so are their powers ak and bk. (Proof: if a = gbg−1, then ak = (gbg−1)(gbg−1)...(gbg−1) = gbkg−1.) Thus taking kth powers gives a map on conjugacy classes, and one may consider which conjugacy classes are in its preimage. For example, in the symmetric group, the square of an element of type (3)(2) (a 3-cycle and a 2-cycle) is an element of type (3), therefore one of the power-up classes of (3) is the class (3)(2); the class (6) is another.

9.4 Conjugacy class equation

If G is a finite group, then for any group element a, the elements in the conjugacy class of a are in one-to-one correspondence with cosets of the centralizer CG(a). This can be seen by observing that any two elements b and c belonging to the same coset (and hence, b = cz for some z in the centralizer CG(a)) give rise to the same element when conjugating a: bab−1 = cza(cz)−1 = czaz−1c−1 = czz−1ac−1 = cac−1. Thus the number of elements in the conjugacy class of a is the index [G : CG(a)] of the centralizer CG(a) in G; hence the size of each conjugacy class divides the order of the group.

Furthermore, if we choose a single representative element xi from every conjugacy class, we infer from the disjointness of the conjugacy classes that |G| = ∑i [G : CG(xi)], where CG(xi) is the centralizer of the element xi. Observing that each element of the center Z(G) forms a conjugacy class containing just itself gives rise to the class equation:[3]

|G| = |Z(G)| + ∑i [G : CG(xi)]

where the sum is over a representative element from each conjugacy class that is not in the center. Knowledge of the divisors of the group order |G| can often be used to gain information about the order of the center or of the conjugacy classes.

9.4.1 Example

Consider a finite p-group G (that is, a group with order pn, where p is a prime number and n > 0). We are going to prove that every finite p-group has a non-trivial center. Since the order of any conjugacy class of G must divide the order of G, each conjugacy class Hi that is not contained in the center has order pki for some 0 < ki < n. But then the class equation requires that |G| = pn = |Z(G)| + ∑i pki. From this we see that p must divide |Z(G)|, so |Z(G)| > 1.
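This can be checked concretely for the dihedral group of order 8 = 2³, realised as the symmetries of a square inside S4: as the argument predicts, its center is non-trivial (it has order 2). A brute-force sketch (the helper names are illustrative):

```python
# Symmetries of a square with vertices 0..3, a group of order 8 = 2^3
# inside S4, with permutations as tuples p where p[i] is the image of i.
def compose(p, q):
    return tuple(p[q[i]] for i in range(len(p)))

def generate(gens):
    """Closure of a set of permutations under composition."""
    group, frontier = set(gens), list(gens)
    while frontier:
        a = frontier.pop()
        for b in list(group):
            for c in (compose(a, b), compose(b, a)):
                if c not in group:
                    group.add(c)
                    frontier.append(c)
    return group

r = (1, 2, 3, 0)   # rotation by 90 degrees
s = (0, 3, 2, 1)   # reflection across the 0-2 diagonal
D4 = generate([r, s])
assert len(D4) == 8  # a 2-group

center = {z for z in D4 if all(compose(z, g) == compose(g, z) for g in D4)}
print(len(center))  # -> 2: the identity and the 180-degree rotation
```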

9.5 Conjugacy of subgroups and general subsets

More generally, given any subset S of G (S not necessarily a subgroup), we define a subset T of G to be conjugate to S if there exists some g in G such that T = gSg−1. We can define Cl(S) as the set of all subsets T of G such that T is conjugate to S. A frequently used theorem is that, given any subset S of G, the index of N(S) (the normalizer of S) in G equals the order of Cl(S):

|Cl(S)| = [G : N(S)]

This follows since, if g and h are in G, then gSg−1 = hSh−1 if and only if g−1h is in N(S), in other words, if and only if g and h are in the same coset of N(S). Note that this formula generalizes the one given earlier for the number of elements in a conjugacy class (let S = {a}). The above is particularly useful when talking about subgroups of G. The subgroups can thus be divided into conjugacy classes, with two subgroups belonging to the same class if and only if they are conjugate. Conjugate subgroups are isomorphic, but isomorphic subgroups need not be conjugate. For example, an abelian group may have two different subgroups which are isomorphic, but they are never conjugate.
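The formula |Cl(S)| = [G : N(S)] can be verified by brute force in a small group, for instance for a subgroup of order 2 in S3, which has three conjugates and a normalizer of index 3. A sketch (the helper names are illustrative):

```python
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def conjugate(g, S):
    """The subset g S g^{-1}."""
    return frozenset(compose(compose(g, s), inverse(g)) for s in S)

G = list(permutations(range(3)))
S = frozenset({(0, 1, 2), (1, 0, 2)})  # identity and the transposition (0 1)

conjugates = {conjugate(g, S) for g in G}
normalizer = [g for g in G if conjugate(g, S) == S]

# |Cl(S)| = [G : N(S)], here 3 = 6 // 2
assert len(conjugates) == len(G) // len(normalizer)
```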

9.6 Conjugacy as group action

If we define

g . x = gxg−1

for any two elements g and x in G, then we have a group action of G on G. The orbits of this action are the conjugacy classes, and the stabilizer of a given element is the element’s centralizer.[4] Similarly, we can define a group action of G on the set of all subsets of G, by writing

g . S = gSg−1,

or on the set of the subgroups of G.

9.7 Geometric interpretation

Conjugacy classes in the fundamental group of a path-connected topological space can be thought of as equivalence classes of free loops under free homotopy.

9.8 See also

• Topological conjugacy

• FC-group • Conjugacy-closed subgroup

9.9 References

[1] Dummit, David S.; Foote, Richard M. (2004). Abstract Algebra (3rd ed.). John Wiley & Sons. ISBN 0-471-43334-9.

[2] Lang, Serge (2002). Algebra. Graduate Texts in Mathematics. Springer. ISBN 0-387-95385-X.

[3] Grillet (2007), p. 57

[4] Grillet (2007), p. 56

• Grillet, Pierre Antoine (2007). Abstract algebra. Graduate Texts in Mathematics 242 (2nd ed.). Springer. ISBN 978-0-387-71567-4.

Chapter 10

Connected space

For other uses, see Connection (disambiguation).

Connected and disconnected subspaces of R². From top to bottom: red space A, pink space B, yellow space C and orange space D are all connected, whereas green space E (made of subsets E1, E2, E3, and E4) is not connected. Furthermore, A and B are also simply connected (genus 0), while C and D are not: C has genus 1 and D has genus 4.

In topology and related branches of mathematics, a connected space is a topological space that cannot be represented as the union of two or more disjoint nonempty open subsets. Connectedness is one of the principal topological properties that is used to distinguish topological spaces. A stronger notion is that of a path-connected space, which is a space where any two points can be joined by a path.


A subset of a topological space X is a connected set if it is a connected space when viewed as a subspace of X. An example of a space that is not connected is a plane with an infinite line deleted from it. Other examples of disconnected spaces (that is, spaces which are not connected) include the plane with an annulus removed, as well as the union of two disjoint closed disks, where all examples of this paragraph bear the subspace topology induced by two-dimensional Euclidean space.

10.1 Formal definition

A topological space X is said to be disconnected if it is the union of two disjoint nonempty open sets. Otherwise, X is said to be connected. A subset of a topological space is said to be connected if it is connected under its subspace topology. Some authors exclude the empty space (with its unique topology) as a connected space, but this article does not follow that practice.

For a topological space X the following conditions are equivalent:

1. X is connected.

2. X cannot be divided into two disjoint nonempty closed sets.

3. The only subsets of X which are both open and closed (clopen sets) are X and the empty set.

4. The only subsets of X with empty boundary are X and the empty set.

5. X cannot be written as the union of two nonempty separated sets (sets whose closures are disjoint).

6. All continuous functions from X to {0,1} are constant, where {0,1} is the two-point space endowed with the discrete topology.
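For a finite topological space, condition 3 can be tested mechanically: list the clopen subsets and check that only X and the empty set occur. A sketch (the helper `is_connected` is illustrative, not from the source):

```python
# X is connected iff its only clopen subsets are X and the empty set.
def is_connected(X, opens):
    """X: a frozenset of points; opens: the open sets as frozensets."""
    X = frozenset(X)
    opens = set(opens)
    clopen = [U for U in opens if (X - U) in opens]
    return all(U in (frozenset(), X) for U in clopen)

X = frozenset({0, 1})

# Sierpinski space, opens {}, {0}, {0, 1}: connected.
sierpinski = {frozenset(), frozenset({0}), X}
assert is_connected(X, sierpinski)

# Discrete two-point space: {0} is clopen, so it is disconnected.
discrete = {frozenset(), frozenset({0}), frozenset({1}), X}
assert not is_connected(X, discrete)
```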

10.1.1 Connected components

The maximal connected subsets (ordered by inclusion) of a nonempty topological space are called the connected components of the space. The components of any topological space X form a partition of X: they are disjoint, nonempty, and their union is the whole space. Every component is a closed subset of the original space. It follows that, in the case where their number is finite, each component is also an open subset. However, if their number is infinite, this might not be the case; for instance, the connected components of the set of the rational numbers are the one-point sets, which are not open.

Let Γx be the connected component of x in a topological space X, and Γx′ be the intersection of all clopen sets containing x (called the quasi-component of x). Then Γx ⊂ Γx′, where the equality holds if X is compact Hausdorff or locally connected.

10.1.2 Disconnected spaces

A space in which all components are one-point sets is called totally disconnected. Related to this property, a space X is called totally separated if, for any two distinct elements x and y of X, there exist disjoint open neighborhoods U of x and V of y such that X is the union of U and V. Clearly any totally separated space is totally disconnected, but the converse does not hold. For example take two copies of the rational numbers Q, and identify them at every point except zero. The resulting space, with the quotient topology, is totally disconnected. However, by considering the two copies of zero, one sees that the space is not totally separated. In fact, it is not even Hausdorff, and the condition of being totally separated is strictly stronger than the condition of being Hausdorff.

10.2 Examples

• The closed interval [0, 2] in the standard subspace topology is connected; although it can, for example, be written as the union of [0, 1) and [1, 2], the second set is not open in the chosen topology of [0, 2].

• The union of [0, 1) and (1, 2] is disconnected; both of these intervals are open in the standard topological space [0, 1) ∪ (1, 2].

• (0, 1) ∪ {3} is disconnected.

• A convex subset of Rn is connected; it is actually simply connected.

• A Euclidean plane excluding the origin, (0, 0), is connected, but is not simply connected. The three-dimensional Euclidean space without the origin is connected, and even simply connected. In contrast, the one-dimensional Euclidean space without the origin is not connected.

• A Euclidean plane with a straight line removed is not connected since it consists of two half-planes.

• The space ℝ of real numbers with the usual topology is connected.

• If even a single point is removed from ℝ, the remainder is disconnected. However, if even a countable infinity of points are removed from ℝn, where n≥2, the remainder is connected.

• Any topological vector space over a connected field is connected.

• Every discrete topological space with at least two elements is disconnected, in fact such a space is totally disconnected. The simplest example is the discrete two-point space.[1]

• On the other hand, a finite set might be connected. For example, the spectrum of a discrete valuation ring consists of two points and is connected. It is an example of a Sierpiński space.

• The Cantor set is totally disconnected; since the set contains uncountably many points, it has uncountably many components.

• If a space X is homotopy equivalent to a connected space, then X is itself connected.

• The topologist’s sine curve is an example of a set that is connected but is neither path connected nor locally connected.

• The general linear group GL(n, R) (that is, the group of n-by-n real, invertible matrices) consists of two connected components: the one with matrices of positive determinant and the other of negative determinant. In particular, it is not connected. In contrast, GL(n, C) is connected. More generally, the set of invertible bounded operators on a (complex) Hilbert space is connected.

• The spectra of commutative local rings and integral domains are connected. More generally, the following are equivalent[2]

1. The spectrum of a commutative ring R is connected.

2. Every finitely generated projective module over R has constant rank.

3. R has no idempotent ≠ 0, 1 (i.e., R is not a product of two rings in a nontrivial way).

10.3 Path connectedness

A path from a point x to a point y in a topological space X is a continuous function f from the unit interval [0,1] to X with f(0) = x and f(1) = y. A path-component of X is an equivalence class of X under the equivalence relation which makes x equivalent to y if there is a path from x to y. The space X is said to be path-connected (or pathwise connected or 0-connected) if there is exactly one path-component, i.e. if there is a path joining any two points in X. Again, many authors exclude the empty space.

Every path-connected space is connected. The converse is not always true: examples of connected spaces that are not path-connected include the extended long line L* and the topologist’s sine curve. However, subsets of the real line R are connected if and only if they are path-connected; these subsets are the intervals of R. Also, open subsets of Rn or Cn are connected if and only if they are path-connected. Additionally, connectedness and path-connectedness are the same for finite topological spaces.

This subspace of R² is path-connected, because a path can be drawn between any two points in the space.

10.4 Arc connectedness

A space X is said to be arc-connected or arcwise connected if any two distinct points can be joined by an arc, that is, a path f which is a homeomorphism between the unit interval [0, 1] and its image f([0, 1]). It can be shown that any Hausdorff space which is path-connected is also arc-connected. An example of a space which is path-connected but not arc-connected is provided by adding a second copy 0′ of 0 to the nonnegative real numbers [0, ∞). One endows this set with a partial order by specifying that 0′ < a for any positive number a while leaving 0 and 0′ incomparable, and gives it the order topology: that is, one takes the open intervals (a, b) = {x | a < x < b} and the half-open intervals [0, a) = {x | 0 ≤ x < a}, [0′, a) = {x | 0′ ≤ x < a} as a base for the topology. The resulting space is a T1 space but not a Hausdorff space. Clearly 0 and 0′ can be connected by a path but not by an arc in this space.

10.5 Local connectedness

Main article: Locally connected space

A topological space is said to be locally connected at a point x if every neighbourhood of x contains a connected open neighbourhood. It is locally connected if it has a base of connected sets. It can be shown that a space X is locally connected if and only if every component of every open set of X is open. The topologist’s sine curve is an example of a connected space that is not locally connected.

Similarly, a topological space is said to be locally path-connected if it has a base of path-connected sets. An open subset of a locally path-connected space is connected if and only if it is path-connected. This generalizes the earlier statement about Rn and Cn, each of which is locally path-connected. More generally, any topological manifold is locally path-connected.

10.6 Set operations

The intersection of connected sets is not necessarily connected.


Examples of unions and intersections of connected sets

The union of connected sets is not necessarily connected. Consider a collection {Xi} of connected sets whose union is X = ∪iXi . If X is disconnected and U ∪ V is a separation of X (with U, V disjoint and open in X ), then each Xi must be entirely contained in either U or V , since otherwise, Xi ∩ U and Xi ∩ V (which are disjoint and open in Xi ) would be a separation of Xi , contradicting the assumption that it is connected.

This means that, if the union X is disconnected, then the collection {Xi} can be partitioned into two sub-collections, such that the unions of the sub-collections are disjoint and open in X (see picture). This implies that in several cases, a union of connected sets is necessarily connected. In particular:

1. If the common intersection of all sets is not empty ( ∩Xi ≠ ∅ ), then obviously they cannot be partitioned into collections with disjoint unions. Hence the union of connected sets with non-empty intersection is connected.

2. If the intersection of each pair of sets is not empty ( ∀i, j : Xi ∩ Xj ≠ ∅ ), then again they cannot be partitioned into collections with disjoint unions, so their union must be connected.

3. If the sets can be ordered as a “linked chain”, i.e. indexed by integer indices and ∀i : Xi ∩ Xi+1 ≠ ∅ , then again their union must be connected.

4. If the sets are pairwise-disjoint and the quotient space X/{Xi} is connected, then X must be connected. Otherwise, if U ∪ V is a separation of X then q(U) ∪ q(V ) is a separation of the quotient space (since q(U), q(V ) are disjoint and open in the quotient space).[3]
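Case 3 can be made concrete on the real line, where the connected subsets are exactly the intervals: a linked chain of intervals, each overlapping the next, has a connected union — a single interval. A minimal Python sketch (the helper name `merge_chain` is ours, not from the text):

```python
def merge_chain(intervals):
    """Given intervals (a, b) where each overlaps the union of the
    previous ones, return their union as a single interval."""
    lo, hi = intervals[0]
    for a, b in intervals[1:]:
        if a > hi or b < lo:  # no overlap with the union so far
            raise ValueError("intervals do not form a linked chain")
        lo, hi = min(lo, a), max(hi, b)
    return (lo, hi)

chain = [(0, 2), (1, 3), (2.5, 5)]  # X_i and X_{i+1} always intersect
print(merge_chain(chain))  # the union is the single interval (0, 5)
```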

Each ellipse is a connected set, but the union is not connected, since it can be partitioned into two disjoint open sets U and V.

Two connected sets whose difference is not connected

The set difference of connected sets is not necessarily connected. However, if X ⊇ Y and their difference X \ Y is disconnected (and thus can be written as a union of two disjoint open sets X1 and X2), then the union of Y with each such component is connected (i.e. Y ∪ Xi is connected for all i).

Proof:[4] By contradiction, suppose Y ∪ X1 is not connected. Then it can be written as the union of two disjoint open sets, say Y ∪ X1 = Z1 ∪ Z2. Because Y is connected, it must be entirely contained in one of these components, say Z1, and thus Z2 is contained in X1. Now we know that:

X = (Y∪X1)∪X2 = (Z1∪Z2)∪X2 = (Z1∪X2)∪(Z2∩X1)

The two sets in the last union are disjoint and open in X, so they form a separation of X, contradicting the fact that X is connected.

10.7 Theorems

“Main theorem of connectedness” redirects here.

• Main theorem: Let X and Y be topological spaces and let f : X → Y be a continuous function. If X is (path-)connected then the image f(X) is (path-)connected. This result can be considered a generalization of the intermediate value theorem.

• Every path-connected space is connected.

• Every locally path-connected space is locally connected.

• A locally path-connected space is path-connected if and only if it is connected.

• The closure of a connected subset is connected.

• The connected components are always closed (but in general not open).

• The connected components of a locally connected space are also open.

• The connected components of a space are disjoint unions of the path-connected components (which in general are neither open nor closed).

• Every quotient of a connected (resp. locally connected, path-connected, locally path-connected) space is connected (resp. locally connected, path-connected, locally path-connected).

• Every product of a family of connected (resp. path-connected) spaces is connected (resp. path-connected).

• Every open subset of a locally connected (resp. locally path-connected) space is locally connected (resp. locally path-connected).

• Every manifold is locally path-connected.

10.8 Graphs

Graphs have path-connected subsets, namely those subsets for which every pair of points has a path of edges joining them. But it is not always possible to find a topology on the set of points which induces the same connected sets. The 5-cycle graph (and any n-cycle with n > 3 odd) is one such example.

As a consequence, a notion of connectedness can be formulated independently of the topology on a space. To wit, there is a category of connective spaces consisting of sets with collections of connected subsets satisfying connectivity axioms; their morphisms are those functions which map connected sets to connected sets (Muscat & Buhagiar 2006). Topological spaces and graphs are special cases of connective spaces; indeed, the finite connective spaces are precisely the finite graphs.

However, every graph can be canonically made into a topological space, by treating vertices as points and edges as copies of the unit interval (see topological graph theory#Graphs as topological spaces). Then one can show that the graph is connected (in the graph theoretical sense) if and only if it is connected as a topological space.

10.9 Stronger forms of connectedness

There are stronger forms of connectedness for topological spaces, for instance:

• If there exist no two disjoint non-empty open sets in a topological space X, then X must be connected; thus hyperconnected spaces are also connected.

• Since a simply connected space is, by definition, also required to be path-connected, any simply connected space is also connected. Note, however, that if the “path connectedness” requirement is dropped from the definition of simple connectivity, a simply connected space need not be connected.

• Yet stronger versions of connectivity include the notion of a contractible space. Every contractible space is path connected and thus also connected.

In general, note that any path connected space must be connected but there exist connected spaces that are not path connected. The deleted comb space furnishes such an example, as does the above-mentioned topologist’s sine curve.

10.10 See also

• uniformly connected space

• locally connected space

• connected component (graph theory)

• n-connected

• Connectedness locus

• Extremally disconnected space

10.11 References

10.11.1 Notes

[1] George F. Simmons (1968). Introduction to Topology and Modern Analysis. McGraw Hill Book Company. p. 144. ISBN 0-89874-551-9.

[2] Charles Weibel, The K-book: An introduction to algebraic K-theory

[3] Credit: Saaqib Mahmuud and Henno Brandsma at Math StackExchange.

[4] Credit: Marek at Math StackExchange.

10.11.2 General references

• Munkres, James R. (2000). Topology, Second Edition. Prentice Hall. ISBN 0-13-181629-2.

• Weisstein, Eric W., “Connected Set”, MathWorld.

• V. I. Malykhin (2001), “Connected space”, in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4.

• Muscat, J.; Buhagiar, D. (2006). “Connective Spaces” (PDF). Mem. Fac. Sci. Eng. Shimane Univ., Series B: Math. Sc. 39: 1–13.

Chapter 11

Connectivity (graph theory)

This graph becomes disconnected when the right-most node in the gray area on the left is removed

This graph becomes disconnected when the dashed edge is removed.

In mathematics and computer science, connectivity is one of the basic concepts of graph theory: it asks for the minimum number of elements (nodes or edges) that need to be removed to disconnect the remaining nodes from each other.[1] It is closely related to the theory of network flow problems. The connectivity of a graph is an important measure of its robustness as a network.

11.1 Connected graph

A graph is connected when there is a path between every pair of vertices. In a connected graph, there are no unreachable vertices. A graph that is not connected is disconnected. A graph with just one vertex is connected. An edgeless graph with two or more vertices is disconnected.

11.2 Definitions of components, cuts and connectivity

With vertex 0 this graph is disconnected; the rest of the graph is connected.

In an undirected graph G, two vertices u and v are called connected if G contains a path from u to v. Otherwise, they are called disconnected. If the two vertices are additionally connected by a path of length 1, i.e. by a single edge, the vertices are called adjacent. A graph is said to be connected if every pair of vertices in the graph is connected. A connected component is a maximal connected subgraph of G. Each vertex belongs to exactly one connected component, as does each edge.

A directed graph is called weakly connected if replacing all of its directed edges with undirected edges produces a connected (undirected) graph. It is unilaterally connected (also called semiconnected) if it contains a directed path from u to v or a directed path from v to u for every pair of vertices u, v. It is strongly connected or strong if it contains a directed path from u to v and a directed path from v to u for every pair of vertices u, v. The strong components are the maximal strongly connected subgraphs.

A cut, vertex cut, or separating set of a connected graph G is a set of vertices whose removal renders G disconnected. The connectivity or vertex connectivity κ(G) (where G is not a complete graph) is the size of a minimal vertex cut. A graph is called k-connected or k-vertex-connected if its vertex connectivity is k or greater. More precisely, any graph G (complete or not) is said to be k-connected if it contains at least k + 1 vertices, but does not contain a set of k − 1 vertices whose removal disconnects the graph; and κ(G) is defined as the largest k such that G is k-connected. In particular, a complete graph with n vertices, denoted Kn, has no vertex cuts at all, but κ(Kn) = n − 1.

A vertex cut for two vertices u and v is a set of vertices whose removal from the graph disconnects u and v. The local connectivity κ(u, v) is the size of a smallest vertex cut separating u and v. Local connectivity is symmetric for undirected graphs; that is, κ(u, v) = κ(v, u). Moreover, except for complete graphs, κ(G) equals the minimum of κ(u, v) over all nonadjacent pairs of vertices u, v.

2-connectivity is also called biconnectivity and 3-connectivity is also called triconnectivity. A graph G which is connected but not 2-connected is sometimes called separable.

Analogous concepts can be defined for edges. In the simple case in which cutting a single, specific edge would disconnect the graph, that edge is called a bridge. More generally, an edge cut of G is a set of edges whose removal renders the graph disconnected. The edge-connectivity λ(G) is the size of a smallest edge cut, and the local edge-connectivity λ(u, v) of two vertices u, v is the size of a smallest edge cut disconnecting u from v. Again, local edge-connectivity is symmetric. A graph is called k-edge-connected if its edge connectivity is k or greater.
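The definition of κ(G) translates directly into a brute-force computation for small graphs: try all vertex cuts of increasing size until one disconnects the graph (or leaves a single vertex). A hedged sketch; the function names are illustrative, and the exponential search is only viable for tiny graphs.

```python
from itertools import combinations

def is_connected(vertices, edges):
    """Depth-first check that the induced graph on `vertices` is connected."""
    if not vertices:
        return True
    vertices = set(vertices)
    adj = {v: set() for v in vertices}
    for u, v in edges:
        if u in vertices and v in vertices:
            adj[u].add(v); adj[v].add(u)
    start = next(iter(vertices))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w); stack.append(w)
    return seen == vertices

def vertex_connectivity(vertices, edges):
    """Smallest k such that removing some k vertices disconnects the graph
    or reduces it to a single vertex (so kappa(K_n) = n - 1)."""
    n = len(vertices)
    for k in range(n):
        for cut in combinations(vertices, k):
            rest = set(vertices) - set(cut)
            if len(rest) <= 1 or not is_connected(rest, edges):
                return k
    return n - 1

# The 4-cycle C4 has vertex connectivity 2: removing two opposite
# vertices disconnects it, but no single vertex does.
print(vertex_connectivity(list(range(4)), [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 2
```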

11.3 Menger’s theorem

Main article: Menger’s theorem

One of the most important facts about connectivity in graphs is Menger’s theorem, which characterizes the connectivity and edge-connectivity of a graph in terms of the number of independent paths between vertices. If u and v are vertices of a graph G, then a collection of paths between u and v is called independent if no two of them share a vertex (other than u and v themselves). Similarly, the collection is edge-independent if no two paths in it share an edge. The number of mutually independent paths between u and v is written as κ′(u, v), and the number of mutually edge-independent paths between u and v is written as λ′(u, v). Menger’s theorem asserts that the local connectivity κ(u, v) equals κ′(u, v) and the local edge-connectivity λ(u, v) equals λ′(u, v) for every pair of vertices u and v.[2][3] This fact is actually a special case of the max-flow min-cut theorem.

11.4 Computational aspects

The problem of determining whether two vertices in a graph are connected can be solved efficiently using a search algorithm, such as breadth-first search. More generally, it is easy to determine computationally whether a graph is connected (for example, by using a disjoint-set data structure), or to count the number of connected components. A simple algorithm might be written in pseudo-code as follows:

1. Begin at an arbitrary node of the graph, G.

2. Proceed from that node using either depth-first or breadth-first search, counting all nodes reached.

3. Once the graph has been entirely traversed, if the number of nodes counted is equal to the number of nodes of G, the graph is connected; otherwise it is disconnected.
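The three steps above translate directly into Python using breadth-first search; a minimal sketch, with the graph represented as an adjacency dict (our own convention, not from the text):

```python
from collections import deque

def is_connected(adj):
    """adj maps each node to the set of its neighbours.
    Returns True iff the graph is connected."""
    nodes = list(adj)
    if not nodes:
        return True
    # 1. Begin at an arbitrary node.
    start = nodes[0]
    # 2. Traverse with breadth-first search, recording nodes reached.
    seen, queue = {start}, deque([start])
    while queue:
        for w in adj[queue.popleft()]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    # 3. Connected iff every node of the graph was reached.
    return len(seen) == len(adj)

print(is_connected({1: {2}, 2: {1, 3}, 3: {2}}))    # True
print(is_connected({1: {2}, 2: {1}, 3: set()}))     # False (3 is isolated)
```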

By Menger’s theorem, for any two vertices u and v in a connected graph G, the numbers κ(u, v) and λ(u, v) can be determined efficiently using the max-flow min-cut algorithm. The connectivity and edge-connectivity of G can then be computed as the minimum values of κ(u, v) and λ(u, v), respectively. In computational complexity theory, SL is the class of problems log-space reducible to the problem of determining whether two vertices in a graph are connected, which was proved to be equal to L by Omer Reingold in 2004.[4] Hence, undirected graph connectivity may be solved in O(log n) space. The problem of computing the probability that a Bernoulli random graph is connected is called network reliability, and the problem of computing the probability that two given vertices are connected is called the ST-reliability problem. Both of these are #P-hard.[5]
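The disjoint-set approach mentioned above can be sketched as follows. This is a minimal union-find with path compression; `count_components` is an illustrative name, not a standard API:

```python
def count_components(n, edges):
    """Count connected components of a graph on vertices 0..n-1
    using a disjoint-set (union-find) structure."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression (halving)
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv  # merge the two components

    return len({find(x) for x in range(n)})

print(count_components(5, [(0, 1), (1, 2), (3, 4)]))  # 2 components
```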

11.5 Examples

• The vertex- and edge-connectivities of a disconnected graph are both 0.

• 1-connectedness is equivalent to connectedness.

• The complete graph on n vertices has edge-connectivity equal to n − 1. Every other simple graph on n vertices has strictly smaller edge-connectivity.

• In a tree, the local edge-connectivity between every pair of vertices is 1.

11.6 Bounds on connectivity

• The vertex-connectivity of a graph is less than or equal to its edge-connectivity. That is, κ(G) ≤ λ(G). Both are less than or equal to the minimum degree of the graph, since deleting all neighbors of a vertex of minimum degree will disconnect that vertex from the rest of the graph.[1]

• For a vertex-transitive graph of degree d, we have: 2(d + 1)/3 ≤ κ(G) ≤ λ(G) = d.[6]

• For a vertex-transitive graph of degree d ≤ 4, or for any (undirected) minimal Cayley graph of degree d, or for any symmetric graph of degree d, both kinds of connectivity are equal: κ(G) = λ(G) = d.[7]
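The bound λ(G) ≤ minimum degree from the first bullet can be checked by brute force on a small example: two triangles joined by a single bridge edge, so λ(G) = 1 while the minimum degree is 2. The helper names are ours, and the exhaustive cut search is only practical for tiny graphs.

```python
from itertools import combinations

def connected(n, edges):
    """DFS connectivity check for a graph on vertices 0..n-1."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w); stack.append(w)
    return len(seen) == n

def edge_connectivity(n, edges):
    """Size of a smallest edge cut, by trying cuts of increasing size."""
    for k in range(len(edges) + 1):
        for cut in combinations(edges, k):
            if not connected(n, [e for e in edges if e not in cut]):
                return k

# Two triangles 0-1-2 and 3-4-5 joined by the bridge (2, 3).
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]
lam = edge_connectivity(6, edges)
delta = min(sum(v in e for e in edges) for v in range(6))  # minimum degree
print(lam, delta)  # lambda = 1 (the bridge), delta = 2, so lambda <= delta
```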

11.7 Other properties

• Connectedness is preserved by graph homomorphisms.

• If G is connected then its line graph L(G) is also connected.

• A graph G is 2-edge-connected if and only if it has an orientation that is strongly connected.

• Balinski’s theorem states that the polytopal graph (1-skeleton) of a k-dimensional convex polytope is a k-vertex- connected graph.[8] Steinitz's previous theorem that any 3-vertex-connected planar graph is a polytopal graph (Steinitz theorem) gives a partial converse.

• According to a theorem of G. A. Dirac, if a graph is k-connected for k ≥ 2, then for every set of k vertices in the graph there is a cycle that passes through all the vertices in the set.[9][10] The converse is true when k = 2.

11.8 See also

• Algebraic connectivity

• Cheeger constant (graph theory)

• Expander graph

• Scale-free network

• Small-world networks, Six degrees of separation, Small world phenomenon

• Strength of a graph (graph theory)

11.9 References

[1] Diestel, R., Graph Theory, Electronic Edition, 2005, p 12.

[2] Gibbons, A. (1985). Algorithmic Graph Theory. Cambridge University Press.

[3] Nagamochi, H., Ibaraki, T. (2008). Algorithmic Aspects of Graph Connectivity. Cambridge University Press.

[4] Reingold, Omer (2008). “Undirected connectivity in log-space”. Journal of the ACM 55 (4): Article 17, 24 pages. doi:10.1145/1391289.1391291

[5] Provan, J. Scott; Ball, Michael O. (1983), “The complexity of counting cuts and of computing the probability that a graph is connected”, SIAM Journal on Computing 12 (4): 777–788, doi:10.1137/0212053, MR 721012.

[6] Godsil, C.; Royle, G. (2001). Algebraic Graph Theory. Springer Verlag.

[7] Babai, L. (1996). Automorphism groups, isomorphism, reconstruction. Technical Report TR-94-10. University of Chicago. Chapter 27 of The Handbook of Combinatorics.

[8] Balinski, M. L. (1961). “On the graph structure of convex polyhedra in n-space”. Pacific Journal of Mathematics 11 (2): 431–434. doi:10.2140/pjm.1961.11.431.

[9] Dirac, Gabriel Andrew (1960). “In abstrakten Graphen vorhandene vollständige 4-Graphen und ihre Unterteilungen”. Mathematische Nachrichten 22: 61–85. doi:10.1002/mana.19600220107. MR 0121311.

[10] Flandrin, Evelyne; Li, Hao; Marczyk, Antoni; Woźniak, Mariusz (2007). “A generalization of Dirac’s theorem on cycles through k vertices in k-connected graphs”. Discrete Mathematics 307 (7–8): 878–884. doi:10.1016/j.disc.2005.11.052. MR 2297171.

Chapter 12

Continuous function

In mathematics, a continuous function is, roughly speaking, a function for which small changes in the input result in small changes in the output. Otherwise, a function is said to be a discontinuous function. A continuous function with a continuous inverse function is called a homeomorphism.

Continuity of functions is one of the core concepts of topology, which is treated in full generality below. The introductory portion of this article focuses on the special case where the inputs and outputs of functions are real numbers. In addition, this article discusses the definition for the more general case of functions between two metric spaces. In order theory, especially in domain theory, one considers a notion of continuity known as Scott continuity. Other forms of continuity do exist but they are not discussed in this article.

As an example, consider the function h(t), which describes the height of a growing flower at time t. This function is continuous. By contrast, if M(t) denotes the amount of money in a bank account at time t, then the function jumps whenever money is deposited or withdrawn, so the function M(t) is discontinuous.

12.1 History

A form of this epsilon-delta definition of continuity was first given by Bernard Bolzano in 1817. Augustin-Louis Cauchy defined continuity of y = f(x) as follows: an infinitely small increment α of the independent variable x always produces an infinitely small change f(x + α) − f(x) of the dependent variable y (see e.g., Cours d'Analyse, p. 34). Cauchy defined infinitely small quantities in terms of variable quantities, and his definition of continuity closely parallels the infinitesimal definition used today (see microcontinuity). The formal definition and the distinction between pointwise continuity and were first given by Bolzano in the 1830s but the work wasn't published until the 1930s. Eduard Heine provided the first published definition of uniform continuity in 1872, but based these ideas on lectures given by Peter Gustav Lejeune Dirichlet in 1854.[1]

12.2 Real-valued continuous functions

12.2.1 Definition

A function from the set of real numbers to the real numbers can be represented by a graph in the Cartesian plane; such a function is continuous if, roughly speaking, the graph is a single unbroken curve with no “holes” or “jumps”. There are several ways to make this definition mathematically rigorous. These definitions are equivalent to one another, so the most convenient definition can be used to determine whether a given function is continuous or not. In the definitions below,

f : I → R.

is a function defined on a subset I of the set R of real numbers. This subset I is referred to as the domain of f. Some possible choices include I=R, the whole set of real numbers, an open interval


I = (a, b) = {x ∈ R | a < x < b}, or a closed interval

I = [a, b] = {x ∈ R | a ≤ x ≤ b}. Here, a and b are real numbers.

Definition in terms of limits of functions

The function f is continuous at some point c of its domain if the limit of f(x) as x approaches c through the domain of f exists and is equal to f(c).[2] In mathematical notation, this is written as

limx→c f(x) = f(c).

In detail this means three conditions: first, f has to be defined at c. Second, the limit on the left hand side of that equation has to exist. Third, the value of this limit must equal f(c). If the point c in the domain of f is not a limit point of the domain, then the above condition is vacuously true, since x cannot approach c through values not equal to c. Thus, for example, every function whose domain is the set of all integers is continuous at every point of its domain. The function f is said to be continuous if it is continuous at every point of its domain; otherwise, it is discontinuous.

Definition in terms of limits of sequences

One can instead require that for any sequence (xn)n∈N of points in the domain which converges to c, the corresponding sequence (f(xn))n∈N converges to f(c). In mathematical notation,

∀(xn)n∈N ⊂ I : limn→∞ xn = c ⇒ limn→∞ f(xn) = f(c).

Weierstrass definition (epsilon–delta) of continuous functions

Explicitly including the definition of the limit of a function, we obtain a self-contained definition: Given a function f as above and an element c of the domain I, f is said to be continuous at the point c if the following holds: For any number ε > 0, however small, there exists some number δ > 0 such that for all x in the domain of f with c − δ < x < c + δ, the value of f(x) satisfies f(c) − ε < f(x) < f(c) + ε. Alternatively written, continuity of f : I → R at c ∈ I means that for every ε > 0 there exists a δ > 0 such that for all x ∈ I:

|x − c| < δ ⇒ |f(x) − f(c)| < ε.

More intuitively, we can say that if we want to get all the f(x) values to stay in some small neighborhood around f(c), we simply need to choose a small enough neighborhood for the x values around c, and we can do that no matter how small the f(x) neighborhood is; f is then continuous at c.

Note. It does not necessarily mean that when x and y are getting closer and closer to each other then so do f(x) and f(y). For instance, if f is the reciprocal function on the reals that are not equal to zero (so f is a continuous function) and x takes the consecutive values 1/n while y takes the consecutive values −1/n, where n diverges to infinity, then x and y are both in the domain of f and are getting closer and closer to each other, but the distance between f(x) = 1/x = n and f(y) = 1/y = −n diverges to infinity. In modern terms, this is generalized by the definition of continuity of a function with respect to a basis for the topology, here the metric topology.

Illustration of the ε-δ definition: for ε = 0.5, c = 2, the value δ = 0.5 satisfies the condition of the definition.
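The ε-δ condition can be sanity-checked numerically. The sketch below uses f(x) = x² at c = 2 with δ = min(1, ε/5) — an example chosen here, not the function in the figure. The choice works because |x² − 4| = |x − 2| · |x + 2| < 5δ whenever |x − 2| < δ ≤ 1.

```python
def f(x):
    return x * x

c = 2.0
for eps in (0.5, 0.1, 0.001):
    delta = min(1.0, eps / 5.0)
    # sample points in (c - delta, c + delta) and verify |f(x) - f(c)| < eps
    xs = [c - delta + 2 * delta * i / 1000 for i in range(1, 1000)]
    assert all(abs(f(x) - f(c)) < eps for x in xs)
print("epsilon-delta condition verified at c = 2")
```

This does not prove continuity (only finitely many points are tested), but it illustrates how δ must shrink as ε does.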

Definition using oscillation

Continuity can also be defined in terms of oscillation: a function f is continuous at a point x0 if and only if its oscillation at that point is zero; in symbols, ωf(x0) = 0.[3] A benefit of this definition is that it quantifies discontinuity: the oscillation gives how much the function is discontinuous at a point.

This definition is useful in descriptive set theory to study the set of discontinuities and continuous points – the continuous points are the intersection of the sets where the oscillation is less than ε (hence a Gδ set) – and gives a very quick proof of one direction of the Lebesgue integrability condition.[4]

The oscillation is equivalent to the ε-δ definition by a simple re-arrangement, and by using a limit (lim sup, lim inf) to define oscillation: if (at a given point) for a given ε0 there is no δ that satisfies the ε-δ definition, then the oscillation is at least ε0, and conversely if for every ε there is a desired δ, the oscillation is 0. The oscillation definition can be naturally generalized to maps from a topological space to a metric space.

Definition using the hyperreals

The failure of a function to be continuous at a point is quantified by its oscillation.

Cauchy defined continuity of a function in the following intuitive terms: an infinitesimal change in the independent variable corresponds to an infinitesimal change of the dependent variable (see Cours d'analyse, page 34). Nonstandard analysis is a way of making this mathematically rigorous. The real line is augmented by the addition of infinite and infinitesimal numbers to form the hyperreal numbers. In nonstandard analysis, continuity can be defined as follows.

A real-valued function f is continuous at x if its natural extension to the hyperreals has the property that for all infinitesimal dx, f(x+dx) − f(x) is infinitesimal[5]

(see microcontinuity). In other words, an infinitesimal increment of the independent variable always produces an infinitesimal change of the dependent variable, giving a modern expression to Augustin-Louis Cauchy's definition of continuity.

12.2.2 Examples

All polynomial functions, such as f(x) = x³ + x² − 5x + 3 (pictured), are continuous. This is a consequence of the fact that, given two continuous functions

The graph of the cubic function y = (x + 3)(x − 1)² has no jumps or holes. The function is continuous.

f, g : I → R

defined on the same domain I, then the sum f + g and the product fg of the two functions are continuous (on the same domain I). Moreover, the function

f/g : {x ∈ I | g(x) ≠ 0} → R, x ↦ f(x)/g(x)

is continuous. (The points where g(x) is zero are discarded, as they are not in the domain of f/g.) For example, the function (pictured)

f(x) = (2x − 1)/(x + 2)

is defined for all real numbers x ≠ −2 and is continuous at every such point. Thus it is a continuous function. The question of continuity at x = −2 does not arise, since x = −2 is not in the domain of f. There is no continuous function F : R → R that agrees with f(x) for all x ≠ −2.

The graph of a continuous rational function. The function is not defined for x = −2. The vertical and horizontal lines are asymptotes.

The sinc function g(x) = (sin x)/x, defined for all x ≠ 0, is continuous at these points. Thus it is a continuous function, too. However, unlike the function of the previous example, this one can be extended to a continuous function on all real numbers, namely

G(x) = sin(x)/x if x ≠ 0, and G(x) = 1 if x = 0,

since the limit of g(x), when x approaches 0, is 1. Therefore, the point x = 0 is called a removable singularity of g.

Given two continuous functions

f : I → J (⊂ R), g : J → R,

the composition

g ∘ f : I → R, x ↦ g(f(x))

is continuous.
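The removable singularity of g(x) = (sin x)/x can be illustrated numerically: defining G(0) = 1 produces values that approach G(0) as x → 0, so the extension really does close the gap.

```python
import math

def G(x):
    # The continuous extension of sin(x)/x: the gap at x = 0 is
    # filled with the limit value 1.
    return math.sin(x) / x if x != 0 else 1.0

for x in (0.1, 0.01, 0.001, 0.0):
    print(x, G(x))
# G(x) approaches G(0) = 1 as x approaches 0
```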

12.2.3 Non-examples

An example of a discontinuous function is the function f defined by f(x) = 1 if x > 0, f(x) = 0 if x ≤ 0. Pick for instance ε = 1/2. There is no δ-neighborhood around x = 0 that will force all the f(x) values to be within ε of f(0). Intuitively we can think of this type of discontinuity as a sudden jump in function values.

Plot of the signum function. The hollow dots indicate that sgn(x) is 1 for all x > 0 and −1 for all x < 0.

Similarly, the signum or sign function

sgn(x) = 1 if x > 0, sgn(x) = 0 if x = 0, and sgn(x) = −1 if x < 0

is discontinuous at x = 0 but continuous everywhere else. Yet another example: the function

f(x) = sin(1/x²) if x ≠ 0, and f(x) = 0 if x = 0

is continuous everywhere apart from x = 0. Thomae’s function,

Plot of Thomae’s function on the interval (0, 1).

f(x) = 1 if x = 0, f(x) = 1/q if x = p/q is a rational number in lowest terms, and f(x) = 0 if x is irrational,

is continuous at all irrational numbers and discontinuous at all rational numbers. In a similar vein, Dirichlet’s function

D(x) = 0 if x is irrational (x ∈ R \ Q), and D(x) = 1 if x is rational (x ∈ Q),

is nowhere continuous.

12.2.4 Properties

Intermediate value theorem

The intermediate value theorem is an existence theorem, based on the real number property of completeness, and states:

If the real-valued function f is continuous on the closed interval [a, b] and k is some number between f(a) and f(b), then there is some number c in [a, b] such that f(c) = k.

For example, if a child grows from 1 m to 1.5 m between the ages of two and six years, then, at some time between two and six years of age, the child’s height must have been 1.25 m. As a consequence, if f is continuous on [a, b] and f(a) and f(b) differ in sign, then, at some point c in [a, b], f(c) must equal zero.
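The sign-change consequence of the intermediate value theorem underlies the bisection method: if f is continuous on [a, b] and f(a), f(b) differ in sign, repeatedly halving the interval closes in on a point c with f(c) = 0. A minimal sketch, using the cubic f(x) = x³ + x² − 5x + 3 = (x + 3)(x − 1)² from the examples above, which has a root at x = −3:

```python
def bisect(f, a, b, tol=1e-12):
    """Locate a zero of a continuous f on [a, b], given a sign change."""
    assert f(a) * f(b) < 0, "f(a) and f(b) must differ in sign"
    while b - a > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:   # the sign change is in the left half
            b = m
        else:                   # the sign change is in the right half
            a = m
    return (a + b) / 2

root = bisect(lambda x: x**3 + x**2 - 5*x + 3, -4.0, 0.0)
print(round(root, 6))  # -3.0
```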

Extreme value theorem

The extreme value theorem states that if a function f is defined on a closed interval [a,b] (or any closed and bounded set) and is continuous there, then the function attains its maximum, i.e. there exists c ∈ [a,b] with f(c) ≥ f(x) for all x ∈ [a,b]. The same is true of the minimum of f. These statements are not, in general, true if the function is defined on an open interval (a,b) (or any set that is not both closed and bounded), as, for example, the continuous function f(x) = 1/x, defined on the open interval (0,1), does not attain a maximum, being unbounded above.

Relation to differentiability and integrability

Every differentiable function

f :(a, b) → R

is continuous, as can be shown. The converse does not hold: for example, the absolute value function

f(x) = |x|, that is, f(x) = x if x ≥ 0 and f(x) = −x if x < 0,

is everywhere continuous. However, it is not differentiable at x = 0 (but is so everywhere else). Weierstrass’s function is also everywhere continuous but nowhere differentiable. The derivative f′(x) of a differentiable function f(x) need not be continuous. If f′(x) is continuous, f(x) is said to be continuously differentiable. The set of such functions is denoted C1((a, b)). More generally, the set of functions

f :Ω → R

(from an open interval (or open subset of R) Ω to the reals) such that f is n times differentiable and such that the n-th derivative of f is continuous is denoted Cn(Ω). See differentiability class. In the field of computer graphics, these three levels are sometimes called G0 (continuity of position), G1 (continuity of tangency), and G2 (continuity of curvature). Every continuous function

f :[a, b] → R

is integrable (for example in the sense of the Riemann integral). The converse does not hold, as the (integrable, but discontinuous) sign function shows.

Pointwise and uniform limits

Given a sequence

f1, f2,... : I → R

of functions such that the limit

f(x) := limn→∞ fn(x)

exists for all x in I, the resulting function f(x) is referred to as the pointwise limit of the sequence of functions (fn)n∈N. The pointwise limit function need not be continuous, even if all functions fn are continuous, as the animation at the right shows. However, f is continuous when the sequence converges uniformly, by the uniform convergence theorem. This theorem can be used to show that the exponential functions, logarithms, the square root function, and the trigonometric functions are continuous.
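The classic example fn(x) = xⁿ on [0, 1] shows this: each fn is continuous, but the pointwise limit is 0 for x < 1 and 1 at x = 1, which is discontinuous. A small numeric illustration; the convergence is not uniform, since sup |fn − f| on [0, 1) stays 1 for every n.

```python
def f_n(n, x):
    # the n-th function of the sequence f_n(x) = x**n on [0, 1]
    return x ** n

# pointwise limit at a few sample points: near 0 for x < 1, but 1 at x = 1
for x in (0.0, 0.5, 0.9, 1.0):
    print(x, f_n(1000, x))

# non-uniformity: for every n there is an x in [0, 1) with f_n(x) = 0.5
for n in (10, 100, 1000):
    x = 0.5 ** (1 / n)   # then x**n is 0.5 (up to rounding)
    print(n, f_n(n, x))
```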

12.2.5 Directional and semi-continuity

• A right-continuous function

• A left-continuous function

A sequence of continuous functions f(x) whose (pointwise) limit function f(x) is discontinuous. The convergence is not uniform.

Discontinuous functions may be discontinuous in a restricted way, giving rise to the concept of directional continuity (or right and left continuous functions) and semi-continuity. Roughly speaking, a function is right-continuous if no jump occurs when the limit point is approached from the right. More formally, f is said to be right-continuous at the point c if the following holds: For any number ε > 0 however small, there exists some number δ > 0 such that for all x in the domain with c < x < c + δ, the value of f(x) will satisfy

|f(x) − f(c)| < ε.

This is the same condition as for continuous functions, except that it is required to hold for x strictly larger than c only. Requiring it instead for all x with c − δ < x < c yields the notion of left-continuous functions. A function is continuous if and only if it is both right-continuous and left-continuous. A function f is lower semi-continuous if, roughly, any jumps that might occur only go down, but not up. That is, for any ε > 0, there exists some number δ > 0 such that for all x in the domain with |x − c| < δ, the value of f(x) satisfies

f(x) ≥ f(c) − ε.

The reverse condition is upper semi-continuity.
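As a numerical sketch of the directional-continuity conditions (the helper names and step sizes are illustrative), the floor function jumps at each integer: approaching c = 1 from the right reproduces floor(1) = 1, so floor is right-continuous there, while approaching from the left gives 0, so it is not left-continuous.

```python
import math

# Sample f at points approaching c from the right and from the left.

def right_values(f, c, steps=5):
    return [f(c + 10 ** -k) for k in range(1, steps + 1)]

def left_values(f, c, steps=5):
    return [f(c - 10 ** -k) for k in range(1, steps + 1)]

c = 1.0
print(right_values(math.floor, c))  # all 1 = floor(1): right-continuous at 1
print(left_values(math.floor, c))   # all 0 != floor(1): not left-continuous
```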

12.3 Continuous functions between metric spaces

The concept of continuous real-valued functions can be generalized to functions between metric spaces. A metric space is a set X equipped with a function (called metric) dX, that can be thought of as a measurement of the distance of any two elements in X. Formally, the metric is a function

dX : X × X → R that satisfies a number of requirements, notably the triangle inequality. Given two metric spaces (X, dX) and (Y, dY) and a function

f : X → Y

then f is continuous at the point c in X (with respect to the given metrics) if for any positive real number ε, there exists a positive real number δ such that all x in X satisfying dX(x, c) < δ will also satisfy dY(f(x), f(c)) < ε. As in the case of real functions above, this is equivalent to the condition that for every sequence (xn) in X with limit lim xn = c, we have lim f(xn) = f(c). The latter condition can be weakened as follows: f is continuous at the point c if and only if for every convergent sequence (xn) in X with limit c, the sequence (f(xn)) is a Cauchy sequence, and c is in the domain of f. The set of points at which a function between metric spaces is continuous is a Gδ set – this follows from the ε-δ definition of continuity. This notion of continuity is applied, for example, in functional analysis. A key statement in this area says that a linear operator

T : V → W

between normed vector spaces V and W (which are vector spaces equipped with a compatible norm, denoted ||x||) is continuous if and only if it is bounded, that is, there is a constant K such that

∥T (x)∥ ≤ K∥x∥

for all x in V.
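In finite dimensions the boundedness condition can be checked directly: any matrix T satisfies ‖Tx‖ ≤ K‖x‖ with K its Frobenius norm, a convenient (not necessarily optimal) constant, by the Cauchy–Schwarz inequality applied row by row. A sketch with an arbitrary example matrix:

```python
import math
import random

# A matrix defines a bounded, hence continuous, linear operator.
T = [[2.0, -1.0, 0.0],
     [0.5,  3.0, 1.0]]

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def apply(T, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in T]

# Frobenius norm of T: a valid bounding constant K.
K = math.sqrt(sum(x * x for row in T for x in row))

random.seed(0)
for _ in range(1000):
    v = [random.uniform(-10, 10) for _ in range(3)]
    assert norm(apply(T, v)) <= K * norm(v) + 1e-9  # ||T x|| <= K ||x||
print("bound held on all samples, K =", round(K, 3))
```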

12.3.1 Uniform, Hölder and Lipschitz continuity

The concept of continuity for functions between metric spaces can be strengthened in various ways by limiting the way δ depends on ε and c in the definition above. Intuitively, a function f as above is uniformly continuous if the δ does not depend on the point c. More precisely, it is required that for every real number ε > 0 there exists δ > 0 such that for every c, b ∈ X with dX(b, c) < δ, we have that dY(f(b), f(c)) < ε. Thus, any uniformly continuous function is continuous. The converse does not hold in general, but holds when the domain space X is compact. Uniformly continuous maps can be defined in the more general situation of uniform spaces.[6] A function is Hölder continuous with exponent α (a real number) if there is a constant K such that for all b and c in X, the inequality

dY (f(b), f(c)) ≤ K · (dX (b, c))^α

holds. Any Hölder continuous function is uniformly continuous. The particular case α = 1 is referred to as Lipschitz continuity. That is, a function is Lipschitz continuous if there is a constant K such that the inequality

dY (f(b), f(c)) ≤ K · dX (b, c)

holds for all b, c in X.[7] The Lipschitz condition occurs, for example, in the Picard–Lindelöf theorem concerning the solutions of ordinary differential equations.
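The distinction between Hölder and Lipschitz continuity can be sketched numerically (the sampling scheme is an illustrative choice): the square root on [0, 1] is Hölder continuous with exponent α = 1/2 and constant K = 1, since |√b − √c|² ≤ |b − c|, but it is not Lipschitz, because the difference quotient blows up near 0.

```python
import math
import random

# Hölder-1/2 bound: |sqrt(b) - sqrt(c)| <= |b - c| ** 0.5 on [0, 1].
random.seed(1)
for _ in range(10000):
    b, c = random.random(), random.random()
    assert abs(math.sqrt(b) - math.sqrt(c)) <= abs(b - c) ** 0.5 + 1e-12

# No Lipschitz constant works: the ratio at the pair (0, c) is 1 / sqrt(c),
# which is unbounded as c -> 0.
for c in (1e-2, 1e-4, 1e-6):
    print(c, math.sqrt(c) / c)
```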

12.4 Continuous functions between topological spaces

Another, more abstract, notion of continuity is continuity of functions between topological spaces, in which there generally is no formal notion of distance, as there is in the case of metric spaces. A topological space is a set X together with a topology on X, which is a set of subsets of X satisfying a few requirements with respect to their unions and intersections that generalize the properties of the open balls in metric spaces while still allowing one to talk about

For a Lipschitz continuous function, there is a double cone (shown in white) whose vertex can be translated along the graph, so that the graph always remains entirely outside the cone.

Continuity of a function at a point.

the neighbourhoods of a given point. The elements of a topology are called open subsets of X (with respect to the topology). A function

f : X → Y between two topological spaces X and Y is continuous if for every open set V ⊆ Y, the inverse image

f −1(V ) = {x ∈ X | f(x) ∈ V } is an open subset of X. That is, f is a function between the sets X and Y (not between the elements of the topologies TX and TY), but the continuity of f depends on the topologies used on X and Y. This is equivalent to the condition that the preimages of the closed sets (which are the complements of the open subsets) in Y are closed in X. An extreme example: if a set X is given the discrete topology (in which every subset is open), all functions f : X → T to any topological space T are continuous. On the other hand, if X is equipped with the indiscrete topology (in which the only open subsets are the empty set and X) and the space T is at least T0, then the only continuous functions are the constant functions. Conversely, any function whose codomain is indiscrete is continuous.
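The open-set definition can be tested mechanically on a finite example (the spaces and maps below are invented for illustration): with X = {1, 2, 3} carrying a nested topology and Y = {'a', 'b'} discrete, only constant maps turn out to be continuous, reflecting the fact that this X is connected while Y is discrete.

```python
# f is continuous iff the preimage of every open set of Y is open in X.

X = {1, 2, 3}
TX = [set(), {1}, {1, 2}, {1, 2, 3}]       # a (nested) topology on X
TY = [set(), {'a'}, {'b'}, {'a', 'b'}]     # discrete topology on Y = {'a','b'}

def preimage(f, V):
    return {x for x in X if f[x] in V}

def is_continuous(f):
    return all(preimage(f, V) in TX for V in TY)

f = {1: 'a', 2: 'a', 3: 'a'}  # constant: preimages are X and the empty set
g = {1: 'a', 2: 'a', 3: 'b'}  # preimage of {'b'} is {3}, which is not open
print(is_continuous(f), is_continuous(g))
```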

12.4.1 Alternative definitions

Several equivalent definitions for a topological structure exist and thus there are several equivalent ways to define a continuous function.

Neighborhood definition

Continuity for functions between topological spaces (X, TX) and (Y, TY) may be defined at a point in terms of neighborhoods: a function f : X → Y is continuous at a point x ∈ X if and only if for any neighborhood N of its image f(x) ∈ Y, the preimage is again a neighborhood of x:

∀N ∈ N(f(x)) : f⁻¹(N) ∈ N(x).

Since neighborhood systems are upper sets, this can be restated as follows:

∀N ∈ N(f(x)) ∃M ∈ N(x) : M ⊆ f⁻¹(N), or equivalently ∀N ∈ N(f(x)) ∃M ∈ N(x) : f(M) ⊆ N,

the second version being a restatement involving the image rather than the preimage. Literally, this means that no matter how small the neighborhood of f(x) is chosen, one can always find a neighborhood of x mapped into it. There is also a simplification involving only open neighborhoods; in fact, the two are equivalent:

∀V ∈ TY with f(x) ∈ V, ∃U ∈ TX with x ∈ U : U ⊆ f⁻¹(V), or equivalently f(U) ⊆ V,

the second again being a restatement using images rather than preimages. If X and Y are metric spaces, it is equivalent to consider the neighborhood system of open balls centered at x and f(x) instead of all neighborhoods. This gives back the above ε-δ definition of continuity in the context of metric spaces. However, in general topological spaces, there is no notion of nearness or distance. Note, however, that if the target space is Hausdorff, it is still true that f is continuous at a if and only if the limit of f as x approaches a is f(a). At an isolated point, every function is continuous.

Sequences and nets

In several contexts, the topology of a space is conveniently specified in terms of limit points. In many instances, this is accomplished by specifying when a point is the limit of a sequence, but for some spaces that are too large in some sense, one specifies also when a point is the limit of more general sets of points indexed by a directed set, known as nets. A function is (Heine-)continuous only if it takes limits of sequences to limits of sequences. In the former case, preservation of limits is also sufficient; in the latter, a function may preserve all limits of sequences yet still fail to be continuous, and preservation of nets is a necessary and sufficient condition. In detail, a function f: X → Y is sequentially continuous if whenever a sequence (xn) in X converges to a limit x, the sequence (f(xn)) converges to f(x). Thus sequentially continuous functions “preserve sequential limits”. Every continuous function is sequentially continuous. If X is a first-countable space and countable choice holds, then the converse also holds: any function preserving sequential limits is continuous. In particular, if X is a metric space, sequential continuity and continuity are equivalent. For non-first-countable spaces, sequential continuity might be strictly weaker than continuity. (The spaces for which the two properties are equivalent are called sequential spaces.) This motivates the consideration of nets instead of sequences in general topological spaces. Continuous functions preserve limits of nets, and in fact this property characterizes continuous functions.

Closure operator definition

Instead of specifying the open subsets of a topological space, the topology can also be determined by a closure operator (denoted cl), which assigns to any subset A ⊆ X its closure, or by an interior operator (denoted int), which assigns to any subset A of X its interior. In these terms, a function f : (X, cl) → (X′, cl′) between topological spaces is continuous in the sense above if and only if for all subsets A of X

f(cl(A)) ⊆ cl′(f(A)).

That is to say, given any element x of X that is in the closure of any subset A, f(x) belongs to the closure of f(A). This is equivalent to the requirement that for all subsets A′ of X′

f⁻¹(cl′(A′)) ⊇ cl(f⁻¹(A′)).

Moreover, f : (X, int) → (X′, int′) is continuous if and only if f⁻¹(int′(A′)) ⊆ int(f⁻¹(A′)) for any subset A′ of X′.

12.4.2 Properties

If f: X → Y and g: Y → Z are continuous, then so is the composition g ∘ f: X → Z. If f: X → Y is continuous and

• X is compact, then f(X) is compact.
• X is connected, then f(X) is connected.
• X is path-connected, then f(X) is path-connected.
• X is Lindelöf, then f(X) is Lindelöf.
• X is separable, then f(X) is separable.

The possible topologies on a fixed set X are partially ordered: a topology τ1 is said to be coarser than another topology τ2 (notation: τ1 ⊆ τ2) if every open subset with respect to τ1 is also open with respect to τ2. Then, the identity map

idX:(X, τ2) → (X, τ1)

is continuous if and only if τ1 ⊆ τ2 (see also comparison of topologies). More generally, a continuous function

(X, τX ) → (Y, τY )

stays continuous if the topology τY is replaced by a coarser topology and/or τX is replaced by a finer topology.

12.4.3 Homeomorphisms

Symmetric to the concept of a continuous map is an open map, for which images of open sets are open. In fact, if an open map f has an inverse function, that inverse is continuous, and if a continuous map g has an inverse, that inverse is open. Given a bijective function f between two topological spaces, the inverse function f−1 need not be continuous. A bijective continuous function with continuous inverse function is called a homeomorphism. If a continuous bijection has as its domain a compact space and its codomain is Hausdorff, then it is a homeomorphism.

12.4.4 Defining topologies via continuous functions

Given a function

f : X → S,

where X is a topological space and S is a set (without a specified topology), the final topology on S is defined by letting the open sets of S be those subsets A of S for which f−1(A) is open in X. If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is coarser than the final topology on S. Thus the final topology can be characterized as the finest topology on S that makes f continuous. If f is surjective, this topology is canonically identified with the quotient topology under the equivalence relation defined by f. Dually, for a function f from a set S to a topological space X, the initial topology on S is defined by letting the open subsets of S be the preimages f−1(U) of open subsets U of X. If S has an existing topology, f is continuous with respect to this topology if and only if the existing topology is finer than the initial topology on S. Thus the initial topology can be characterized as the coarsest topology on S that makes f continuous. If f is injective, this topology is canonically identified with the subspace topology of S, viewed as a subset of X. More generally, given a set S, specifying the set of continuous functions

S → X

into all topological spaces X defines a topology. Dually, a similar idea can be applied to maps

X → S.

This is an instance of a universal property.

12.5 Related notions

Various other mathematical domains use the concept of continuity in different but related meanings. For example, in order theory, an order-preserving function f: X → Y between two complete lattices X and Y (particular types of partially ordered sets) is continuous if for each subset A of X, we have sup(f(A)) = f(sup(A)). Here sup is the supremum with respect to the orderings in X and Y, respectively. Applying this to the complete lattice consisting of the open subsets of a topological space, this gives back the notion of continuity for maps between topological spaces. In category theory, a functor

F : C → D between two categories is called continuous if it commutes with small limits. That is to say,

←−lim_{i∈I} F(Ci) ≅ F(←−lim_{i∈I} Ci)

for any small (i.e., indexed by a set I, as opposed to a class) diagram of objects in C. A continuity space is a generalization of metric spaces and posets,[8][9] which uses the concept of quantales, and that can be used to unify the notions of metric spaces and domains.[10]

12.6 See also

• Absolute continuity
• Classification of discontinuities
• Coarse function
• Continuous stochastic process
• Dini continuity
• Discrete function
• Equicontinuity
• Normal function
• Piecewise
• Symmetrically continuous function

12.7 Notes

[1] Rusnock, P.; Kerr-Lawson, A. (2005), “Bolzano and uniform continuity”, Historia Mathematica 32 (3): 303–311, doi:10.1016/j.hm.2004.11.003

[2] Lang, Serge (1997), Undergraduate analysis, Undergraduate Texts in Mathematics (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-94841-6, section II.4

[3] Introduction to Real Analysis, updated April 2010, William F. Trench, Theorem 3.5.2, p. 172

[4] Introduction to Real Analysis, updated April 2010, William F. Trench, 3.5 “A More Advanced Look at the Existence of the Proper Riemann Integral”, pp. 171–177

[5] http://www.math.wisc.edu/~keisler/calc.html

[6] Gaal, Steven A. (2009), Point set topology, New York: Dover Publications, ISBN 978-0-486-47222-5, section IV.10

[7] Searcóid, Mícheál Ó (2006), Metric spaces, Springer undergraduate mathematics series, Berlin, New York: Springer-Verlag, ISBN 978-1-84628-369-7, section 9.4

[8] Flagg, R. C. (1997). “Quantales and continuity spaces”. Algebra Universalis. CiteSeerX: 10.1.1.48.851.

[9] Kopperman, R. (1988). “All topologies come from generalized metrics”. American Mathematical Monthly 95 (2): 89–97. doi:10.2307/2323060.

[10] Flagg, B.; Kopperman, R. (1997). “Continuity spaces: Reconciling domains and metric spaces”. Theoretical Computer Science 177 (1): 111–138. doi:10.1016/S0304-3975(97)00236-3.

12.8 References

• Hazewinkel, Michiel, ed. (2001), “Continuous function”, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
• Visual Calculus by Lawrence S. Husch, University of Tennessee (2001).

Chapter 13

Continuous graph

This article is about sets of vertices and edges (graphs) defined on a continuous space. For graphs of continuous functions, see Continuous function. For connected graphs, see Connectivity (graph theory). For graphs where each edge is a continuous line or curve, see Geometric graph theory, Topological graph theory, and Quantum graph.

In graph theory, a continuous graph is a graph whose set of vertices is a continuous space X. Continuous graphs are used as models for real-world graphs, as an alternative to other graph models such as exponential random graph models.

13.1 Definition

Edges, being unordered pairs of vertices, are defined in a continuous graph by a symmetric relation[1] (i.e. subset) of the cartesian product X2, or equivalently by a symmetric function from X2 to the set {0, 1}. This could represent 1 for an edge between two vertices and 0 for no edge, or it could represent a complete graph with a 2-coloring of the edges. In this context, the set {0,1} is often denoted by 2, so we have f: X2 → 2. For multi-colorings of edges we would have f: X2 → n.[2][3][4] The value of the function f(x,y) for x = y (i.e. whether the relation is reflexive) determines whether the graph has loops, but this isn't usually considered, as it doesn't make much difference to the theory.[1] In descriptive set theory the spaces of interest are perfect separable Polish spaces and related spaces.[5][6] Given a finite graph H and a continuous or discrete graph G, the homomorphism density t(H,G) is defined to be the proportion of injective maps from the vertex set of H to the vertex set of G which are graph homomorphisms. For instance, if H consists of two vertices joined by a single edge, t(H,G) is the edge density of G. A sequence of finite (dense) graphs Gn is said to be convergent if, for each fixed finite graph H, the homomorphism densities t(H,Gn) form a convergent sequence of numbers. A continuous graph G is said to be a limit of such a sequence if t(H,Gn) converges to t(H,G) for each H, in which case we refer to G as a graphon. Such a limit is a symmetric measurable function in two variables,[7] that can often be written f: X2 → [0,1], which is the same as a complete continuous graph where the edges have values in the interval [0,1]. It can be shown that any sequence of dense graphs has a convergent subsequence, whose limit is a graphon which is unique up to measure rearrangement.[8] A key tool used in the proof of this claim is the Szemerédi regularity lemma. For instance, for each natural number n, let Gn be a complete bipartite graph between two sets of n vertices.
Then in the limit n → ∞, Gn converges to the graphon described by the function f: [0,1]2 → [0,1] defined by setting f(x,y) = 1 when x ∈ [0, 1/2), y ∈ [1/2, 1] or y ∈ [0, 1/2), x ∈ [1/2, 1], and f(x,y) = 0 otherwise. Graphons can be used to establish results in the property testing of graphs.[9] For any sets X and Y, the two-variable symmetric function f: X2 → Y is a complete graph with edges labelled with elements of Y. For multi-variable symmetric functions we have f: Xn → Y for the complete hypergraph with edges labelled with elements of Y.[10] Given a discrete-time dynamical system, the trajectories, or orbits (state space), of all the points form a (possibly disconnected) directed graph, which is a continuous graph if the system is defined on a continuous space. The trajectories of a continuous-time dynamical system would form a collection of curved paths (phase space) rather than a

collection of piece-wise linear paths, and so is not a graph in the traditional sense.
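The homomorphism densities of the bipartite graphon in the example above can be estimated by Monte Carlo sampling (a sketch; the sample counts and helper names are arbitrary choices). The edge density of the limit graphon f is 1/2, and its triangle density is exactly 0, as expected for a bipartite limit.

```python
import random

# The graphon from the complete-bipartite example: f(x, y) = 1 exactly
# when one of x, y lies in [0, 1/2) and the other in [1/2, 1].
def f(x, y):
    return 1.0 if (x < 0.5) != (y < 0.5) else 0.0

def t_edge(samples=100000):
    """Monte Carlo estimate of t(K2, f): the edge density of the graphon."""
    random.seed(0)
    return sum(f(random.random(), random.random())
               for _ in range(samples)) / samples

def t_triangle(samples=100000):
    """t(K3, f): the triangle density, zero for any bipartite graphon."""
    random.seed(0)
    total = 0.0
    for _ in range(samples):
        x, y, z = random.random(), random.random(), random.random()
        total += f(x, y) * f(y, z) * f(z, x)
    return total / samples

print(t_edge())      # close to 0.5
print(t_triangle())  # exactly 0.0: a bipartite limit has no odd cycles
```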

13.2 Applications

As with any graph model, continuous graphs can be used to model many different types of real-world graphs. One example is given by peer-to-peer systems.[11]

13.3 See also

• Combinatorial set theory
• Tree (set theory)

• Petri Net#Discrete, continuous, and hybrid Petri nets

13.4 References

[1] Webster, J. (1996). “Continuum theory in the digital setting”. CiteSeerX: 10.1.1.49.4540.

[2] Deaconu, Valentin (July 23–26, 1998). “Continuous graphs and C*-algebras”. Operator theoretical methods. 17th International Conference on Operator Theory. Timișoara (Romania). ISBN 978-973-99097-2-3. CiteSeerX: 10.1.1.147.1638.

[3] Geschke, Stefan; Goldstern, Martin; Kojman, Menachem (2004). “Continuous Ramsey Theory on Polish Spaces and covering the plane by functions”. Journal of Mathematical Logic. CiteSeerX: 10.1.1.74.6434.

[4] Geschke, Stefan. “Infinite Ramsey Theory”.

[5] Nahum, Ronny; Zafrany, Samy (1994). “A Topological Linking Theorem in Simple Graphs”.

[6] Nahum, R.; Zafrany, S. (1994). “Topological complexity of graphs and their spanning trees”. CiteSeerX: 10.1.1.22.1949.

[7] Borgs, Christian; Chayes, Jennifer; Lovász, László (2006). “Graph Limits and Parameter Testing”. Proceedings of the thirty-eighth annual ACM symposium on Theory of computing. CiteSeerX: 10.1.1.92.2788.

[8] Lovász, László; Szegedy, Balázs (2006). “Limits of dense graph sequences”. J. Combin. Theory Ser. B 96 (6): 933–957. doi:10.1016/j.jctb.2006.05.002.

[9] Lovász, László; Szegedy, Balázs (2010). “Testing properties of graphs and functions”. Israel J. Math. 178: 113–156. doi:10.1007/s11856-010-0060-7.

[10] Austin, T.; Tao, T. (2010). “Testability and repair of hereditary hypergraph properties”. Random Structures and Algorithms. arXiv:0801.2179.

[11] Naor, M.; Wieder, U. (2007). “Novel architectures for P2P applications: the continuous-discrete approach”. ACM Transactions on Algorithms (TALG). CiteSeerX: 10.1.1.64.8030.

13.5 Further reading

• Uncountable Graphs and Invariant Measures on the Set of Universal Countable Graphs, F. Petrov, A. Vershik, 2008

Chapter 14

Degree (graph theory)

A graph with vertices labeled by degree

In graph theory, the degree (or valency) of a vertex of a graph is the number of edges incident to the vertex, with loops counted twice.[1] The degree of a vertex v is denoted deg(v) or deg v . The maximum degree of a graph G, denoted by Δ(G), and the minimum degree of a graph, denoted by δ(G), are the maximum and minimum degree of its vertices. In the graph on the right, the maximum degree is 5 and the minimum degree is 0. In a regular graph, all degrees are the same, and so we can speak of the degree of the graph.

14.1 Handshaking lemma

Main article: handshaking lemma

The degree sum formula states that, given a graph G = (V,E) ,

∑_{v∈V} deg(v) = 2|E|.


The formula implies that in any graph, the number of vertices with odd degree is even. This statement (as well as the degree sum formula) is known as the handshaking lemma. The latter name comes from a popular mathematical problem, to prove that in any group of people the number of people who have shaken hands with an odd number of other people from the group is even.
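Both the degree sum formula and the handshaking lemma can be verified on a small edge list (the example graph below is invented for illustration; per the definition above, a loop contributes 2 to its vertex's degree):

```python
from collections import Counter

edges = [(1, 2), (1, 3), (2, 3), (3, 4), (4, 4)]   # (4, 4) is a loop

deg = Counter()
for u, v in edges:
    deg[u] += 1
    deg[v] += 1            # when u == v, the loop adds 2 to deg[u] in total

assert sum(deg.values()) == 2 * len(edges)          # degree sum formula
odd = [v for v in deg if deg[v] % 2 == 1]
assert len(odd) % 2 == 0                            # handshaking lemma
print(dict(deg), odd)
```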

14.2 Degree sequence

Two non-isomorphic graphs with the same degree sequence (3, 2, 2, 2, 2, 1, 1, 1).

The degree sequence of an undirected graph is the non-increasing sequence of its vertex degrees;[2] for the above graph it is (5, 3, 3, 2, 2, 1, 0). The degree sequence is a graph invariant, so isomorphic graphs have the same degree sequence. However, the degree sequence does not, in general, uniquely identify a graph; in some cases, non-isomorphic graphs have the same degree sequence. The degree sequence problem is the problem of finding some or all graphs with the degree sequence being a given non-increasing sequence of positive integers. (Trailing zeroes may be ignored since they are trivially realized by adding an appropriate number of isolated vertices to the graph.) A sequence which is the degree sequence of some graph, i.e. for which the degree sequence problem has a solution, is called a graphic or graphical sequence. As a consequence of the degree sum formula, any sequence with an odd sum, such as (3, 3, 1), cannot be realized as the degree sequence of a graph. The converse is also true: if a sequence has an even sum, it is the degree sequence of a multigraph. The construction of such a graph is straightforward: connect vertices with odd degrees in pairs by a matching, and fill out the remaining even degree counts by self-loops. The question of whether a given degree sequence can be realized by a simple graph is more challenging. This problem is also called the graph realization problem and can be solved by either the Erdős–Gallai theorem or the Havel–Hakimi algorithm. The problem of finding or estimating the number of graphs with a given degree sequence is a problem from the field of graph enumeration.
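The Havel–Hakimi algorithm mentioned above admits a short implementation (a sketch, not an optimized version): repeatedly remove the largest remaining degree d and decrement the next d entries; the sequence is graphic exactly when this process never runs out of vertices or drives an entry negative.

```python
def is_graphic(seq):
    """Havel-Hakimi test: is seq the degree sequence of a simple graph?"""
    seq = sorted(seq, reverse=True)
    while seq and seq[0] > 0:
        d = seq.pop(0)
        if d > len(seq):
            return False            # not enough other vertices to attach to
        for i in range(d):
            seq[i] -= 1
            if seq[i] < 0:
                return False        # a degree went negative: not graphic
        seq.sort(reverse=True)
    return True

print(is_graphic([3, 2, 2, 2, 2, 1, 1, 1]))  # True: the figure's sequence
print(is_graphic([3, 3, 1]))                 # False: odd degree sum
print(is_graphic([5, 3, 3, 2, 2, 1, 0]))     # True: the sequence in the text
```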

14.3 Special values

An undirected graph with leaf nodes 4, 5, 6, 7, 10, 11, and 12

• A vertex with degree 0 is called an isolated vertex.

• A vertex with degree 1 is called a leaf vertex or end vertex, and the edge incident with that vertex is called a pendant edge. In the graph on the right, {3,5} is a pendant edge. This terminology is common in the study of trees in graph theory and especially trees as data structures.

• A vertex with degree n − 1 in a graph on n vertices is called a dominating vertex.

14.4 Global properties

• If each vertex of the graph has the same degree k, the graph is called a k-regular graph and the graph itself is said to have degree k. Similarly, a bipartite graph in which any two vertices on the same side of the bipartition have the same degree is called a biregular graph.

• An undirected, connected graph has an Eulerian path if and only if it has either 0 or 2 vertices of odd degree. If it has 0 vertices of odd degree, the Eulerian path is an Eulerian circuit.

• A directed graph is a directed pseudoforest if and only if every vertex has outdegree at most 1. A functional graph is a special case of a pseudoforest in which every vertex has outdegree exactly 1.

• By Brooks’ theorem, any connected graph other than a complete graph or an odd cycle has chromatic number at most Δ, and by Vizing’s theorem any graph has chromatic index at most Δ + 1.
• A k-degenerate graph is a graph in which each subgraph has a vertex of degree at most k.

14.5 See also

• Indegree, outdegree for digraphs

• degree sequence for bipartite graphs

14.6 Notes

[1] Diestel p.5

[2] Diestel p.278

14.7 References

• Diestel, Reinhard (2005), Graph Theory (3rd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-26183-4.

• Erdős, P.; Gallai, T. (1960), “Gráfok előírt fokszámú pontokkal”, Matematikai Lapok (in Hungarian) 11: 264– 274.

• Havel, Václav (1955), “A remark on the existence of finite graphs”, Časopis pro pěstování matematiky (in Czech) 80: 477–480

• Hakimi, S. L. (1962), “On realizability of a set of integers as degrees of the vertices of a linear graph. I”, Journal of the Society for Industrial and Applied Mathematics 10: 496–506, MR 0148049.

• Sierksma, Gerard; Hoogeveen, Han (1991), “Seven criteria for integer sequences being graphic”, Journal of Graph Theory 15 (2): 223–231, doi:10.1002/jgt.3190150209, MR 1106533.

Chapter 15

Duality (mathematics)

In mathematics, a duality, generally speaking, translates concepts, theorems or mathematical structures into other concepts, theorems or structures, in a one-to-one fashion, often (but not always) by means of an involution operation: if the dual of A is B, then the dual of B is A. Such involutions sometimes have fixed points, so that the dual of A is A itself. For example, Desargues’ theorem in projective geometry is self-dual in this sense. In mathematical contexts, duality has numerous meanings,[1] although it is “a very pervasive and important concept in (modern) mathematics”[2] and “an important general theme that has manifestations in almost every area of mathematics”.[3] Many mathematical dualities between objects of two types correspond to pairings, bilinear functions from an object of one type and another object of the second type to some family of scalars. For instance, linear algebra duality corresponds in this way to bilinear maps from pairs of vector spaces to scalars, the duality between distributions and the associated test functions corresponds to the pairing in which one integrates a distribution against a test function, and Poincaré duality corresponds similarly to intersection number, viewed as a pairing between submanifolds of a given manifold.[4] Duality can also be seen as a functor, at least in the realm of vector spaces. There one assigns to each space its dual space, and the pullback construction assigns to each arrow f: V → W its dual f∗: W∗ → V∗.

15.1 Order-reversing dualities

Main article: Duality (order theory)

A particularly simple form of duality comes from order theory. The dual of a poset P = (X, ≤) is the poset Pd = (X, ≥) comprising the same ground set but the converse relation. Familiar examples of dual partial orders include

• the subset and superset relations ⊂ and ⊃ on any collection of sets,

• the divides and multiple-of relations on the integers, and

• the descendant-of and ancestor-of relations on the set of humans.

A concept defined for a partial order P will correspond to a dual concept on the dual poset Pd. For instance, a minimal element of P will be a maximal element of Pd: minimality and maximality are dual concepts in order theory. Other pairs of dual concepts are upper and lower bounds, lower sets and upper sets, and ideals and filters. A particular order reversal of this type occurs in the family of all subsets of some set S: if A¯ = S \ A denotes the complement set, then A ⊂ B if and only if B¯ ⊂ A¯ . In topology, open sets and closed sets are dual concepts: the complement of an open set is closed, and vice versa. In matroid theory, the family of sets complementary to the independent sets of a given matroid themselves form another matroid, called the dual matroid. In logic, one may represent a truth assignment to the variables of an unquantified formula as a set, the variables that are true for the assignment. A truth assignment satisfies the formula if and only if the complementary truth assignment satisfies the De Morgan dual of its formula. The existential and universal quantifiers in logic are similarly dual.
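The order-reversing duality given by complementation can be verified exhaustively on a small ground set (the set S below is an arbitrary example): A ⊆ B holds exactly when S \ B ⊆ S \ A.

```python
from itertools import combinations

# Check that complementation reverses the subset order on all subsets of S.
S = {1, 2, 3, 4}
subsets = [set(c) for r in range(len(S) + 1) for c in combinations(S, r)]

for A in subsets:
    for B in subsets:
        assert (A <= B) == ((S - B) <= (S - A))
print("complementation reverses inclusion on all",
      len(subsets) ** 2, "pairs of subsets")
```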


A partial order may be interpreted as a category in which there is an arrow from x to y in the category if and only if x ≤ y in the partial order. The order-reversing duality of partial orders can be extended to the concept of a dual category, the category formed by reversing all the arrows in a given category. Many of the specific dualities described later are dualities of categories in this sense. According to Artstein-Avidan and Milman,[5][6] a duality transform is just an involutive antiautomorphism T of a partially ordered set S, that is, an order-reversing involution T : S → S. Surprisingly, in several important cases these simple properties determine the transform uniquely up to some simple symmetries. If T1, T2 are two duality transforms then their composition is an order automorphism of S; thus, any two duality transforms differ only by an order automorphism. For example, all order automorphisms of a power set S = 2R are induced by permutations of R. The papers cited above treat only sets S of functions on Rn satisfying some condition of convexity and prove that all order automorphisms are induced by linear or affine transformations of Rn.

15.2 Dimension-reversing dualities

The features of the cube and its dual octahedron correspond one-for-one with dimensions reversed.

There are many distinct but interrelated dualities in which geometric or topological objects correspond to other objects of the same type, but with a reversal of the dimensions of the features of the objects. A classical example of this is the duality of the Platonic solids, in which the cube and the octahedron form a dual pair, the dodecahedron and the icosahedron form a dual pair, and the tetrahedron is self-dual. The dual polyhedron of any of these polyhedra may be formed as the convex hull of the center points of each face of the primal polyhedron, so the vertices of the dual correspond one-for-one with the faces of the primal. Similarly, each edge of the dual corresponds to an edge of the primal, and each face of the dual corresponds to a vertex of the primal. These correspondences are incidence-preserving: if two parts of the primal polyhedron touch each other, so do the corresponding two parts of the dual polyhedron. More generally, using the concept of polar reciprocation, any convex polyhedron, or more generally any convex polytope, corresponds to a dual polyhedron or dual polytope, with an i-dimensional feature of an n-dimensional polytope corresponding to an (n − i − 1)-dimensional feature of the dual polytope. The incidence-preserving nature of the duality is reflected in the fact that the face lattices of the primal and dual polyhedra or polytopes are themselves order-theoretic duals. Duality of polytopes and order-theoretic duality are both involutions: the dual polytope of the dual polytope of any polytope is the original polytope, and reversing all order-relations twice returns to the original order. Choosing a different center of polarity leads to geometrically different dual polytopes, but all have the same combinatorial structure.

A planar graph in blue, and its dual graph in red.

From any three-dimensional polyhedron, one can form a planar graph, the graph of its vertices and edges. The dual polyhedron has a dual graph, a graph with one vertex for each face of the polyhedron and with one edge for every two adjacent faces. The same concept of planar graph duality may be generalized to graphs that are drawn in the plane but that do not come from a three-dimensional polyhedron, or more generally to graph embeddings on surfaces of higher genus: one may draw a dual graph by placing one vertex within each region bounded by a cycle of edges in the embedding, and drawing an edge connecting any two regions that share a boundary edge. An important example of this type comes from computational geometry: the duality for any finite set S of points in the plane between the Delaunay triangulation of S and the Voronoi diagram of S.

As with dual polyhedra and dual polytopes, the duality of graphs on surfaces is a dimension-reversing involution: each vertex in the primal embedded graph corresponds to a region of the dual embedding, each edge in the primal is crossed by an edge in the dual, and each region of the primal corresponds to a vertex of the dual. The dual graph depends on how the primal graph is embedded: different planar embeddings of a single graph may lead to different dual graphs. Matroid duality is an algebraic extension of planar graph duality, in the sense that the dual matroid of the graphic matroid of a planar graph is isomorphic to the graphic matroid of the dual graph.

In topology, Poincaré duality also reverses dimensions; it corresponds to the fact that, if a topological manifold is represented as a cell complex, then the dual of the complex (a higher-dimensional generalization of the planar graph dual) represents the same manifold. In Poincaré duality, this homeomorphism is reflected in an isomorphism of the kth homology group and the (n − k)th cohomology group.

The complete quadrangle, a configuration of four points and six lines in the projective plane (left) and its dual configuration, the complete quadrilateral, with four lines and six points (right).

Another example of a dimension-reversing duality arises in projective geometry.[7] In the projective plane, it is possible to find geometric transformations that map each point of the projective plane to a line, and each line of the projective plane to a point, in an incidence-preserving way: in terms of the incidence matrix of the points and lines in the plane, this operation is just that of forming the transpose. Transformations of this type exist also in any higher dimension; one way to construct them is to use the same polar transformations that generate polyhedron and polytope duality. Due to this ability to replace any configuration of points and lines with a corresponding configuration of lines and points, there arises a general principle of duality in projective geometry: given any theorem in plane projective geometry, exchanging the terms “point” and “line” everywhere results in a new, equally valid theorem.[8] A simple example is that the statement “two points determine a unique line, the line passing through these points” has the dual statement “two lines determine a unique point, the intersection point of these two lines”. For further examples, see Dual theorems.

The points, lines, and higher-dimensional subspaces of an n-dimensional projective space may be interpreted as describing the linear subspaces of an (n + 1)-dimensional vector space; if this vector space is supplied with an inner product, the transformation from any linear subspace to its perpendicular subspace is an example of a projective duality. The Hodge dual extends this duality within an inner product space by providing a canonical correspondence between the elements of the exterior algebra.

A kind of geometric duality also occurs in optimization theory, but not one that reverses dimensions.
A linear program may be specified by a system of real variables (the coordinates for a point in Euclidean space R^n), a system of linear constraints (specifying that the point lie in a halfspace; the intersection of these halfspaces is a convex polytope, the feasible region of the program), and a linear function (what to optimize). Every linear program has a dual problem with the same optimal solution, but the variables in the dual problem correspond to constraints in the primal problem and vice versa.
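The following sketch illustrates this on a deliberately tiny linear program (the objective and constraints are made up for illustration), solving both the primal and its dual by brute-force vertex enumeration, under the assumption that the optimum is attained at a vertex:

```python
from itertools import combinations

def solve_2d_lp(c, A, b, maximize=True):
    """Optimize c.x over {x : A x <= b} by enumerating candidate
    vertices (intersections of pairs of constraint boundary lines),
    assuming the optimum is attained at a vertex."""
    best = None
    for (a1, b1), (a2, b2) in combinations(list(zip(A, b)), 2):
        det = a1[0]*a2[1] - a1[1]*a2[0]
        if det == 0:
            continue  # parallel boundary lines
        x = (b1*a2[1] - b2*a1[1]) / det
        y = (a1[0]*b2 - a2[0]*b1) / det
        if all(ai[0]*x + ai[1]*y <= bi + 1e-9 for ai, bi in zip(A, b)):
            val = c[0]*x + c[1]*y
            if best is None or (val > best if maximize else val < best):
                best = val
    return best

# Primal: maximize 3x + 2y  s.t.  x + y <= 4,  x <= 2,  x >= 0,  y >= 0.
primal = solve_2d_lp((3, 2),
                     [(1, 1), (1, 0), (-1, 0), (0, -1)],
                     [4, 2, 0, 0], maximize=True)

# Dual: minimize 4u + 2v  s.t.  u + v >= 3,  u >= 2,  u >= 0,  v >= 0
# (one dual variable per primal constraint; >= rewritten as <=).
dual = solve_2d_lp((4, 2),
                   [(-1, -1), (-1, 0), (-1, 0), (0, -1)],
                   [-3, -2, 0, 0], maximize=False)

# Strong duality: the two optimal values coincide.
assert abs(primal - dual) < 1e-9
```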

15.3 Duality in logic and set theory

In logic, functions or relations A and B are considered dual if A(¬x) = ¬B(x), where ¬ is logical negation. The basic duality of this type is the duality of the ∃ and ∀ quantifiers in classical logic. These are dual because ∃x.¬P(x) and ¬∀x.P(x) are equivalent for all predicates P in classical logic: if there exists an x for which P fails to hold, then it is false that P holds for all x (but the converse does not hold constructively). From this fundamental logical duality follow several others:

• A formula is said to be satisfiable in a certain model if there are assignments to its free variables that render it true; it is valid if every assignment to its free variables makes it true. Satisfiability and validity are dual because the invalid formulas are precisely those whose negations are satisfiable, and the unsatisfiable formulas are those whose negations are valid. This can be viewed as a special case of the previous item, with the quantifiers ranging over interpretations.

• In classical logic, the ∧ and ∨ operators are dual in this sense, because (¬x ∧ ¬y) and ¬(x ∨ y) are equivalent. This means that for every theorem of classical logic there is an equivalent dual theorem. De Morgan’s laws are examples. More generally, ⋀i (¬xi) = ¬ ⋁i xi. The left side is true if and only if ∀i.¬xi, and the right side if and only if ¬∃i.xi.

• In modal logic, □ p means that the proposition p is “necessarily” true, and ♢p that p is “possibly” true. Most interpretations of modal logic assign dual meanings to these two operators. For example in Kripke semantics, "p is possibly true” means “there exists some world W in which p is true”, while "p is necessarily true” means “for all worlds W, p is true”. The duality of □ and ♢ then follows from the analogous duality of ∀ and ∃. Other dual modal operators behave similarly. For example, temporal logic has operators denoting “will be true at some time in the future” and “will be true at all times in the future” which are similarly dual.
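The propositional dualities in the list above can be verified mechanically by exhausting all Boolean assignments; a small sketch:

```python
from itertools import product

# De Morgan duality of ∧ and ∨ over all Boolean assignments:
# ¬x ∧ ¬y  ⇔  ¬(x ∨ y)
for x, y in product([False, True], repeat=2):
    assert ((not x) and (not y)) == (not (x or y))

# ... and its n-ary form ⋀(¬xi) ⇔ ¬⋁ xi, with all/any
# standing in for the big conjunction and disjunction.
for n in range(1, 5):
    for xs in product([False, True], repeat=n):
        assert all(not x for x in xs) == (not any(xs))
```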

Other analogous dualities follow from these:

• Set-theoretic union and intersection are dual under the set complement operator ⋅^C. That is, A^C ∩ B^C = (A ∪ B)^C, and more generally, ⋂ Aα^C = (⋃ Aα)^C. This follows from the duality of ∀ and ∃: an element x is a member of ⋂ Aα^C if and only if ∀α.¬x∈Aα, and is a member of (⋃ Aα)^C if and only if ¬∃α.x∈Aα.
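A small check of the indexed De Morgan law, with an arbitrarily chosen universe and family of subsets:

```python
from functools import reduce

# Universe and family of subsets chosen arbitrarily for illustration.
U = frozenset(range(8))
family = [frozenset({0, 1, 2}), frozenset({2, 3, 4}), frozenset({1, 4, 5})]

C = lambda A: U - A  # complement within the universe U

# ⋂ Aα^C = (⋃ Aα)^C
inter_of_complements = reduce(frozenset.intersection, map(C, family))
complement_of_union = C(reduce(frozenset.union, family))
assert inter_of_complements == complement_of_union

# ... and the quantifier duality it comes from:
for x in U:
    assert (x in inter_of_complements) == all(x not in A for A in family)
    assert (x in complement_of_union) == (not any(x in A for A in family))
```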

Topology inherits a duality between open and closed subsets of some fixed topological space X: a subset U of X is closed if and only if its complement in X is open. Because of this, many theorems about closed sets are dual to theorems about open sets. For example, any union of open sets is open, so dually, any intersection of closed sets is closed. The interior of a set is the largest open set contained in it, and the closure of the set is the smallest closed set that contains it. Because of the duality, the complement of the interior of any set U is equal to the closure of the complement of U. The collection of all open subsets of a topological space X forms a complete Heyting algebra. There is a duality, known as Stone duality, connecting sober spaces and spatial locales.

• Birkhoff’s representation theorem relating distributive lattices and partial orders

15.4 Dual objects

A group of dualities can be described by endowing, for any mathematical object X, the set of morphisms Hom (X, D) into some fixed object D, with a structure similar to that of X. This is sometimes called internal Hom. In general, this yields a true duality only for specific choices of D, in which case X∗ = Hom (X, D) is referred to as the dual of X. It may or may not be true that the bidual, that is to say, the dual of the dual, X∗∗ = (X∗)∗, is isomorphic to X, as the following example, which underlies many other dualities, shows: the dual vector space V∗ of a K-vector space V is defined as

V∗ = Hom (V, K).

The set of morphisms, i.e., linear maps, is a vector space in its own right. There is always a natural, injective map V → V∗∗ given by v ↦ (f ↦ f(v)), where f is an element of the dual space. That map is an isomorphism if and only if the dimension of V is finite. In the realm of topological vector spaces, a similar construction exists, replacing the dual by the topological dual vector space. A topological vector space that is canonically isomorphic to its bidual is called a reflexive space. The dual lattice of a lattice L is given by

Hom (L, Z),

which is used in the construction of toric varieties.[9] The Pontryagin dual of a locally compact topological group G is given by

Hom (G, S^1),

the continuous group homomorphisms with values in the circle (with multiplication of complex numbers as group operation).
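Returning to the vector-space example above, the evaluation map v ↦ (f ↦ f(v)) can be sketched concretely for V = R², where it is an isomorphism; functionals are represented here as coefficient pairs, a convention chosen purely for illustration:

```python
# V = R^2; a functional f = (a, b) acts by f(v) = a*v0 + b*v1.
def apply(f, v):
    return f[0] * v[0] + f[1] * v[1]

dual_basis = [(1, 0), (0, 1)]  # dual to the standard basis of R^2

def to_bidual(v):
    """The natural map V → V**, v ↦ (f ↦ f(v))."""
    return lambda f: apply(f, v)

def from_bidual(phi):
    """Read off coordinates by evaluating on the dual basis; since
    dim V is finite this inverts the evaluation map."""
    return (phi(dual_basis[0]), phi(dual_basis[1]))

# Round-tripping through the bidual recovers every vector.
for v in [(3, -1), (0, 0), (2, 5)]:
    assert from_bidual(to_bidual(v)) == v
```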

15.5 Dual categories

Main article: Dual (category theory)

15.5.1 Opposite category and adjoint functors

In another group of dualities, the objects of one theory are translated into objects of another theory and the maps between objects in the first theory are translated into morphisms in the second theory, but with direction reversed. Using the parlance of category theory, this amounts to a contravariant functor between two categories C and D:

F: C → D

which for any two objects X and Y of C gives a map

HomC(X, Y) → HomD(F(Y), F(X))

That functor may or may not be an equivalence of categories. There are various situations where such a functor is an equivalence between the opposite category C^op of C and D. Using a duality of this type, every statement in the first theory can be translated into a “dual” statement in the second theory, where the direction of all arrows has to be reversed.[10] Therefore, any duality between categories C and D is formally the same as an equivalence between C and D^op (equivalently, between C^op and D). However, in many circumstances the opposite categories have no inherent meaning, which makes duality an additional, separate concept.[11] A category that is equivalent to its dual is called self-dual.[12]

Many category-theoretic notions come in pairs in the sense that they correspond to each other under passage to the opposite category. For example, Cartesian products Y1 × Y2 and disjoint unions Y1 ⊔ Y2 of sets are dual to each other in the sense that

Hom (X, Y1 × Y2) = Hom (X, Y1) × Hom (X, Y2) and

Hom (Y1 ⊔ Y2, X) = Hom (Y1, X) × Hom (Y2, X)

for any set X. This is a particular case of a more general duality phenomenon, under which limits in a category C correspond to colimits in the opposite category C^op; further concrete examples of this are epimorphisms vs. monomorphisms, in particular factor modules (or groups etc.) vs. submodules, and direct products vs. direct sums (also called coproducts to emphasize the duality aspect). Therefore, in some cases, proofs of certain statements can be halved, using such a duality phenomenon. Further notions related by such a categorical duality are projective and injective modules in homological algebra,[13] and fibrations and cofibrations in topology and more generally in model categories.[14] Two functors F: C → D and G: D → C are adjoint if for all objects c in C and d in D
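At the level of cardinalities of finite sets, where |Hom(X, Y)| = |Y|^|X|, the product/coproduct duality above reduces to familiar exponent laws; a quick check by explicit enumeration:

```python
from itertools import product

X, Y1, Y2 = range(3), range(2), range(4)

def num_functions(dom, cod):
    """|Hom(dom, cod)| by explicit enumeration: a function is a tuple
    of values, one per element of the domain."""
    return sum(1 for _ in product(cod, repeat=len(dom)))

# Hom(X, Y1 × Y2) ≅ Hom(X, Y1) × Hom(X, Y2):
assert num_functions(X, range(len(Y1) * len(Y2))) == \
       num_functions(X, Y1) * num_functions(X, Y2)

# Dually, Hom(Y1 ⊔ Y2, X) ≅ Hom(Y1, X) × Hom(Y2, X):
assert num_functions(range(len(Y1) + len(Y2)), X) == \
       num_functions(Y1, X) * num_functions(Y2, X)
```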

HomD(F(c), d) ≅ HomC(c, G(d)), in a natural way. Actually, the correspondence of limits and colimits is an example of adjoints, since there is an adjunction

colim : C^I ↔ C : ∆

between the colimit functor that assigns to any diagram in C indexed by some category I its colimit and the diagonal functor that maps any object c of C to the constant diagram which has c at all places. Dually,

∆ : C ↔ C^I : lim.

15.5.2 Examples

For example, there is a duality between commutative rings and affine schemes: to every commutative ring A there is an affine spectrum, Spec A; conversely, given an affine scheme S, one gets back a ring by taking global sections of the structure sheaf O_S. In addition, ring homomorphisms are in one-to-one correspondence with morphisms of affine schemes, so that there is an equivalence

(Commutative rings)op ≅ (affine schemes)[15]

Compare with noncommutative geometry and Gelfand duality. An example of a self-dual category is the category of Hilbert spaces.[12]

In a number of situations, the objects of two categories linked by a duality are partially ordered, i.e., there is some notion of an object “being smaller” than another one. In such a situation, a duality that respects the orderings in question is known as a Galois connection. An example is the standard duality in Galois theory (the fundamental theorem of Galois theory) between field extensions and subgroups of the Galois group: a bigger field extension corresponds, under the mapping that assigns to any extension L ⊃ K (inside some fixed bigger field Ω) the Galois group Gal (Ω / L), to a smaller group.[16]

Pontryagin duality gives a duality on the category of locally compact abelian groups: given any such group G, the character group

χ(G) = Hom (G, S1) given by continuous group homomorphisms from G to the circle group S1 can be endowed with the compact-open topology. Pontryagin duality states that the character group is again locally compact abelian and that

G ≅ χ(χ(G)).[17]

Moreover, discrete groups correspond to compact abelian groups, and finite groups correspond to finite groups. Pontryagin duality is the background to Fourier analysis; see below.
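For the finite cyclic group Z/n the characters can be written down explicitly; the following sketch (n = 6 chosen arbitrarily) checks that they are homomorphisms into the circle group and that there are exactly n of them:

```python
import cmath

n = 6  # an arbitrary modulus for illustration

def chi(k):
    """The k-th character of Z/n: x ↦ exp(2πi k x / n)."""
    return lambda x: cmath.exp(2j * cmath.pi * k * x / n)

# Each chi(k) is a homomorphism from (Z/n, +) to the circle group.
for k in range(n):
    f = chi(k)
    for x in range(n):
        for y in range(n):
            assert abs(f((x + y) % n) - f(x) * f(y)) < 1e-9

# There are exactly n distinct characters: they already differ
# at the generator 1 of Z/n.
values = {complex(round(chi(k)(1).real, 9), round(chi(k)(1).imag, 9))
          for k in range(n)}
assert len(values) == n
```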

• Tannaka–Krein duality, a non-commutative analogue of Pontryagin duality[18]

• Gelfand duality relating commutative C*-algebras and compact Hausdorff spaces

Both Gelfand and Pontryagin duality can be deduced in a largely formal, category-theoretic way.[19]

15.6 Analytic dualities

In analysis, problems are frequently solved by passing to the dual description of functions and operators. The Fourier transform switches between functions on a vector space and its dual:

fˆ(ξ) := ∫_{−∞}^{∞} f(x) e^{−2πixξ} dx,

and conversely

f(x) = ∫_{−∞}^{∞} fˆ(ξ) e^{2πixξ} dξ.

If f is an L²-function on R or R^N, say, then so is fˆ, and applying the transform twice gives a reflection: (fˆ)ˆ(x) = f(−x). Moreover, the transform interchanges the operations of multiplication and convolution on the corresponding function spaces. A conceptual explanation of the Fourier transform is obtained by the aforementioned Pontryagin duality, applied to the locally compact groups R (or R^N etc.): every character of R is of the form x ↦ e^{−2πixξ} for some ξ. The dualizing character of the Fourier transform has many other manifestations, for example, in alternative descriptions of quantum mechanical systems in terms of coordinate and momentum representations.
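The finite analogue of this interchange is the discrete Fourier transform turning circular convolution into pointwise multiplication; a self-contained check on made-up sample sequences:

```python
import cmath

def dft(x):
    """Discrete Fourier transform (unnormalized), the finite analogue
    of the integral transform above."""
    N = len(x)
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / N)
                for m in range(N)) for k in range(N)]

def circular_convolution(a, b):
    N = len(a)
    return [sum(a[m] * b[(n - m) % N] for m in range(N)) for n in range(N)]

# Made-up sample sequences:
a = [1.0, 2.0, 0.0, -1.0]
b = [0.5, 0.0, 1.0, 3.0]

# Convolution on one side corresponds to multiplication on the other.
lhs = dft(circular_convolution(a, b))
rhs = [fa * fb for fa, fb in zip(dft(a), dft(b))]
assert all(abs(l - r) < 1e-9 for l, r in zip(lhs, rhs))
```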

• The Laplace transform is similar to the Fourier transform and interchanges multiplication by polynomials with constant-coefficient linear differential operators.

• The Legendre transformation is an important analytic duality which switches between velocities in Lagrangian mechanics and momenta in Hamiltonian mechanics.

15.7 Poincaré-style dualities

Theorems showing that certain objects of interest are the dual spaces (in the sense of linear algebra) of other objects of interest are often called dualities. Many of these dualities are given by a bilinear pairing of two K-vector spaces

A ⊗ B → K.

For perfect pairings, there is, therefore, an isomorphism of A to the dual of B. For example, Poincaré duality of a smooth compact complex manifold X is given by a pairing of singular cohomology with C-coefficients (equivalently, of the constant sheaf C)

H^i(X) ⊗ H^{2n−i}(X) → C,

where n is the (complex) dimension of X.[20] Poincaré duality can also be expressed as a relation of singular homology and de Rham cohomology, by asserting that the map

(γ, ω) ↦ ∫_γ ω

(integrating a differential k-form over a (2n − k)-(real-)dimensional cycle) is a perfect pairing. The same duality pattern holds for a smooth projective variety over a separably closed field, using ℓ-adic cohomology with Qℓ-coefficients instead.[21] This is further generalized to possibly singular varieties, using intersection cohomology instead, a duality called Verdier duality.[22] With an increasing level of generality, it turns out, an increasing amount of technical background is helpful or necessary to understand these theorems: the modern formulation of both these dualities can be done using derived categories and certain direct and inverse image functors of sheaves, applied to locally constant sheaves (with respect to the classical analytical topology in the first case, and with respect to the étale topology in the second case).

Yet another group of similar duality statements is encountered in arithmetic: the étale cohomology of finite, local and global fields (also known as Galois cohomology, since étale cohomology over a field is equivalent to group cohomology of the (absolute) Galois group of the field) admits similar pairings. The absolute Galois group G(F_q) of a finite field, for example, is isomorphic to Ẑ, the profinite completion of Z, the integers. Therefore, the perfect pairing (for any G-module M)

H^n(G, M) × H^{1−n}(G, Hom (M, Q/Z)) → Q/Z[23]

is a direct consequence of Pontryagin duality of finite groups. For local and global fields, similar statements exist (local duality and global or Poitou–Tate duality).[24] Serre duality or coherent duality is similar to the statements above, but applies to cohomology of coherent sheaves instead.[25]

• Alexander duality

15.8 See also

• List of dualities

• Duality principle (disambiguation)

• Dual (category theory)

• Autonomous category

• Dual numbers, a certain associative algebra; the term “dual” here is synonymous with double, and is unrelated to the notions given above.

• Duality (electrical engineering)

• Duality (optimization)

• Koszul duality

• Lagrange duality

• Langlands dual

• Petrie duality

• T-duality, Mirror symmetry

• Linear programming#Duality

• Dual code

• Dual lattice

• Dual basis

• Dual

• Adjoint functor

• Dualizing module

• Matlis duality

15.9 Notes

[1] See Atiyah (2007)

[2] Kostrikin 2001

[3] Gowers 2008, p. 187, col. 1

[4] Gowers 2008, p. 189, col. 2

[5] Artstein-Avidan & Milman 2007

[6] Artstein-Avidan & Milman 2008

[7] Veblen & Young 1965.

[8] (Veblen & Young 1965, Ch. I, Theorem 11)

[9] Fulton 1993

[10] Mac Lane 1998, Ch. II.1.

[11] (Lam 1999, §19C)

[12] Jiří Adámek; J. Rosicky (1994). Locally Presentable and Accessible Categories. Cambridge University Press. p. 62. ISBN 978-0-521-42261-1.

[13] Weibel (1994)

[14] Dwyer and Spaliński (1995)

[15] Hartshorne 1966, Ch. II.2, esp. Prop. II.2.3

[16] See (Lang 2002, Theorem VI.1.1) for finite Galois extensions.

[17] (Loomis 1953, p. 151, section 37D)

[18] Joyal and Street (1991)

[19] Negrepontis 1971.

[20] Griffiths & Harris 1994, p. 56

[21] Milne 1980, Ch. VI.11

[22] Iversen 1986, Ch. VII.3, VII.5

[23] Milne (2006, Example I.1.10)

[24] Mazur (1973); Milne (2006)

[25] Hartshorne 1966, Ch. III.7

15.10 References

15.10.1 Duality in general

• Atiyah, Michael (2007), Duality in Mathematics and Physics, lecture notes from the Institut de Matematica de la Universitat de Barcelona (IMUB).

• Kostrikin, A. I. (2001), “Duality”, in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4.

• Gowers, Timothy (2008), “III.19 Duality”, The Princeton Companion to Mathematics, Princeton University Press, pp. 187–190.

• Cartier, Pierre (2001), “A mad day’s work: from Grothendieck to Connes and Kontsevich. The evolution of concepts of space and symmetry”, American Mathematical Society. Bulletin. New Series 38 (4): 389–408, doi:10.1090/S0273-0979-01-00913-2, ISSN 0002-9904, MR 1848254 (a non-technical overview of several aspects of geometry, including dualities)

15.10.2 Duality in algebraic topology

• James C. Becker and Daniel Henry Gottlieb, A History of Duality in Algebraic Topology

15.10.3 Specific dualities

• Artstein-Avidan, Shiri; Milman, Vitali (2008), “The concept of duality for measure projections of convex bodies”, Journal of Functional Analysis 254 (10): 2648–2666, doi:10.1016/j.jfa.2007.11.008

• Artstein-Avidan, Shiri; Milman, Vitali (2007), “A characterization of the concept of duality”, Electronic Research Announcements in Mathematical Sciences 14: 42–59

• Dwyer, William G.; Spaliński, J. (1995), “Homotopy theories and model categories”, Handbook of Algebraic Topology, Amsterdam: North-Holland, pp. 73–126, MR 1361887

• Fulton, William (1993), Introduction to Toric Varieties, Princeton University Press, ISBN 978-0-691-00049-7

• Griffiths, Phillip; Harris, Joseph (1994), Principles of algebraic geometry, Wiley Classics Library, New York: John Wiley & Sons, ISBN 978-0-471-05059-9, MR 1288523

• Hartshorne, Robin (1966), Residues and Duality, Lecture Notes in Mathematics 20, Berlin, New York: Springer-Verlag, pp. 20–48

• Hartshorne, Robin (1977), Algebraic Geometry, Berlin, New York: Springer-Verlag, ISBN 978-0-387-90244-9, MR 0463157, OCLC 13348052

• Iversen, Birger (1986), Cohomology of Sheaves, Universitext, Berlin, New York: Springer-Verlag, ISBN 978-3-540-16389-3, MR 842190

• Joyal, André; Street, Ross (1991), “An introduction to Tannaka duality and quantum groups”, Category Theory (Como, 1990) (PDF), Lecture Notes in Mathematics 1488, Berlin, New York: Springer-Verlag, pp. 413–492, MR 1173027

• Lam, Tsit-Yuen (1999), Lectures on modules and rings, Graduate Texts in Mathematics No. 189, Berlin, New York: Springer-Verlag, ISBN 978-0-387-98428-5, MR 1653294

• Lang, Serge (2002), Algebra, Graduate Texts in Mathematics 211, Berlin, New York: Springer-Verlag, ISBN 978-0-387-95385-4, MR 1878556

• Loomis, Lynn H. (1953), An introduction to abstract harmonic analysis, Toronto-New York-London: D. Van Nostrand Company, Inc., pp. x+190

• Mac Lane, Saunders (1998), Categories for the Working Mathematician (2nd ed.), Berlin, New York: Springer- Verlag, ISBN 978-0-387-98403-2

• Mazur, Barry (1973), “Notes on étale cohomology of number fields”, Annales Scientifiques de l'École Normale Supérieure. Quatrième Série 6: 521–552, ISSN 0012-9593, MR 0344254

• Milne, James S. (1980), Étale Cohomology, Princeton University Press, ISBN 978-0-691-08238-7

• Milne, James S. (2006), Arithmetic Duality Theorems (2nd ed.), Charleston, South Carolina: BookSurge, LLC, ISBN 978-1-4196-4274-6, MR 2261462

• Negrepontis, Joan W. (1971), “Duality in analysis from the point of view of triples”, Journal of Algebra 19 (2): 228–253, doi:10.1016/0021-8693(71)90105-0, ISSN 0021-8693, MR 0280571

• Veblen, Oswald; Young, John Wesley (1965), Projective geometry. Vols. 1, 2, Blaisdell Publishing Co. Ginn and Co. New York-Toronto-London, MR 0179666

• Weibel, Charles A. (1994), An Introduction to Homological Algebra, Cambridge University Press, ISBN 978-0-521-55987-4, MR 1269324

Chapter 16

Equivalence relation

This article is about the mathematical concept. For the patent doctrine, see Doctrine of equivalents. In mathematics, an equivalence relation is the relation that holds between two elements if and only if they are members of the same cell within a set that has been partitioned into cells such that every element of the set is a member of one and only one cell of the partition. The intersection of any two different cells is empty; the union of all the cells equals the original set. These cells are formally called equivalence classes.

16.1 Notation

Although various notations are used throughout the literature to denote that two elements a and b of a set are equivalent with respect to an equivalence relation R, the most common are "a ~ b" and "a ≡ b", which are used when R is the obvious relation being referenced, and variations of "a ~R b", "a ≡R b", or "aRb" otherwise.

16.2 Definition

A given binary relation ~ on a set X is said to be an equivalence relation if and only if it is reflexive, symmetric and transitive. That is, for all a, b and c in X:

• a ~ a. (Reflexivity)

• if a ~ b then b ~ a. (Symmetry)

• if a ~ b and b ~ c then a ~ c. (Transitivity)

X together with the relation ~ is called a setoid. The equivalence class of a under ~, denoted [a], is defined as [a] = {b ∈ X | a ∼ b} .

16.3 Examples

16.3.1 Simple example

Let the set {a, b, c} have the equivalence relation {(a, a), (b, b), (c, c), (b, c), (c, b)} . The following sets are equivalence classes of this relation: [a] = {a}, [b] = [c] = {b, c} . The set of all equivalence classes for this relation is {{a}, {b, c}} .
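This example can be recomputed mechanically; the following sketch extracts the equivalence classes directly from the relation:

```python
# X = {a, b, c} with the relation from the example above.
X = {"a", "b", "c"}
R = {("a", "a"), ("b", "b"), ("c", "c"), ("b", "c"), ("c", "b")}

def eq_class(x):
    """[x] = {y in X | x ~ y}."""
    return frozenset(y for y in X if (x, y) in R)

classes = {eq_class(x) for x in X}
assert classes == {frozenset({"a"}), frozenset({"b", "c"})}

# The classes are pairwise disjoint and their union is all of X.
assert frozenset().union(*classes) == frozenset(X)
```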


16.3.2 Equivalence relations

The following are all equivalence relations:

• “Has the same birthday as” on the set of all people.

• “Is similar to” on the set of all triangles.

• “Is congruent to” on the set of all triangles.

• “Is congruent to, modulo n" on the integers.

• “Has the same image under a function" on the elements of the domain of the function.

• “Has the same absolute value” on the set of real numbers

• “Has the same cosine” on the set of all angles.

16.3.3 Relations that are not equivalences

• The relation "≥" between real numbers is reflexive and transitive, but not symmetric. For example, 7 ≥ 5 does not imply that 5 ≥ 7. It is, however, a partial order.

• The relation “has a common factor greater than 1 with” between natural numbers greater than 1, is reflexive and symmetric, but not transitive. (Example: The natural numbers 2 and 6 have a common factor greater than 1, and 6 and 3 have a common factor greater than 1, but 2 and 3 do not have a common factor greater than 1).

• The empty relation R on a non-empty set X (i.e. aRb is never true) is vacuously symmetric and transitive, but not reflexive. (If X is also empty then R is reflexive.)

• The relation “is approximately equal to” between real numbers, even if more precisely defined, is not an equiv- alence relation, because although reflexive and symmetric, it is not transitive, since multiple small changes can accumulate to become a big change. However, if the approximation is defined asymptotically, for example by saying that two functions f and g are approximately equal near some point if the limit of f − g is 0 at that point, then this defines an equivalence relation.

16.4 Connections to other relations

• A partial order is a relation that is reflexive, antisymmetric, and transitive.

• A congruence relation is an equivalence relation whose domain X is also the underlying set for an algebraic structure, and which respects the additional structure. In general, congruence relations play the role of kernels of homomorphisms, and the quotient of a structure by a congruence relation can be formed. In many important cases congruence relations have an alternative representation as substructures of the structure on which they are defined. E.g. the congruence relations on groups correspond to the normal subgroups.

• Equality is both an equivalence relation and a partial order. Equality is also the only relation on a set that is reflexive, symmetric and antisymmetric.

• A strict partial order is irreflexive, transitive, and asymmetric.

• A partial equivalence relation is transitive and symmetric. Transitive and symmetric imply reflexive if and only if for all a ∈ X, there exists a b ∈ X such that a ~ b.

• A reflexive and symmetric relation is a dependency relation, if finite, and a tolerance relation if infinite.

• A preorder is reflexive and transitive.

16.5 Well-definedness under an equivalence relation

If ~ is an equivalence relation on X, and P(x) is a property of elements of X, such that whenever x ~ y, P(x) is true if P(y) is true, then the property P is said to be well-defined or a class invariant under the relation ~.

A frequent particular case occurs when f is a function from X to another set Y; if x1 ~ x2 implies f(x1) = f(x2) then f is said to be a morphism for ~, a class invariant under ~, or simply invariant under ~. This occurs, e.g. in the character theory of finite groups. The latter case with the function f can be expressed by a commutative triangle. See also invariant. Some authors use “compatible with ~" or just “respects ~" instead of “invariant under ~". More generally, a function may map equivalent arguments (under an equivalence relation ~A) to equivalent values (under an equivalence relation ~B). Such a function is known as a morphism from ~A to ~B.

16.6 Equivalence class, quotient set, partition

Let a, b ∈ X . Some definitions:

16.6.1 Equivalence class

Main article: Equivalence class

A subset Y of X such that a ~ b holds for all a and b in Y, and never for a in Y and b outside Y, is called an equivalence class of X by ~. Let [a] := {x ∈ X | a ∼ x} denote the equivalence class to which a belongs. All elements of X equivalent to each other are also elements of the same equivalence class.

16.6.2 Quotient set

Main article: Quotient set

The set of all possible equivalence classes of X by ~, denoted X/∼ := {[x] | x ∈ X} , is the quotient set of X by ~. If X is a topological space, there is a natural way of transforming X/~ into a topological space; see quotient space for the details.

16.6.3 Projection

Main article: Projection (relational algebra)

The projection of ~ is the function π : X → X/∼ defined by π(x) = [x] which maps elements of X into their respective equivalence classes by ~.

Theorem on projections:[1] Let the function f: X → B be such that a ~ b → f(a) = f(b). Then there is a unique function g : X/~ → B, such that f = gπ. If f is a surjection and a ~ b ↔ f(a) = f(b), then g is a bijection.
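A sketch of this theorem for a concrete choice of ~ and f (congruence mod 3 on {0, ..., 11} and a function constant on its classes, both chosen for illustration):

```python
# ~ is congruence mod 3 on X; f respects ~ by construction.
X = range(12)
same_class = lambda a, b: a % 3 == b % 3
f = lambda x: (x % 3) ** 2

def pi(x):
    """The projection x ↦ [x], with classes represented as frozensets."""
    return frozenset(y for y in X if same_class(x, y))

# g is well defined on X/~ precisely because f is constant on classes.
g = {pi(x): f(x) for x in X}

assert all(g[pi(x)] == f(x) for x in X)  # f = g ∘ π
assert len(g) == 3                       # one value per equivalence class
```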

16.6.4 Equivalence kernel

The equivalence kernel of a function f is the equivalence relation ~ defined by x ∼ y ⇐⇒ f(x) = f(y) . The equivalence kernel of an injection is the identity relation.

16.6.5 Partition

Main article: Partition of a set

A partition of X is a set P of nonempty subsets of X, such that every element of X is an element of a single element of P. Each element of P is a cell of the partition. Moreover, the elements of P are pairwise disjoint and their union is X.

Counting possible partitions

Let X be a finite set with n elements. Since every equivalence relation over X corresponds to a partition of X, and vice versa, the number of possible equivalence relations on X equals the number of distinct partitions of X, which is the nth Bell number Bn:

B_n = (1/e) ∑_{k=0}^{∞} k^n / k!,

where the above is one of the ways to write the nth Bell number.
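Both sides of this count can be computed for small n: the Bell numbers via the standard recurrence B_{n+1} = ∑_k C(n, k) B_k, and the partitions by direct enumeration:

```python
from itertools import combinations
from math import comb

def bell_numbers(n_max):
    """Bell numbers via the recurrence B_{n+1} = sum_k C(n, k) B_k."""
    B = [1]
    for n in range(n_max):
        B.append(sum(comb(n, k) * B[k] for k in range(n + 1)))
    return B

def count_partitions(elements):
    """Count set partitions directly: choose which of the remaining
    elements share a block with the first element, then recurse."""
    if not elements:
        return 1
    first, rest = elements[0], elements[1:]
    total = 0
    for r in range(len(rest) + 1):
        for block_rest in combinations(rest, r):
            remaining = [e for e in rest if e not in block_rest]
            total += count_partitions(remaining)
    return total

B = bell_numbers(6)
assert B == [1, 1, 2, 5, 15, 52, 203]

# The number of partitions (= equivalence relations) matches B_n.
assert all(count_partitions(list(range(n))) == B[n] for n in range(7))
```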

16.7 Fundamental theorem of equivalence relations

A key result links equivalence relations and partitions:[2][3]

• An equivalence relation ~ on a set X partitions X.

• Conversely, corresponding to any partition of X, there exists an equivalence relation ~ on X.

In both cases, the cells of the partition of X are the equivalence classes of X by ~. Since each element of X belongs to a unique cell of any partition of X, and since each cell of the partition is identical to an equivalence class of X by ~, each element of X belongs to a unique equivalence class of X by ~. Thus there is a natural bijection between the set of all possible equivalence relations on X and the set of all partitions of X.

16.8 Comparing equivalence relations

If ~ and ≈ are two equivalence relations on the same set S, and a~b implies a≈b for all a,b ∈ S, then ≈ is said to be a coarser relation than ~, and ~ is a finer relation than ≈. Equivalently,

• ~ is finer than ≈ if every equivalence class of ~ is a subset of an equivalence class of ≈, and thus every equivalence class of ≈ is a union of equivalence classes of ~.

• ~ is finer than ≈ if the partition created by ~ is a refinement of the partition created by ≈.

The equality equivalence relation is the finest equivalence relation on any set, while the trivial relation that makes all pairs of elements related is the coarsest. The relation "~ is finer than ≈" on the collection of all equivalence relations on a fixed set is itself a partial order relation.
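The refinement order itself is easy to test on explicit partitions of the same underlying set; a minimal sketch (`is_finer` is our own name):

```python
def is_finer(P, Q):
    """True if partition P refines Q: every cell of P lies inside a cell of Q.
    Assumes P and Q partition the same underlying set."""
    cell_of = {x: cell for cell in map(frozenset, Q) for x in cell}
    return all(cell <= cell_of[next(iter(cell))]
               for cell in map(frozenset, P))

fine, coarse = [{0}, {1}, {2, 3}], [{0, 1}, {2, 3}]
print(is_finer(fine, coarse), is_finer(coarse, fine))  # True False
```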

16.9 Generating equivalence relations

• Given any set X, there is an equivalence relation over the set [X→X] of all possible functions X→X. Two such functions are deemed equivalent when their respective sets of fixpoints have the same cardinality, corresponding to cycles of length one in a permutation. Functions equivalent in this manner form an equivalence class on [X→X], and these equivalence classes partition [X→X].

• An equivalence relation ~ on X is the equivalence kernel of its surjective projection π : X → X/~.[4] Conversely, any surjection between sets determines a partition on its domain, the set of preimages of singletons in the codomain. Thus an equivalence relation over X, a partition of X, and a projection whose domain is X, are three equivalent ways of specifying the same thing.

• The intersection of any collection of equivalence relations over X (binary relations viewed as a subset of X × X) is also an equivalence relation. This yields a convenient way of generating an equivalence relation: given any binary relation R on X, the equivalence relation generated by R is the smallest equivalence relation containing R. Concretely, R generates the equivalence relation a ~ b if and only if there exist elements x₁, x₂, ..., xₙ in X such that a = x₁, b = xₙ, and (xᵢ, xᵢ₊₁) ∈ R or (xᵢ₊₁, xᵢ) ∈ R for i = 1, ..., n−1.

Note that the equivalence relation generated in this manner can be trivial. For instance, the equivalence relation ~ generated by:

• Any total order on X has exactly one equivalence class, X itself, because x ~ y for all x and y;

• Any subset of the identity relation on X has equivalence classes that are the singletons of X.

• Equivalence relations can construct new spaces by “gluing things together.” Let X be the unit Cartesian square [0,1] × [0,1], and let ~ be the equivalence relation on X defined by ∀a, b ∈ [0,1] ((a, 0) ~ (a, 1) ∧ (0, b) ~ (1, b)). Then the quotient space X/~ can be naturally identified (homeomorphism) with a torus: take a square piece of paper, bend and glue together the upper and lower edge to form a cylinder, then bend the resulting cylinder so as to glue together its two open ends, resulting in a torus.
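The chain construction above for the equivalence relation generated by R is exactly what a union-find (disjoint-set) structure computes; a hedged sketch (`equivalence_closure` is our own name):

```python
def equivalence_closure(X, R):
    """Classes of the smallest equivalence relation on X containing R."""
    parent = {x: x for x in X}

    def find(x):  # representative of x's class, with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in R:  # merge the classes of each related pair
        parent[find(a)] = find(b)

    classes = {}
    for x in X:
        classes.setdefault(find(x), []).append(x)
    return sorted(classes.values())

print(equivalence_closure(range(5), [(0, 1), (1, 2), (3, 4)]))
# [[0, 1, 2], [3, 4]]
```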

16.10 Algebraic structure

Much of mathematics is grounded in the study of equivalences and order relations. Lattice theory captures the mathematical structure of order relations. Even though equivalence relations are as ubiquitous in mathematics as order relations, the algebraic structure of equivalences is not as well known as that of orders. The former structure draws primarily on group theory and, to a lesser extent, on the theory of lattices, categories, and groupoids.

16.10.1 Group theory

Just as order relations are grounded in ordered sets, sets closed under pairwise supremum and infimum, equivalence relations are grounded in partitioned sets, which are sets closed under bijections that preserve partition structure. Since all such bijections map an equivalence class onto itself, such bijections are also known as permutations. Hence permutation groups (also known as transformation groups) and the related notion of orbit shed light on the mathematical structure of equivalence relations. Let '~' denote an equivalence relation over some nonempty set A, called the universe or underlying set. Let G denote the set of bijective functions over A that preserve the partition structure of A: ∀x ∈ A ∀g ∈ G (g(x) ∈ [x]). Then the following three connected theorems hold:[5]

• ~ partitions A into equivalence classes. (This is the Fundamental Theorem of Equivalence Relations, mentioned above);

• Given a partition of A, G is a transformation group under composition, whose orbits are the cells of the partition‡;

• Given a transformation group G over A, there exists an equivalence relation ~ over A, whose equivalence classes are the orbits of G.[6][7]

In sum, given an equivalence relation ~ over A, there exists a transformation group G over A whose orbits are the equivalence classes of A under ~.

This transformation group characterisation of equivalence relations differs fundamentally from the way lattices characterize order relations. The arguments of the lattice theory operations meet and join are elements of some universe A. Meanwhile, the arguments of the transformation group operations composition and inverse are elements of a set of bijections, A → A.

Moving to groups in general, let H be a subgroup of some group G. Let ~ be an equivalence relation on G, such that a ~ b ↔ (ab⁻¹ ∈ H). The equivalence classes of ~, also called the orbits of the action of H on G, are the right cosets of H in G. Interchanging a and b yields the left cosets.

‡Proof.[8] Let function composition interpret group multiplication, and function inverse interpret group inverse. Then G is a group under composition, meaning that ∀x ∈ A ∀g ∈ G ([g(x)] = [x]), because G satisfies the following four conditions:

• G is closed under composition. The composition of any two elements of G exists, because the domain and codomain of any element of G is A. Moreover, the composition of bijections is bijective;[9]

• Existence of identity function. The identity function, I(x) = x, is an obvious element of G;

• Existence of inverse function. Every bijective function g has an inverse g⁻¹, such that gg⁻¹ = I;

• Composition associates. f(gh) = (fg)h. This holds for all functions over all domains.[10]

Let f and g be any two elements of G. By virtue of the definition of G, [g(f(x))] = [f(x)] and [f(x)] = [x], so that [g(f(x))] = [x]. Hence G is also a transformation group (and an automorphism group) because function composition preserves the partitioning of A. □ Related thinking can be found in Rosen (2008: chpt. 10).
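On a finite universe the orbit/equivalence-class correspondence can be computed by closing each point under the generating permutations (sufficient on a finite set, where every bijection's inverse is one of its powers); a sketch with our own helper name:

```python
def orbits(X, perms):
    """Orbits of the group generated by `perms` (dicts x -> g(x)) acting on X."""
    seen, result = set(), []
    for x in X:
        if x in seen:
            continue
        orbit, frontier = {x}, [x]
        while frontier:  # close the orbit under the generators
            y = frontier.pop()
            for g in perms:
                if g[y] not in orbit:
                    orbit.add(g[y])
                    frontier.append(g[y])
        seen |= orbit
        result.append(sorted(orbit))
    return result

# The 3-cycle (0 1 2) on {0, ..., 4} has orbits {0,1,2}, {3}, {4}.
print(orbits(range(5), [{0: 1, 1: 2, 2: 0, 3: 3, 4: 4}]))
# [[0, 1, 2], [3], [4]]
```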

16.10.2 Categories and groupoids

Let G be a set and let "~" denote an equivalence relation over G. Then we can form a groupoid representing this equivalence relation as follows. The objects are the elements of G, and for any two elements x and y of G, there exists a unique morphism from x to y if and only if x~y. The advantages of regarding an equivalence relation as a special case of a groupoid include:

• Whereas the notion of “free equivalence relation” does not exist, that of a free groupoid on a directed graph does. Thus it is meaningful to speak of a “presentation of an equivalence relation,” i.e., a presentation of the corresponding groupoid;

• Bundles of groups, group actions, sets, and equivalence relations can be regarded as special cases of the notion of groupoid, a point of view that suggests a number of analogies;

• In many contexts “quotienting,” and hence the appropriate equivalence relations often called congruences, are important. This leads to the notion of an internal groupoid in a category.[11]

16.10.3 Lattices

The possible equivalence relations on any set X, when ordered by set inclusion, form a complete lattice, called Con X by convention. The canonical map ker: X^X → Con X relates the monoid X^X of all functions on X to Con X. ker is surjective but not injective. Less formally, ker takes each function f: X→X to its kernel ker f. Likewise, ker(ker) is an equivalence relation on X^X.

16.11 Equivalence relations and mathematical logic

Equivalence relations are a ready source of examples or counterexamples. For example, an equivalence relation with exactly two infinite equivalence classes is an easy example of a theory which is ω-categorical, but not categorical for any larger cardinal number. An implication of model theory is that the properties defining a relation can be proved independent of each other (and hence necessary parts of the definition) if and only if, for each property, examples can be found of relations not satisfying the given property while satisfying all the other properties. Hence the three defining properties of equivalence relations can be proved mutually independent by the following three examples:

• Reflexive and transitive: The relation ≤ on N. Or any preorder;

• Symmetric and transitive: The relation R on N, defined as aRb ↔ ab ≠ 0. Or any partial equivalence relation;

• Reflexive and symmetric: The relation R on Z, defined as aRb ↔ “a − b is divisible by at least one of 2 or 3.” Or any dependency relation.

Properties definable in first-order logic that an equivalence relation may or may not possess include:

• The number of equivalence classes is finite or infinite;

• The number of equivalence classes equals the (finite) natural number n;

• All equivalence classes have infinite cardinality;

• The number of elements in each equivalence class is the natural number n.

16.12 Euclidean relations

Euclid's The Elements includes the following “Common Notion 1”:

Things which equal the same thing also equal one another.

Nowadays, the property described by Common Notion 1 is called Euclidean (replacing “equal” by “are in relation with”). By “relation” is meant a binary relation, in which aRb is generally distinct from bRa. A Euclidean relation thus comes in two forms:

(aRc ∧ bRc) → aRb (left-Euclidean relation)

(cRa ∧ cRb) → aRb (right-Euclidean relation)

The following theorem connects Euclidean relations and equivalence relations:

Theorem If a relation is (left or right) Euclidean and reflexive, it is also symmetric and transitive.

Proof for a left-Euclidean relation

(aRc ∧ bRc) → aRb [a/c]
= (aRa ∧ bRa) → aRb [reflexive; erase T∧]
= bRa → aRb. Hence R is symmetric.

(aRc ∧ bRc) → aRb [symmetry]
= (aRc ∧ cRb) → aRb. Hence R is transitive. □

The proof for a right-Euclidean relation is analogous. Hence an equivalence relation is a relation that is Euclidean and reflexive. The Elements mentions neither symmetry nor reflexivity, and Euclid probably would have deemed the reflexivity of equality too obvious to warrant explicit mention.
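The theorem can be verified exhaustively on a small universe: enumerating all 512 binary relations on a 3-element set, the reflexive left-Euclidean ones turn out to be precisely the equivalence relations. A brute-force sketch:

```python
from itertools import product

X = range(3)
pairs = list(product(X, repeat=2))

def left_euclidean(R):
    return all((a, b) in R for a, b, c in product(X, repeat=3)
               if (a, c) in R and (b, c) in R)

def transitive(R):
    return all((a, c) in R for a, b, c in product(X, repeat=3)
               if (a, b) in R and (b, c) in R)

def symmetric(R):
    return all((b, a) in R for a, b in R)

count = 0
for bits in product([0, 1], repeat=len(pairs)):  # all 2^9 relations
    R = {p for p, bit in zip(pairs, bits) if bit}
    if all((x, x) in R for x in X) and left_euclidean(R):
        count += 1
        assert symmetric(R) and transitive(R)  # the theorem, checked
print(count)  # 5 = B3, the number of equivalence relations on 3 elements
```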

16.13 See also

• Partition of a set

• Equivalence class

• Up to

• Conjugacy class

• Topological conjugacy

16.14 Notes

[1] Garrett Birkhoff and Saunders Mac Lane, 1999 (1967). Algebra, 3rd ed. p. 35, Th. 19. Chelsea.

[2] Wallace, D. A. R., 1998. Groups, Rings and Fields. p. 31, Th. 8. Springer-Verlag.

[3] Dummit, D. S., and Foote, R. M., 2004. Abstract Algebra, 3rd ed. p. 3, Prop. 2. John Wiley & Sons.

[4] Garrett Birkhoff and Saunders Mac Lane, 1999 (1967). Algebra, 3rd ed. p. 33, Th. 18. Chelsea.

[5] Rosen (2008), pp. 243-45. Less clear is §10.3 of Bas van Fraassen, 1989. Laws and Symmetry. Oxford Univ. Press.

[6] Wallace, D. A. R., 1998. Groups, Rings and Fields. Springer-Verlag: 202, Th. 6.

[7] Dummit, D. S., and Foote, R. M., 2004. Abstract Algebra, 3rd ed. John Wiley & Sons: 114, Prop. 2.

[8] Bas van Fraassen, 1989. Laws and Symmetry. Oxford Univ. Press: 246.

[9] Wallace, D. A. R., 1998. Groups, Rings and Fields. Springer-Verlag: 22, Th. 6.

[10] Wallace, D. A. R., 1998. Groups, Rings and Fields. Springer-Verlag: 24, Th. 7.

[11] Borceux, F. and Janelidze, G., 2001. Galois theories, Cambridge University Press, ISBN 0-521-80309-8

16.15 References

• Brown, Ronald, 2006. Topology and Groupoids. Booksurge LLC. ISBN 1-4196-2722-8.

• Castellani, E., 2003, “Symmetry and equivalence” in Brading, Katherine, and E. Castellani, eds., Symmetries in Physics: Philosophical Reflections. Cambridge Univ. Press: 422-433.

• Robert Dilworth and Crawley, Peter, 1973. Algebraic Theory of Lattices. Prentice Hall. Chpt. 12 discusses how equivalence relations arise in lattice theory.

• Higgins, P.J., 1971. Categories and groupoids. Van Nostrand. Downloadable since 2005 as a TAC Reprint.

• John Randolph Lucas, 1973. A Treatise on Time and Space. London: Methuen. Section 31.

• Rosen, Joseph (2008) Symmetry Rules: How Science and Nature are Founded on Symmetry. Springer-Verlag. Mostly chpts. 9, 10.

• Raymond Wilder (1965) Introduction to the Foundations of Mathematics, 2nd edition, Chapter 2-8: Axioms defining equivalence, pp 48–50, John Wiley & Sons.

16.16 External links

• Hazewinkel, Michiel, ed. (2001), “Equivalence relation”, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

• Bogomolny, A., "Equivalence Relationship" cut-the-knot. Accessed 1 September 2009

• Equivalence relation at PlanetMath

• Binary matrices representing equivalence relations at OEIS.

Logical matrices of the 52 equivalence relations on a 5-element set (colored fields, including those in light gray, stand for ones; white fields for zeros).

Chapter 17

Exponential random graph models

Exponential random graph models (ERGMs) are a family of statistical models for analyzing data about social and other networks.

17.1 Background

Many metrics exist to describe the structural features of an observed network, such as the density, centrality, or assortativity.[1][2] However, these metrics describe the observed network, which is only one instance of a large number of possible alternative networks. This set of alternative networks may have similar or dissimilar structural features. To support statistical inference on the processes influencing the formation of network structure, a statistical model should consider the set of all possible alternative networks weighted on their similarity to an observed network. However, because network data is inherently relational, it violates the assumptions of independence and identical distribution of standard statistical models like linear regression.[3] Alternative statistical models should reflect the uncertainty associated with a given observation, permit inference about the relative frequency of network substructures of theoretical interest, disambiguate the influence of confounding processes, efficiently represent complex structures, and link local-level processes to global-level properties.[4] Degree Preserving Randomization, for example, is a specific way in which an observed network could be considered in terms of multiple alternative networks.

17.2 Definition

The exponential family is a broad family of models covering many types of data, not just networks. An ERGM is a model from this family which describes networks.

Formally a random graph Y consists of a set of n nodes and m dyads (edges) {Yij : i = 1, . . . , n; j = 1, . . . , n}, where Yij = 1 if the nodes (i, j) are connected and Yij = 0 otherwise.

The basic assumption of these models is that the structure in an observed graph y can be explained by any statistics s(y) depending on the observed network and nodal attributes. This way, it is possible to describe any kind of dependence between the dyadic variables:

P(Y = y | θ) = exp(θᵀ s(y)) / c(θ)

where θ is a vector of model parameters associated with s(y) and c(θ) is a normalising constant.

These models represent a probability distribution on each possible network on n nodes. However, the size of the set of possible networks for an undirected network (simple graph) of size n is 2^(n(n−1)/2). Because the number of possible networks in the set vastly outnumbers the number of parameters which can constrain the model, the ideal probability distribution is the one which maximizes the Gibbs entropy.[5]
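For tiny n the distribution can be evaluated by brute force, which makes the role of the normalising constant c(θ) concrete. A hedged sketch with a scalar parameter and an edge-count statistic (the function names are ours):

```python
import math
from itertools import combinations, product

def ergm_probs(n, theta, stat):
    """P(Y = y | theta) = exp(theta * s(y)) / c(theta), by enumeration."""
    dyads = list(combinations(range(n), 2))
    graphs = [frozenset(e for e, bit in zip(dyads, bits) if bit)
              for bits in product([0, 1], repeat=len(dyads))]  # 2^(n(n-1)/2)
    weights = {g: math.exp(theta * stat(g)) for g in graphs}
    c = sum(weights.values())  # normalising constant c(theta)
    return {g: w / c for g, w in weights.items()}

# theta = 0 with an edge-count statistic makes all graphs equally likely,
# i.e. the Erdos-Renyi G(n, 1/2) model.
probs = ergm_probs(3, 0.0, lambda g: len(g))
print(len(probs), max(probs.values()))  # 8 0.125
```

With a positive θ the same code skews mass toward denser graphs; in practice c(θ) is intractable and ERGMs are fitted by MCMC rather than enumeration.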


17.3 References

[1] Wasserman, Stanley; Faust, Katherine (1994). Social Network Analysis: Methods and Applications. ISBN 978-0-521- 38707-1.

[2] Newman, M.E.J. “The Structure and Function of Complex Networks”. SIAM Review 45 (2): 167–256. doi:10.1137/S003614450342480.

[3] Contractor, Noshir; Wasserman, Stanley; Faust, Katherine. “Testing Multitheoretical, Multilevel Hypotheses About Orga- nizational Networks: An Analytic Framework and Empirical Example”. Academy of Management Review 31 (3): 681–703. doi:10.5465/AMR.2006.21318925.

[4] Robins, G.; Pattison, P.; Kalish, Y.; Lusher, D. (2007). “An introduction to exponential random graph models for social networks”. Social Networks 29: 173–191. doi:10.1016/j.socnet.2006.08.002.

[5] Newman, M.E.J. “Other Network Models”. Networks. pp. 565–585. ISBN 978-0-19-920665-0.

17.4 Further reading

• Caimo, A.; Friel, N (2011). “Bayesian inference for exponential random graph models”. Social Networks 33: 41–55. doi:10.1016/j.socnet.2010.09.004.

• Erdős, P.; Rényi, A (1959). “On random graphs”. Publicationes Mathematicae 6: 290–297.

• Fienberg, S. E.; Wasserman, S. (1981). “Discussion of An Exponential Family of Probability Distributions for Directed Graphs by Holland and Leinhardt”. Journal of the American Statistical Association 76: 54–57.

• Frank, O.; Strauss, D (1986). “Markov Graphs”. Journal of the American Statistical Association 81: 832–842. doi:10.2307/2289017.

• Handcock, M. S.; Hunter, D. R.; Butts, C. T.; Goodreau, S. M.; Morris, M. (2008). “statnet: Software Tools for the Representation, Visualization, Analysis and Simulation of Network Data”. Journal of Statistical Software 24: 1–11.

• Hunter, D. R.; Goodreau, S. M.; Handcock, M. S. (2008). “Goodness of Fit of Social Network Models”. Journal of the American Statistical Association 103: 248–258. doi:10.1198/016214507000000446.

• Hunter, D. R; Handcock, M. S. (2006). “Inference in curved exponential family models for networks”. Journal of Computational and Graphical Statistics 15: 565–583. doi:10.1198/106186006X133069.

• Hunter, D. R.; Handcock, M. S.; Butts, C. T.; Goodreau, S. M.; Morris, M. (2008). “ergm: A Package to Fit, Simulate and Diagnose Exponential-Family Models for Networks”. Journal of Statistical Software 24: 1–29.

• Jin, I.H.; Liang, F. (2012). “Fitting social networks models using varying truncation stochastic approximation MCMC algorithm”. Journal of Computational and Graphical Statistics. doi:10.1080/10618600.2012.680851.

• Koskinen, J. H.; Robins, G. L.; Pattison, P. E. (2010). “Analysing exponential random graph (p-star) models with missing data using Bayesian data augmentation”. Statistical Methodology 7: 366–384. doi:10.1016/j.stamet.2009.09.007.

• Morris, M.; Handcock, M. S.; Hunter, D. R. (2008). “Specification of Exponential-Family Random Graph Models: Terms and Computational Aspects”. Journal of Statistical Software 24.

• Rinaldo, A.; Fienberg, S. E.; Zhou, Y. (2009). “On the geometry of discrete exponential random families with application to exponential random graph models”. Electronic Journal of Statistics 3: 446–484. doi:10.1214/08-EJS350.

• Robins, G.; Snijders, T.; Wang, P.; Handcock, M.; Pattison, P (2007). “Recent developments in exponential random graph (p*) models for social networks”. Social Networks 29: 192–215. doi:10.1016/j.socnet.2006.08.003.

• Snijders, T. A. B. (2002). “Monte Carlo estimation of exponential random graph models” (PDF). Journal of Social Structure 3.

• Snijders, T. A. B.; Pattison, P. E.; Robins, G. L. (2006). “New specifications for exponential random graph models”. Sociological Methodology 36: 99–153. doi:10.1111/j.1467-9531.2006.00176.x.

• Strauss, D; Ikeda, M (1990). “Pseudolikelihood estimation for social networks”. Journal of the American Statistical Association 5: 204–212. doi:10.2307/2289546.

• van Duijn, M. A.; Snijders, T. A. B.; Zijlstra, B. H. (2004). “p2: a random effects model with covariates for directed graphs”. Statistica Neerlandica 58: 234–254. doi:10.1046/j.0039-0402.2003.00258.x.

• van Duijn, M. A. J.; Gile, K. J.; Handcock, M. S. (2009). “A framework for the comparison of maximum pseudo-likelihood and maximum likelihood estimation of exponential family random graph models”. Social Networks 31: 52–62. doi:10.1016/j.socnet.2008.10.003.

Chapter 18

Flow network

In graph theory, a flow network (also known as a transportation network) is a directed graph where each edge has a capacity and each edge receives a flow. The amount of flow on an edge cannot exceed the capacity of the edge. Often in operations research, a directed graph is called a network. The vertices are called nodes and the edges are called arcs. A flow must satisfy the restriction that the amount of flow into a node equals the amount of flow out of it, unless it is a source, which has only outgoing flow, or sink, which has only incoming flow. A network can be used to model traffic in a road system, circulation with demands, fluids in pipes, currents in an electrical circuit, or anything similar in which something travels through a network of nodes.

18.1 Definition

Let G = (V, E) be a finite directed graph in which every edge (u, v) ∈ E has a non-negative, real-valued capacity c(u, v). If (u, v) ∉ E, we assume that c(u, v) = 0. We distinguish two vertices: a source s and a sink t. A flow in a flow network is a real function f : V × V → R with the following three properties for all nodes u and v:

• Capacity constraint: f(u, v) ≤ c(u, v). The flow along an edge cannot exceed its capacity.

• Skew symmetry: f(u, v) = −f(v, u). The net flow from u to v must be the opposite of the net flow from v to u.

• Flow conservation: ∑_{w ∈ V} f(u, w) = 0, unless u = s or u = t. The net flow out of a node is zero, except for the source, which “produces” flow, and the sink, which “consumes” flow.

i.e. flow conservation implies ∑_{(u,v)∈E} f(u, v) = ∑_{(v,z)∈E} f(v, z) for each vertex v ∈ V \ {s, t}.

Notice that f(u, v) is the net flow from u to v. If the graph represents a physical network, and if there is a real flow of, for example, 4 units from u to v, and a real flow of 3 units from v to u, we have f(u, v) = 1 and f(v, u) = −1.

Basically we can say that the flow for a physical network is the flow leaving at s: ∑_{(s,v)∈E} f(s, v).

The residual capacity of an edge is cf(u, v) = c(u, v) − f(u, v). This defines a residual network denoted Gf(V, Ef), giving the amount of available capacity. Note that there can be a path from u to v in the residual network, even though there is no path from u to v in the original network. Since flows in opposite directions cancel out, decreasing the flow from v to u is the same as increasing the flow from u to v. An augmenting path is a path (u1, u2, . . . , uk) in the residual network, where u1 = s, uk = t, and cf(ui, ui+1) > 0. A network is at maximum flow if and only if there is no augmenting path in the residual network Gf.

So Gf is constructed using graph G as follows:

1. Vertices of Gf = V

2. Edges of Gf = Ef, defined as follows. For each edge (x, y) ∈ E:

(i) If f(x, y) < c(x, y), add a forward edge (x, y) ∈ Ef with capacity cf = c(x, y) − f(x, y).

(ii) If f(x, y) > 0, add a backward edge (y, x) ∈ Ef with capacity cf = f(x, y).

This concept is used in the Ford–Fulkerson algorithm, which computes the maximum flow in a flow network. When one needs to model a network with more than one source, a supersource is introduced to the graph.[1] This consists of a vertex connected to each of the sources with edges of infinite capacity, so as to act as a global source. A similar construct for sinks is called a supersink.[2]
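The two rules for constructing the residual network translate directly into code; a minimal sketch operating on edge dicts (the representation and names are our own):

```python
def residual(capacity, flow):
    """Residual capacities c_f for a flow on a capacitated digraph.
    Both arguments map an edge (u, v) to a number."""
    cf = {}
    for (u, v), c in capacity.items():
        f = flow.get((u, v), 0)
        if f < c:
            cf[(u, v)] = c - f                   # rule (i): forward edge
        if f > 0:
            cf[(v, u)] = cf.get((v, u), 0) + f   # rule (ii): backward edge
    return cf

print(residual({('s', 'a'): 3, ('a', 't'): 2},
               {('s', 'a'): 2, ('a', 't'): 2}))
# {('s', 'a'): 1, ('a', 's'): 2, ('t', 'a'): 2}
```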


18.2 Example

A flow network showing flow and capacity

To the right you see a flow network with source labeled s, sink t, and four additional nodes. The flow and capacity are denoted f/c. Notice how the network upholds skew symmetry, capacity constraints and flow conservation. The total amount of flow from s to t is 5, which can be easily seen from the fact that the total outgoing flow from s is 5, which is also the incoming flow to t. We know that no flow appears or disappears in any of the other nodes.

Residual network for the above flow network, showing residual capacities.

Below you see the residual network for the given flow. Notice how there is positive residual capacity on some edges where the original capacity is zero, for example for the edge (d, c). This flow is not a maximum flow. There is available capacity along the paths (s, a, c, t), (s, a, b, d, t) and (s, a, b, d, c, t), which are then the augmenting paths. The residual capacity of the first path is min(c(s, a) − f(s, a), c(a, c) − f(a, c), c(c, t) − f(c, t)) = min(5 − 3, 3 − 2, 2 − 1) = min(2, 1, 1) = 1. Notice that as long as there exists some path with a positive residual capacity, the flow will not be maximum. The residual capacity for some path is the minimum residual capacity of all edges in that path.

18.3 Applications

See also: Pipe network analysis

Picture a series of water pipes, fitting into a network. Each pipe is of a certain diameter, so it can only maintain a flow of a certain amount of water. Anywhere that pipes meet, the total amount of water coming into that junction must be equal to the amount going out, otherwise we would quickly run out of water, or we would have a buildup of water. We have a water inlet, which is the source, and an outlet, the sink. A flow would then be one possible way for water to get from source to sink so that the total amount of water coming out of the outlet is consistent. Intuitively, the total flow of a network is the rate at which water comes out of the outlet.

Flows can pertain to people or material over transportation networks, or to electricity over electrical distribution systems. For any such physical network, the flow coming into any intermediate node needs to equal the flow going out of that node. This conservation constraint was formalized as Kirchhoff’s current law.

Flow networks also find applications in ecology: flow networks arise naturally when considering the flow of nutrients and energy between different organisms in a food web. The mathematical problems associated with such networks are quite different from those that arise in networks of fluid or traffic flow. The field of ecosystem network analysis, developed by Robert Ulanowicz and others, involves using concepts from information theory and thermodynamics to study the evolution of these networks over time.

The simplest and most common problem using flow networks is to find what is called the maximum flow, which provides the largest possible total flow from the source to the sink in a given graph. There are many other problems which can be solved using max flow algorithms, if they are appropriately modeled as flow networks, such as bipartite matching, the assignment problem and the transportation problem. Maximum flow problems can be solved efficiently with the relabel-to-front algorithm.
The max-flow min-cut theorem states that finding a maximal network flow is equivalent to finding a cut of minimum capacity that separates the source and the sink, where a cut is a division of the vertices such that the source is in one part and the sink is in the other.

In a multi-commodity flow problem, you have multiple sources and sinks, and various “commodities” which are to flow from a given source to a given sink. This could be for example various goods that are produced at various factories, and are to be delivered to various given customers through the same transportation network.

In a minimum cost flow problem, each edge (u, v) has a given cost k(u, v), and the cost of sending the flow f(u, v) across the edge is f(u, v) · k(u, v). The objective is to send a given amount of flow from the source to the sink, at the lowest possible price.

In a circulation problem, you have a lower bound l(u, v) on the edges, in addition to the upper bound c(u, v). Each edge also has a cost. Often, flow conservation holds for all nodes in a circulation problem, and there is a connection from the sink back to the source. In this way, you can dictate the total flow with l(t, s) and c(t, s). The flow circulates through the network, hence the name of the problem.

In a network with gains or generalized network each edge has a gain, a real number (not zero) such that, if the edge has gain g, and an amount x flows into the edge at its tail, then an amount gx flows out at the head.

In a source localization problem, an algorithm tries to identify the most likely source node of information diffusion through a partially observed network. This can be done in linear time for trees and cubic time for arbitrary networks and has applications ranging from tracking mobile phone users to identifying the originating village of disease outbreaks.[3]
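The maximum flow problem can be solved by repeatedly augmenting along shortest residual paths, the Edmonds–Karp refinement of Ford–Fulkerson; a self-contained sketch on an invented toy network:

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp: BFS augmenting paths over residual capacities."""
    cf = {u: dict(nbrs) for u, nbrs in capacity.items()}  # working copy
    for u in list(cf):
        for v in list(cf[u]):
            cf.setdefault(v, {}).setdefault(u, 0)  # ensure reverse-edge slots
    total = 0
    while True:
        prev, q = {s: None}, deque([s])
        while q and t not in prev:        # BFS for a shortest augmenting path
            u = q.popleft()
            for v, c in cf[u].items():
                if c > 0 and v not in prev:
                    prev[v] = u
                    q.append(v)
        if t not in prev:
            return total                  # no augmenting path: flow is maximum
        path, v = [], t
        while prev[v] is not None:
            path.append((prev[v], v))
            v = prev[v]
        bottleneck = min(cf[u][v] for u, v in path)
        for u, v in path:                 # push the bottleneck along the path
            cf[u][v] -= bottleneck
            cf[v][u] += bottleneck
        total += bottleneck

cap = {'s': {'a': 3, 'b': 3}, 'a': {'t': 2, 'b': 2}, 'b': {'t': 3}}
print(max_flow(cap, 's', 't'))  # 5
```

The minimum cut here is {a→t, b→t} with capacity 2 + 3 = 5, agreeing with the max-flow min-cut theorem.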

18.4 See also

• Braess’ paradox

• Centrality

• Constructal theory

• Ford–Fulkerson algorithm

• Dinic’s algorithm

• Flow (computer networking)

• Flow graph

• Max-flow min-cut theorem

• Oriented matroid

• Shortest path problem

18.5 References

[1] Black, Paul E. “Supersource”. Dictionary of Algorithms and Data Structures. NIST.

[2] Black, Paul E. “Supersink”. Dictionary of Algorithms and Data Structures. NIST.

[3] http://www.pedropinto.org.s3.amazonaws.com/publications/locating_source_diffusion_networks.pdf

18.6 Further reading

• George T. Heineman, Gary Pollice, and Stanley Selkow (2008). “Chapter 8: Network Flow Algorithms”. Algorithms in a Nutshell. O'Reilly Media. pp. 226–250. ISBN 978-0-596-51624-6.

• Ravindra K. Ahuja, Thomas L. Magnanti, and James B. Orlin (1993). Network Flows: Theory, Algorithms and Applications. Prentice Hall. ISBN 0-13-617549-X.

• Bollobás, Béla (1979). Graph Theory: An Introductory Course. Heidelberg: Springer-Verlag. ISBN 3-540-90399-2.

• Chartrand, Gary & Oellermann, Ortrud R. (1993). Applied and Algorithmic Graph Theory. New York: McGraw-Hill. ISBN 0-07-557101-3.

• Even, Shimon (1979). Graph Algorithms. Rockville, Maryland: Computer Science Press. ISBN 0-914894-21-8.

• Gibbons, Alan (1985). Algorithmic Graph Theory. Cambridge: Cambridge University Press. ISBN 0-521-28881-9.

• Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein (2001) [1990]. “26”. Introduction to Algorithms (2nd ed.). MIT Press and McGraw-Hill. pp. 696–697. ISBN 0-262-03293-7.

18.7 External links

• Maximum Flow Problem

• Maximum Flow

• Real graph instances

• Software, papers, test graphs, etc. (dead link)

• Software and papers for network flow problems (dead link)

• Lemon C++ library with several maximum flow and minimum cost circulation algorithms

• QuickGraph, graph data structures and algorithms for .Net

Chapter 19

Forbidden graph characterization

In graph theory, a branch of mathematics, many important families of graphs can be described by a finite set of individual graphs that do not belong to the family; every graph that contains any of these forbidden graphs as an (induced) subgraph or minor is likewise excluded from the family. A prototypical example of this phenomenon is Kuratowski’s theorem, which states that a graph is planar (can be drawn without crossings in the plane) if and only if it does not contain either of two forbidden graphs, the complete graph K5 and the complete bipartite graph K₃,₃. For Kuratowski’s theorem, the notion of containment is that of graph homeomorphism, in which a subdivision of one graph appears as a subgraph of the other. Thus, every graph either has a planar drawing (in which case it belongs to the family of planar graphs) or it has a subdivision of one of these two graphs as a subgraph (in which case it does not belong to the planar graphs).

More generally, a forbidden graph characterization is a method of specifying a family of graph, or hypergraph, structures by specifying substructures that are forbidden from existing within any graph in the family. Different families vary in the nature of what is forbidden. In general, a structure G is a member of a family F if and only if a forbidden substructure is not contained in G. The forbidden substructure might be one of:

• subgraphs, smaller graphs obtained from subsets of the vertices and edges of a larger graph,

• induced subgraphs, smaller graphs obtained by selecting a subset of the vertices and using all edges with both endpoints in that subset,

• homeomorphic subgraphs (also called topological minors), smaller graphs obtained from subgraphs by collapsing paths of degree-two vertices to single edges, or

• graph minors, smaller graphs obtained from subgraphs by arbitrary edge contractions.

The set of structures that are forbidden from belonging to a given graph family can also be called an obstruction set for that family. Forbidden graph characterizations may be used in algorithms for testing whether a graph belongs to a given family. In many cases, it is possible to test in polynomial time whether a given graph contains any of the members of the obstruction set, and therefore whether it belongs to the family defined by that obstruction set. In order for a family to have a forbidden graph characterization, with a particular type of substructure, the family must be closed under substructures. That is, every substructure (of a given type) of a graph in the family must be another graph in the family. Equivalently, if a graph is not part of the family, all larger graphs containing it as a substructure must also be excluded from the family. When this is true, there always exists an obstruction set (the set of graphs that are not in the family but whose smaller substructures all belong to the family). However, for some notions of what a substructure is, this obstruction set could be infinite. The Robertson–Seymour theorem proves that, for the particular case of graph minors, a family that is closed under minors always has a finite obstruction set.
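For the induced-subgraph notion of containment, membership testing against a finite obstruction set reduces to induced-subgraph search. A brute-force sketch (exponential, fine only for tiny graphs; the names are ours), using the fact that cographs are exactly the P4-free graphs:

```python
from itertools import combinations, permutations

def has_induced(G, H):
    """Does G (dict vertex -> set of neighbours) contain H as an
    induced subgraph? Brute force over vertex subsets and bijections."""
    hv = list(H)
    for subset in combinations(G, len(hv)):
        for perm in permutations(subset):
            phi = dict(zip(hv, perm))
            # phi must preserve both adjacency and non-adjacency
            if all((phi[b] in G[phi[a]]) == (b in H[a])
                   for a, b in combinations(hv, 2)):
                return True
    return False

P4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}        # path on 4 vertices
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}  # 4-cycle
print(has_induced(P4, P4), has_induced(C4, P4))  # True False
```

Real recognition algorithms for such families are far more efficient; this only illustrates the definition.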

19.1 List of forbidden characterizations for graphs and hypergraphs



19.2 See also

• Erdős–Hajnal conjecture

• Forbidden subgraph problem

• Zarankiewicz problem

Chapter 20

Fundamental group

For the fundamental group of a factor, see von Neumann algebra.

In the mathematics of algebraic topology, the fundamental group is a mathematical group associated to any given pointed topological space that provides a way to determine when two paths, starting and ending at a fixed base point, can be continuously deformed into each other. It records information about the basic shape, or holes, of the topological space. The fundamental group is the first and simplest homotopy group. The fundamental group is a topological invariant: homeomorphic topological spaces have the same fundamental group.

Fundamental groups can be studied using the theory of covering spaces, since a fundamental group coincides with the group of deck transformations of the associated universal covering space. The abelianization of the fundamental group can be identified with the first homology group of the space. When the topological space is homeomorphic to a simplicial complex, its fundamental group can be described explicitly in terms of generators and relations.

Henri Poincaré defined the fundamental group in 1895 in his paper "Analysis situs".[1] The concept emerged in the theory of Riemann surfaces, in the work of Bernhard Riemann, Poincaré, and Felix Klein. It describes the monodromy properties of complex-valued functions, as well as providing a complete topological classification of closed surfaces.

20.1 Intuition

Start with a space (e.g. a surface), and some point in it, and all the loops both starting and ending at this point — paths that start at this point, wander around and eventually return to the starting point. Two loops can be combined together in an obvious way: travel along the first loop, then along the second. Two loops are considered equivalent if one can be deformed into the other without breaking. The set of all such loops with this method of combining and this equivalence between them is the fundamental group for that particular space.

20.2 Definition

Let X be a topological space, and let x0 be a point of X. We are interested in the following set of continuous functions called loops with base point x0.

{f : [0, 1] → X : f(0) = x0 = f(1)}

Now the fundamental group of X with base point x0 is this set modulo homotopy h

{f : [0, 1] → X : f(0) = x0 = f(1)}/h

equipped with the group multiplication defined by


(f ∗ g)(t) = f(2t) for 0 ≤ t ≤ 1/2, and (f ∗ g)(t) = g(2t − 1) for 1/2 ≤ t ≤ 1.

Thus the loop f ∗ g first follows the loop f with “twice the speed” and then follows g with twice the speed. The product of two homotopy classes of loops [f] and [g] is then defined as [f ∗ g], and it can be shown that this product does not depend on the choice of representatives.

With the above product, the set of all homotopy classes of loops with base point x0 forms the fundamental group of X at the point x0 and is denoted

π1(X, x0),

or simply π(X, x0). The identity element is the constant map at the basepoint, and the inverse of a loop f is the loop g defined by g(t) = f(1 − t). That is, g follows f backwards. Although the fundamental group in general depends on the choice of base point, it turns out that, up to isomorphism (actually, even up to inner automorphism), this choice makes no difference as long as the space X is path-connected. For path-connected spaces, therefore, we can write π1(X) instead of π1(X, x0) without ambiguity whenever we care about the isomorphism class only.
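The concatenation and inverse formulas translate directly into code. The following minimal Python sketch (function names are illustrative) represents a loop as a function from [0, 1] to the plane and implements f ∗ g and the reverse loop:

```python
import math

def concat(f, g):
    """Concatenation f * g: traverse f on [0, 1/2], then g on [1/2, 1]."""
    return lambda t: f(2 * t) if t <= 0.5 else g(2 * t - 1)

def inverse(f):
    """The reverse loop t -> f(1 - t), i.e. f traversed backwards."""
    return lambda t: f(1 - t)

# A loop in the punctured plane R^2 \ {0}, based at (1, 0), winding once.
loop = lambda t: (math.cos(2 * math.pi * t), math.sin(2 * math.pi * t))

# loop * loop^{-1} is again a loop at (1, 0); it is null-homotopic, although
# as a *function* it is not literally constant -- it equals the constant loop
# only up to homotopy, which is exactly why the quotient by h is taken above.
h = concat(loop, inverse(loop))
print(h(0.0), h(0.5), h(1.0))  # each is (1.0, 0.0) up to rounding
```

Note that the base point is preserved at t = 0, 1/2 and 1, matching the piecewise formula.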

20.3 Examples

20.3.1 Trivial Fundamental Group

In Euclidean space Rn, or any convex subset of Rn, there is only one homotopy class of loops, and the fundamental group is therefore the trivial group with one element. A path-connected space with a trivial fundamental group is said to be simply connected.

20.3.2 Infinite Cyclic Fundamental Group

The circle. Each homotopy class consists of all loops which wind around the circle a given number of times (which can be positive or negative, depending on the direction of winding). The product of a loop which winds around m times and another that winds around n times is a loop which winds around m + n times. So the fundamental group of the circle is isomorphic to (Z, +), the additive group of integers. This fact can be used to give proofs of the Brouwer fixed point theorem and the Borsuk–Ulam theorem in dimension 2. Since the fundamental group is a homotopy invariant, the theory of the winding number for the complex plane minus one point is the same as for the circle.
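The winding-number description of π1 of the circle can be checked numerically. The sketch below (an illustrative computation, not a construction from the text) discretizes a loop in the punctured plane and sums the signed angles between consecutive position vectors; the total, in units of full turns, is the winding number, and winding numbers add under concatenation of loops:

```python
import math

def winding_number(points):
    """Number of full turns a closed polygonal loop makes around the origin:
    sum the signed angle between each pair of consecutive position vectors."""
    total = 0.0
    for i in range(len(points)):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % len(points)]
        total += math.atan2(x0 * y1 - y0 * x1, x0 * x1 + y0 * y1)
    return round(total / (2 * math.pi))

def circle_loop(m, steps=200):
    """A discretized loop on the unit circle winding m times (m may be negative)."""
    return [(math.cos(2 * math.pi * m * k / steps),
             math.sin(2 * math.pi * m * k / steps)) for k in range(steps)]

print(winding_number(circle_loop(3)))                    # 3
print(winding_number(circle_loop(-2)))                   # -2
# Concatenating point lists of two loops based at (1, 0) adds the windings:
print(winding_number(circle_loop(1) + circle_loop(2)))   # 3
```

The last line mirrors the statement that a loop winding m times composed with one winding n times winds m + n times, i.e. that π1(S¹) ≅ (Z, +).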

20.3.3 Free Groups of Higher Rank

Unlike the homology groups and higher homotopy groups associated to a topological space, the fundamental group need not be abelian. For example, the fundamental group of the figure eight is the free group on two letters. More generally, the fundamental group of any graph is a free group. If the graph G is connected, then the rank of the free group is equal to the number of edges not in a spanning tree. The fundamental group of the plane punctured at n points is also the free group with n generators. The i-th generator is the class of the loop that goes around the i-th puncture without going around any other punctures.
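The rank formula for graphs (the number of edges not in a spanning tree, which for a connected graph is |E| − |V| + 1) is easy to compute. A minimal sketch with illustrative names, including a depth-first-search connectivity check so the formula is only applied when it is valid:

```python
def free_rank(vertices, edges):
    """Rank of the free fundamental group of a connected graph:
    the number of edges outside a spanning tree, i.e. |E| - |V| + 1."""
    # Build an adjacency map (loops and multiple edges are allowed).
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    # Depth-first search to verify connectivity.
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(adj[u])
    assert seen == set(vertices), "graph must be connected"
    return len(edges) - len(vertices) + 1

# Figure eight: one vertex with two loop edges -> free group of rank 2.
print(free_rank({"v"}, [("v", "v"), ("v", "v")]))  # 2
# A 4-cycle: one independent loop -> rank 1, i.e. pi_1 = Z.
print(free_rank({0, 1, 2, 3}, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 1
```

The figure-eight case reproduces the free group on two letters mentioned above, up to its rank; the group itself is non-abelian and not determined by the rank computation alone.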

20.3.4 Knot Theory

Main article: knot group

A somewhat more sophisticated example of a space with a non-abelian fundamental group is the complement of a trefoil knot in R³, whose fundamental group is known to be the braid group B₃.

20.4 Functoriality

If f : X → Y is a continuous map, x0 ∈ X and y0 ∈ Y with f(x0) = y0, then every loop in X with base point x0 can be composed with f to yield a loop in Y with base point y0. This operation is compatible with the homotopy equivalence relation and with composition of loops. The resulting group homomorphism, called the induced homomorphism, is written as π(f) or, more commonly,

f∗ : π1(X, x0) → π1(Y, y0).

This mapping from continuous maps to group homomorphisms is compatible with composition of maps and identity morphisms. In other words, we have a functor from the category of topological spaces with base point to the category of groups. It turns out that this functor cannot distinguish maps which are homotopic relative to the base point: if f, g : X → Y are continuous maps with f(x0) = g(x0) = y0, and f and g are homotopic relative to {x0}, then f∗ = g∗. As a consequence, two homotopy equivalent path-connected spaces have isomorphic fundamental groups:

X ≃ Y ⇒ π1(X, x0) ≅ π1(Y, y0).

As an important special case, if X is path-connected then any two basepoints give isomorphic fundamental groups, with isomorphism given by a choice of path between the given basepoints. The fundamental group functor takes products to products and coproducts to coproducts. That is, if X and Y are path connected, then

π1(X × Y) ≅ π1(X) × π1(Y)

and if they are also locally contractible, then

π1(X ∨ Y) ≅ π1(X) ∗ π1(Y).

(In the latter formula, ∨ denotes the wedge sum of topological spaces, and * the free product of groups.) Both formulas generalize to arbitrary products. Furthermore the latter formula is a special case of the Seifert–van Kampen theorem which states that the fundamental group functor takes pushouts along inclusions to pushouts.

20.5 Fibrations

Main article: Fibration

A generalization of a product of spaces is given by a fibration,

F → E → B.

Here the total space E is a sort of "twisted product” of the base space B and the fiber F. In general the fundamental groups of B, E and F are terms in a long exact sequence involving higher homotopy groups. When all the spaces are connected, this has the following consequences for the fundamental groups:

• π1(B) and π1(E) are isomorphic if F is simply connected

• πn₊₁(B) and πn(F) are isomorphic if E is contractible

The latter is often applied to the situation E = path space of B, F = loop space of B, or B = classifying space BG of a group G, E = universal G-bundle EG.

20.6 Relationship to first homology group

The fundamental group of a topological space X is related to its first singular homology group, because a loop is also a singular 1-cycle. Mapping the homotopy class of each loop at a base point x0 to the homology class of the loop gives a homomorphism from the fundamental group π1(X, x0) to the homology group H1(X). If X is path-connected, then this homomorphism is surjective and its kernel is the commutator subgroup of π1(X, x0), and H1(X) is therefore isomorphic to the abelianization of π1(X, x0). This is a special case of the Hurewicz theorem of algebraic topology.

20.7 Universal covering space

Main article: Covering space

If X is a topological space that is path connected, locally path connected and locally simply connected, then it has a simply connected universal covering space on which the fundamental group π(X, x0) acts freely by deck transformations with quotient space X. This space can be constructed analogously to the fundamental group by taking pairs (x, γ), where x is a point in X and γ is a homotopy class of paths from x0 to x, and the action of π(X, x0) is by concatenation of paths. It is uniquely determined as a covering space.

20.7.1 Examples

The Circle

The universal cover of the circle S¹ is the real line R; we have S¹ = R/Z. Thus π1(S¹, x) = Z for any base point x.

The Torus

By taking the Cartesian product of two instances of the previous example we see that the universal cover of the torus T = S¹ × S¹ is the plane R²: we have T = R²/Z². Thus π1(T, x) = Z² for any base point x. Similarly, the fundamental group of the n-dimensional torus equals Zⁿ.

Real Projective Spaces

For n ≥ 1, the n-dimensional real projective space Pⁿ(R) is obtained by factorizing the n-dimensional sphere Sⁿ by the central symmetry: Pⁿ(R) = Sⁿ/Z₂. Since the n-sphere Sⁿ is simply connected for n ≥ 2, we conclude that it is the universal cover of the real projective space. Thus the fundamental group of Pⁿ(R) is equal to Z₂ for any n ≥ 2.

Lie Groups

Let G be a connected, simply connected compact Lie group, for example the special unitary group SU(n), and let Γ be a finite subgroup of G. Then the homogeneous space X = G/Γ has fundamental group Γ, which acts by right multiplication on the universal covering space G. Among the many variants of this construction, one of the most important is given by locally symmetric spaces X = Γ\G/K, where

• G is a non-compact simply connected, connected Lie group (often semisimple),

• K is a maximal compact subgroup of G

• Γ is a discrete countable torsion-free subgroup of G.

In this case the fundamental group is Γ and the universal covering space G/K is actually contractible (by the Cartan decomposition for Lie groups). As an example take G = SL(2, R), K = SO(2) and Γ any torsion-free congruence subgroup of the modular group SL(2, Z).

From the explicit realization, it also follows that the universal covering space of a path connected topological group H is again a path connected topological group G. Moreover the covering map is a continuous open homomorphism of G onto H with kernel Γ, a closed discrete normal subgroup of G:

1 → Γ → G → H → 1.

Since G is a connected group with a continuous action by conjugation on a discrete group Γ, it must act trivially, so that Γ has to be a subgroup of the center of G. In particular π1(H) = Γ is an Abelian group; this can also easily be seen directly without using covering spaces. The group G is called the universal covering group of H. As the universal covering group suggests, there is an analogy between the fundamental group of a topological group and the center of a group; this is elaborated at Lattice of covering groups.

20.8 Edge-path group of a simplicial complex

If X is a connected simplicial complex, an edge-path in X is defined to be a chain of vertices connected by edges in X. Two edge-paths are said to be edge-equivalent if one can be obtained from the other by successively switching between an edge and the two opposite edges of a triangle in X. If v is a fixed vertex in X, an edge-loop at v is an edge-path starting and ending at v. The edge-path group E(X, v) is defined to be the set of edge-equivalence classes of edge-loops at v, with product and inverse defined by concatenation and reversal of edge-loops.

The edge-path group is naturally isomorphic to π1(|X|, v), the fundamental group of the geometric realisation |X| of X. Since it depends only on the 2-skeleton X² of X (i.e. the vertices, edges and triangles of X), the groups π1(|X|, v) and π1(|X²|, v) are isomorphic.

The edge-path group can be described explicitly in terms of generators and relations. If T is a maximal spanning tree in the 1-skeleton of X, then E(X, v) is canonically isomorphic to the group with generators (the oriented edge-paths of X not occurring in T) and relations (the edge-equivalences corresponding to triangles in X). A similar result holds if T is replaced by any simply connected (in particular contractible) subcomplex of X. This often gives a practical way of computing fundamental groups and can be used to show that every finitely presented group arises as the fundamental group of a finite simplicial complex. It is also one of the classical methods used for topological surfaces, which are classified by their fundamental groups.

The universal covering space of a finite connected simplicial complex X can also be described directly as a simplicial complex using edge-paths. Its vertices are pairs (w, γ) where w is a vertex of X and γ is an edge-equivalence class of paths from v to w. The k-simplices containing (w, γ) correspond naturally to the k-simplices containing w. Each new vertex u of the k-simplex gives an edge wu and hence, by concatenation, a new path γu from v to u. The points (w, γ) and (u, γu) are the vertices of the “transported” simplex in the universal covering space. The edge-path group acts naturally by concatenation, preserving the simplicial structure, and the quotient space is just X.

It is well known that this method can also be used to compute the fundamental group of an arbitrary topological space. This was doubtless known to Čech and Leray and explicitly appeared as a remark in a paper by Weil (1960); various other authors such as L. Calabi, W-T. Wu and N. Berikashvili have also published proofs. In the simplest case of a compact space X with a finite open covering in which all non-empty finite intersections of open sets in the covering are contractible, the fundamental group can be identified with the edge-path group of the simplicial complex corresponding to the nerve of the covering.
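The spanning-tree presentation described above can be read off mechanically: collapse the edges of a spanning tree T to the identity, keep one generator per remaining edge, and emit one relation per triangle. The Python sketch below (function and generator names are ours; a complex is given as vertex, edge and triangle lists) illustrates this for a hollow versus a filled triangle:

```python
def edge_path_presentation(vertices, edges, triangles):
    """Generators and relations for the edge-path group E(X, v): spanning-tree
    edges collapse to the identity; each remaining edge is a generator; each
    triangle (a, b, c) gives the relation e(a,b) * e(b,c) = e(a,c)."""
    adj = {u: set() for u in vertices}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    # Grow a spanning tree of the 1-skeleton by depth-first search.
    root = next(iter(vertices))
    seen, tree, stack = {root}, set(), [root]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                tree.add(frozenset((u, w)))
                stack.append(w)
    # Name the generators: "1" for collapsed tree edges, g1, g2, ... otherwise.
    names, counter = {}, 0
    for u, w in edges:
        e = frozenset((u, w))
        if e in tree:
            names[e] = "1"
        else:
            counter += 1
            names[e] = "g%d" % counter
    relations = ["%s * %s = %s" % (names[frozenset((a, b))],
                                   names[frozenset((b, c))],
                                   names[frozenset((a, c))])
                 for a, b, c in triangles]
    return sorted(set(names.values()) - {"1"}), relations

# Hollow triangle (a circle): one generator, no relations.
print(edge_path_presentation({0, 1, 2}, [(0, 1), (1, 2), (0, 2)], []))
# Filled triangle (a disk): the single relation forces g1 = 1, so the group is trivial.
print(edge_path_presentation({0, 1, 2}, [(0, 1), (1, 2), (0, 2)], [(0, 1, 2)]))
```

The hollow triangle yields the free group on one generator, π1(S¹) ≅ Z, while the filled triangle yields the trivial group, matching π1 of a disk.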

20.9 Realizability

• Every group can be realized as the fundamental group of a connected CW-complex of dimension 2 (or higher). As noted above, though, only free groups can occur as fundamental groups of 1-dimensional CW-complexes (that is, graphs).

• Every finitely presented group can be realized as the fundamental group of a compact, connected, smooth manifold of dimension 4 (or higher). But there are severe restrictions on which groups occur as fundamental groups of low-dimensional manifolds. For example, no free abelian group of rank 4 or higher can be realized as the fundamental group of a manifold of dimension 3 or less.

20.10 Related concepts

The fundamental group measures the 1-dimensional hole structure of a space. For studying “higher-dimensional holes”, the homotopy groups are used. The elements of the n-th homotopy group of X are homotopy classes of (basepoint-preserving) maps from Sn to X. The set of loops at a particular base point can be studied without regarding homotopic loops as equivalent. This larger object is the loop space. For topological groups, a different group multiplication may be assigned to the set of loops in the space, with pointwise multiplication rather than concatenation. The resulting group is the loop group.

20.10.1 Fundamental groupoid

Rather than singling out one point and considering the loops based at that point up to homotopy, one can also consider all paths in the space up to homotopy (fixing the initial and final point). This yields not a group but a groupoid, the fundamental groupoid of the space.

More generally, one can consider the fundamental groupoid on a set A of base points, chosen according to the geometry of the situation; for example, in the case of the circle, which can be represented as the union of two connected open sets whose intersection has two components, one can choose one base point in each component. The exposition of this theory was given in the 1968, 1988 editions of the book now available as Topology and groupoids, which also includes related accounts of covering spaces and orbit spaces.

20.11 See also

• Homotopy group, generalization of fundamental group

There are also similar notions of fundamental group for algebraic varieties (the étale fundamental group) and for orbifolds (the orbifold fundamental group).

20.12 Notes

[1] Poincaré, Henri (1895). “Analysis situs”. Journal de l'École Polytechnique. (2) (in French) 1: 1–123. Translated in Poincaré, Henri (2009). “Analysis situs”. Papers on Topology: Analysis Situs and Its Five Supplements (PDF). Translated by John Stillwell. pp. 18–99.

20.13 References

• Ronald Brown, Topology and groupoids, Booksurge (2006). ISBN 1-4196-2722-8

• Joseph J. Rotman, An Introduction to Algebraic Topology, Springer-Verlag, ISBN 0-387-96678-1

• Isadore Singer and John A. Thorpe, Lecture Notes on Elementary Geometry and Topology, Springer-Verlag (1967) ISBN 0-387-90202-3

• Allen Hatcher, Algebraic Topology, Cambridge University Press (2002) ISBN 0-521-79540-0

• Peter Hilton and Shaun Wylie, Homology Theory, Cambridge University Press (1967) [warning: these authors use contrahomology for cohomology]

• Richard Maunder, Algebraic Topology, Dover (1996) ISBN 0-486-69131-4

• Deane Montgomery and Leo Zippin, Topological Transformation Groups, Interscience Publishers (1955)

• James Munkres, Topology, Prentice Hall (2000) ISBN 0-13-181629-2

• Rubei, Elena (2014), Algebraic Geometry, a concise dictionary, Berlin/Boston: Walter De Gruyter, ISBN 978-3-11-031622-3

• Herbert Seifert and William Threlfall, A Textbook of Topology (translated from German by Wolfgang Heil), Academic Press (1980), ISBN 0-12-634850-2

• Edwin Spanier, Algebraic Topology, Springer-Verlag (1966) ISBN 0-387-94426-5

• André Weil, On discrete subgroups of Lie groups, Ann. Math. 72 (1960), 369–384.

• Fundamental group at PlanetMath.org.

• Fundamental groupoid at PlanetMath.org.

• Weisstein, Eric W., “Fundamental group”, MathWorld.

20.14 External links

• Dylan G.L. Allegretti, Simplicial Sets and van Kampen’s Theorem: A discussion of the fundamental groupoid of a topological space and the fundamental groupoid of a simplicial set

• Animations to introduce fundamental group by Nicolas Delanoue

Chapter 21

Geometric graph theory

A geometric graph is a graph in which the vertices or edges are associated with geometric objects.

21.1 Different types of geometric graphs

A planar straight line graph is a graph in which the vertices are embedded as points in the Euclidean plane, and the edges are embedded as non-crossing line segments. Fáry’s theorem states that any planar graph may be represented as a planar straight line graph. A triangulation is a planar straight line graph to which no more edges may be added, so called because every face is necessarily a triangle; a special case of this is the Delaunay triangulation, a graph defined from a set of points in the plane by connecting two points with an edge whenever there exists a circle containing only those two points.

The 1-skeleton of a polyhedron or polytope is the set of vertices and edges of the polytope. The skeleton of any convex polyhedron is a planar graph, and the skeleton of any k-dimensional convex polytope is a k-connected graph. Conversely, Steinitz’s theorem states that any 3-connected planar graph is the skeleton of a convex polyhedron; for this reason, this class of graphs is also known as the polyhedral graphs.

A Euclidean graph is a graph in which the vertices represent points in the plane, and the edges are assigned lengths equal to the Euclidean distance between those points. The Euclidean minimum spanning tree is the minimum spanning tree of a Euclidean complete graph. It is also possible to define graphs by conditions on the distances; in particular, a unit distance graph is formed by connecting pairs of points that are a unit distance apart in the plane. The Hadwiger–Nelson problem concerns the chromatic number of these graphs.

An intersection graph is a graph in which each vertex is associated with a set and in which vertices are connected by edges whenever the corresponding sets have a nonempty intersection. When the sets are geometric objects, the result is a geometric graph. For instance, the intersection graph of line segments in one dimension is an interval graph; the intersection graph of unit disks in the plane is a unit disk graph. The circle packing theorem states that the intersection graphs of non-crossing circles are exactly the planar graphs. Scheinerman’s conjecture states that every planar graph can be represented as the intersection graph of line segments in the plane.

A Levi graph of a family of points and lines has a vertex for each of these objects and an edge for every incident point-line pair. The Levi graphs of projective configurations lead to many important symmetric graphs and cages.

The visibility graph of a closed polygon connects each pair of vertices by an edge whenever the line segment connecting the vertices lies entirely in the polygon. It is not known how to test efficiently whether an undirected graph can be represented as a visibility graph.

A partial cube is a graph for which the vertices can be associated with the vertices of a hypercube, in such a way that distance in the graph equals Hamming distance between the corresponding hypercube vertices. Many important families of combinatorial structures, such as the acyclic orientations of a graph or the adjacencies between regions in a hyperplane arrangement, can be represented as partial cube graphs. An important special case of a partial cube is the skeleton of the permutohedron, a graph in which vertices represent permutations of a set of ordered objects and edges represent swaps of objects adjacent in the order. Several other important classes of graphs including median graphs have related definitions involving metric embeddings (Bandelt & Chepoi 2008).
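As a small illustration of graphs defined by distance conditions, the sketch below (illustrative names; "unit distance" taken up to a floating-point tolerance) builds the unit distance graph of a finite point set:

```python
from itertools import combinations
from math import dist, isclose

def unit_distance_graph(points, eps=1e-9):
    """Connect every pair of points lying at distance exactly 1 (up to eps)."""
    return [(p, q) for p, q in combinations(points, 2)
            if isclose(dist(p, q), 1.0, abs_tol=eps)]

# Four corners of a unit square: the four sides are unit distances,
# the two diagonals (length sqrt(2)) are not.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(len(unit_distance_graph(square)))  # 4
```

The same pairwise scan adapts to other distance-defined graphs, e.g. connecting pairs at distance at most 1 instead of exactly 1 gives a unit disk graph on the centers.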

116 21.2. SEE ALSO 117

A flip graph is a graph formed from the triangulations of a point set, in which each vertex represents a triangulation and two triangulations are connected by an edge if they differ by the replacement of one edge for another. It is also possible to define related flip graphs for partitions into quadrilaterals or pseudotriangles, and for higher-dimensional triangulations. The flip graph of triangulations of a convex polygon forms the skeleton of the associahedron or Stasheff polytope. The flip graph of regular triangulations of a point set (projections of higher-dimensional convex hulls) can also be represented as a skeleton, of the so-called secondary polytope.

21.2 See also

• Topological graph theory • Chemical graph

21.3 References

• Bandelt, Hans-Jürgen; Chepoi, Victor (2008). “Metric graph theory and geometry: a survey” (PDF). Contemp. Math.: to appear.

• Pach, János, ed. (2004). Towards a Theory of Geometric Graphs. Contemporary Mathematics, no. 342, American Mathematical Society.

• Pisanski, Tomaž; Randić, Milan (2000). “Bridges between geometry and graph theory”. In Gorini, C. A. (Ed.). Geometry at Work: Papers in Applied Geometry. Washington, DC: Mathematical Association of America. pp. 174–194.

Chapter 22

Glossary of graph theory

Graph theory is a growing area in mathematical research, and has a large specialized vocabulary. Some authors use the same word with different meanings. Some authors use different words to mean the same thing. This page attempts to describe the majority of current usage.

22.1 Basics

A graph G consists of two types of elements, namely vertices and edges. Every edge has two endpoints in the set of vertices, and is said to connect or join the two endpoints. An edge can thus be defined as a set of two vertices (or an ordered pair, in the case of a directed graph - see Section Direction). The two endpoints of an edge are also said to be adjacent to each other. Alternative models of graphs exist; e.g., a graph may be thought of as a Boolean binary function over the set of vertices or as a square (0,1)-matrix.

A vertex is simply drawn as a node or a dot. The vertex set of G is usually denoted by V(G), or V when there is no danger of confusion. The order of a graph is the number of its vertices, i.e. |V(G)|.

An edge (a set of two elements) is drawn as a line connecting two vertices, called endpoints or end vertices or endvertices. An edge with endvertices x and y is denoted by xy (without any symbol in between). The edge set of G is usually denoted by E(G), or E when there is no danger of confusion. An edge xy is called incident to a vertex when this vertex is one of the endpoints x or y. The size of a graph is the number of its edges, i.e. |E(G)|.[1]

A loop is an edge whose endpoints are the same vertex. A link has two distinct endvertices. An edge is multiple if there is another edge with the same endvertices; otherwise it is simple. The multiplicity of an edge is the number of edges sharing the same end vertices; the multiplicity of a graph is the maximum multiplicity of its edges. A graph is a simple graph if it has no multiple edges or loops, a multigraph if it has multiple edges, but no loops, and a multigraph or pseudograph if it contains both multiple edges and loops (the literature is highly inconsistent). When stated without any qualification, a graph is usually assumed to be simple, except in the literature of category theory, where it refers to a quiver.
Graphs whose edges or vertices have names or labels are known as labeled, those without as unlabeled. Graphs with labeled vertices only are vertex-labeled, those with labeled edges only are edge-labeled. The difference between a labeled and an unlabeled graph is that the latter has no specific set of vertices or edges; it is regarded as another way to look upon an isomorphism type of graphs. (Thus, this usage distinguishes between graphs with identifiable vertex or edge sets on the one hand, and isomorphism types or classes of graphs on the other.) (Graph labeling usually refers to the assignment of labels (usually natural numbers, usually distinct) to the edges and vertices of a graph, subject to certain rules depending on the situation. This should not be confused with a graph’s merely having distinct labels or names on the vertices.) A hyperedge is an edge that is allowed to take on any number of vertices, possibly more than 2. A graph that allows any hyperedge is called a hypergraph. A simple graph can be considered a special case of the hypergraph, namely the 2-uniform hypergraph. However, when stated without any qualification, an edge is always assumed to consist of


In this pseudograph the blue edges are loops and the red edges are multiple edges of multiplicity 2 and 3. The multiplicity of the graph is 3.

at most 2 vertices, and a graph is never confused with a hypergraph.

A non-edge (or anti-edge) is an edge that is not present in the graph. More formally, for two vertices u and v, {u, v} is a non-edge in a graph G whenever {u, v} is not an edge in G. This means that there is either no edge between the two vertices or (for directed graphs) at most one of (u, v) and (v, u) is an arc of G. Occasionally the term cotriangle or anti-triangle is used for a set of three vertices none of which are connected.

The complement G¯ of a graph G is a graph with the same vertex set as G but with an edge set such that xy is an edge in G¯ if and only if xy is not an edge in G.

An edgeless graph or empty graph or null graph is a graph with zero or more vertices, but no edges. The empty graph or null graph may also be the graph with no vertices and no edges. If it is a graph with no edges and any number n of vertices, it may be called the null graph on n vertices. (There is no consistency at all in the literature.)

A graph is infinite if it has infinitely many vertices or edges or both; otherwise the graph is finite. An infinite graph where every vertex has finite degree is called locally finite. When stated without any qualification, a graph is usually assumed to be finite. See also continuous graph.

Two graphs G and H are said to be isomorphic, denoted by G ~ H, if there is a one-to-one correspondence, called an isomorphism, between the vertices of the graphs such that two vertices are adjacent in G if and only if their


A labeled simple graph with vertex set V = {1, 2, 3, 4, 5, 6} and edge set E = {{1,2}, {1,5}, {2,3}, {2,5}, {3,4}, {4,5}, {4,6}}.

corresponding vertices are adjacent in H. Likewise, a graph G is said to be homomorphic to a graph H if there is a mapping, called a homomorphism, from V(G) to V(H) such that if two vertices are adjacent in G then their corresponding vertices are adjacent in H.

22.1.1 Subgraphs

A subgraph, H, of a graph, G, is a graph whose vertices are a subset of the vertex set of G, and whose edges are a subset of the edge set of G. In reverse, a supergraph of a graph G is a graph of which G is a subgraph. A graph, G, contains a graph, H, if H is a subgraph of, or is isomorphic to, G. A subgraph, H, spans a graph, G, and is a spanning subgraph, or factor of G, if it has the same vertex set as G. A subgraph, H, of a graph, G, is said to be induced (or full) if, for every pair of vertices x and y of H, xy is an edge of H if and only if xy is an edge of G. In other words, H is an induced subgraph of G if it has exactly the edges that appear in G over the same vertex set. If the vertex set of H is the subset S of V(G), then H can be written as G[S] and is said to be induced by S. A graph, G, is minimal with some property, P, provided that G has property P and no proper subgraph of G has property P. In this definition, the term subgraph is usually understood to mean induced subgraph. The notion of maximality is defined dually: G is maximal with P provided that P(G) and G has no proper supergraph H such that P(H). A graph that does not contain H as an induced subgraph is said to be H-free, and more generally if F is a family of graphs then the graphs that do not contain any induced subgraph isomorphic to a member of F are called F-free.[2] For example, the triangle-free graphs are the graphs that do not have a triangle graph as an induced subgraph. Many important classes of graphs can be defined by sets of forbidden subgraphs, the graphs that are not in the class and are minimal with respect to subgraphs, induced subgraphs, or graph minors. A universal graph in a class K of graphs is a simple graph in which every element in K can be embedded as a subgraph.
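The induced subgraph G[S] can be computed directly from the definition: keep exactly the edges of G with both endpoints in S. A minimal sketch, assuming edges are stored as two-element frozensets:

```python
def induced_subgraph(edges, S):
    # G[S]: the edges of G whose endpoints both lie in the vertex subset S.
    S = set(S)
    return {e for e in edges if e <= S}
```

Applied to the example labeled simple graph above with S = {1, 2, 5}, this returns the three edges {1,2}, {1,5}, {2,5}, i.e. the triangle on those vertices.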

22.1.2 Walks

A walk is a sequence of vertices and edges, where each edge’s endpoints are the preceding and following vertices in the sequence. A walk is closed if its first and last vertices are the same, and open if they are different. The length l of a walk is the number of edges that it uses. For an open walk, l = n − 1, where n is the number of vertices visited (a vertex is counted each time it is visited). For a closed walk, l = n (the start/end vertex is listed twice, but is not counted twice). In the example labeled simple graph, (1, 2, 5, 1, 2, 3) is an open walk with length 5, and (4, 5, 2, 1, 5, 4) is a closed walk of length 5. A trail is a walk in which all the edges are distinct. A closed trail has been called a tour or circuit, but these terms are not universal, and the latter is often reserved for a regular subgraph of degree two.
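These definitions can be checked mechanically. The sketch below (an illustration, assuming a simple graph with frozenset edges) takes a vertex sequence and reports whether it is a valid walk, whether it is closed, whether it is a trail, and its length:

```python
def classify_walk(edges, walk):
    # A walk is valid when every consecutive vertex pair is an edge.
    # It is closed when it starts and ends at the same vertex,
    # and a trail when no edge is repeated.  Length = number of edges used.
    steps = [frozenset(p) for p in zip(walk, walk[1:])]
    is_walk = all(s in edges for s in steps)
    return {
        "walk": is_walk,
        "closed": walk[0] == walk[-1],
        "trail": is_walk and len(steps) == len(set(steps)),
        "length": len(steps),
    }
```

On the example labeled simple graph, (1, 2, 5, 1, 2, 3) is classified as an open walk of length 5 that is not a trail (the edge {1, 2} is used twice), matching the text above.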

A directed tour. This is not a simple cycle, since the blue vertices are used twice.

Traditionally, a path referred to what is now usually known as an open walk. Nowadays, when stated without any qualification, a path is usually understood to be simple, meaning that no vertices (and thus no edges) are repeated. (The term chain has also been used to refer to a walk in which all vertices and edges are distinct.) In the example labeled simple graph, (5, 2, 1) is a path of length 2. The closed equivalent to this type of walk, a walk that starts and ends at the same vertex but otherwise has no repeated vertices or edges, is called a cycle. Like path, this term traditionally referred to any closed walk, but now is usually understood to be simple by definition. In the example labeled simple graph, (1, 5, 2, 1) is a cycle of length 3. (A cycle, unlike a path, is not allowed to have length 0.) Paths and cycles of n vertices are often denoted by Pn and Cn, respectively. (Some authors use the length instead of the number of vertices, however.)

C1 is a loop, C2 is a digon (a pair of parallel undirected edges in a multigraph, or a pair of antiparallel edges in a

directed graph), and C3 is called a triangle. A cycle that has odd length is an odd cycle; otherwise it is an even cycle. One theorem is that a graph is bipartite if and only if it contains no odd cycles. (See complete bipartite graph.) A graph is acyclic if it contains no cycles; unicyclic if it contains exactly one cycle; and pancyclic if it contains cycles of every possible length (from 3 to the order of the graph). A wheel is a graph with n vertices (n ≥ 4), formed by connecting a single vertex to all vertices of Cn−1. The girth of a graph is the length of a shortest (simple) cycle in the graph; and the circumference, the length of a longest (simple) cycle. The girth and circumference of an acyclic graph are defined to be infinity ∞. A path or cycle is Hamiltonian (or spanning) if it uses all vertices exactly once. A graph that contains a Hamiltonian path is traceable; and one that contains a Hamiltonian path for any given pair of (distinct) end vertices is a Hamiltonian connected graph. A graph that contains a Hamiltonian cycle is a Hamiltonian graph. A trail or circuit (or cycle) is Eulerian if it uses all edges precisely once. A graph that contains an Eulerian trail is traversable. A graph that contains an Eulerian circuit is an Eulerian graph. Two paths are internally disjoint (some people call it independent) if they do not have any vertex in common, except the first and last ones. A theta graph is the union of three internally disjoint (simple) paths that have the same two distinct end vertices.[3] A theta0 graph has seven vertices and eight edges that can be drawn as the perimeter and one diameter of a regular hexagon. (The seventh vertex splits the diameter into two edges.) The smallest, excluding , topological minor of a theta0 graph consists of a square plus one of its diagonals.

22.1.3 Trees

A tree is a connected acyclic simple graph. For directed graphs, each vertex has at most one incoming edge. A vertex of degree 1 is called a leaf, or pendant vertex. An edge incident to a leaf is a leaf edge, or pendant edge. (Some people define a leaf edge as a leaf and then define a leaf vertex on top of it. These two sets of definitions are often used interchangeably.) A non-leaf vertex is an internal vertex. Sometimes, one vertex of the tree is distinguished, and called the root; in this case, the tree is called rooted. Rooted trees are often treated as directed acyclic graphs with the edges pointing away from the root. A subtree of the tree T is a connected subgraph of T. A forest is an acyclic simple graph. For directed graphs, each vertex has at most one incoming edge. (That is, a tree with the connectivity requirement removed; a graph containing multiple disconnected trees.) A subforest of the forest F is a subgraph of F. A spanning tree is a spanning subgraph that is a tree. Every graph has a spanning forest. But only a connected graph has a spanning tree. A special kind of tree called a star is K₁,k. An induced star with 3 edges is a claw. A caterpillar is a tree in which all non-leaf nodes form a single path. A k-ary tree is a rooted tree in which every internal vertex has no more than k children. A 1-ary tree is just a path. A 2-ary tree is also called a binary tree.
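A well-known equivalent characterisation is that a simple graph is a tree exactly when it is connected and has |V| − 1 edges. The sketch below (illustrative only; vertices as an iterable, edges as frozensets) tests this:

```python
def is_tree(vertices, edges):
    # A simple graph is a tree iff it is connected and acyclic;
    # equivalently, connected with exactly |V| - 1 edges.
    vs = set(vertices)
    if not vs:
        return False
    adj = {v: set() for v in vs}
    for e in edges:
        u, v = tuple(e)
        adj[u].add(v)
        adj[v].add(u)
    # Depth-first search from an arbitrary start vertex to test connectivity.
    seen, stack = set(), [next(iter(vs))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v] - seen)
    return seen == vs and len(edges) == len(vs) - 1
```

A path is a tree; a triangle is not (it has a cycle); a forest with two components is not (it is disconnected).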

22.1.4 Cliques

The complete graph Kn of order n is a simple graph with n vertices in which every vertex is adjacent to every other. The pentagon-shaped graph to the right is complete. The complete graph on n vertices is often denoted by Kn. It has n(n−1)/2 edges (corresponding to all possible choices of pairs of vertices). A clique in a graph is a set of pairwise adjacent vertices. Since any subgraph induced by a clique is a complete subgraph, the two terms and their notations are usually used interchangeably. A k-clique is a clique of order k. In the example labeled simple graph above, vertices 1, 2 and 5 form a 3-clique, or a triangle. A maximal clique is a clique that is not a subset of any other clique (some authors reserve the term clique for maximal cliques). The clique number ω(G) of a graph G is the order of a largest clique in G.


A labeled tree with 6 vertices and 5 edges. Nodes 1, 2, 3, and 6 are leaves, while 4 and 5 are internal vertices.

22.1.5 Strongly connected component

A related but weaker concept is that of a strongly connected component. Informally, a strongly connected component of a directed graph is a subgraph where all nodes in the subgraph are reachable from all other nodes in the subgraph. Reachability between nodes is established by the existence of a path between the nodes. A directed graph can be decomposed into strongly connected components by running the depth-first search (DFS) algorithm twice: first on the graph itself, and then on the transpose graph, visiting vertices in decreasing order of the finishing times of the first DFS. Given a directed graph G, the transpose GT is the graph G with all the edge directions reversed.
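The two-pass DFS decomposition described above (Kosaraju's algorithm) can be sketched as follows. This is an illustration under the assumption that `adj` maps every vertex of the digraph, including sinks, to its set of successors:

```python
def strongly_connected_components(adj):
    # Pass 1: DFS on G, recording vertices in order of finishing time.
    order, seen = [], set()

    def dfs1(v):
        seen.add(v)
        for w in adj[v]:
            if w not in seen:
                dfs1(w)
        order.append(v)

    for v in adj:
        if v not in seen:
            dfs1(v)

    # Build the transpose G^T by reversing every arc.
    tr = {v: set() for v in adj}
    for v, ws in adj.items():
        for w in ws:
            tr[w].add(v)

    # Pass 2: DFS on G^T in decreasing order of finishing times;
    # each search tree is one strongly connected component.
    comps, assigned = [], set()
    for v in reversed(order):
        if v not in assigned:
            comp, stack = set(), [v]
            while stack:
                x = stack.pop()
                if x not in assigned:
                    assigned.add(x)
                    comp.add(x)
                    stack.extend(tr[x] - assigned)
            comps.append(comp)
    return comps
```

For the digraph with arcs 1→2, 2→3, 3→1 and 4→3, the components are {1, 2, 3} (a directed cycle) and the singleton {4}.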

K5, a complete graph. If a subgraph looks like this, the vertices in that subgraph form a clique of size 5.

22.1.6 Hypercubes

A hypercube graph Qn is a regular graph with 2^n vertices, n·2^(n−1) edges, and n edges touching each vertex. It can be obtained as the one-dimensional skeleton of the geometric hypercube.
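A short sketch that builds Qn and lets the vertex, edge, and degree counts be verified (vertices as n-bit tuples; adjacency means differing in exactly one coordinate):

```python
from itertools import product

def hypercube(n):
    # Q_n: vertices are n-bit tuples; two vertices are adjacent
    # iff they differ in exactly one coordinate.
    verts = list(product((0, 1), repeat=n))
    edges = {frozenset((u, v))
             for u in verts for v in verts
             if sum(a != b for a, b in zip(u, v)) == 1}
    return verts, edges
```

For n = 3 this yields 2^3 = 8 vertices and 3·2^2 = 12 edges, with every vertex of degree 3.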

22.1.7 Knots

A knot in a directed graph is a collection of vertices and edges with the property that every vertex in the knot has outgoing edges, and all outgoing edges from vertices in the knot terminate at other vertices in the knot. Thus it is impossible to leave the knot while following the directions of the edges.

22.1.8 Minors

A minor G2 = (V2,E2) of G1 = (V1,E1) is an injection from V2 to V1 such that every edge in E2 corresponds to a path (disjoint from all other such paths) in G1 such that every vertex in V1 is in one or more paths, or is part of the injection from V2 to V1 . This can alternatively be phrased in terms of contractions, which are operations which collapse a path and all vertices on it into a single edge (see Minor (graph theory)).

22.1.9 Embedding

An embedding G2 = (V2,E2) of G1 = (V1,E1) is an injection from V2 to V1 such that every edge in E2 corresponds to a path in G1.[4]

22.2 Adjacency and degree

In graph theory, degree, especially that of a vertex, is usually a measure of immediate adjacency. An edge connects two vertices; these two vertices are said to be incident to that edge, or, equivalently, that edge is incident to those two vertices. All degree-related concepts have to do with adjacency or incidence. The degree, or valency, dG(v) of a vertex v in a graph G is the number of edges incident to v, with loops being counted twice. A vertex of degree 0 is an isolated vertex. A vertex of degree 1 is a leaf. In the example labeled simple graph, vertices 1 and 3 have a degree of 2, vertices 2, 4 and 5 have a degree of 3, and vertex 6 has a degree of 1. If E is finite, then the total sum of vertex degrees is equal to twice the number of edges. The total degree of a graph is the sum of the degrees of all its vertices. Thus, for a graph without loops, it is equal to the number of incidences between vertices and edges. The handshaking lemma states that the total degree is always equal to two times the number of edges, loops included. This means that for a simple graph with 3 vertices with each vertex having a degree of two (i.e. a triangle) the total degree would be six (3 × 2 = 6).
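The degree of every vertex, and with it the handshaking lemma, can be checked directly. A minimal sketch for simple graphs (edges as two-element frozensets, so loops are not handled):

```python
def degree_map(vertices, edges):
    # Degree of each vertex of a simple graph: the number of
    # incident edges.  Each edge contributes 1 to both endpoints.
    d = {v: 0 for v in vertices}
    for e in edges:
        for v in e:
            d[v] += 1
    return d
```

On the example labeled simple graph, this reproduces the degrees listed above (vertices 1 and 3 have degree 2; vertices 2, 4 and 5 have degree 3; vertex 6 has degree 1), and the total degree 14 equals twice the number of edges, 2 × 7, as the handshaking lemma requires.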

A degree sequence is a list of degrees of a graph in non-increasing order (e.g. d1 ≥ d2 ≥ … ≥ dn). A sequence of non-increasing integers is realizable if it is a degree sequence of some graph. Two vertices u and v are called adjacent if an edge exists between them. We denote this by u ~ v or u ↔ v. In the above graph, vertices 1 and 2 are adjacent, but vertices 2 and 4 are not. The set of neighbors of v, that is, vertices adjacent to v not including v itself, forms an induced subgraph called the (open) neighborhood of v and denoted NG(v). When v is also included, it is called a closed neighborhood and denoted by NG[v]. When stated without any qualification, a neighborhood is assumed to be open. The subscript G is usually dropped when there is no danger of confusion; the same neighborhood notation may also be used to refer to sets of adjacent vertices rather than the corresponding induced subgraphs. In the example labeled simple graph, vertex 1 has two neighbors: vertices 2 and 5. For a simple graph, the number of neighbors that a vertex has coincides with its degree. A dominating set of a graph is a vertex subset whose closed neighborhood includes all vertices of the graph. A vertex v dominates another vertex u if there is an edge from v to u. A vertex subset V dominates another vertex subset U if every vertex in U is adjacent to some vertex in V. The minimum size of a dominating set is the domination number γ(G). In computers, a finite, directed or undirected graph (with n vertices, say) is often represented by its adjacency matrix: an n-by-n matrix whose entry in row i and column j gives the number of edges from the i-th to the j-th vertex. Spectral graph theory studies relationships between the properties of a graph and its adjacency matrix or other matrices associated with the graph. The maximum degree Δ(G) of a graph G is the largest degree over all vertices; the minimum degree δ(G), the smallest. A graph in which every vertex has the same degree is regular.
It is k-regular if every vertex has degree k. A 0-regular graph is an independent set. A 1-regular graph is a matching. A 2-regular graph is a vertex disjoint union of cycles. A 3-regular graph is said to be cubic, or trivalent. A k-factor is a k-regular spanning subgraph. A 1-factor is a perfect matching. A partition of edges of a graph into k-factors is called a k-factorization. A k-factorable graph is a graph that admits a k-factorization. A graph is biregular if it has unequal maximum and minimum degrees and every vertex has one of those two degrees. A strongly regular graph is a regular graph such that any adjacent vertices have the same number of common neighbors as other adjacent pairs and that any nonadjacent vertices have the same number of common neighbors as other nonadjacent pairs.

22.2.1 Independence

In graph theory, the word independent usually carries the connotation of pairwise disjoint or mutually nonadjacent. In this sense, independence is a form of immediate nonadjacency. An isolated vertex is a vertex not incident to any edges. An independent set, or coclique, or stable set, is a set of vertices of which no pair is adjacent. Since the graph induced by any independent set is an empty graph, the two terms are usually used interchangeably. In the example labeled simple graph at the top of this page, vertices 1, 3, and 6 form an independent set; and 2 and 4 form another one. Two subgraphs are edge disjoint if they have no edges in common. Similarly, two subgraphs are vertex disjoint if they have no vertices (and thus, also no edges) in common. Unless specified otherwise, a set of disjoint subgraphs is assumed to be pairwise vertex disjoint. The independence number α(G) of a graph G is the size of the largest independent set of G. A graph can be decomposed into independent sets in the sense that the entire vertex set of the graph can be partitioned into pairwise disjoint independent subsets. Such independent subsets are called partite sets, or simply parts. A graph that can be decomposed into two partite sets is bipartite; into three sets, tripartite; into k sets, k-partite; and into an unknown number of sets, multipartite. A 1-partite graph is the same as an independent set, or an empty graph. A 2-partite graph is the same as a bipartite graph. A graph that can be decomposed into k partite sets is also said to be k-colourable. A complete multipartite graph is a graph in which vertices are adjacent if and only if they belong to different partite sets. A complete bipartite graph is also referred to as a biclique; if its partite sets contain n and m vertices, respectively, then the graph is denoted Kn,m.
A k-partite graph is semiregular if each of its partite sets has a uniform degree; equipartite if each partite set has the same size; and balanced k-partite if each partite set differs in size from any other by at most 1. The matching number α′(G) of a graph G is the size of a largest matching, or set of pairwise vertex disjoint edges, of G. A spanning matching, also called a perfect matching, is a matching that covers all vertices of a graph.
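Whether a graph is bipartite, i.e. can be split into two partite sets, can be decided by attempting a 2-colouring with breadth-first search; the attempt fails exactly when some component contains an odd cycle. A minimal sketch (simple graph, frozenset edges):

```python
from collections import deque

def bipartition(vertices, edges):
    # Try to 2-colour the graph by BFS; return the two partite sets,
    # or None when some component contains an odd cycle.
    adj = {v: set() for v in vertices}
    for e in edges:
        u, v = tuple(e)
        adj[u].add(v)
        adj[v].add(u)
    colour = {}
    for s in adj:
        if s in colour:
            continue
        colour[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in colour:
                    colour[w] = 1 - colour[u]
                    q.append(w)
                elif colour[w] == colour[u]:
                    return None  # odd cycle found
    part0 = {v for v, c in colour.items() if c == 0}
    return part0, set(adj) - part0
```

The 4-cycle splits into partite sets {1, 3} and {2, 4}; the triangle, being an odd cycle, returns None.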

22.3 Complexity

Complexity of a graph denotes the quantity of information that a graph contains, and can be measured in several ways: for example, by counting the number of its spanning trees, or by the value of a certain formula involving the number of vertices, edges, and proper paths in a graph.[5]

22.4 Connectivity

Connectivity extends the concept of adjacency and is essentially a form (and measure) of concatenated adjacency. If it is possible to establish a path from any vertex to any other vertex of a graph, the graph is said to be connected; otherwise, the graph is disconnected. A graph is totally disconnected if there is no path connecting any pair of vertices. This is just another name to describe an empty graph or independent set. A cut vertex, or articulation point, is a vertex whose removal disconnects the remaining subgraph. A cut set, or vertex cut or separating set, is a set of vertices whose removal disconnects the remaining subgraph. A bridge is an analogous edge (see below). If it is always possible to establish a path from any vertex to every other even after removing any k − 1 vertices, then the graph is said to be k-vertex-connected or k-connected. Note that a graph is k-connected if and only if it contains k internally disjoint paths between any two vertices. The example labeled simple graph above is connected (and therefore 1-connected), but not 2-connected. The vertex connectivity or connectivity κ(G) of a graph G is the minimum number of vertices that need to be removed to disconnect G. The complete graph Kn has connectivity n − 1 for n > 1; and a disconnected graph has connectivity 0. In network theory, a giant component is a connected subgraph that contains a majority of the entire graph’s nodes. A bridge, or cut edge or isthmus, is an edge whose removal disconnects a graph. (For example, all the edges in a tree are bridges.) A cut vertex is an analogous vertex (see above). A disconnecting set is a set of edges whose removal

increases the number of components. An edge cut is the set of all edges which have one vertex in some proper vertex subset S and the other vertex in V(G)\S. Edges of K3 form a disconnecting set but not an edge cut. Any two edges of K3 form a minimal disconnecting set as well as an edge cut. An edge cut is necessarily a disconnecting set; and a minimal disconnecting set of a nonempty graph is necessarily an edge cut. A bond is a minimal (but not necessarily minimum), nonempty set of edges whose removal disconnects a graph. A graph is k-edge-connected if any subgraph formed by removing any k - 1 edges is still connected. The edge connectivity κ'(G) of a graph G is the minimum number of edges needed to disconnect G. One well-known result is that κ(G) ≤ κ'(G) ≤ δ(G). A component is a maximally connected subgraph. A block is either a maximally 2-connected subgraph, a bridge (together with its vertices), or an isolated vertex. A biconnected component is a 2-connected component. An articulation point (also known as a separating vertex) of a graph is a vertex whose removal from the graph increases its number of connected components. A biconnected component can be defined as a subgraph induced by a maximal set of nodes that has no separating vertex.

22.5 Distance

The distance dG(u, v) between two (not necessarily distinct) vertices u and v in a graph G is the length of a shortest path (also called a graph geodesic) between them. The subscript G is usually dropped when there is no danger of confusion. When u and v are identical, their distance is 0. When u and v are unreachable from each other, their distance is defined to be infinity ∞. The eccentricity εG(v) of a vertex v in a graph G is the maximum distance from v to any other vertex. The diameter diam(G) of a graph G is the maximum eccentricity over all vertices in a graph; and the radius rad(G), the minimum. When G has more than one component, diam(G) and rad(G) are defined to be infinity ∞. Trivially, diam(G) ≤ 2 rad(G). Vertices with maximum eccentricity are called peripheral vertices. Vertices of minimum eccentricity form the center. A tree has at most two center vertices. The Wiener index of a vertex v in a graph G, denoted by WG(v), is the sum of distances between v and all others. The Wiener index of a graph G, denoted by W(G), is the sum of distances over all pairs of vertices. An undirected graph’s Wiener polynomial is defined to be Σ qd(u,v) over all unordered pairs of vertices u and v. Wiener index and Wiener polynomial are of particular interest to mathematical chemists. The k-th power Gk of a graph G is a supergraph formed by adding an edge between all pairs of vertices of G with distance at most k. A second power of a graph is also called a square. A k-spanner is a spanning subgraph, S, in which every two vertices are at most k times as far apart on S as on G. The number k is the dilation. k-spanners are used for studying geometric network optimization.
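In an unweighted graph the distances from one vertex are computed by breadth-first search, and eccentricity, diameter, and radius follow directly from the definitions above. A minimal sketch, assuming the graph is given as an adjacency dict mapping every vertex to its neighbor set:

```python
from collections import deque

def bfs_distances(adj, s):
    # d_G(s, v) for every vertex v; unreachable vertices keep distance infinity.
    dist = {v: float("inf") for v in adj}
    dist[s] = 0
    q = deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if dist[w] == float("inf"):
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def diameter_and_radius(adj):
    # Eccentricity of v = max distance from v; the diameter and radius
    # are the maximum and minimum eccentricity over all vertices.
    ecc = {v: max(bfs_distances(adj, v).values()) for v in adj}
    return max(ecc.values()), min(ecc.values())
```

For the path 1–2–3, the endpoints have eccentricity 2 and the middle vertex 1, so the diameter is 2 and the radius is 1, consistent with diam(G) ≤ 2 rad(G).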

22.6 Genus

A crossing is a pair of intersecting edges. A graph is embeddable on a surface if its vertices and edges can be arranged on it without any crossing. The genus of a graph is the lowest genus of any surface on which the graph can embed. A planar graph is one which can be drawn on the (Euclidean) plane without any crossing; and a plane graph, one which is drawn in such fashion. In other words, a planar graph is a graph of genus 0. The example labeled simple graph is planar; the complete graph on n vertices, for n > 4, is not planar. Also, a tree is necessarily a planar graph. When a graph is drawn without any crossing, any cycle that surrounds a region without any edges reaching from the cycle into the region forms a face. Two faces on a plane graph are adjacent if they share a common edge. A dual, or planar dual when the context needs to be clarified, G* of a plane graph G is a graph whose vertices represent the faces, including any outer face, of G and are adjacent in G* if and only if their corresponding faces are adjacent in G. The dual of a planar graph is always a planar pseudograph (e.g. consider the dual of a triangle). In the familiar case of a 3-connected simple planar graph G (isomorphic to a convex polyhedron P), the dual G* is also a 3-connected simple planar graph (and isomorphic to the dual polyhedron P*). Furthermore, since we can establish a sense of “inside” and “outside” on a plane, we can identify an “outermost” region that contains the entire graph if the graph does not cover the entire plane. Such an outermost region is called

an outer face. An outerplanar graph is one which can be drawn in the planar fashion such that its vertices are all adjacent to the outer face; and an outerplane graph, one which is drawn in such fashion. The minimum number of crossings that must appear when a graph is drawn on a plane is called the crossing number. The minimum number of planar graphs needed to cover a graph is the thickness of the graph.

22.7 Weighted graphs and networks

A weighted graph associates a label (weight) with every edge in the graph. Weights are usually real numbers. They may be restricted to rational numbers or integers. Certain algorithms require further restrictions on weights; for instance, Dijkstra’s algorithm works properly only for positive weights. The weight of a path or the weight of a tree in a weighted graph is the sum of the weights of the selected edges. Sometimes a non-edge (a vertex pair with no connecting edge) is indicated by labeling it with a special weight representing infinity. Sometimes the word cost is used instead of weight. When stated without any qualification, a graph is always assumed to be unweighted. In some writing on graph theory the term network is a synonym for a weighted graph. A network may be directed or undirected, it may contain special vertices (nodes), such as source or sink. The classical network problems include:

• minimum cost spanning tree, • shortest paths, • maximal flow (and the max-flow min-cut theorem)

22.8 Direction

Main article: Digraph (mathematics)

A directed arc, or directed edge, is an ordered pair of endvertices that can be represented graphically as an arrow drawn between the endvertices. In such an ordered pair the first vertex is called the initial vertex or tail; the second one is called the terminal vertex or head (because it appears at the arrow head). An undirected edge disregards any sense of direction and treats both endvertices interchangeably. A loop in a digraph, however, keeps a sense of direction and treats both head and tail identically. A set of arcs are multiple, or parallel, if they share the same head and the same tail. A pair of arcs are anti-parallel if one’s head/tail is the other’s tail/head. A digraph, or directed graph, or oriented graph, is analogous to an undirected graph except that it contains only arcs. A mixed graph may contain both directed and undirected edges; it generalizes both directed and undirected graphs. When stated without any qualification, a graph is almost always assumed to be undirected. A digraph is called simple if it has no loops and at most one arc between any pair of vertices. When stated without any qualification, a digraph is usually assumed to be simple. A quiver is a directed graph which is specifically allowed, but not required, to have loops and more than one arc between any pair of vertices. In a digraph Γ, we distinguish the out degree dΓ+(v), the number of edges leaving a vertex v, and the in degree dΓ−(v), the number of edges entering a vertex v. If the graph is oriented, the degree dΓ(v) of a vertex v is equal to the sum of its out- and in- degrees. When the context is clear, the subscript Γ can be dropped. Maximum and minimum out degrees are denoted by Δ+(Γ) and δ+(Γ); and maximum and minimum in degrees, Δ−(Γ) and δ−(Γ). An out-neighborhood, or successor set, N+Γ(v) of a vertex v is the set of heads of arcs going from v. Likewise, an in-neighborhood, or predecessor set, N−Γ(v) of a vertex v is the set of tails of arcs going into v. 
A source is a vertex with 0 in-degree; and a sink, 0 out-degree. A vertex v dominates another vertex u if there is an arc from v to u. A vertex subset S is out-dominating if every vertex not in S is dominated by some vertex in S; and in-dominating if every vertex in S is dominated by some vertex not in S. A kernel in a (possibly directed) graph G is an independent set S such that every vertex in V(G) \ S dominates some vertex in S. In undirected graphs, kernels are maximal independent sets.[6] A digraph is kernel perfect if every induced sub-digraph has a kernel.[7] An Eulerian digraph is a digraph with equal in- and out-degrees at every vertex.

The zweieck of an undirected edge e = (u, v) is the pair of diedges (u, v) and (v, u) which form the simple dicircuit. An orientation is an assignment of directions to the edges of an undirected or partially directed graph. When stated without any qualification, it is usually assumed that all undirected edges are replaced by a directed one in an orientation. Also, the underlying graph is usually assumed to be undirected and simple. A tournament is a digraph in which each pair of vertices is connected by exactly one arc. In other words, it is an oriented complete graph. A directed path, or just a path when the context is clear, is an oriented simple path such that all arcs go the same direction, meaning all internal vertices have in- and out-degrees 1. A vertex v is reachable from another vertex u if there is a directed path that starts from u and ends at v. Note that in general the condition that u is reachable from v does not imply that v is also reachable from u. If v is reachable from u, then u is a predecessor of v and v is a successor of u. If there is an arc from u to v, then u is a direct predecessor of v, and v is a direct successor of u. A digraph is strongly connected if every vertex is reachable from every other following the directions of the arcs. On the contrary, a digraph is weakly connected if its underlying undirected graph is connected. A weakly connected graph can be thought of as a digraph in which every vertex is “reachable” from every other but not necessarily following the directions of the arcs. A strong orientation is an orientation that produces a strongly connected digraph. A directed cycle, or just a cycle when the context is clear, is an oriented simple cycle such that all arcs go the same direction, meaning all vertices have in- and out-degrees 1. A digraph is acyclic if it does not contain any directed cycle. A finite, acyclic digraph with no isolated vertices necessarily contains at least one source and at least one sink. 
An arborescence, or out-tree or branching, is an oriented tree in which all vertices are reachable from a single vertex. Likewise, an in-tree is an oriented tree in which a single vertex is reachable from every other one.

22.8.1 Directed acyclic graphs

Main article: directed acyclic graph

The partial order structure of directed acyclic graphs (or DAGs) gives them their own terminology. If there is a directed edge from u to v, then we say u is a parent of v and v is a child of u. If there is a directed path from u to v, we say u is an ancestor of v and v is a descendant of u. The moral graph of a DAG is the undirected graph created by adding an (undirected) edge between all parents of the same node (sometimes called marrying), and then replacing all directed edges by undirected edges. A DAG is perfect if, for each node, the set of parents is complete (i.e. no new edges need to be added when forming the moral graph).

22.9 Colouring

Main article: Graph colouring Vertices in graphs can be given colours to identify or label them. Although they may actually be rendered in diagrams in different colours, working mathematicians generally pencil in numbers or letters (usually numbers) to represent the colours. Given a graph G = (V, E), a k-colouring of G is a map ϕ : V → {1, ..., k} with the property that (u, v) ∈ E ⇒ ϕ(u) ≠ ϕ(v); in other words, every vertex is assigned a colour with the condition that adjacent vertices cannot be assigned the same colour. The chromatic number χ(G) is the smallest k for which G has a k-colouring. Given a graph and a colouring, the colour classes of the graph are the sets of vertices given the same colour. A graph is called k-critical if its chromatic number is k but all of its proper subgraphs have chromatic number less than k. An odd cycle is 3-critical, and the complete graph on k vertices is k-critical.
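The defining condition of a proper colouring can be checked edge by edge. A minimal sketch (simple graph, edges as frozensets, `phi` a dict from vertices to colours):

```python
def is_proper_colouring(edges, phi):
    # phi is a proper colouring iff the two endpoints of every
    # edge receive different colours.
    return all(len({phi[v] for v in e}) == len(e) for e in edges)
```

A triangle admits a proper 3-colouring but no proper 2-colouring, reflecting the fact that an odd cycle is 3-critical.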

This graph is an example of a 4-critical graph. Its chromatic number is 4 but all of its proper subgraphs have a chromatic number less than 4. This graph is also planar.

22.10 Various

A graph invariant is a property of a graph G, usually a number or a polynomial, that depends only on the isomorphism class of G. Examples are the order, genus, chromatic number, and chromatic polynomial of a graph.

22.11 See also

• Graph (mathematics)

• List of graph theory topics

22.12 References

[1] Harris, John M. (2000). Combinatorics and Graph Theory. New York: Springer-Verlag. p. 5. ISBN 0-387-98736-3.

[2] Brandstädt, Andreas; Le, Van Bang; Spinrad, Jeremy (1999), “Chapter 7: Forbidden Subgraph”, Graph Classes: A Survey, SIAM Monographs on Discrete Mathematics and Applications, pp. 105–121, ISBN 0-89871-432-X.

[3] Mitchem, John (1969), “Hypo-properties in graphs”, The Many Facets of Graph Theory (Proc. Conf., Western Mich. Univ., Kalamazoo, Mich., 1968), Springer, pp. 223–230, doi:10.1007/BFb0060121, MR 0253932; Bondy, J. A. (1972), “The “graph theory” of the Greek alphabet”, Graph theory and applications (Proc. Conf., Western Michigan Univ., Kalamazoo, Mich., 1972; dedicated to the memory of J. W. T. Youngs), Lecture Notes in Mathematics 303, Springer, pp. 43–54, doi:10.1007/BFb0067356, MR 0335362.

[4] Rosenberg, Arnold L.; Heath, Lenwood S. (2001). Graph Separators, with Applications (1st ed.). Kluwer. ISBN 978-0-306-46464-5.

[5] Neel, David L. (2006), “The linear complexity of a graph”, The electronic journal of combinatorics

[6] Bondy, J.A., Murty, U.S.R., Graph Theory, p. 298

[7] Béla Bollobás, Modern Graph theory, p. 298

• Bollobás, Béla (1998). Modern Graph Theory. Graduate Texts in Mathematics 184. New York: Springer-Verlag. ISBN 0-387-98488-7. Zbl 0902.05016. [Packed with advanced topics followed by a historical overview at the end of each chapter.]

• Brandstädt, Andreas; Le, Van Bang; Spinrad, Jeremy P. (1999). Graph Classes: A Survey. SIAM Monographs on Discrete Mathematics and Applications 3. Philadelphia, PA: Society for Industrial and Applied Mathematics. ISBN 978-0-898714-32-6. Zbl 0919.05001.

• Diestel, Reinhard (2010). Graph Theory. Graduate Texts in Mathematics 173 (4th ed.). Springer-Verlag. ISBN 978-3-642-14278-9. Zbl 1204.05001. [Standard textbook, most basic material and some deeper results, exercises of various difficulty and notes at the end of each chapter; known for being quasi error-free.]

• West, Douglas B. (2001). Introduction to Graph Theory (2nd ed.). Upper Saddle River: Prentice Hall. ISBN 0-13-014400-2. [Tons of illustrations, references, and exercises. The most complete introductory guide to the subject.]

• Weisstein, Eric W., “Graph”, MathWorld.

• Zaslavsky, Thomas. Glossary of signed and gain graphs and allied areas. Electronic Journal of Combinatorics, Dynamic Surveys in Combinatorics, #DS8. http://www.combinatorics.org/Surveys/

Chapter 23

Graph (mathematics)

This article is about sets of vertices connected by edges. For graphs of mathematical functions, see Graph of a function. For other uses, see Graph (disambiguation).

In mathematics, and more specifically in graph theory, a graph is a representation of a set of objects where some pairs of objects are connected by links. The interconnected objects are represented by mathematical abstractions called vertices, and the links that connect some pairs of vertices are called edges.[1] Typically, a graph is depicted in diagrammatic form as a set of dots for the vertices, joined by lines or curves for the edges. Graphs are one of the objects of study in discrete mathematics.

A drawing of a labeled graph on 6 vertices and 7 edges.

The edges may be directed or undirected. For example, if the vertices represent people at a party, and there is an edge between two people if they shake hands, then this is an undirected graph, because if person A shook hands with person B, then person B also shook hands with person A. In contrast, if there is an edge from person A to person B when person A knows of person B, then this graph is directed, because knowledge of someone is not necessarily a symmetric relation (that is, one person knowing another person does not necessarily imply the reverse; for example, many fans may know of a celebrity, but the celebrity is unlikely to know of all their fans). This latter type of graph is called a directed graph and the edges are called directed edges or arcs.


Vertices are also called nodes or points, and edges are also called arcs or lines. Graphs are the basic subject studied by graph theory. The word “graph” was first used in this sense by J.J. Sylvester in 1878.[2][3]

23.1 Definitions

Definitions in graph theory vary. The following are some of the more basic ways of defining graphs and related mathematical structures.

23.1.1 Graph

In the most common sense of the term,[4] a graph is an ordered pair G = (V, E) comprising a set V of vertices or nodes together with a set E of edges or links, which are 2-element subsets of V (i.e., an edge is related with two vertices, and the relation is represented as an unordered pair of the vertices with respect to the particular edge). To avoid ambiguity, this type of graph may be described precisely as undirected and simple.

Other senses of graph stem from different conceptions of the edge set. In one more generalized notion,[5] E is a set together with a relation of incidence that associates with each edge two vertices. In another generalized notion, E is a multiset of unordered pairs of (not necessarily distinct) vertices. Many authors call this type of object a multigraph or pseudograph. All of these variants and others are described more fully below.

The vertices belonging to an edge are called the ends, endpoints, or end vertices of the edge. A vertex may exist in a graph and not belong to an edge.

V and E are usually taken to be finite, and many of the well-known results are not true (or are rather different) for infinite graphs because many of the arguments fail in the infinite case. Moreover, V is often assumed to be non-empty, but E is allowed to be the empty set. The order of a graph is |V| (the number of vertices). A graph’s size is |E|, the number of edges. The degree of a vertex is the number of edges that connect to it, where an edge that connects to the vertex at both ends (a loop) is counted twice. For an edge {u, v}, graph theorists usually use the somewhat shorter notation uv.
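The order, size, and degrees of a graph can be computed directly from the sets V and E. In this sketch (illustrative, not part of the article) edges are stored as frozensets, so a loop appears as a 1-element set and is counted twice toward its vertex's degree:

```python
from collections import Counter

def order(V, E):
    return len(V)   # |V|, the number of vertices

def size(V, E):
    return len(E)   # |E|, the number of edges

def degrees(V, E):
    """Degree of each vertex; a loop contributes 2 to its vertex."""
    deg = Counter({v: 0 for v in V})
    for e in E:
        ends = list(e)
        if len(ends) == 1:       # a loop: {v, v} collapses to {v}
            deg[ends[0]] += 2
        else:
            deg[ends[0]] += 1
            deg[ends[1]] += 1
    return dict(deg)

V = {1, 2, 3}
E = [frozenset({1, 2}), frozenset({2, 3}), frozenset({3})]  # {3} is a loop at 3
d = degrees(V, E)
assert d == {1: 1, 2: 2, 3: 3}
assert sum(d.values()) == 2 * size(V, E)  # handshake lemma: degrees sum to 2|E|
```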

23.1.2 Adjacency relation

The edges E of an undirected graph G induce a symmetric binary relation ~ on V that is called the adjacency relation of G. Specifically, for each edge {u, v} the vertices u and v are said to be adjacent to one another, which is denoted u ~ v.

23.2 Types of graphs

23.2.1 Distinction in terms of the main definition

As stated above, in different contexts it may be useful to refine the term graph with different degrees of generality. Whenever it is necessary to draw a strict distinction, the following terms are used. Most commonly, in modern texts in graph theory, unless stated otherwise, graph means “undirected simple finite graph” (see the definitions below).

A directed graph.

A simple undirected graph with three vertices and three edges. Each vertex has degree two, so this is also a regular graph.

Undirected graph

An undirected graph is one in which edges have no orientation. The edge (a, b) is identical to the edge (b, a), i.e., they are not ordered pairs, but sets {a, b} (or 2-multisets) of vertices. The maximum number of edges in an undirected graph without a self-loop is n(n − 1)/2.

Directed graph

Main article: Directed graph

A directed graph or digraph is an ordered pair D = (V, A) with

• V a set whose elements are called vertices or nodes, and

• A a set of ordered pairs of vertices, called arcs, directed edges, or arrows.

An arc a = (x, y) is considered to be directed from x to y; y is called the head and x is called the tail of the arc; y is said to be a direct successor of x, and x is said to be a direct predecessor of y. If a path leads from x to y, then y is said to be a successor of x and reachable from x, and x is said to be a predecessor of y. The arc (y, x) is called the arc (x, y) inverted. A directed graph D is called symmetric if, for every arc in D, the corresponding inverted arc also belongs to D. A symmetric loopless directed graph D = (V, A) is equivalent to a simple undirected graph G = (V, E), where the pairs of inverse arcs in A correspond 1-to-1 with the edges in E; thus the edges in G number |E| = |A|/2, or half the number of arcs in D. An oriented graph is a directed graph in which at most one of (x, y) and (y, x) may be arcs.
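The correspondence between a symmetric loopless digraph and its underlying simple undirected graph can be sketched in a few lines (an illustration with hypothetical helper names, not from the article):

```python
def is_symmetric(arcs):
    """True if every arc (x, y) has its inverted arc (y, x) in the set."""
    return all((y, x) in arcs for (x, y) in arcs)

def underlying_edges(arcs):
    """Pairs of inverse arcs collapse to single undirected edges; loops dropped."""
    return {frozenset(a) for a in arcs if a[0] != a[1]}

A = {(1, 2), (2, 1), (2, 3), (3, 2)}
assert is_symmetric(A)
E = underlying_edges(A)
assert len(E) == len(A) // 2   # |E| = |A|/2, as stated above
```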

Mixed graph

Main article: Mixed graph

A mixed graph G is a graph in which some edges may be directed and some may be undirected. It is written as an ordered triple G = (V, E, A) with V, E, and A defined as above. Directed and undirected graphs are special cases.

Multigraph

A loop is an edge (directed or undirected) which starts and ends on the same vertex; these may be permitted or not permitted according to the application. In this context, an edge with two different ends is called a link. The term "multigraph" is generally understood to mean that multiple edges (and sometimes loops) are allowed. Where graphs are defined so as to allow loops and multiple edges, a multigraph is often defined to mean a graph without loops;[6] however, where graphs are defined so as to disallow loops and multiple edges, the term is often defined to mean a “graph” which can have both multiple edges and loops,[7] although many use the term "pseudograph" for this meaning.[8]

Quiver

A quiver or “multidigraph” is a directed graph which may have more than one arrow from a given source to a given target. A quiver may also have directed loops in it.

Simple graph

As opposed to a multigraph, a simple graph is an undirected graph that has no loops (edges connected at both ends to the same vertex) and no more than one edge between any two different vertices. In a simple graph the edges of the graph form a set (rather than a multiset) and each edge is a pair of distinct vertices. In a simple graph with n vertices, the degree of every vertex is at most n-1.
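The two defining conditions of a simple graph (no loops, no repeated edges) are easy to check mechanically; a sketch, not from the article:

```python
def is_simple(edges):
    """True if there are no loops and no repeated edges between a vertex pair."""
    seen = set()
    for u, v in edges:
        if u == v:                 # a loop
            return False
        e = frozenset((u, v))      # unordered, so (u, v) and (v, u) coincide
        if e in seen:              # a multiple edge
            return False
        seen.add(e)
    return True

assert is_simple([(1, 2), (2, 3)])
assert not is_simple([(1, 1)])          # loop
assert not is_simple([(1, 2), (2, 1)])  # same edge listed twice
```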

Weighted graph

A graph is a weighted graph if a number (weight) is assigned to each edge.[9] Such weights might represent, for example, costs, lengths or capacities, depending on the problem at hand. Some authors call such a graph a network.[10] Weighted correlation networks can be defined by soft-thresholding the pairwise correlations among variables (e.g. gene measurements).

Half-edges, loose edges

In certain situations it can be helpful to allow edges with only one end, called half-edges, or no ends (loose edges); see for example signed graphs and biased graphs.

23.2.2 Important graph classes

Regular graph

Main article: Regular graph

A regular graph is a graph where each vertex has the same number of neighbours, i.e., every vertex has the same degree or valency. A regular graph with vertices of degree k is called a k‑regular graph or regular graph of degree k.

Complete graph

Main article: Complete graph

Complete graphs have the feature that each pair of vertices has an edge connecting them.

Finite and infinite graphs

A finite graph is a graph G = (V, E) such that V and E are finite sets. An infinite graph is one with an infinite set of vertices or edges or both. Most commonly in graph theory it is implied that the graphs discussed are finite. If the graphs are infinite, that is usually specifically stated.

Graph classes in terms of connectivity

Main article: Connectivity (graph theory)

A complete graph with 5 vertices. Each vertex has an edge to every other vertex.

In an undirected graph G, two vertices u and v are called connected if G contains a path from u to v. Otherwise, they are called disconnected. A graph is called connected if every pair of distinct vertices in the graph is connected; otherwise, it is called disconnected. A graph is called k-vertex-connected or k-edge-connected if no set of k-1 vertices (respectively, edges) exists that, when removed, disconnects the graph. A k-vertex-connected graph is often called simply k-connected. A directed graph is called weakly connected if replacing all of its directed edges with undirected edges produces a connected (undirected) graph. It is strongly connected or strong if it contains a directed path from u to v and a directed path from v to u for every pair of vertices u, v.
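Connectivity of an undirected graph can be tested with a breadth-first search from any single vertex; a minimal sketch (not from the article):

```python
from collections import deque

def is_connected(V, E):
    """BFS from one vertex; the graph is connected iff every vertex is reached."""
    V = list(V)
    if not V:
        return True
    adj = {v: set() for v in V}
    for u, w in E:
        adj[u].add(w)
        adj[w].add(u)
    seen, queue = {V[0]}, deque([V[0]])
    while queue:
        u = queue.popleft()
        for w in adj[u] - seen:
            seen.add(w)
            queue.append(w)
    return len(seen) == len(V)

assert is_connected({1, 2, 3}, [(1, 2), (2, 3)])
assert not is_connected({1, 2, 3, 4}, [(1, 2), (3, 4)])  # two components
```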

Category of all graphs

The category of all graphs is the slice category Set ↓ D where D : Set → Set is the functor taking a set s to s × s .

23.3 Properties of graphs

See also: Glossary of graph theory and Graph property

Two edges of a graph are called adjacent if they share a common vertex. Two arrows of a directed graph are called consecutive if the head of the first one is at the tail of the second one. Similarly, two vertices are called adjacent if they share a common edge (consecutive if they are at the tail and at the head of an arrow), in which case the common edge is said to join the two vertices. An edge and a vertex on that edge are called incident.

The graph with only one vertex and no edges is called the trivial graph. A graph with only vertices and no edges is known as an edgeless graph. The graph with no vertices and no edges is sometimes called the null graph or empty graph, but the terminology is not consistent and not all mathematicians allow this object.

In a weighted graph or digraph, each edge is associated with some value, variously called its cost, weight, length or other term depending on the application; such graphs arise in many contexts, for example in optimal routing problems such as the traveling salesman problem.

Normally, the vertices of a graph, by their nature as elements of a set, are distinguishable. This kind of graph may be called vertex-labeled. However, for many questions it is better to treat vertices as indistinguishable; then the graph may be called unlabeled. (Of course, the vertices may still be distinguishable by the properties of the graph itself, e.g., by the numbers of incident edges.) The same remarks apply to edges, so graphs with labeled edges are called edge-labeled graphs. Graphs with labels attached to edges or vertices are more generally designated as labeled. Consequently, graphs in which vertices are indistinguishable and edges are indistinguishable are called unlabeled. (Note that in the literature the term labeled may apply to other kinds of labeling, besides that which serves only to distinguish different vertices or edges.)

23.4 Examples

A graph with six nodes.

• The diagram at right is a graphic representation of the following graph:

V = {1, 2, 3, 4, 5, 6} E = {{1, 2}, {1, 5}, {2, 3}, {2, 5}, {3, 4}, {4, 5}, {4, 6}}.

• In category theory a small category can be represented by a directed multigraph in which the objects of the category are represented as vertices and the morphisms as directed edges. Then, the functors between categories induce some, but not necessarily all, of the digraph morphisms of the graph.

• In computer science, directed graphs are used to represent knowledge (e.g., Conceptual graph), finite state machines, and many other discrete structures.

• A binary relation R on a set X defines a directed graph. An element x of X is a direct predecessor of an element y of X iff xRy.

• A directed edge can model information networks such as Twitter, with one user following another.[11]
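The first example above, V = {1, ..., 6} with its seven edges, can be stored concretely as an adjacency list; a sketch (illustrative, not part of the article):

```python
V = {1, 2, 3, 4, 5, 6}
E = {(1, 2), (1, 5), (2, 3), (2, 5), (3, 4), (4, 5), (4, 6)}

# Build the adjacency list: each vertex maps to its set of neighbours.
adj = {v: set() for v in V}
for u, w in E:
    adj[u].add(w)
    adj[w].add(u)

assert adj[5] == {1, 2, 4}                           # vertex 5 is adjacent to 1, 2 and 4
assert sum(len(n) for n in adj.values()) == 2 * len(E)  # degrees sum to 2|E|
```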

23.5 Important graphs

Basic examples are:

• In a complete graph, each pair of vertices is joined by an edge; that is, the graph contains all possible edges.

• In a bipartite graph, the vertex set can be partitioned into two sets, W and X, so that no two vertices in W are adjacent and no two vertices in X are adjacent. Alternatively, it is a graph with a chromatic number of 2.

• In a complete bipartite graph, the vertex set is the union of two disjoint sets, W and X, so that every vertex in W is adjacent to every vertex in X but there are no edges within W or X.

• In a linear graph or path graph of length n, the vertices can be listed in order, v0, v1, ..., vn, so that the edges are vᵢ₋₁vᵢ for each i = 1, 2, ..., n. If a linear graph occurs as a subgraph of another graph, it is a path in that graph.

• In a cycle graph of length n ≥ 3, vertices can be named v1, ..., vn so that the edges are vᵢ₋₁vᵢ for each i = 2, ..., n in addition to vnv1. Cycle graphs can be characterized as connected 2-regular graphs. If a cycle graph occurs as a subgraph of another graph, it is a cycle or circuit in that graph.

• A planar graph is a graph whose vertices and edges can be drawn in a plane such that no two of the edges intersect (i.e., embedded in a plane).

• A tree is a connected graph with no cycles.

• A forest is a graph with no cycles (i.e. the disjoint union of one or more trees).
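The path and cycle graphs listed above can be generated programmatically, which also makes their defining properties checkable; a sketch (not from the article):

```python
from collections import Counter

def path_edges(n):
    """Edges v_{i-1} v_i of the path graph on vertices 0, 1, ..., n."""
    return [(i - 1, i) for i in range(1, n + 1)]

def cycle_edges(n):
    """Edges of the cycle graph on vertices 1, ..., n (n >= 3)."""
    return [(i - 1, i) for i in range(2, n + 1)] + [(n, 1)]

assert len(path_edges(4)) == 4    # a path of length n has n edges
assert len(cycle_edges(5)) == 5   # so does a cycle of length n

# Cycle graphs are 2-regular: every vertex has degree exactly 2.
deg = Counter(v for e in cycle_edges(5) for v in e)
assert all(d == 2 for d in deg.values())
```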

More advanced kinds of graphs are:

• The Petersen graph and its generalizations

• Perfect graphs

• Cographs

• Chordal graphs

• Other graphs with large automorphism groups: vertex-transitive, arc-transitive, and distance-transitive graphs.

• Strongly regular graphs and their generalization distance-regular graphs.

23.6 Operations on graphs

Main article: Operations on graphs

There are several operations that produce new graphs from old ones, which might be classified into the following categories:

• Elementary operations, sometimes called “editing operations” on graphs, which create a new graph from the original one by a simple, local change, such as addition or deletion of a vertex or an edge, merging and splitting of vertices, etc.

• Graph rewrite operations, replacing the occurrence of some pattern graph within the host graph by an instance of the corresponding replacement graph.

• Unary operations, which create a significantly new graph from the old one. Examples:

  • Line graph
  • Dual graph
  • Complement graph

• Binary operations, which create a new graph from two initial graphs. Examples:

  • Disjoint union of graphs
  • Cartesian product of graphs
  • Tensor product of graphs
  • Strong product of graphs
  • Lexicographic product of graphs
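Two of the operations above, the complement (unary) and the disjoint union (binary), are short enough to write out directly for simple graphs; a sketch (illustrative, not part of the article):

```python
from itertools import combinations

def complement(V, E):
    """Edges of the complement: exactly the vertex pairs NOT joined in G."""
    E = {frozenset(e) for e in E}
    return {frozenset(p) for p in combinations(sorted(V), 2)} - E

def disjoint_union(V1, E1, V2, E2):
    """Tag vertices by side so the two vertex sets cannot collide."""
    V = {(0, v) for v in V1} | {(1, v) for v in V2}
    E = ({frozenset({(0, u), (0, w)}) for u, w in E1}
         | {frozenset({(1, u), (1, w)}) for u, w in E2})
    return V, E

# The complement of a triangle has no edges.
assert complement({1, 2, 3}, [(1, 2), (2, 3), (1, 3)]) == set()
# The disjoint union of two single edges has 4 vertices and 2 edges.
V, E = disjoint_union({1, 2}, [(1, 2)], {1, 2}, [(1, 2)])
assert len(V) == 4 and len(E) == 2
```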

23.7 Generalizations

In a hypergraph, an edge can join more than two vertices. An undirected graph can be seen as a simplicial complex consisting of 1-simplices (the edges) and 0-simplices (the vertices). As such, complexes are generalizations of graphs since they allow for higher-dimensional simplices. Every graph gives rise to a matroid. In model theory, a graph is just a structure. But in that case, there is no limitation on the number of edges: it can be any cardinal number, see continuous graph. In computational biology, power graph analysis introduces power graphs as an alternative representation of undirected graphs. In geographic information systems, geometric networks are closely modeled after graphs, and borrow many concepts from graph theory to perform spatial analysis on road networks or utility grids.

23.8 See also

• Conceptual graph

• Dual graph

• Glossary of graph theory

• Graph (data structure)

• Graph database

• Graph drawing

• Graph theory

• Hypergraph

• List of graph theory topics

• List of publications in graph theory

• Network theory

23.9 Notes

[1] Trudeau, Richard J. (1993). Introduction to Graph Theory (Corrected, enlarged republication. ed.). New York: Dover Pub. p. 19. ISBN 978-0-486-67870-2. Retrieved 8 August 2012. A graph is an object consisting of two sets called its vertex set and its edge set.

[2] See:

• J. J. Sylvester (February 7, 1878) “Chemistry and algebra,” Nature, 17 : 284. From page 284: “Every invariant and covariant thus becomes expressible by a graph precisely identical with a Kekuléan diagram or chemicograph.” • J. J. Sylvester (1878) “On an application of the new atomic theory to the graphical representation of the invariants and covariants of binary quantics, — with three appendices,” American Journal of Mathematics, Pure and Applied, 1 (1) : 64-90. The term “graph” first appears in this paper on page 65.

[3] Gross, Jonathan L.; Yellen, Jay (2004). Handbook of graph theory. CRC Press. p. 35. ISBN 978-1-58488-090-5.

[4] See, for instance, Iyanaga and Kawada, 69 J, p. 234 or Biggs, p. 4.

[5] See, for instance, Graham et al., p. 5.

[6] For example, see Balakrishnan, p. 1, Gross (2003), p. 4, and Zwillinger, p. 220.

[7] For example, see. Bollobás, p. 7 and Diestel, p. 25.

[8] Gross (1998), p. 3, Gross (2003), p. 205, Harary, p.10, and Zwillinger, p. 220.

[9] Fletcher, Peter; Hoyle, Hughes; Patty, C. Wayne (1991). Foundations of Discrete Mathematics (International student ed.). Boston: PWS-KENT Pub. Co. p. 463. ISBN 0-53492-373-9. A weighted graph is a graph in which a number w(e), called its weight, is assigned to each edge e.

[10] Strang, Gilbert (2005), Linear Algebra and Its Applications (4th ed.), Brooks Cole, ISBN 0-03-010567-6

[11] Pankaj Gupta, Ashish Goel, Jimmy Lin, Aneesh Sharma, Dong Wang, and Reza Bosagh Zadeh WTF: The who-to-follow system at Twitter, Proceedings of the 22nd international conference on World Wide Web

23.10 References

• Balakrishnan, V. K. (1997-02-01). Graph Theory (1st ed.). McGraw-Hill. ISBN 0-07-005489-4.

• Berge, Claude (1958). Théorie des graphes et ses applications (in French). Dunod, Paris: Collection Universitaire de Mathématiques, II. pp. viii+277. Translation: -. Dover, New York: Wiley. 2001 [1962].

• Biggs, Norman (1993). Algebraic Graph Theory (2nd ed.). Cambridge University Press. ISBN 0-521-45897-8.

• Bollobás, Béla (2002-08-12). Modern Graph Theory (1st ed.). Springer. ISBN 0-387-98488-7.

• Bang-Jensen, J.; Gutin, G. (2000). Digraphs: Theory, Algorithms and Applications. Springer.

• Diestel, Reinhard (2005). Graph Theory (3rd ed.). Berlin, New York: Springer-Verlag. ISBN 978-3-540-26183-4.

• Graham, R. L.; Grötschel, M.; Lovász, L., ed. (1995). Handbook of Combinatorics. MIT Press. ISBN 0-262-07169-X.

• Gross, Jonathan L.; Yellen, Jay (1998-12-30). Graph Theory and Its Applications. CRC Press. ISBN 0-8493-3982-0.

• Gross, Jonathan L.; Yellen, Jay, ed. (2003-12-29). Handbook of Graph Theory. CRC. ISBN 1-58488-090-2.

• Harary, Frank (January 1995). Graph Theory. Addison Wesley Publishing Company. ISBN 0-201-41033-8.

• Iyanaga, Shôkichi; Kawada, Yukiyosi (1977). Encyclopedic Dictionary of Mathematics. MIT Press. ISBN 0-262-09016-3.

• Zwillinger, Daniel (2002-11-27). CRC Standard Mathematical Tables and Formulae (31st ed.). Chapman & Hall/CRC. ISBN 1-58488-291-3.

23.11 Further reading

• Trudeau, Richard J. (1993). Introduction to Graph Theory (Corrected, enlarged republication. ed.). New York: Dover Publications. ISBN 978-0-486-67870-2. Retrieved 8 August 2012.

23.12 External links

• Weisstein, Eric W., “Graph”, MathWorld.

Chapter 24

Graph automorphism

In the mathematical field of graph theory, an automorphism of a graph is a form of symmetry in which the graph is mapped onto itself while preserving the edge–vertex connectivity. Formally, an automorphism of a graph G = (V, E) is a permutation σ of the vertex set V, such that the pair of vertices (u, v) forms an edge if and only if the pair (σ(u), σ(v)) also forms an edge. That is, it is a graph isomorphism from G to itself. Automorphisms may be defined in this way both for directed graphs and for undirected graphs. The composition of two automorphisms is another automorphism, and the set of automorphisms of a given graph, under the composition operation, forms a group, the automorphism group of the graph. In the opposite direction, by Frucht’s theorem, all groups can be represented as the automorphism group of a connected graph – indeed, of a cubic graph.[1][2]
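The definition can be checked by brute force on very small graphs; a sketch (exponential in |V|, for illustration only):

```python
from itertools import permutations

def automorphisms(V, E):
    """All permutations sigma of V with {sigma(u), sigma(v)} an edge iff {u, v} is."""
    V = sorted(V)
    E = {frozenset(e) for e in E}
    found = []
    for perm in permutations(V):
        sigma = dict(zip(V, perm))
        if {frozenset({sigma[u], sigma[v]}) for u, v in E} == E:
            found.append(sigma)
    return found

# The path 1-2-3 has exactly two automorphisms: the identity and the flip 1 <-> 3.
assert len(automorphisms({1, 2, 3}, [(1, 2), (2, 3)])) == 2
# The triangle is mapped onto itself by all 3! = 6 permutations.
assert len(automorphisms({1, 2, 3}, [(1, 2), (2, 3), (1, 3)])) == 6
```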

24.1 Computational complexity

Constructing the automorphism group is at least as difficult (in terms of its computational complexity) as solving the graph isomorphism problem, determining whether two given graphs correspond vertex-for-vertex and edge-for-edge. Indeed, G and H are isomorphic if and only if the disconnected graph formed by the disjoint union of graphs G and H has an automorphism that swaps the two components.[3] In fact, just counting the automorphisms is polynomial-time equivalent to graph isomorphism.[4]

The graph automorphism problem is the problem of testing whether a graph has a nontrivial automorphism. It belongs to the class NP of computational complexity. As with the graph isomorphism problem, it is unknown whether it has a polynomial time algorithm or is NP-complete.[5] There is a polynomial time algorithm for solving the graph automorphism problem for graphs where vertex degrees are bounded by a constant.[3] The graph automorphism problem is polynomial-time many-one reducible to the graph isomorphism problem, but the converse reduction is unknown.[6][7][8] By contrast, hardness is known when the automorphisms are constrained in a certain fashion; for instance, determining the existence of a fixed-point-free automorphism (an automorphism that fixes no vertex) is NP-complete, and the problem of counting such automorphisms is #P-complete.[5][8]

24.2 Algorithms, software and applications

While no worst-case polynomial-time algorithms are known for the general Graph Automorphism problem, finding the automorphism group (and printing out an irredundant set of generators) for many large graphs arising in applications is rather easy. Several open-source software tools are available for this task, including NAUTY,[9] BLISS[10] and SAUCY.[11][12] SAUCY and BLISS are particularly efficient for sparse graphs; e.g., SAUCY processes some graphs with millions of vertices in mere seconds. However, BLISS and NAUTY can also produce canonical labelings, whereas SAUCY is currently optimized for solving Graph Automorphism. An important observation is that for a graph on n vertices, the automorphism group can be specified by no more than n − 1 generators, and the above software packages are guaranteed to satisfy this bound as a side-effect of their algorithms (minimal sets of generators are harder to find and are not particularly useful in practice). It also appears that the total support (i.e., the number of vertices moved) of all generators is limited by a linear function of n, which is important in runtime analysis of these algorithms. However, this has not been established as a fact, as of March 2012.

Practical applications of Graph Automorphism include graph drawing and other visualization tasks, and solving structured instances of Boolean satisfiability arising in the context of formal verification and logistics. Molecular symmetry can predict or explain chemical properties.

This drawing of the Petersen graph displays a subgroup of its symmetries, isomorphic to the dihedral group D5, but the graph has additional symmetries that are not present in the drawing. For example, since the graph is symmetric, all edges are equivalent.

24.3 Symmetry display

Several graph drawing researchers have investigated algorithms for drawing graphs in such a way that the automorphisms of the graph become visible as symmetries of the drawing. This may be done either by using a method that is not designed around symmetries, but that automatically generates symmetric drawings when possible,[13] or by explicitly identifying symmetries and using them to guide vertex placement in the drawing.[14] It is not always possible to display all symmetries of the graph simultaneously, so it may be necessary to choose which symmetries to display and which to leave unvisualized.

24.4 Graph families defined by their automorphisms

Several families of graphs are defined by having certain types of automorphisms:

• An asymmetric graph is an undirected graph without any nontrivial automorphisms.

• A vertex-transitive graph is an undirected graph in which every vertex may be mapped by an automorphism into any other vertex.

• An edge-transitive graph is an undirected graph in which every edge may be mapped by an automorphism into any other edge.

• A symmetric graph is a graph such that every pair of adjacent vertices may be mapped by an automorphism into any other pair of adjacent vertices.

• A distance-transitive graph is a graph such that every pair of vertices may be mapped by an automorphism into any other pair of vertices that are the same distance apart.

• A semi-symmetric graph is a graph that is edge-transitive but not vertex-transitive.

• A half-transitive graph is a graph that is vertex-transitive and edge-transitive but not symmetric.

• A skew-symmetric graph is a directed graph together with a permutation σ on the vertices that maps edges to edges but reverses the direction of each edge. Additionally, σ is required to be an involution.

Inclusion relationships between these families are indicated by the following table:

24.5 See also

• Algebraic graph theory

24.6 References

[1] Frucht, R. (1938), “Herstellung von Graphen mit vorgegebener abstrakter Gruppe”, Compositio Mathematica (in German) 6: 239–250, ISSN 0010-437X, Zbl 0020.07804.

[2] Frucht, R. (1949), “Graphs of degree three with a given abstract group”, Canadian Journal of Mathematics 1 (4): 365–378, doi:10.4153/CJM-1949-033-6, ISSN 0008-414X, MR 0032987.

[3] Luks, Eugene M. (1982), “Isomorphism of graphs of bounded valence can be tested in polynomial time”, Journal of Computer and System Sciences 25 (1): 42–65, doi:10.1016/0022-0000(82)90009-5.

[4] R. Mathon, “A note on the graph isomorphism counting problem”.

[5] Lubiw, Anna (1981), “Some NP-complete problems similar to graph isomorphism”, SIAM Journal on Computing 10 (1): 11–21, doi:10.1137/0210002, MR 605600.

[6] R. Mathon, “A note on the graph isomorphism counting problem”, Information Processing Letters, 8 (1979), pp. 131-132

[7] Köbler, Johannes; Uwe Schöning, Jacobo Torán (1993), Graph Isomorphism Problem: The Structural Complexity, Birkhäuser Verlag, ISBN 0-8176-3680-3, OCLC 246882287

[8] Jacobo Torán, "On the hardness of graph isomorphism", SIAM Journal on Computing, vol. 33 (2004), no. 5, pp. 1093–1108, doi:10.1137/S009753970241096X

[9] McKay, Brendan (1981), “Practical Graph Isomorphism” (PDF), Congressus Numerantium 30: 45–87, retrieved 14 April 2011.

[10] Junttila, Tommi; Kaski, Petteri (2007), “Engineering an efficient canonical labeling tool for large and sparse graphs” (PDF), Proceedings of the Ninth Workshop on Algorithm Engineering and Experiments (ALENEX07). 24.7. EXTERNAL LINKS 145

[11] Darga, Paul; Sakallah, Karem; Markov, Igor L. (June 2008), “Faster Symmetry Discovery using Sparsity of Symmetries” (PDF), Proceedings of the 45th Design Automation Conference: 149–154, doi:10.1145/1391469.1391509, ISBN 978-1-60558-115-6.

[12] Katebi, Hadi; Sakallah, Karem; Markov, Igor L. (July 2010), “Symmetry and Satisfiability: An Update” (PDF), Proc. Satisfiability Symposium (SAT).

[13] Di Battista, Giuseppe; Tamassia, Roberto; Tollis, Ioannis G. (1992), “Area requirement and symmetry display of planar upward drawings”, Discrete and Computational Geometry 7 (1): 381–401, doi:10.1007/BF02187850; Eades, Peter; Lin, Xuemin (2000), “Spring algorithms and symmetry”, Theoretical Computer Science 240 (2): 379–405, doi:10.1016/S0304-3975(99)00239-X.

[14] Hong, Seok-Hee (2002), “Drawing graphs symmetrically in three dimensions”, Proc. 9th Int. Symp. Graph Drawing (GD 2001), Lecture Notes in Computer Science 2265, Springer-Verlag, pp. 106–108, doi:10.1007/3-540-45848-4_16, ISBN 978-3-540-43309-5.

24.7 External links

• Weisstein, Eric W., “Graph automorphism”, MathWorld.

Chapter 25

Graph factorization

Not to be confused with Factor graph.

In graph theory, a factor of a graph G is a spanning subgraph, i.e., a subgraph that has the same vertex set as G. A k-factor of a graph is a spanning k-regular subgraph, and a k-factorization partitions the edges of the graph into disjoint k-factors. A graph G is said to be k-factorable if it admits a k-factorization. In particular, a 1-factor is a perfect matching, and a 1-factorization of a k-regular graph is an edge coloring with k colors. A 2-factor is a collection of cycles that spans all vertices of the graph.

1-factorization of the Desargues graph: each color class is a 1-factor.

The Petersen graph can be partitioned into a 1-factor (red) and a 2-factor (blue). However, the graph is not 1-factorable.
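The defining property of a k-factor (spanning and k-regular) reduces to a degree check; a sketch, not from the article:

```python
def is_k_factor(V, sub_edges, k):
    """A k-factor is a spanning k-regular subgraph: every vertex gets degree k."""
    deg = {v: 0 for v in V}
    for u, w in sub_edges:
        deg[u] += 1
        deg[w] += 1
    return all(d == k for d in deg.values())

c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]               # the 4-cycle
assert is_k_factor(range(4), [(0, 1), (2, 3)], 1)   # a perfect matching is a 1-factor
assert is_k_factor(range(4), c4, 2)                 # the whole cycle is a 2-factor
assert not is_k_factor(range(4), [(0, 1)], 1)       # not spanning with degree 1
```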

25.1 1-factorization

If a graph is 1-factorable, then it has to be a regular graph. However, not all regular graphs are 1-factorable. A k-regular graph is 1-factorable if and only if it has chromatic index k; examples of such graphs include:

• Any regular bipartite graph.[1] Hall’s marriage theorem can be used to show that a k-regular bipartite graph contains a perfect matching. One can then remove the perfect matching to obtain a (k − 1)-regular bipartite graph, and apply the same reasoning repeatedly.

• Any complete graph with an even number of nodes (see below).[2]

However, there are also k-regular graphs that have chromatic index k + 1, and these graphs are not 1-factorable; examples of such graphs include:

• Any regular graph with an odd number of nodes.

• The Petersen graph.

25.1.1 Complete graphs

1-factorization of K8 in which each 1-factor consists of an edge from the center to a vertex of a heptagon together with all possible perpendicular edges

A 1-factorization of a complete graph corresponds to pairings in a round-robin tournament. The 1-factorization of complete graphs is a special case of Baranyai’s theorem concerning the 1-factorization of complete hypergraphs. One method for constructing a 1-factorization of a complete graph involves placing all but one of the vertices on a circle, forming a regular polygon, with the remaining vertex at the center of the circle. With this arrangement of vertices, one way of constructing a 1-factor of the graph is to choose an edge e from the center to a single polygon vertex together with all possible edges that lie on lines perpendicular to e. The 1-factors that can be constructed in this way form a 1-factorization of the graph.
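The circle construction just described is easy to carry out programmatically. The following sketch (the function name is illustrative) builds the rotational 1-factorization of Kₙ for even n, with vertex n − 1 at the center:

```python
def circle_one_factorization(n):
    """1-factorization of the complete graph K_n (n even) by the circle
    method: vertices 0..n-2 sit on a circle, vertex n-1 at the center.
    Factor r consists of the spoke (r, n-1) plus all chords perpendicular
    to it, i.e. the pairs (r+k, r-k) taken modulo n-1."""
    assert n % 2 == 0, "K_n has a 1-factorization only for even n"
    factors = []
    for r in range(n - 1):
        factor = [(r, n - 1)]  # the spoke from the center to vertex r
        for k in range(1, n // 2):
            u, v = (r + k) % (n - 1), (r - k) % (n - 1)
            factor.append((min(u, v), max(u, v)))
        factors.append(factor)
    return factors
```

For K₈ this produces 7 factors of 4 edges each, and together they cover each of the 28 edges of K₈ exactly once.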

The number of distinct 1-factorizations of K2, K4, K6, K8, ... is 1, 1, 6, 6240, 1225566720, 252282619805368320, 98758655816833727741338583040, ... (sequence A000438 in the OEIS).

25.1.2 1-factorization conjecture

Let G be a k-regular graph with 2n nodes. If k is sufficiently large, it is known that G has to be 1-factorable:

• If k = 2n − 1, then G is the complete graph K₂n, and hence 1-factorable (see above).

• If k = 2n − 2, then G can be constructed by removing a perfect matching from K₂n. Again, G is 1-factorable.

• Chetwynd & Hilton (1985) show that if k ≥ 12n/7, then G is 1-factorable.

The 1-factorization conjecture[3] is a long-standing conjecture that states that k ≈ n is sufficient. In precise terms, the conjecture is:

• If n is odd and k ≥ n, then G is 1-factorable. If n is even and k ≥ n − 1 then G is 1-factorable.

The overfull conjecture implies the 1-factorization conjecture.

25.1.3 Perfect 1-factorization

A perfect pair from a 1-factorization is a pair of 1-factors whose union induces a Hamiltonian cycle. A perfect 1-factorization (P1F) of a graph is a 1-factorization having the property that every pair of 1-factors is a perfect pair. A perfect 1-factorization should not be confused with a perfect matching (also called a 1-factor). In 1964, Anton Kotzig conjectured that every complete graph K₂n where n ≥ 2 has a perfect 1-factorization. So far, it is known that the following graphs have a perfect 1-factorization:[4]

• the infinite family of complete graphs K₂p where p is an odd prime (by Anderson and also Nakamura, independently),

• the infinite family of complete graphs Kₚ₊₁ where p is an odd prime,

• and sporadic additional results, including K₂n where 2n ∈ {16, 28, 36, 40, 50, 126, 170, 244, 344, 730, 1332, 1370, 1850, 2198, 3126, 6860, 12168, 16808, 29792}.

If the complete graph Kₙ₊₁ has a perfect 1-factorization, then the complete bipartite graph Kₙ,ₙ also has a perfect 1-factorization.[5]

25.2 2-factorization

If a graph is 2-factorable, then it has to be 2k-regular for some integer k. Julius Petersen showed in 1891 that this necessary condition is also sufficient: any 2k-regular graph is 2-factorable.[6] If a connected graph is 2k-regular and has an even number of edges it may also be k-factored, by choosing each of the two factors to be an alternating subset of the edges of an Euler tour.[7] This applies only to connected graphs; disconnected counterexamples include disjoint unions of odd cycles, or of copies of K₂k₊₁.
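The alternating Euler-tour argument can be sketched in code as follows; `euler_circuit` is a standard Hierholzer-style implementation and `halve_regular_graph` takes every other tour edge (both names are illustrative):

```python
from collections import Counter, defaultdict

def euler_circuit(edges, start=0):
    """Hierholzer's algorithm: closed Euler tour of a connected graph in
    which every vertex has even degree, returned as a list of vertices."""
    adj = defaultdict(list)
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    used = [False] * len(edges)
    stack, circuit = [start], []
    while stack:
        v = stack[-1]
        while adj[v] and used[adj[v][-1][1]]:
            adj[v].pop()        # discard edges already traversed
        if adj[v]:
            w, i = adj[v].pop()
            used[i] = True
            stack.append(w)
        else:
            circuit.append(stack.pop())
    return circuit[::-1]

def halve_regular_graph(edges):
    """Split a connected 2k-regular graph with an even number of edges
    into two k-regular spanning subgraphs by taking alternate edges of
    an Euler tour (a sketch of the construction described above)."""
    tour = euler_circuit(edges)
    tour_edges = list(zip(tour, tour[1:]))
    return tour_edges[0::2], tour_edges[1::2]
```

For example, K₅ is 4-regular with 10 edges, and the two halves are each 2-regular spanning subgraphs.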

25.3 Notes

[1] Harary (1969), Theorem 9.2, p. 85. Diestel (2005), Corollary 2.1.3, p. 37.

[2] Harary (1969), Theorem 9.1, p. 85.

[3] Chetwynd & Hilton (1985). Niessen (1994). Perkovic & Reed (1997). West.

[4] Wallis, W. D. (1997), “16. Perfect Factorizations”, One-factorizations, Mathematics and Its Applications 390 (1 ed.), Springer US, p. 125, doi:10.1007/978-1-4757-2564-3_16, ISBN 978-0-7923-4323-3

[5] Bryant, Darryn; Maenhaut, Barbara M.; Wanless, Ian M. (May 2002), “A Family of Perfect Factorisations of Complete Bipartite Graphs”, Journal of Combinatorial Theory, Series A 98 (2): 328–342, doi:10.1006/jcta.2001.3240, ISSN 0097-3165

[6] Petersen (1891), §9, p. 200. Harary (1969), Theorem 9.9, p. 90. See Diestel (2005), Corollary 2.1.5, p. 39 for a proof.

[7] Petersen (1891), §6, p. 198.

25.4 References

• Bondy, John Adrian; Murty, U. S. R. (1976), Graph Theory with Applications, North-Holland, ISBN 0-444-19451-7, Section 5.1: “Matchings”.

• Chetwynd, A. G.; Hilton, A. J. W. (1985), “Regular graphs of high degree are 1-factorizable”, Proceedings of the London Mathematical Society 50 (2): 193–206, doi:10.1112/plms/s3-50.2.193.

• Diestel, Reinhard (2005), Graph Theory (3rd ed.), Springer, ISBN 3-540-26182-6, Chapter 2: “Matching, covering and packing”. Electronic edition.

• Harary, Frank (1969), Graph Theory, Addison-Wesley, ISBN 0-201-02787-9, Chapter 9: “Factorization”.

• Hazewinkel, Michiel, ed. (2001), “One-factorization”, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

• Niessen, Thomas (1994), “How to find overfull subgraphs in graphs with large maximum degree”, Discrete Applied Mathematics 51 (1–2): 117–125, doi:10.1016/0166-218X(94)90101-5.

• Perkovic, L.; Reed, B. (1997), “Edge coloring regular graphs of high degree”, Discrete Mathematics 165–166: 567–578, doi:10.1016/S0012-365X(96)00202-6.

• Petersen, Julius (1891), “Die Theorie der regulären graphs” [The theory of regular graphs], Acta Mathematica 15: 193–220, doi:10.1007/BF02392606.

• West, Douglas B. “1-Factorization Conjecture (1985?)". Open Problems – Graph Theory and Combinatorics. Retrieved 2010-01-09.

• Weisstein, Eric W., “Graph Factor”, MathWorld.

• Weisstein, Eric W., “k-Factor”, MathWorld.

• Weisstein, Eric W., “k-Factorable Graph”, MathWorld.

25.5 Further reading

• Plummer, Michael D. (2007), “Graph factors and factorization: 1985–2003: A survey”, Discrete Mathematics 307 (7–8): 791–821, doi:10.1016/j.disc.2005.11.059.

Chapter 26

Graph homomorphism

Not to be confused with graph homeomorphism.

In the mathematical field of graph theory, a graph homomorphism is a mapping between two graphs that respects their structure. More concretely, it maps adjacent vertices to adjacent vertices.

26.1 Definitions

A graph homomorphism f from a graph G = (V, E) to a graph G′ = (V′, E′), written f : G → G′, is a mapping f : V → V′ from the vertex set of G to the vertex set of G′ such that {u, v} ∈ E implies {f(u), f(v)} ∈ E′.

The above definition is extended to directed graphs. Then, for a homomorphism f : G → G′, (f(u), f(v)) is an arc of G′ if (u, v) is an arc of G.

If there exists a homomorphism f : G → H we shall write G → H, and G ̸→ H otherwise. If G → H, G is said to be homomorphic to H or H-colourable.

If the homomorphism f : G → G′ is a bijection whose inverse function is also a graph homomorphism, then f is a graph isomorphism. Two graphs G and G′ are homomorphically equivalent if G → G′ and G′ → G.

A retract of a graph G is a subgraph H of G such that there exists a homomorphism r : G → H, called a retraction, with r(x) = x for every vertex x of H. A core is a graph which does not retract to a proper subgraph. Every graph is homomorphically equivalent to a unique core.
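Since a homomorphism is just a vertex map that sends edges to edges, small instances can be checked, or found by exhaustive search, directly from the definition. A sketch with illustrative names (the search is exponential, which is consistent with the hardness noted below):

```python
from itertools import product

def is_homomorphism(f, G_edges, H_edges):
    """f: dict V(G) -> V(H); check every edge of G maps to an edge of H."""
    H = {frozenset(e) for e in H_edges}
    return all(frozenset((f[u], f[v])) in H for u, v in G_edges)

def find_homomorphism(G_vertices, G_edges, H_vertices, H_edges):
    """Exhaustive search over all |V(H)|^|V(G)| maps; the decision
    problem is NP-complete, so only tiny graphs are feasible."""
    for images in product(H_vertices, repeat=len(G_vertices)):
        f = dict(zip(G_vertices, images))
        if is_homomorphism(f, G_edges, H_edges):
            return f
    return None
```

For instance, C₅ → K₃ (a 3-coloring of the 5-cycle exists) but C₅ ̸→ K₂, since an odd cycle is not bipartite.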

26.2 Properties

The composition of two homomorphisms is again a homomorphism. Graph homomorphisms preserve connectedness. The tensor product of graphs is the category-theoretic product for the category of graphs and graph homomorphisms.

26.3 Connection to coloring and girth

A graph coloring is an assignment of one of k colors to each vertex of a graph G so that the endpoints of each edge have different colors, for some number k. Any coloring corresponds to a homomorphism f : G → Kk from G to a complete graph Kk: the vertices of Kk correspond to the colors of G, and f maps each vertex of G with color c to the vertex of Kk that corresponds to c. This is a valid homomorphism because the endpoints of each edge of G are mapped to distinct vertices of Kk, and every two distinct vertices of Kk are connected by an edge, so every edge in G is mapped to an adjacent pair of vertices in Kk. Conversely, if f is a homomorphism from G to Kk, then one can color G by using the same color for two vertices in G whenever they are both mapped to the same vertex in Kk. Because Kk has no edges that connect a vertex to itself, it is not possible for two adjacent vertices in G to both be mapped to the same vertex in Kk, so this gives a valid coloring. That is, G has a k-coloring if and only if it has a homomorphism to Kk.
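This equivalence is direct to verify in code: because any two distinct vertices of Kk are adjacent, a map into Kk is a homomorphism exactly when it never collapses an edge, which is the same condition as properness of a coloring. A minimal sketch (illustrative names, colors taken from {0, ..., k − 1}):

```python
def is_proper_coloring(c, edges):
    """c: vertex -> color; proper iff the endpoints of every edge differ."""
    return all(c[u] != c[v] for u, v in edges)

def is_homomorphism_to_Kk(c, edges, k):
    """A map into K_k is a graph homomorphism exactly when no edge of G
    is collapsed, since any two distinct vertices of K_k are adjacent."""
    return (all(0 <= c[v] < k for v in c)
            and all(c[u] != c[v] for u, v in edges))
```

The two predicates accept and reject exactly the same maps, which is the correspondence described above.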

If there are two homomorphisms H → G → Kk, then their composition H → Kk is also a homomorphism. In other words, if a graph G can be colored with k colors, and there is a homomorphism H → G, then H can also be k-colored. Therefore, whenever a homomorphism H → G exists, the chromatic number of H is less than or equal to the chromatic number of G.

Homomorphisms can be used in much the same way to characterize the odd girth of a graph G, the length of its shortest odd-length cycle. The odd girth is, equivalently, the smallest odd number g for which there exists a homomorphism Cg → G. For this reason, if G → H, then the odd girth of G is greater than or equal to the odd girth of H.[1]

26.4 Complexity

The associated decision problem, i.e. deciding whether there exists a homomorphism from one graph to another, is NP-complete. Determining whether there is an isomorphism between two graphs is also an important problem in computational complexity theory; see graph isomorphism problem.

26.5 See also

• Hadwiger’s conjecture.

• Graph rewriting

• Median graphs, definable as the retracts of hypercubes.

26.6 Notes

[1] Hell & Nešetřil (2004), p. 7.

26.7 References

• Hell, Pavol; Nešetřil, Jaroslav (2004), Graphs and Homomorphisms (Oxford Lecture Series in Mathematics and Its Applications), Oxford University Press, ISBN 0-19-852817-5

• Brown, R.; Morris, I.; Shrimpton, J.; Wensley, C. D. (2008), “Graphs of morphisms of graphs”, Electronic Journal of Combinatorics 15 (1): A1

Chapter 27

Graph isomorphism

In graph theory, an isomorphism of graphs G and H is a bijection between the vertex sets of G and H

f : V (G) → V (H)

such that any two vertices u and v of G are adjacent in G if and only if ƒ(u) and ƒ(v) are adjacent in H. This kind of bijection is generally called an “edge-preserving bijection”, in accordance with the general notion of isomorphism being a structure-preserving bijection.

If an isomorphism exists between two graphs, then the graphs are called isomorphic and we write G ≃ H. In the case when the bijection is a mapping of a graph onto itself, i.e., when G and H are one and the same graph, the bijection is called an automorphism of G.

Graph isomorphism is an equivalence relation on graphs and as such it partitions the class of all graphs into equivalence classes. A set of graphs isomorphic to each other is called an isomorphism class of graphs.

The two graphs shown below are isomorphic, despite their different-looking drawings.
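On very small graphs the definition can be tested directly by trying all bijections of the vertex sets. A sketch with illustrative names; practical tools such as nauty are vastly more efficient:

```python
from itertools import permutations

def are_isomorphic(V, E1, E2):
    """Brute-force isomorphism test for two graphs on the same vertex
    set V (factorial time; usable only for tiny graphs)."""
    if len(E1) != len(E2):
        return False
    target = {frozenset(e) for e in E2}
    for perm in permutations(V):
        f = dict(zip(V, perm))
        # f is a bijection and |E1| = |E2|, so mapping every edge of E1
        # onto an edge of E2 already forces the "if and only if" condition.
        if all(frozenset((f[u], f[v])) in target for u, v in E1):
            return True
    return False
```

Two differently labeled 4-cycles are recognized as isomorphic, while a 4-cycle and a triangle with a pendant edge (which have the same vertex and edge counts) are not.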

27.1 Variations

In the above definition, graphs are understood to be undirected, unlabeled, and unweighted. However, the notion of isomorphism may be applied to all other variants of the notion of graph, by adding the requirement to preserve the corresponding additional elements of structure: arc directions, edge weights, etc., with the following exception. For graphs labeled with unique labels, commonly taken from the integer range 1, ..., n, where n is the number of vertices of the graph, two labeled graphs are said to be isomorphic if the corresponding underlying unlabeled graphs are isomorphic.

27.2 Motivation

The formal notion of “isomorphism”, e.g., of “graph isomorphism”, captures the informal notion that some objects have “the same structure” if one ignores individual distinctions of “atomic” components of the objects in question. Whenever individuality of “atomic” components (vertices and edges, for graphs) is important for correct representation of whatever is modeled by graphs, the model is refined by imposing additional restrictions on the structure, and other mathematical objects are used: digraphs, labeled graphs, colored graphs, rooted trees and so on. The isomorphism relation may also be defined for all these generalizations of graphs: the isomorphism bijection must preserve the elements of structure which define the object type in question: arcs, labels, vertex/edge colors, the root of the rooted tree, etc.

The notion of “graph isomorphism” allows us to distinguish graph properties inherent to the structures of graphs themselves from properties associated with graph representations: graph drawings, data structures for graphs, graph labelings, etc. For example, if a graph has exactly one cycle, then all graphs in its isomorphism class also have exactly one cycle. On the other hand, in the common case when the vertices of a graph are (represented by) the integers 1, 2, ..., n, then the expression

∑_{v ∈ V(G)} v · deg(v)

may be different for two isomorphic graphs.

27.3 Whitney theorem

Main article: Whitney graph isomorphism theorem

The Whitney graph isomorphism theorem,[1] shown by Hassler Whitney, states that two connected graphs are isomorphic if and only if their line graphs are isomorphic, with a single exception: K3, the complete graph on three vertices, and the complete bipartite graph K₁,₃, which are not isomorphic but both have K3 as their line graph. The Whitney graph theorem can be extended to hypergraphs.[2]

The exception to Whitney’s theorem: K3 and K₁,₃ are not isomorphic but have isomorphic line graphs.
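The exceptional pair can be checked computationally. The sketch below (illustrative function name) builds the line graph on edge indices; both K3 and K₁,₃ turn out to have a triangle as their line graph:

```python
from itertools import combinations

def line_graph(edges):
    """Line graph L(G): one vertex per edge of G, two of them adjacent
    exactly when the corresponding edges of G share an endpoint."""
    E = [frozenset(e) for e in edges]
    return [(i, j) for i, j in combinations(range(len(E)), 2) if E[i] & E[j]]

k3 = [(0, 1), (1, 2), (0, 2)]    # triangle
star = [(0, 1), (0, 2), (0, 3)]  # K_{1,3}
# Both line graphs are the triangle {(0,1), (0,2), (1,2)} on edge indices.
```

This makes concrete why a line-graph isomorphism cannot distinguish the two graphs.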

27.4 Recognition of graph isomorphism

Main article: Graph isomorphism problem

While graph isomorphism may be studied in a classical mathematical way, as exemplified by the Whitney theorem, it is recognized that it is a problem to be tackled with an algorithmic approach. The computational problem of determining whether two finite graphs are isomorphic is called the graph isomorphism problem. Its practical applications include primarily cheminformatics, mathematical chemistry (identification of chemical compounds), and electronic design automation (verification of equivalence of various representations of the design of an electronic circuit).

The graph isomorphism problem is one of few standard problems in computational complexity theory belonging to NP, but not known to belong to either of its well-known (and, if P ≠ NP, disjoint) subsets: P and NP-complete. It is one of only two, out of 12 total, problems listed in Garey & Johnson (1979) whose complexity remains unresolved, the other being integer factorization. It is however known that if the problem is NP-complete then the polynomial hierarchy collapses to a finite level.[3] Its generalization, the subgraph isomorphism problem, is known to be NP-complete.

The main areas of research for the problem are the design of fast algorithms and theoretical investigations of its computational complexity, both for the general problem and for special classes of graphs.

27.5 See also

• Graph homomorphism

• Graph automorphism problem

• Graph canonization

27.6 Notes

[1] Whitney, Hassler (January 1932). “Congruent Graphs and the Connectivity of Graphs”. American Journal of Mathematics (The Johns Hopkins University Press) 54 (1): 150–168. doi:10.2307/2371086. Retrieved 17 August 2012.

[2] Dirk L. Vertigan, Geoffrey P. Whittle: A 2-Isomorphism Theorem for Hypergraphs. J. Comb. Theory, Ser. B 71(2): 215–230. 1997.

[3] Schöning, Uwe (1988). “Graph isomorphism is in the low hierarchy”. Journal of Computer and System Sciences 37: 312– 323. doi:10.1016/0022-0000(88)90010-4.

27.7 References

• Garey, Michael R.; Johnson, David S. (1979), Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman, ISBN 0-7167-1045-5

Chapter 28

Graph labeling

In the mathematical discipline of graph theory, a graph labeling is the assignment of labels, traditionally represented by integers, to the edges or vertices, or both, of a graph.[1]

Formally, given a graph G = (V, E), a vertex labeling is a function of V to a set of labels. A graph with such a function defined is called a vertex-labeled graph. Likewise, an edge labeling is a function mapping E to a set of labels. In this case, G is called an edge-labeled graph. When the edge labels are members of an ordered set (e.g., the real numbers), it may be called a weighted graph.

When used without qualification, the term labeled graph generally refers to a vertex-labeled graph with all labels distinct. Such a graph may equivalently be labeled by the consecutive integers {1, ..., n}, where n is the number of vertices in the graph.[1]

For many applications, the edges or vertices are given labels that are meaningful in the associated domain. For example, the edges may be assigned weights representing the “cost” of traversing between the incident vertices.[2]

In the above definition a graph is understood to be a finite undirected simple graph. However, the notion of labeling may be applied to all extensions and generalizations of graphs. For example, in automata theory and formal language theory it is convenient to consider labeled multigraphs, i.e., a pair of vertices may be connected by several labeled edges.[3]

28.1 History

Most graph labelings trace their origins to labelings presented by Alex Rosa in his 1967 paper.[4] Rosa identified three types of labelings, which he called α-, β-, and ρ-labelings.[5] β-labelings were later renamed “graceful” by S. W. Golomb, and the name has been popular ever since.

28.2 Special cases

28.2.1 Graceful labeling

Main article: Graceful labeling

A graph is known as graceful when its vertices are labeled from 0 to |E|, the size of the graph, and this labeling induces an edge labeling from 1 to |E|. For any edge e, the label of e is the positive difference between the labels of the two vertices incident with e. In other words, if e is incident with vertices labeled k and j, e will be labeled |k − j|. Thus, a graph G = (V, E) is graceful if and only if there exists an injection from V to {0, ..., |E|} that induces a bijection from E to the positive integers up to |E|.

In his original paper, Rosa proved that all Eulerian graphs with size equivalent to 1 or 2 (mod 4) are not graceful. Whether or not certain families of graphs are graceful is an area of graph theory under extensive study. Arguably, the largest unproven conjecture in graph labeling is the Ringel–Kotzig conjecture, which hypothesizes that all trees are graceful. This has been proven for all paths, caterpillars, and many other infinite families of trees. Kotzig himself has called the effort to prove the conjecture a “disease.”[6]


A graceful labeling: vertex labels in black, edge labels in red.
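Gracefulness is simple to test, and to search for exhaustively, on small graphs. A sketch with illustrative names:

```python
from itertools import permutations

def is_graceful(lab, edges):
    """lab: vertex -> label in {0, ..., |E|}; graceful iff the labels are
    distinct and the induced edge labels |lab(u)-lab(v)| are exactly 1..|E|."""
    m = len(edges)
    vals = list(lab.values())
    if len(set(vals)) != len(vals) or not all(0 <= x <= m for x in vals):
        return False
    return {abs(lab[u] - lab[v]) for u, v in edges} == set(range(1, m + 1))

def find_graceful(vertices, edges):
    """Exhaustive search; only feasible for very small graphs."""
    m = len(edges)
    for labels in permutations(range(m + 1), len(vertices)):
        lab = dict(zip(vertices, labels))
        if is_graceful(lab, edges):
            return lab
    return None
```

As a consistency check, the search succeeds on the path P₄ but fails on the cycle C₅, in line with Rosa's result above, since C₅ is Eulerian with 5 ≡ 1 (mod 4) edges.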

28.2.2 Edge-graceful labeling

Main article: Edge-graceful labeling

An edge-graceful labeling on a simple graph (no loops or multiple edges) with p vertices and q edges is a labeling of the edges by distinct integers in {1, ..., q} such that the labeling induced on the vertices, by labeling each vertex with the sum of the labels of its incident edges taken modulo p, assigns all values from 0 to p − 1 to the vertices. A graph G is said to be edge-graceful if it admits an edge-graceful labeling.

Edge-graceful labelings were first introduced by S. Lo in 1985.[7] A necessary condition for a graph to be edge-graceful is Lo’s condition:

q(q + 1) ≡ p(p − 1)/2 (mod p)
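Lo's condition follows from counting: each edge label is added to two vertex sums, so the induced labels total 2(1 + ... + q) = q(q + 1) modulo p, while the required values 0, ..., p − 1 sum to p(p − 1)/2. A sketch with illustrative names that checks the condition and searches for a labeling by brute force:

```python
from itertools import permutations

def lo_condition(p, q):
    """Lo's necessary condition: q(q+1) congruent to p(p-1)/2 (mod p)."""
    return q * (q + 1) % p == (p * (p - 1) // 2) % p

def find_edge_graceful(p, edges):
    """Exhaustive search for an edge-graceful labeling on p vertices."""
    q = len(edges)
    for labels in permutations(range(1, q + 1)):
        induced = [0] * p
        for (u, v), lab in zip(edges, labels):
            induced[u] = (induced[u] + lab) % p
            induced[v] = (induced[v] + lab) % p
        if sorted(induced) == list(range(p)):  # all residues 0..p-1 hit
            return dict(zip(edges, labels))
    return None
```

For example, K₂ (p = 2, q = 1) fails the condition, while the triangle C₃ satisfies it and is in fact edge-graceful.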

28.2.3 Harmonious labelings

A harmonious labeling on a graph G is an injection from the vertices of G to the group of integers modulo k, where k is the number of edges of G, that induces a bijection between the edges of G and the numbers modulo k, by taking the edge label for an edge xy to be the sum of the labels of the two vertices x, y (mod k). A harmonious graph is one that has a harmonious labeling. Odd cycles are harmonious, as is the Petersen graph. It is conjectured that trees are all harmonious if one vertex label is allowed to be reused.[8]

28.2.4 Graph coloring

Graph colorings form a subclass of graph labelings. A vertex coloring assigns different labels to adjacent vertices; an edge coloring assigns different labels to adjacent edges.

28.2.5 Lucky labeling

A lucky labeling of a graph G is an assignment of positive integers to the vertices of G such that if S(v) denotes the sum of the labels on the neighbours of v, then S is a vertex coloring of G. The lucky number of G is the least k such that G has a lucky labeling with the integers {1, ..., k}.[9]

28.3 References

[1] Weisstein, Eric W., “Labeled graph”, MathWorld.

[2] “Different Aspects of Coding Theory”, by Robert Calderbank (1995), ISBN 0-8218-0379-4, p. 53.

[3] "Developments in Language Theory", Proc. 9th. Internat.Conf., 2005, ISBN 3-540-26546-5, p. 313

[4] Gallian, J. “A Dynamic Survey of Graph Labelings, 1996-2005”. The Electronic Journal of Combinatorics.

[5] Rosa, A. “On certain valuations of vertices in a graph”.

[6] Vietri, Andrea (2008). “Sailing towards, and then against, the graceful tree conjecture: some promiscuous results”. Bull. Inst. Comb. Appl. 53: 31–46. ISSN 1183-1278. Zbl 1163.05007.

[7] Lo, Sheng-Ping (1985). “On edge-graceful labelings of graphs”. Congressus Numerantium 50: 231–241. Zbl 0597.05054.

[8] Guy (2004) pp.190–191

[9] Czerwiński, Sebastian; Grytczuk, Jarosław; Ẓelazny, Wiktor (2009). “Lucky labelings of graphs”. Inf. Process. Lett. 109 (18): 1078–1081. Zbl 1197.05125.

• Rosa, A. (1967). On certain valuations of the vertices of a graph. Theory of Graphs, Int. Symp. Rome July 1966. Gordon and Breach. pp. 349–355. Zbl 0193.53204.

• Gallian, Joseph A. “A Dynamic Survey of Graph Labeling”. The Electronic Journal of Combinatorics (2005). Retrieved 20 Dec. 2006.

• Guy, Richard K. (2004). Unsolved problems in number theory (3rd ed.). Springer-Verlag. C13. ISBN 0-387-20860-7. Zbl 1058.11001.

Chapter 29

Graph minor

In graph theory, an undirected graph H is called a minor of the graph G if H can be formed from G by deleting edges and vertices and by contracting edges.

The theory of graph minors began with Wagner’s theorem that a graph is planar if and only if its minors include neither the complete graph K5 nor the complete bipartite graph K₃,₃.[1] The Robertson–Seymour theorem implies that an analogous forbidden minor characterization exists for every property of graphs that is preserved by deletions and edge contractions.[2] For every fixed graph H, it is possible to test whether H is a minor of an input graph G in polynomial time;[3] together with the forbidden minor characterization this implies that every graph property preserved by deletions and contractions may be recognized in polynomial time.[4]

Other results and conjectures involving graph minors include the graph structure theorem, according to which the graphs that do not have H as a minor may be formed by gluing together simpler pieces, and Hadwiger’s conjecture relating the inability to color a graph to the existence of a large complete graph as a minor of it. Important variants of graph minors include the topological minors and immersion minors.

29.1 Definitions

An edge contraction is an operation which removes an edge from a graph while simultaneously merging the two vertices it used to connect. An undirected graph H is a minor of another undirected graph G if a graph isomorphic to H can be obtained from G by contracting some edges, deleting some edges, and deleting some isolated vertices. The order in which a sequence of such contractions and deletions is performed on G does not affect the resulting graph H.

Graph minors are often studied in the more general context of matroid minors. In this context, it is common to assume that all graphs are connected, with self-loops and multiple edges allowed (that is, they are multigraphs rather than simple graphs); the contraction of a loop and the deletion of a cut-edge are forbidden operations. This point of view has the advantage that edge deletions leave the rank of a graph unchanged, and edge contractions always reduce the rank by one. In other contexts (such as with the study of ) it makes more sense to allow the deletion of a cut-edge, and to allow disconnected graphs, but to forbid multigraphs. In this variation of graph minor theory, a graph is always simplified after any edge contraction to eliminate its self-loops and multiple edges.[5]
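For simple graphs, contraction can be sketched as follows (illustrative names); the loop produced by the merge, and any parallel edges, are discarded, matching the simplified-graph convention just described:

```python
def contract_edge(vertices, edges, e):
    """Contract edge e = (u, v) in a simple graph: merge v into u,
    discarding the resulting loop and any parallel edges."""
    u, v = e
    merged = lambda x: u if x == v else x
    new_edges = {frozenset((merged(a), merged(b))) for a, b in edges
                 if merged(a) != merged(b)}  # drop the loop at u
    return ([x for x in vertices if x != v],
            sorted(tuple(sorted(fs)) for fs in new_edges))
```

Contracting any edge of a 4-cycle, for instance, yields a triangle.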

29.2 Example

In the following example, graph H is a minor of graph G:


H.

G.

The following diagram illustrates this. First construct a subgraph of G by deleting the dashed edges (and the resulting isolated vertex), and then contract the gray edge (merging the two vertices it connects):

29.3 Major results and conjectures

It is straightforward to verify that the graph minor relation forms a partial order on the isomorphism classes of undirected graphs: it is transitive (a minor of a minor of G is a minor of G itself), and G and H can only be minors of each other if they are isomorphic, because any nontrivial minor operation removes edges or vertices. A deep result by Neil Robertson and Paul Seymour states that this partial order is actually a well-quasi-ordering: if an infinite list G1, G2, ... of finite graphs is given, then there always exist two indices i < j such that Gi is a minor of Gj. Another equivalent way of stating this is that any set of graphs can have only a finite number of minimal elements under the minor ordering.[6] This result proved a conjecture formerly known as Wagner’s conjecture, after Klaus Wagner; Wagner had conjectured it long earlier, but only published it in 1970.[7]

In the course of their proof, Robertson and Seymour also prove the graph structure theorem, in which they determine, for any fixed graph H, the rough structure of any graph which does not have H as a minor. The statement of the theorem is itself long and involved, but in short it establishes that such a graph must have the structure of a clique-sum of smaller graphs that are modified in small ways from graphs embedded on surfaces of bounded genus. Thus, their theory establishes fundamental connections between graph minors and topological embeddings of graphs.[8]

For any graph H, the simple H-minor-free graphs must be sparse, which means that the number of edges is less than some constant multiple of the number of vertices.[9] More specifically, if H has h vertices, then a simple n-vertex H-minor-free graph can have at most O(nh√(log h)) edges, and some Kh-minor-free graphs have at least this many edges.[10] Thus, if H has h vertices, then H-minor-free graphs have average degree O(h√(log h)) and furthermore degeneracy O(h√(log h)). Additionally, the H-minor-free graphs have a separator theorem similar to the planar separator theorem for planar graphs: for any fixed H, and any n-vertex H-minor-free graph G, it is possible to find a subset of O(√n) vertices whose removal splits G into two (possibly disconnected) subgraphs with at most 2n/3 vertices per subgraph.[11] Even stronger, for any fixed H, H-minor-free graphs have treewidth O(√n).[12]

The Hadwiger conjecture in graph theory proposes that if a graph G does not contain a minor isomorphic to the complete graph on k vertices, then G has a proper coloring with k − 1 colors.[13] The case k = 5 is a restatement of the four color theorem. The Hadwiger conjecture has been proven for k ≤ 6,[14] but is unknown in the general case. Bollobás, Catlin & Erdős (1980) call it “one of the deepest unsolved problems in graph theory.” Another result relating the four-color theorem to graph minors is the snark theorem announced by Robertson, Sanders, Seymour, and Thomas, a strengthening of the four-color theorem conjectured by W. T. Tutte, stating that any bridgeless 3-regular graph that requires four colors in an edge coloring must have the Petersen graph as a minor.[15]

29.4 Minor-closed graph families

For more details on minor-closed graph families, including a list of some, see Robertson–Seymour theorem.

A family F of graphs is said to be minor-closed if every minor of a graph in F also belongs to F. For instance, in any planar graph, or any embedding of a graph on a fixed topological surface, neither the removal of edges nor the contraction of edges can increase the genus of the embedding; therefore, planar graphs and the graphs embeddable on any fixed surface form minor-closed families.

If F is a minor-closed family, then (because of the well-quasi-ordering property of minors) among the graphs that do not belong to F there is a finite set X of minor-minimal graphs. These graphs are forbidden minors for F: a graph belongs to F if and only if it does not contain as a minor any graph in X. That is, every minor-closed family F can be characterized as the family of X-minor-free graphs for some finite set X of forbidden minors.[2] The best-known example of a characterization of this type is Wagner’s theorem characterizing the planar graphs as the graphs having neither K5 nor K₃,₃ as minors.[1]

In some cases, the properties of the graphs in a minor-closed family may be closely connected to the properties of their excluded minors.
For example, a minor-closed graph family F has bounded pathwidth if and only if its forbidden minors include a forest,[16] F has bounded tree-depth if and only if its forbidden minors include a disjoint union of path graphs, F has bounded treewidth if and only if its forbidden minors include a planar graph,[17] and F has bounded local treewidth (a functional relationship between diameter and treewidth) if and only if its forbidden minors include an apex graph (a graph that can be made planar by the removal of a single vertex).[18]

If H can be drawn in the plane with only a single crossing (that is, it has crossing number one) then the H-minor-free graphs have a simplified structure theorem in which they are formed as clique-sums of planar graphs and graphs of bounded treewidth.[19] For instance, both K5 and K₃,₃ have crossing number one, and as Wagner showed the K5-minor-free graphs are exactly the 3-clique-sums of planar graphs and the eight-vertex Wagner graph, while the K₃,₃-minor-free graphs are exactly the 2-clique-sums of planar graphs and K5.[20]

29.5 Variations

29.5.1 Topological minors

A graph H is called a topological minor of a graph G if a subdivision of H is isomorphic to a subgraph of G.[21] It is easy to see that every topological minor is also a minor. The converse, however, is not true in general (for instance the complete graph K5 in the Petersen graph is a minor but not a topological one), but it does hold for graphs with maximum degree not greater than three.[22]

The topological minor relation is not a well-quasi-ordering on the set of finite graphs,[23] and hence the result of Robertson and Seymour does not apply to topological minors. However, it is straightforward to construct finite forbidden topological minor characterizations from finite forbidden minor characterizations by replacing every branch set with k outgoing edges by every tree on k leaves that has down degree at least two.

29.5.2 Immersion minor

A graph operation called lifting is central to the concept of immersions. Lifting is an operation on adjacent edges. Given three vertices v, u, and w, where (v,u) and (u,w) are edges in the graph, the lifting of vuw, or equivalently of (v,u), (u,w), is the operation that deletes the two edges (v,u) and (u,w) and adds the edge (v,w). In the case where (v,w) was already present, v and w will now be connected by more than one edge, and hence this operation is intrinsically a multigraph operation. If a graph H can be obtained from a graph G by a sequence of lifting operations (on G) and then finding an isomorphic subgraph, we say that H is an immersion minor of G. There is yet another way of defining immersion minors, which is equivalent to the lifting operation. We say that H is an immersion minor of G if there exists an
injective mapping from the vertices of H to the vertices of G such that the images of adjacent vertices of H are connected in G by edge-disjoint paths. The immersion minor relation is a well-quasi-ordering on the set of finite graphs, and hence the result of Robertson and Seymour applies to immersion minors. This furthermore means that every immersion-minor-closed family is characterized by a finite family of forbidden immersion minors. In graph drawing, immersion minors arise as the planarizations of non-planar graphs: from a drawing of a graph in the plane, with crossings, one can form an immersion minor by replacing each crossing point by a new vertex, and in the process also subdividing each crossed edge into a path. This allows drawing methods for planar graphs to be extended to non-planar graphs.[24]
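The lifting operation described above can be sketched directly on a multiset edge representation; the function name `lift` and the `Counter`-of-frozensets encoding are our own illustrative choices, chosen so that parallel edges are counted correctly.

```python
from collections import Counter

def lift(edges, v, u, w):
    """Delete one copy each of (v, u) and (u, w); add one copy of (v, w)."""
    out = Counter(edges)
    for e in (frozenset((v, u)), frozenset((u, w))):
        if out[e] == 0:
            raise ValueError("edge not present: %s" % sorted(e))
        out[e] -= 1
    out[frozenset((v, w))] += 1
    return +out  # unary + drops entries whose count fell to zero

# Lifting the path a - b - c joins a and c directly and removes both
# original edges; b remains as a vertex but loses these two incidences.
path = Counter({frozenset(("a", "b")): 1, frozenset(("b", "c")): 1})
lifted = lift(path, "a", "b", "c")
print(lifted == Counter({frozenset(("a", "c")): 1}))  # True
```

Because counts rather than sets are used, lifting onto an already-present edge (v,w) simply raises its multiplicity, matching the multigraph behaviour noted in the text.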

29.5.3 Shallow minors

A shallow minor of a graph G is a minor in which the edges of G that were contracted to form the minor form a collection of disjoint subgraphs with low diameter. Shallow minors interpolate between the theories of graph minors and subgraphs: shallow minors with high depth coincide with the usual type of graph minor, while shallow minors with depth zero are exactly the subgraphs.[25] They also allow the theory of graph minors to be extended to classes of graphs, such as the 1-planar graphs, that are not closed under taking minors.[26]

29.6 Algorithms

The problem of deciding whether a graph G contains H as a minor is NP-complete in general; for instance, if H is a cycle graph with the same number of vertices as G, then H is a minor of G if and only if G contains a Hamiltonian cycle. However, when G is part of the input but H is fixed, it can be solved in polynomial time. More specifically, the running time for testing whether H is a minor of G in this case is O(n³), where n is the number of vertices in G and the O notation hides a constant that depends superexponentially on H;[3] since the original Graph Minors result, this algorithm has been improved to O(n²) time.[27] Thus, by applying the polynomial-time algorithm for testing whether a given graph contains any of the forbidden minors, it is possible to recognize the members of any minor-closed family in polynomial time. However, in order to apply this result constructively, it is necessary to know what the forbidden minors of the graph family are.[4]
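For intuition only, here is a hypothetical brute-force minor test that works straight from the definition (contract an edge, delete an edge, or delete a vertex, and check for isomorphism). It is exponential, unlike the O(n³) and O(n²) algorithms cited above, and is feasible only for tiny graphs; all names are ours.

```python
from itertools import permutations

# A graph is a pair (frozenset of vertices, frozenset of frozenset edges).

def isomorphic(g, h):
    (vg, eg), (vh, eh) = g, h
    if len(vg) != len(vh) or len(eg) != len(eh):
        return False
    src = sorted(vg)
    for perm in permutations(sorted(vh)):       # try every vertex bijection
        f = dict(zip(src, perm))
        if {frozenset((f[a], f[b])) for a, b in eg} == eh:
            return True
    return False

def contract(g, edge):
    v, e = g
    a, b = sorted(edge)                          # merge b into a
    new_edges = set()
    for ed in e:
        if ed == edge:
            continue
        mapped = frozenset(a if x == b else x for x in ed)
        if len(mapped) == 2:                     # discard loops created by merging
            new_edges.add(mapped)
    return (frozenset(x for x in v if x != b), frozenset(new_edges))

def has_minor(g, h, seen=None):
    seen = set() if seen is None else seen
    if g in seen:                                # memoize explored states
        return False
    seen.add(g)
    (v, e), (vh, eh) = g, h
    if len(v) < len(vh) or len(e) < len(eh):
        return False
    if isomorphic(g, h):
        return True
    for edge in e:                               # contract or delete an edge
        if has_minor(contract(g, edge), h, seen):
            return True
        if has_minor((v, e - {edge}), h, seen):
            return True
    for x in v:                                  # delete a vertex
        rest = (v - {x}, frozenset(ed for ed in e if x not in ed))
        if has_minor(rest, h, seen):
            return True
    return False

def graph(vertices, edges):
    return (frozenset(vertices), frozenset(frozenset(e) for e in edges))

c4 = graph(range(4), [(0, 1), (1, 2), (2, 3), (3, 0)])
p4 = graph(range(4), [(0, 1), (1, 2), (2, 3)])
k3 = graph(range(3), [(0, 1), (1, 2), (2, 0)])
print(has_minor(c4, k3))  # True: contracting any edge of C4 yields K3
print(has_minor(p4, k3))  # False: a forest has no K3 minor
```

The second test illustrates the footnoted remark that a simple graph is a forest if and only if it has no K3 minor.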

29.7 Notes

[1] Lovász (2006), p. 77; Wagner (1937a).

[2] Lovász (2006), theorem 4, p. 78; Robertson & Seymour (2004).

[3] Robertson & Seymour (1995).

[4] Fellows & Langston (1988).

[5] Lovász (2006) is inconsistent about whether to allow self-loops and multiple adjacencies: he writes on p. 76 that “parallel edges and loops are allowed” but on p. 77 he states that “a graph is a forest if and only if it does not contain the triangle K3 as a minor”, true only for simple graphs.

[6] Diestel (2005), Chapter 12: Minors, Trees, and WQO; Robertson & Seymour (2004).

[7] Lovász (2006), p. 76.

[8] Lovász (2006), pp. 80–82; Robertson & Seymour (2003).

[9] Mader (1967).

[10] Kostochka (1982); Kostochka (1984); Thomason (1984); Thomason (2001).

[11] Alon, Seymour & Thomas (1990); Plotkin, Rao & Smith (1994); Reed & Wood (2009).

[12] Grohe (2003)

[13] Hadwiger (1943).

[14] Robertson, Seymour & Thomas (1993).

[15] Thomas (1999); Pegg (2002).

[16] Robertson & Seymour (1983).

[17] Lovász (2006), Theorem 9, p. 81; Robertson & Seymour (1986).

[18] Eppstein (2000); Demaine & Hajiaghayi (2004).

[19] Robertson & Seymour (1993); Demaine, Hajiaghayi & Thilikos (2002).

[20] Wagner (1937a); Wagner (1937b); Hall (1943).

[21] Diestel 2005, p. 20

[22] Diestel 2005, p. 22

[23] Ding (1996).

[24] Buchheim et al. (2014).

[25] Nešetřil & Ossona de Mendez (2012).

[26] Nešetřil & Ossona de Mendez (2012), pp. 319–321.

[27] Kawarabayashi, Kobayashi & Reed (2012).

29.8 References

• Alon, Noga; Seymour, Paul; Thomas, Robin (1990), “A separator theorem for nonplanar graphs”, Journal of the American Mathematical Society 3 (4): 801–808, doi:10.2307/1990903, JSTOR 1990903, MR 1065053.

• Bollobás, B.; Catlin, P. A.; Erdős, Paul (1980), “Hadwiger’s conjecture is true for almost every graph” (PDF), European Journal of Combinatorics 1: 195–199.

• Buchheim, Christoph; Chimani, Markus; Gutwenger, Carsten; Jünger, Michael; Mutzel, Petra (2014), “Crossings and planarization”, in Tamassia, Roberto, Handbook of Graph Drawing and Visualization, Discrete Mathematics and its Applications (Boca Raton), CRC Press, Boca Raton, FL.

• Demaine, Erik D.; Hajiaghayi, MohammadTaghi (2004), “Diameter and treewidth in minor-closed graph families, revisited”, Algorithmica 40 (3): 211–215, doi:10.1007/s00453-004-1106-1.

• Demaine, Erik D.; Hajiaghayi, MohammadTaghi; Thilikos, Dimitrios M. (2002), “1.5-Approximation for treewidth of graphs excluding a graph with one crossing as a minor”, Proc. 5th International Workshop on Approximation Algorithms for Combinatorial Optimization (APPROX 2002), Lecture Notes in Computer Science 2462, Springer-Verlag, pp. 67–80, doi:10.1007/3-540-45753-4_8.

• Diestel, Reinhard (2005), Graph Theory (3rd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-26183-4.

• Ding, Guoli (1996), “Excluding a long double path minor”, Journal of Combinatorial Theory, Series B 66 (1): 11–23, doi:10.1006/jctb.1996.0002, MR 1368512.

• Eppstein, David (2000), “Diameter and treewidth in minor-closed graph families”, Algorithmica 27: 275–291, arXiv:math.CO/9907126, doi:10.1007/s004530010020, MR 2001c:05132.

• Fellows, Michael R.; Langston, Michael A. (1988), “Nonconstructive tools for proving polynomial-time decidability”, Journal of the ACM 35 (3): 727–739, doi:10.1145/44483.44491.

• Grohe, Martin (2003), “Local tree-width, excluded minors, and approximation algorithms”, Combinatorica 23 (4): 613–632, doi:10.1007/s00493-003-0037-9.

• Hadwiger, Hugo (1943), "Über eine Klassifikation der Streckenkomplexe”, Vierteljschr. Naturforsch. Ges. Zürich 88: 133–143.

• Hall, Dick Wick (1943), “A note on primitive skew curves”, Bulletin of the American Mathematical Society 49 (12): 935–936, doi:10.1090/S0002-9904-1943-08065-2.

• Kawarabayashi, Ken-ichi; Kobayashi, Yusuke; Reed, Bruce (March 2012), “The disjoint paths problem in quadratic time”, Journal of Combinatorial Theory, Series B 102 (2): 424–435, doi:10.1016/j.jctb.2011.07.004.

• Kostochka, Alexandr V. (1982), “The minimum Hadwiger number for graphs with a given mean degree of vertices”, Metody Diskret. Analiz. (in Russian) 38: 37–58.

• Kostochka, Alexandr V. (1984), “Lower bound of the Hadwiger number of graphs by their average degree”, Combinatorica 4: 307–316, doi:10.1007/BF02579141.

• Lovász, László (2006), “Graph minor theory”, Bulletin of the American Mathematical Society 43 (1): 75–86, doi:10.1090/S0273-0979-05-01088-8.

• Mader, Wolfgang (1967), “Homomorphieeigenschaften und mittlere Kantendichte von Graphen”, Mathematische Annalen 174 (4): 265–268, doi:10.1007/BF01364272.

• Nešetřil, Jaroslav; Ossona de Mendez, Patrice (2012), Sparsity: Graphs, Structures, and Algorithms, Algorithms and Combinatorics 28, Springer, pp. 62–65, doi:10.1007/978-3-642-27875-4, ISBN 978-3-642-27874-7, MR 2920058.

• Pegg, Ed, Jr. (2002), “Book Review: The Colossal Book of Mathematics” (PDF), Notices of the American Mathematical Society 49 (9): 1084–1086.

• Plotkin, Serge; Rao, Satish; Smith, Warren D. (1994), “Shallow excluded minors and improved graph decompositions”, Proc. 5th ACM–SIAM Symp. on Discrete Algorithms (SODA 1994), pp. 462–470.

• Reed, Bruce; Wood, David R. (2009), “A linear-time algorithm to find a separator in a graph excluding a minor”, ACM Transactions on Algorithms 5 (4): Article 39, doi:10.1145/1597036.1597043.

• Robertson, Neil; Seymour, Paul (1983), “Graph minors. I. Excluding a forest”, Journal of Combinatorial Theory, Series B 35 (1): 39–61, doi:10.1016/0095-8956(83)90079-5.

• Robertson, Neil; Seymour, Paul D. (1986), “Graph minors. V. Excluding a planar graph”, Journal of Combinatorial Theory, Series B 41 (1): 92–114, doi:10.1016/0095-8956(86)90030-4.

• Robertson, Neil; Seymour, Paul D. (1993), “Excluding a graph with one crossing”, in Robertson, Neil; Seymour, Paul, Graph Structure Theory: Proc. AMS–IMS–SIAM Joint Summer Research Conference on Graph Minors, Contemporary Mathematics 147, American Mathematical Society, pp. 669–675.

• Robertson, Neil; Seymour, Paul D. (1995), “Graph Minors. XIII. The disjoint paths problem”, Journal of Combinatorial Theory, Series B 63 (1): 65–110, doi:10.1006/jctb.1995.1006.

• Robertson, Neil; Seymour, Paul D. (2003), “Graph Minors. XVI. Excluding a non-planar graph”, Journal of Combinatorial Theory, Series B 89 (1): 43–76, doi:10.1016/S0095-8956(03)00042-X.

• Robertson, Neil; Seymour, Paul D. (2004), “Graph Minors. XX. Wagner’s conjecture”, Journal of Combinatorial Theory, Series B 92 (2): 325–357, doi:10.1016/j.jctb.2004.08.001.

• Robertson, Neil; Seymour, Paul; Thomas, Robin (1993), “Hadwiger’s conjecture for K6-free graphs” (PDF), Combinatorica 13: 279–361, doi:10.1007/BF01202354.

• Thomas, Robin (1999), “Recent excluded minor theorems for graphs”, Surveys in Combinatorics, 1999 (Canterbury) (PDF), London Math. Soc. Lecture Note Ser. 267, Cambridge: Cambridge Univ. Press, pp. 201–222, MR 1725004.

• Thomason, Andrew (1984), “An extremal function for contractions of graphs”, Mathematical Proceedings of the Cambridge Philosophical Society 95 (2): 261–265, doi:10.1017/S0305004100061521.

• Thomason, Andrew (2001), “The extremal function for complete minors”, Journal of Combinatorial Theory, Series B 81 (2): 318–338, doi:10.1006/jctb.2000.2013.

• Wagner, Klaus (1937a), "Über eine Eigenschaft der ebenen Komplexe”, Math. Ann. 114: 570–590, doi:10.1007/BF01594196.

• Wagner, Klaus (1937b), "Über eine Erweiterung des Satzes von Kuratowski”, Deutsche Mathematik 2: 280–285.

29.9 External links

• Weisstein, Eric W., “Graph Minor”, MathWorld.

Chapter 30

Graph theory

This article is about sets of vertices connected by edges. For graphs of mathematical functions, see Graph of a function. For other uses, see Graph (disambiguation).

In mathematics and computer science, graph theory is the study of graphs, which are mathematical structures used

A drawing of a graph

to model pairwise relations between objects. A “graph” in this context is made up of "vertices" or “nodes” and lines called edges that connect them. A graph may be undirected, meaning that there is no distinction between the two vertices associated with each edge, or its edges may be directed from one vertex to another; see graph (mathematics) for more detailed definitions and for other variations in the types of graph that are commonly considered. Graphs are one of the prime objects of study in discrete mathematics. Refer to the glossary of graph theory for basic definitions in graph theory.


30.1 Definitions

Definitions in graph theory vary. The following are some of the more basic ways of defining graphs and related mathematical structures.

30.1.1 Graph

In the most common sense of the term,[1] a graph is an ordered pair G = (V, E) comprising a set V of vertices or nodes together with a set E of edges or lines, which are 2-element subsets of V (i.e., an edge is associated with two vertices, and this association is represented as an unordered pair of the vertices). To avoid ambiguity, this type of graph may be described precisely as undirected and simple. Other senses of graph stem from different conceptions of the edge set. In one more generalized notion,[2] V is a set together with a relation of incidence that associates two vertices with each edge. In another generalized notion, E is a multiset of unordered pairs of (not necessarily distinct) vertices. Many authors call this type of object a multigraph or pseudograph. All of these variants and others are described more fully below. The vertices belonging to an edge are called the ends, endpoints, or end vertices of the edge. A vertex may exist in a graph and not belong to an edge. V and E are usually taken to be finite; many of the well-known results are not true (or are rather different) for infinite graphs because many of the arguments fail in the infinite case. The order of a graph is |V|, the number of vertices. A graph’s size is |E|, the number of edges. The degree or valency of a vertex is the number of edges that connect to it, where an edge that connects to the vertex at both ends (a loop) is counted twice. For an edge {u, v}, graph theorists usually use the somewhat shorter notation uv.
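As a concrete illustration of these definitions, here is a minimal sketch (ours, not from the article) computing order, size, and degree for a small undirected multigraph given as an edge list; the rule that a loop contributes two to the degree is the detail worth checking.

```python
# Order, size, and degree for a multigraph given as (vertex set, edge list).

def order(vertices, edges):
    return len(vertices)                 # |V|, the number of vertices

def size(vertices, edges):
    return len(edges)                    # |E|, the number of edges

def degree(vertices, edges, v):
    # Each endpoint counts once, so a loop (v, v) contributes 2.
    return sum((a == v) + (b == v) for a, b in edges)

V = {1, 2, 3}
E = [(1, 2), (2, 3), (3, 3)]             # (3, 3) is a loop
print(order(V, E), size(V, E))           # 3 3
print(degree(V, E, 3))                   # 3: one edge to vertex 2, plus the loop twice
```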

30.2 Applications

Graphs can be used to model many types of relations and processes in physical, biological,[4] social and information systems. Many practical problems can be represented by graphs. In computer science, graphs are used to represent networks of communication, data organization, computational devices, the flow of computation, etc. For instance, the link structure of a website can be represented by a directed graph, in which the vertices represent web pages and directed edges represent links from one page to another. A similar approach can be taken to problems in travel, biology, computer chip design, and many other fields. The development of algorithms to handle graphs is therefore of major interest in computer science. The transformation of graphs is often formalized and represented by graph rewrite systems. Complementary to graph transformation systems focusing on rule-based in-memory manipulation of graphs are graph databases geared towards transaction- safe, persistent storing and querying of graph-structured data. Graph-theoretic methods, in various forms, have proven particularly useful in linguistics, since natural language often lends itself well to discrete structure. Traditionally, syntax and compositional semantics follow tree-based structures, whose expressive power lies in the principle of compositionality, modeled in a hierarchical graph. More contemporary approaches such as head-driven phrase structure grammar model the syntax of natural language using typed feature structures, which are directed acyclic graphs. Within lexical semantics, especially as applied to computers, modeling word meaning is easier when a given word is understood in terms of related words; semantic networks are therefore important in computational linguistics. Still other methods in phonology (e.g. optimality theory, which uses lattice graphs) and morphology (e.g. finite-state morphology, using finite-state transducers) are common in the analysis of language as a graph. 
Indeed, the usefulness of this area of mathematics to linguistics has given rise to organizations such as TextGraphs, as well as various 'Net' projects, such as WordNet, VerbNet, and others. Graph theory is also used to study molecules in chemistry and physics. In condensed matter physics, the three-dimensional structure of complicated simulated atomic structures can be studied quantitatively by gathering statistics on graph-theoretic properties related to the topology of the atoms. In chemistry a graph makes a natural model for a molecule, where vertices represent atoms and edges bonds. This approach is especially used in computer processing of molecular structures, ranging from chemical editors to database searching. In statistical physics, graphs can represent


The network graph formed by Wikipedia editors (edges) contributing to different Wikipedia language versions (nodes) during one month in summer 2013.[3]

local connections between interacting parts of a system, as well as the dynamics of a physical process on such systems. Graphs are also used to represent the micro-scale channels of porous media, in which the vertices represent the pores and the edges represent the smaller channels connecting the pores. Graph theory is also widely used in sociology as a way, for example, to measure actors’ prestige or to explore rumor spreading, notably through the use of social network analysis software. Under the umbrella of social networks are many different types of graphs:[5] Acquaintanceship and friendship graphs describe whether people know each other. Influence graphs model whether certain people can influence the behavior of others. Finally, collaboration graphs model whether two people work together in a particular way, such as acting in a movie together. Likewise, graph theory is useful in biology and conservation efforts, where a vertex can represent regions where certain species exist (or habitats) and the edges represent migration paths or movement between the regions. This information is important when looking at breeding patterns or tracking the spread of disease, parasites, or how changes to the movement can affect other species. In mathematics, graphs are useful in geometry and certain parts of topology such as knot theory. Algebraic graph theory has close links with group theory. A graph structure can be extended by assigning a weight to each edge of the graph. Graphs with weights, or weighted graphs, are used to represent structures in which pairwise connections have some numerical values. For example, if a graph represents a road network, the weights could represent the length of each road.

30.3 History

The Königsberg Bridge problem

The paper written by Leonhard Euler on the Seven Bridges of Königsberg and published in 1736 is regarded as the first paper in the history of graph theory.[6] This paper, as well as the one written by Vandermonde on the knight problem, carried on with the analysis situs initiated by Leibniz. Euler’s formula relating the number of edges, vertices, and faces of a convex polyhedron was studied and generalized by Cauchy[7] and L'Huillier,[8] and is at the origin of topology. More than one century after Euler’s paper on the bridges of Königsberg and while Listing introduced topology, Cayley was led by the study of particular analytical forms arising from differential calculus to study a particular class of graphs, the trees.[9] This study had many implications in theoretical chemistry. The involved techniques mainly concerned the enumeration of graphs having particular properties. Enumerative graph theory then rose from the results of Cayley and the fundamental results published by Pólya between 1935 and 1937 and the generalization of these by De Bruijn in 1959. Cayley linked his results on trees with the contemporary studies of chemical composition.[10] The fusion of the ideas coming from mathematics with those coming from chemistry is at the origin of a part of the standard terminology of graph theory. In particular, the term “graph” was introduced by Sylvester in a paper published in 1878 in Nature, where he draws an analogy between “quantic invariants” and “co-variants” of algebra and molecular diagrams:[11]

"[...] Every invariant and co-variant thus becomes expressible by a graph precisely identical with a Kekuléan diagram or chemicograph. [...] I give a rule for the geometrical multiplication of graphs, i.e. for constructing a graph to the product of in- or co-variants whose separate graphs are given. [...]" (italics as in the original).

The first textbook on graph theory was written by Dénes Kőnig, and published in 1936.[12] Another book by Frank Harary, published in 1969, was “considered the world over to be the definitive textbook on the subject”,[13] and enabled mathematicians, chemists, electrical engineers and social scientists to talk to each other. Harary donated all of the royalties to fund the Pólya Prize.[14]

One of the most famous and stimulating problems in graph theory is the four color problem: “Is it true that any map drawn in the plane may have its regions colored with four colors, in such a way that any two regions having a common border have different colors?" This problem was first posed by Francis Guthrie in 1852 and its first written record is in a letter of De Morgan addressed to Hamilton the same year. Many incorrect proofs have been proposed, including those by Cayley, Kempe, and others. The study and the generalization of this problem by Tait, Heawood, Ramsey and Hadwiger led to the study of the colorings of graphs embedded on surfaces with arbitrary genus. Tait’s reformulation generated a new class of problems, the factorization problems, particularly studied by Petersen and Kőnig. The works of Ramsey on colorations and, more specifically, the results obtained by Turán in 1941 were at the origin of another branch of graph theory, extremal graph theory. The four color problem remained unsolved for more than a century. In 1969 Heinrich Heesch published a method for solving the problem using computers.[15] A computer-aided proof produced in 1976 by Kenneth Appel and Wolfgang Haken makes fundamental use of the notion of “discharging” developed by Heesch.[16][17] The proof involved checking the properties of 1,936 configurations by computer, and was not fully accepted at the time due to its complexity. A simpler proof considering only 633 configurations was given twenty years later by Robertson, Seymour, Sanders and Thomas.[18] The autonomous development of topology between 1860 and 1930 fertilized graph theory back through the works of Jordan, Kuratowski and Whitney. Another important factor of common development of graph theory and topology came from the use of the techniques of modern algebra. 
The first example of such a use comes from the work of the physicist Gustav Kirchhoff, who published in 1845 his Kirchhoff’s circuit laws for calculating the voltage and current in electric circuits. The introduction of probabilistic methods in graph theory, especially in the study of Erdős and Rényi of the asymptotic probability of graph connectivity, gave rise to yet another branch, known as random graph theory, which has been a fruitful source of graph-theoretic results.

30.4 Graph drawing

Main article: Graph drawing

Graphs are represented visually by drawing a dot or circle for every vertex, and drawing an arc between two vertices if they are connected by an edge. If the graph is directed, the direction is indicated by drawing an arrow. A graph drawing should not be confused with the graph itself (the abstract, non-visual structure) as there are several ways to structure the graph drawing. All that matters is which vertices are connected to which others by how many edges and not the exact layout. In practice it is often difficult to decide if two drawings represent the same graph. Depending on the problem domain some layouts may be better suited and easier to understand than others. The pioneering work of W. T. Tutte was very influential in the subject of graph drawing. Among other achievements, he introduced the use of linear algebraic methods to obtain graph drawings. Graph drawing also can be said to encompass problems that deal with the crossing number and its various generalizations. The crossing number of a graph is the minimum number of intersections between edges that a drawing of the graph in the plane must contain. For a planar graph, the crossing number is zero by definition. Drawings on surfaces other than the plane are also studied.

30.5 Graph-theoretic data structures

Main article: Graph (abstract data type)

There are different ways to store graphs in a computer system. The data structure used depends on both the graph structure and the algorithm used for manipulating the graph. Theoretically one can distinguish between list and matrix structures, but in concrete applications the best structure is often a combination of both. List structures are often preferred for sparse graphs as they have smaller memory requirements. Matrix structures on the other hand provide faster access for some applications but can consume huge amounts of memory. List structures include the incidence list, an array of pairs of vertices, and the adjacency list, which separately lists

the neighbors of each vertex: Much like the incidence list, each vertex has a list of which vertices it is adjacent to. Matrix structures include the incidence matrix, a matrix of 0’s and 1’s whose rows represent vertices and whose columns represent edges, and the adjacency matrix, in which both the rows and columns are indexed by vertices. In both cases a 1 indicates two adjacent objects and a 0 indicates two non-adjacent objects. The Laplacian matrix is a modified form of the adjacency matrix that incorporates information about the degrees of the vertices, and is useful in some calculations such as Kirchhoff’s theorem on the number of spanning trees of a graph. The distance matrix, like the adjacency matrix, has both its rows and columns indexed by vertices, but rather than containing a 0 or a 1 in each cell it contains the length of a shortest path between two vertices.
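The paragraph above notes that the Laplacian matrix feeds into Kirchhoff's theorem on spanning trees. Here is a small self-contained sketch tying the two together: build L = D − A from an edge list, then count spanning trees as a cofactor of L. The function names are ours, and exact rational arithmetic keeps the determinant exact.

```python
from fractions import Fraction

def laplacian(n, edges):
    """Laplacian L = D - A of a simple graph on vertices 0..n-1."""
    L = [[0] * n for _ in range(n)]
    for a, b in edges:
        L[a][a] += 1
        L[b][b] += 1
        L[a][b] -= 1
        L[b][a] -= 1
    return L

def det(m):
    """Determinant by Gaussian elimination over exact rationals."""
    m = [[Fraction(x) for x in row] for row in m]
    n, result = len(m), Fraction(1)
    for col in range(n):
        pivot = next((r for r in range(col, n) if m[r][col]), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:                      # row swap flips the sign
            m[col], m[pivot] = m[pivot], m[col]
            result = -result
        result *= m[col][col]
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            for c in range(col, n):
                m[r][c] -= factor * m[col][c]
    return result

def spanning_trees(n, edges):
    """Kirchhoff's theorem: any cofactor of L counts the spanning trees."""
    L = laplacian(n, edges)
    minor = [row[1:] for row in L[1:]]        # delete row 0 and column 0
    return int(det(minor))

# Cayley's formula gives n^(n-2) spanning trees for K_n; for K4 that is 16.
k4_edges = [(a, b) for a in range(4) for b in range(a + 1, 4)]
print(spanning_trees(4, k4_edges))  # 16
```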

30.6 Problems in graph theory

30.6.1 Enumeration

There is a large literature on graphical enumeration: the problem of counting graphs meeting specified conditions. Some of this work is found in Harary and Palmer (1973).

30.6.2 Subgraphs, induced subgraphs, and minors

A common problem, called the subgraph isomorphism problem, is finding a fixed graph as a subgraph in a given graph. One reason to be interested in such a question is that many graph properties are hereditary for subgraphs, which means that a graph has the property if and only if all subgraphs have it too. Unfortunately, finding maximal subgraphs of a certain kind is often an NP-complete problem.

• Finding the largest complete subgraph is called the clique problem (NP-complete).
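To illustrate why this problem is hard, here is a hypothetical brute-force search that tries every vertex subset, largest first; the running time is exponential in the number of vertices, which is exactly what NP-completeness suggests we cannot avoid in general.

```python
from itertools import combinations

def largest_clique(vertices, edges):
    """Return a maximum clique by exhaustive search (exponential time)."""
    edge_set = {frozenset(e) for e in edges}
    for k in range(len(vertices), 0, -1):
        for subset in combinations(sorted(vertices), k):
            # A clique requires every pair in the subset to be an edge.
            if all(frozenset(p) in edge_set for p in combinations(subset, 2)):
                return set(subset)
    return set()

# K4 with a pendant vertex attached: the largest clique is the K4 itself.
V = {0, 1, 2, 3, 4}
E = [(a, b) for a in range(4) for b in range(a + 1, 4)] + [(3, 4)]
print(largest_clique(V, E))  # {0, 1, 2, 3}
```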

A similar problem is finding induced subgraphs in a given graph. Again, some important graph properties are hered- itary with respect to induced subgraphs, which means that a graph has a property if and only if all induced subgraphs also have it. Finding maximal induced subgraphs of a certain kind is also often NP-complete. For example,

• Finding the largest edgeless induced subgraph, or independent set, called the independent set problem (NP-complete).

Still another such problem, the minor containment problem, is to find a fixed graph as a minor of a given graph. A minor or subcontraction of a graph is any graph obtained by taking a subgraph and contracting some (or no) edges. Many graph properties are hereditary for minors, which means that a graph has a property if and only if all minors have it too. A famous example:

• A graph is planar if it contains as a minor neither the complete bipartite graph K3,3 (See the Three-cottage problem) nor the complete graph K5 .

Another class of problems has to do with the extent to which various species and generalizations of graphs are deter- mined by their point-deleted subgraphs, for example:

• The reconstruction conjecture.

30.6.3 Graph coloring

Many problems have to do with various ways of coloring graphs, for example:

• The four-color theorem

• The strong perfect graph theorem

• The Erdős–Faber–Lovász conjecture (unsolved)

• The total coloring conjecture, also called Behzad’s conjecture (unsolved)

• The list coloring conjecture (unsolved)

• The Hadwiger conjecture (graph theory) (unsolved)

30.6.4 Subsumption and unification

Constraint modeling theories concern families of directed graphs related by a partial order. In these applications, graphs are ordered by specificity, meaning that more constrained graphs—which are more specific and thus contain a greater amount of information—are subsumed by those that are more general. Operations between graphs include evaluating the direction of a subsumption relationship between two graphs, if any, and computing graph unification. The unification of two argument graphs is defined as the most general graph (or the computation thereof) that is consistent with (i.e. contains all of the information in) the inputs, if such a graph exists; efficient unification algorithms are known. For constraint frameworks which are strictly compositional, graph unification is the sufficient satisfiability and combination function. Well-known applications include automatic theorem proving and modeling the elaboration of linguistic structure.

30.6.5 Route problems

• Hamiltonian path and cycle problems

• Minimum spanning tree

• Route inspection problem (also called the “Chinese Postman Problem”)

• Seven Bridges of Königsberg

• Shortest path problem

• Steiner tree

• Three-cottage problem

• Traveling salesman problem (NP-hard)

30.6.6 Network flow

There are numerous problems arising especially from applications that have to do with various notions of flows in networks, for example:

• Max flow min cut theorem

30.6.7 Visibility problems

• Museum guard problem

30.6.8 Covering problems

Covering problems in graphs are specific instances of subgraph-finding problems, and they tend to be closely related to the clique problem or the independent set problem.

• Set cover problem

• Vertex cover problem

30.6.9 Decomposition problems

Decomposition, defined as partitioning the edge set of a graph (with as many vertices as necessary accompanying the edges of each part of the partition), has given rise to a wide variety of questions. Often, it is required to decompose a graph into subgraphs isomorphic to a fixed graph; for instance, decomposing a complete graph into Hamiltonian cycles. Other problems specify a family of graphs into which a given graph should be decomposed, for instance, a family of cycles, or decomposing a complete graph Kn into n − 1 specified trees having, respectively, 1, 2, 3, ..., n − 1 edges. Some specific decomposition problems that have been studied include:

• Arboricity, a decomposition into as few forests as possible

• Cycle double cover, a decomposition into a collection of cycles covering each edge exactly twice

• Edge coloring, a decomposition into as few matchings as possible

• Graph factorization, a decomposition of a regular graph into regular subgraphs of given degrees

30.6.10 Graph classes

Many problems involve characterizing the members of various classes of graphs. Some examples of such questions are below:

• Enumerating the members of a class

• Characterizing a class in terms of forbidden substructures

• Ascertaining relationships among classes (e.g., does one property of graphs imply another)

• Finding efficient algorithms to decide membership in a class

• Finding representations for members of a class.

30.7 See also

• Gallery of named graphs

• Glossary of graph theory

• List of graph theory topics

• Publications in graph theory

30.7.1 Related topics

• Algebraic graph theory

• Citation graph

• Conceptual graph

• Data structure

• Disjoint-set data structure

• Dual-phase evolution

• Entitative graph

• Existential graph

• Graph algebras

• Graph automorphism

• Graph coloring

• Graph database

• Graph data structure

• Graph drawing

• Graph equation

• Graph rewriting

• Graph sandwich problem

• Graph property

• Intersection graph

• Logical graph

• Loop

• Network theory

• Null graph

• Pebble motion problems

• Percolation

• Perfect graph

• Quantum graph

• Random regular graphs

• Semantic networks

• Spectral graph theory

• Strongly regular graphs

• Symmetric graphs

• Transitive reduction

• Tree data structure

30.7.2 Algorithms

• Bellman–Ford algorithm

• Dijkstra’s algorithm

• Ford–Fulkerson algorithm

• Kruskal’s algorithm

• Nearest neighbour algorithm

• Prim’s algorithm

• Depth-first search

• Breadth-first search

30.7.3 Subareas

• Algebraic graph theory

• Geometric graph theory

• Extremal graph theory

• Probabilistic graph theory

• Topological graph theory

30.7.4 Related areas of mathematics

• Combinatorics

• Group theory

• Knot theory

• Ramsey theory

30.7.5 Generalizations

• Hypergraph

• Abstract simplicial complex

30.7.6 Prominent graph theorists

• Alon, Noga

• Berge, Claude

• Bollobás, Béla

• Bondy, Adrian John

• Brightwell, Graham

• Chudnovsky, Maria

• Chung, Fan

• Dirac, Gabriel Andrew

• Erdős, Paul

• Euler, Leonhard

• Faudree, Ralph

• Golumbic, Martin

• Graham, Ronald

• Harary, Frank

• Heawood, Percy John

• Kotzig, Anton

• Kőnig, Dénes

• Lovász, László

• Murty, U. S. R.

• Nešetřil, Jaroslav

• Rényi, Alfréd

• Ringel, Gerhard

• Robertson, Neil

• Seymour, Paul

• Szemerédi, Endre

• Thomas, Robin

• Thomassen, Carsten

• Turán, Pál

• Tutte, W. T.

• Whitney, Hassler

30.8 Notes

[1] See, for instance, Iyanaga and Kawada, 69 J, p. 234 or Biggs, p. 4.

[2] See, for instance, Graham et al., p. 5.

[3] Hale, Scott A. (2013). “Multilinguals and Wikipedia Editing”. arXiv:1312.0976 [cs.CY].

[4] Mashaghi, A. et al. (2004). “Investigation of a protein ”. European Physical Journal B 41 (1): 113–121. doi:10.1140/epjb/e2004-00301-0.

[5] Rosen, Kenneth H. Discrete mathematics and its applications (7th ed.). New York: McGraw-Hill. ISBN 978-0-07-338309-5.

[6] Biggs, N.; Lloyd, E. and Wilson, R. (1986), Graph Theory, 1736-1936, Oxford University Press

[7] Cauchy, A.L. (1813), “Recherche sur les polyèdres - premier mémoire”, Journal de l'École Polytechnique, 9 (Cahier 16): 66–86.

[8] L'Huillier, S.-A.-J. (1861), “Mémoire sur la polyèdrométrie”, Annales de Mathématiques 3: 169–189.

[9] Cayley, A. (1857), “On the theory of the analytical forms called trees”, Philosophical Magazine, Series IV 13 (85): 172–176, doi:10.1017/CBO9780511703690.046.

[10] Cayley, A. (1875), “Ueber die Analytischen Figuren, welche in der Mathematik Bäume genannt werden und ihre Anwendung auf die Theorie chemischer Verbindungen”, Berichte der deutschen Chemischen Gesellschaft 8 (2): 1056–1059, doi:10.1002/cber.18750080252.

[11] Sylvester, James Joseph (1878). “Chemistry and Algebra”. Nature 17: 284. doi:10.1038/017284a0.

[12] Tutte, W.T. (2001), Graph Theory, Cambridge University Press, p. 30, ISBN 978-0-521-79489-3.

[13] Gardner, Martin (1992), Fractal Music, Hypercards, and more...Mathematical Recreations from Scientific American, W. H. Freeman and Company, p. 203.

[14] Society for Industrial and Applied Mathematics (2002), “The George Polya Prize”, Looking Back, Looking Ahead: A SIAM History (PDF), p. 26.

[15] Heinrich Heesch: Untersuchungen zum Vierfarbenproblem. Mannheim: Bibliographisches Institut 1969.

[16] Appel, K. and Haken, W. (1977), “Every planar map is four colorable. Part I. Discharging”, Illinois J. Math. 21: 429–490.

[17] Appel, K. and Haken, W. (1977), “Every planar map is four colorable. Part II. Reducibility”, Illinois J. Math. 21: 491–567.

[18] Robertson, N.; Sanders, D.; Seymour, P. and Thomas, R. (1997), “The four color theorem”, Journal of Combinatorial Theory Series B 70: 2–44, doi:10.1006/jctb.1997.1750.

30.9 References

• Berge, Claude (1958), Théorie des graphes et ses applications, Collection Universitaire de Mathématiques II, Paris: Dunod. English edition, Wiley 1961; Methuen & Co, New York 1962; Russian, Moscow 1961; Spanish, Mexico 1962; Romanian, Bucharest 1969; Chinese, Shanghai 1963; Second printing of the 1962 first English edition, Dover, New York 2001.

• Biggs, N.; Lloyd, E.; Wilson, R. (1986), Graph Theory, 1736–1936, Oxford University Press.

• Bondy, J.A.; Murty, U.S.R. (2008), Graph Theory, Springer, ISBN 978-1-84628-969-9.

• Bollobás, B.; Riordan, O.M. (2003), Mathematical results on scale-free random graphs in “Handbook of Graphs and Networks” (S. Bornholdt and H.G. Schuster (eds)), Wiley VCH, Weinheim, 1st ed.

• Chartrand, Gary (1985), Introductory Graph Theory, Dover, ISBN 0-486-24775-9.

• Gibbons, Alan (1985), Algorithmic Graph Theory, Cambridge University Press.

• Reuven Cohen, Shlomo Havlin (2010), Complex Networks: Structure, Robustness and Function, Cambridge University Press

• Golumbic, Martin (1980), Algorithmic Graph Theory and Perfect Graphs, Academic Press.

• Harary, Frank (1969), Graph Theory, Reading, MA: Addison-Wesley.

• Harary, Frank; Palmer, Edgar M. (1973), Graphical Enumeration, New York, NY: Academic Press.

• Mahadev, N.V.R.; Peled, Uri N. (1995), Threshold Graphs and Related Topics, North-Holland.

• Mark Newman (2010), Networks: An Introduction, Oxford University Press.

30.10 External links

• Graph theory with examples

• Hazewinkel, Michiel, ed. (2001), “Graph theory”, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

• Graph theory tutorial

• A searchable database of small connected graphs

• Image gallery: graphs at the Wayback Machine (archived February 6, 2006)

• Concise, annotated list of graph theory resources for researchers

• rocs — a graph theory IDE

• The Social Life of Routers — non-technical paper discussing graphs of people and computers

• Graph Theory Software — tools to teach and learn graph theory

• Online books, and library resources in your library and in other libraries about graph theory

30.10.1 Online textbooks

• Phase Transitions in Combinatorial Optimization Problems, Section 3: Introduction to Graphs (2006) by Hartmann and Weigt

• Digraphs: Theory, Algorithms and Applications (2007) by Jorgen Bang-Jensen and Gregory Gutin

• Graph Theory, by Reinhard Diestel

Chapter 31

Homotopy

This article is about topology. For chemistry, see Homotopic groups.

In topology, two continuous functions from one topological space to another are called homotopic (Greek ὁμός (homós) = same, similar, and τόπος (tópos) = place) if one can be “continuously deformed” into the other, such a deformation being called a homotopy between the two functions. A notable use of homotopy is the definition of homotopy groups and cohomotopy groups, important invariants in algebraic topology. In practice, there are technical difficulties in using homotopies with certain spaces. Algebraic topologists work with compactly generated spaces, CW complexes, or spectra.

The two dashed paths shown above are homotopic relative to their endpoints. The animation represents one possible homotopy.


A homotopy between two embeddings of the torus into R3: as “the surface of a doughnut” and as “the surface of a coffee mug”. This is also an example of an isotopy.

31.1 Formal definition

Formally, a homotopy between two continuous functions f and g from a topological space X to a topological space Y is defined to be a continuous function H : X × [0,1] → Y from the product of the space X with the unit interval [0,1] to Y such that H(x,0) = f(x) and H(x,1) = g(x) for all x ∈ X.

If we think of the second parameter of H as time, then H describes a continuous deformation of f into g: at time 0 we have the function f and at time 1 we have the function g. We can also think of the second parameter as a “slider control” that allows us to smoothly transition from f to g as the slider moves from 0 to 1, and vice versa.

An alternative notation is to say that a homotopy between two continuous functions f, g : X → Y is a family of continuous functions ht : X → Y for t ∈ [0,1] such that h0 = f and h1 = g, and the map (x,t) ↦ ht(x) is continuous from X × [0,1] to Y. The two versions coincide by setting ht(x) = H(x,t). It is not sufficient to require each map ht(x) to be continuous.[1]

The animation that is looped above right provides an example of a homotopy between two embeddings, f and g, of the torus into R3: X is the torus, Y is R3; f is some continuous function from the torus to R3 that takes the torus to the embedded surface-of-a-doughnut shape with which the animation starts; g is some continuous function that takes the torus to the embedded surface-of-a-coffee-mug shape. The animation shows the image of ht(x) as a function of the parameter t, where t varies with time from 0 to 1 over each cycle of the animation loop. It pauses, then shows the image as t varies back from 1 to 0, pauses, and repeats this cycle.
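When Y is a convex subset of Euclidean space, an explicit homotopy between any two continuous maps f, g : X → Y is the straight-line homotopy H(x,t) = (1 − t)·f(x) + t·g(x). The sketch below checks the boundary conditions H(x,0) = f(x) and H(x,1) = g(x); the particular maps f and g are illustrative choices, not taken from the article.

```python
def straight_line_homotopy(f, g):
    """H(x, t) = (1 - t) f(x) + t g(x): continuous in (x, t) whenever
    f and g are continuous maps into a convex subset of the real line."""
    def H(x, t):
        return (1 - t) * f(x) + t * g(x)
    return H

f = lambda x: x * x        # an illustrative continuous map [0, 1] -> R
g = lambda x: 2 * x + 1    # another one
H = straight_line_homotopy(f, g)
assert H(0.5, 0) == f(0.5)  # at t = 0, H recovers f
assert H(0.5, 1) == g(0.5)  # at t = 1, H recovers g
```

Convexity matters only so that the line segment from f(x) to g(x) stays inside the codomain; for a general codomain no such formula is available.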

31.1.1 Properties

Continuous functions f and g are said to be homotopic if and only if there is a homotopy H taking f to g as described above. Being homotopic is an equivalence relation on the set of all continuous functions from X to Y. This homotopy relation is compatible with function composition in the following sense: if f1, g1 : X → Y are homotopic, and f2, g2 : Y → Z are homotopic, then their compositions f2 ∘ f1 and g2 ∘ g1 : X → Z are also homotopic.

31.2 Homotopy equivalence

Given two spaces X and Y, we say they are homotopy equivalent, or of the same homotopy type, if there exist continuous maps f : X → Y and g : Y → X such that g ∘ f is homotopic to the identity map idX and f ∘ g is homotopic to idY. The maps f and g are called homotopy equivalences in this case. Every homeomorphism is a homotopy equivalence, but the converse is not true: for example, a solid disk is not homeomorphic to a single point (since there is no bijection between them), although the disk and the point are homotopy equivalent (since you can deform the disk along radial lines continuously to a single point). Spaces that are homotopy equivalent to a point are called contractible.

Intuitively, two spaces X and Y are homotopy equivalent if they can be transformed into one another by bending, shrinking and expanding operations. For example, a solid disk or solid ball is homotopy equivalent to a point, and R2 − {(0,0)} is homotopy equivalent to the unit circle S1. However, one has to be careful not to think of such transformations in terms of embeddings only: for example, the double torus and the double torus with the rings interlinked are homotopy equivalent (since they are homeomorphic), even though the corresponding transformation cannot be carried out in three-dimensional Euclidean space without the rings “passing through” each other.

31.2.1 Null-homotopy

A function f is said to be null-homotopic if it is homotopic to a constant function. (The homotopy from f to a constant function is then sometimes called a null-homotopy.) For example, a map f from the unit circle S1 to any space X is null-homotopic precisely when it can be continuously extended to a map from the unit disk D2 to X that agrees with f on the boundary. It follows from these definitions that a space X is contractible if and only if the identity map from X to itself—which is always a homotopy equivalence—is null-homotopic.

31.3 Invariance

Homotopy equivalence is important because in algebraic topology many concepts are homotopy invariant, that is, they respect the relation of homotopy equivalence. For example, if X and Y are homotopy equivalent spaces, then:

• If X is path-connected then so is Y.

• If X is simply connected then so is Y.

• The (singular) homology and cohomology groups of X and Y are isomorphic.

• If X and Y are path-connected, then the fundamental groups of X and Y are isomorphic, and so are the higher homotopy groups. (Without the path-connectedness assumption, one has π1(X,x0) isomorphic to π1(Y,f(x0)) where f : X → Y is a homotopy equivalence and x0 ∈ X.)

An example of an algebraic invariant of topological spaces which is not homotopy-invariant is compactly supported homology (which is, roughly speaking, the homology of the compactification, and compactification is not homotopy-invariant).

31.4 Relative homotopy

In order to define the fundamental group, one needs the notion of homotopy relative to a subspace. These are homotopies which keep the elements of the subspace fixed. Formally: if f and g are continuous maps from X to Y and K is a subset of X, then we say that f and g are homotopic relative to K if there exists a homotopy H : X × [0,1] → Y between f and g such that H(k,t) = f(k) = g(k) for all k ∈ K and t ∈ [0,1]. Also, if g is a retract from X to K and f is the identity map, this is known as a strong deformation retract of X to K. When K is a point, the term pointed homotopy is used.

31.5 Groups

Main article: Homotopy group

Since the relation of two functions f, g : X → Y being homotopic relative to a subspace is an equivalence relation, we can look at the equivalence classes of maps between a fixed X and Y. If we fix X = [0,1]n, the unit interval [0,1] crossed with itself n times, and we take its boundary ∂([0,1]n) as a subspace, then the equivalence classes form a group, denoted πn(Y,y0), where y0 is in the image of the subspace ∂([0,1]n). We can define the action of one equivalence class on another, and so we get a group. These groups are called the homotopy groups. In the case n = 1, it is also called the fundamental group.

31.6 Category

The idea of homotopy can be turned into a formal category of category theory. The homotopy category is the category whose objects are topological spaces, and whose morphisms are homotopy equivalence classes of continuous maps. Two topological spaces X and Y are isomorphic in this category if and only if they are homotopy-equivalent. Then a functor on the category of topological spaces is homotopy invariant if it can be expressed as a functor on the homotopy category. For example, homology groups are a functorial homotopy invariant: this means that if f and g from X to Y are homotopic, then the group homomorphisms induced by f and g on the level of homology groups are the same: Hn(f) = Hn(g):Hn(X) → Hn(Y) for all n. Likewise, if X and Y are in addition path connected, and the homotopy between f and g is pointed, then the group homomorphisms induced by f and g on the level of homotopy groups are also the same: πn(f) = πn(g) : πn(X) → πn(Y).

31.7 Timelike

On a Lorentzian manifold, certain curves are distinguished as timelike. A timelike homotopy between two timelike curves is a homotopy such that each intermediate curve is timelike. No closed timelike curve (CTC) on a Lorentzian manifold is timelike homotopic to a point (that is, null timelike homotopic); such a manifold is therefore said to be multiply connected by timelike curves. A manifold such as the 3-sphere can be simply connected (by any type of curve), and yet be timelike multiply connected.

31.8 Lifting property

Main article: Homotopy lifting property

If we have a homotopy H : X × [0,1] → Y and a cover p : Ỹ → Y, and we are given a map h̃0 : X → Ỹ such that H0 = p ∘ h̃0 (h̃0 is called a lift of H0), then we can lift all of H to a map H̃ : X × [0,1] → Ỹ such that p ∘ H̃ = H. The homotopy lifting property is used to characterize fibrations.

31.9 Extension property

Another useful property involving homotopy is the homotopy extension property, which characterizes the extension of a homotopy between two functions from a subset of some set to the set itself. It is useful when dealing with cofibrations.

31.10 Isotopy

In case the two given continuous functions f and g from the topological space X to the topological space Y are embeddings, one can ask whether they can be connected “through embeddings”. This gives rise to the concept of isotopy, which is a homotopy, H, in the notation used before, such that for each fixed t, H(x,t) gives an embedding.[2] A related, but different, concept is that of ambient isotopy.

Requiring that two embeddings be isotopic is a stronger requirement than that they be homotopic. For example, the map from the interval [−1,1] into the real numbers defined by f(x) = −x is not isotopic to the identity g(x) = x. Any homotopy from f to the identity would have to exchange the endpoints, which would mean that they would have to “pass through” each other. Moreover, f has changed the orientation of the interval and g has not, which is impossible under an isotopy. However, the maps are homotopic; one homotopy from f to the identity is H : [−1,1] × [0,1] → [−1,1] given by H(x,y) = 2yx − x.

Two homeomorphisms (which are special cases of embeddings) of the unit ball which agree on the boundary can be shown to be isotopic using Alexander’s trick. For this reason, the map of the unit disc in R2 defined by f(x,y) = (−x, −y) is isotopic to a 180-degree rotation around the origin, and so the identity map and f are isotopic because they can be connected by rotations.
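The homotopy H(x,y) = 2yx − x given above is easy to sanity-check numerically. The check below (plain Python, with arbitrary sample points) also shows the endpoints ±1 both landing at 0 when y = 1/2, the crossing that rules out an isotopy:

```python
# The homotopy from f(x) = -x to the identity on [-1, 1].
H = lambda x, y: 2 * y * x - x

assert H(0.7, 0) == -0.7   # y = 0: H(., 0) is f(x) = -x
assert H(0.7, 1) == 0.7    # y = 1: H(., 1) is the identity
assert H(1, 0.5) == 0      # midway, the endpoint x = 1 sits at 0 ...
assert H(-1, 0.5) == 0     # ... and so does x = -1: the endpoints cross
```

At y = 1/2 the map H(·, 1/2) is identically zero, so it is not injective and hence not an embedding, which is exactly why this homotopy fails to be an isotopy.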

The unknot is not equivalent to the trefoil knot, since one cannot be deformed into the other through a continuous path of embeddings; thus they are not ambient isotopic.

In geometric topology—for example in knot theory—the idea of isotopy is used to construct equivalence relations. For example, when should two knots be considered the same? We take two knots, K1 and K2, in three-dimensional space. A knot is an embedding of a one-dimensional space, the “loop of string” (or the circle), into this space, and this embedding gives a homeomorphism between the circle and its image in the embedding space. The intuitive idea behind the notion of knot equivalence is that one can deform one embedding to another through a path of embeddings: a continuous function starting at t = 0 giving the K1 embedding, ending at t = 1 giving the K2 embedding, with all intermediate values corresponding to embeddings. This corresponds to the definition of isotopy. An ambient isotopy, studied in this context, is an isotopy of the larger space, considered in light of its action on the embedded submanifold. Knots K1 and K2 are considered equivalent when there is an ambient isotopy which moves K1 to K2. This is the appropriate definition in the topological category. Similar language is used for the equivalent concept in contexts where one has a stronger notion of equivalence. For example, a path between two smooth embeddings is a smooth isotopy.

31.11 Applications

Based on the concept of the homotopy, computation methods for algebraic and differential equations have been developed. The methods for algebraic equations include the homotopy continuation method [3] and the continuation method. The methods for differential equations include the homotopy analysis method.

31.12 See also

• Homeotopy

• Homotopy analysis method

• Homotopy type theory

• Mapping class group

• Poincaré conjecture

• Regular homotopy

31.13 References

[1] Path homotopy and separately continuous functions

[2] Weisstein, Eric W., “Isotopy”, MathWorld.

[3] Allgower, Eugene. “Introduction to Numerical Continuation Methods” (PDF). CSU. Retrieved 3 January 2013.

31.14 Sources

• Armstrong, M.A. (1979). Basic Topology. Springer. ISBN 0-387-90839-0.

• Hazewinkel, Michiel, ed. (2001), “Homotopy”, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

• Hazewinkel, Michiel, ed. (2001), “Isotopy (in topology)”, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

• Spanier, Edwin (December 1994). Algebraic Topology. Springer. ISBN 0-387-94426-5.

Chapter 32

Hypergraph


An example of a hypergraph, with X = {v1, v2, v3, v4, v5, v6, v7} and E = {e1, e2, e3, e4} = {{v1, v2, v3}, {v2, v3}, {v3, v5, v6}, {v4}} .

In mathematics, a hypergraph is a generalization of a graph in which an edge can connect any number of vertices. Formally, a hypergraph H is a pair H = (X,E) where X is a set of elements called nodes or vertices, and E is a set of non-empty subsets of X called hyperedges or edges. Therefore, E is a subset of P(X) \ {∅}, where P(X) is the power set of X.

While graph edges are pairs of nodes, hyperedges are arbitrary sets of nodes, and can therefore contain an arbitrary number of nodes. However, it is often desirable to study hypergraphs where all hyperedges have the same cardinality; a k-uniform hypergraph is a hypergraph such that all its hyperedges have size k. (In other words, one such hypergraph is a collection of sets, each such set a hyperedge connecting k nodes.) So a 2-uniform hypergraph is a graph, a 3-uniform hypergraph is a collection of unordered triples, and so on.

A hypergraph is also called a set system or a family of sets drawn from the universal set X. The difference between a set system and a hypergraph is in the questions being asked. Hypergraph theory tends to concern questions similar to those of graph theory, such as connectivity and colorability, while the theory of set systems tends to ask non-graph-theoretical questions, such as those of Sperner theory.

There are variant definitions; sometimes edges must not be empty, and sometimes multiple edges, with the same set of nodes, are allowed.

Hypergraphs can be viewed as incidence structures. In particular, there is a bipartite “incidence graph” or “Levi graph” corresponding to every hypergraph, and conversely, most, but not all, bipartite graphs can be regarded as incidence graphs of hypergraphs.

Hypergraphs have many other names. In computational geometry, a hypergraph may sometimes be called a range space and then the hyperedges are called ranges.[1] In cooperative game theory, hypergraphs are called simple games (voting games); this notion is applied to solve problems in social choice theory. In some literature edges are referred to as hyperlinks or connectors.[2]

Special kinds of hypergraphs include, besides k-uniform ones, clutters, where no edge appears as a subset of another edge; and abstract simplicial complexes, which contain all subsets of every edge.

The collection of hypergraphs is a category with hypergraph homomorphisms as morphisms.

32.1 Terminology

Because hypergraph links can have any cardinality, there are several notions of the concept of a subgraph, called subhypergraphs, partial hypergraphs and section hypergraphs. Let H = (X,E) be the hypergraph consisting of vertices

X = {xi|i ∈ Iv}, and having edge set

E = {ei|i ∈ Ie, ei ⊆ X},

where Iv and Ie are the index sets of the vertices and edges respectively.

A subhypergraph is a hypergraph with some vertices removed. Formally, the subhypergraph HA induced by a subset A of X is defined as

HA = (A, {ei ∩ A|ei ∩ A ≠ ∅}) .

The partial hypergraph is a hypergraph with some edges removed. Given a subset J ⊂ Ie of the edge index set, the partial hypergraph generated by J is the hypergraph

(X, {ei|i ∈ J}) . Given a subset A ⊆ X , the section hypergraph is the partial hypergraph

H × A = (A, {ei|i ∈ Ie, ei ⊆ A}) .
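The three notions of subgraph above follow directly from their definitions. A minimal Python illustration (the function names and the vertex-set-plus-edge-list representation are choices made here for clarity):

```python
def subhypergraph(X, E, A):
    """H_A: restrict every edge to A, dropping edges with empty trace."""
    A = set(A)
    return A, [e & A for e in E if e & A]

def partial_hypergraph(X, E, J):
    """Keep only the edges whose index lies in J; all vertices stay."""
    return set(X), [E[j] for j in J]

def section_hypergraph(X, E, A):
    """H x A: keep only the edges contained entirely in A."""
    A = set(A)
    return A, [e for e in E if e <= A]

X = {1, 2, 3, 4}
E = [{1, 2}, {2, 3}, {3, 4}]
assert subhypergraph(X, E, {1, 2, 3}) == ({1, 2, 3}, [{1, 2}, {2, 3}, {3}])
assert partial_hypergraph(X, E, [0, 2]) == ({1, 2, 3, 4}, [{1, 2}, {3, 4}])
assert section_hypergraph(X, E, {1, 2, 3}) == ({1, 2, 3}, [{1, 2}, {2, 3}])
```

Note how the edge {3, 4} survives in the subhypergraph as the trace {3}, is kept whole in the partial hypergraph, and is dropped entirely from the section hypergraph.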

The dual H∗ of H is a hypergraph whose vertices and edges are interchanged, so that the vertices are given by {ei} and whose edges are given by {Xm} where

Xm = {ei | xm ∈ ei}.

When a notion of equality is properly defined, as done below, the operation of taking the dual of a hypergraph is an involution, i.e.,

(H∗)∗ = H.

A connected graph G with the same vertex set as a connected hypergraph H is a host graph for H if every hyperedge of H induces a connected subgraph in G. For a disconnected hypergraph H, G is a host graph if there is a bijection between the connected components of G and of H, such that each connected component G' of G is a host of the corresponding H'. A hypergraph is bipartite if and only if its vertices can be partitioned into two classes U and V in such a way that each hyperedge with cardinality at least 2 contains at least one vertex from both classes. The 2-section (or clique graph, representing graph, primal graph, Gaifman graph) of a hypergraph is the graph with the same vertices as the hypergraph, and edges between all pairs of vertices contained in the same hyperedge.
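The 2-section construction is easy to sketch in code. A brief Python illustration (representing the resulting graph's edges as a set of frozensets is a choice made here):

```python
from itertools import combinations

def two_section(E):
    """2-section (primal / Gaifman graph): join every pair of vertices
    that appear together in at least one hyperedge."""
    return {frozenset(p) for e in E for p in combinations(sorted(e), 2)}

edges = two_section([{1, 2, 3}, {3, 4}])
assert edges == {frozenset({1, 2}), frozenset({1, 3}),
                 frozenset({2, 3}), frozenset({3, 4})}
```

The hyperedge {1, 2, 3} contributes a triangle to the 2-section; this is why every hyperedge of the original hypergraph appears as a clique of the primal graph.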

32.2 Bipartite graph model

A hypergraph H may be represented by a bipartite graph BG as follows: the sets X and E are the partitions of BG, and (x1, e1) are connected with an edge if and only if vertex x1 is contained in edge e1 in H. Conversely, any bipartite graph with fixed parts and no unconnected nodes in the second part represents some hypergraph in the manner described above. This bipartite graph is also called incidence graph.
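This correspondence can be sketched as follows (plain Python; indexing hyperedges by position to keep the two parts disjoint is a choice made here for illustration):

```python
def incidence_graph(X, E):
    """Levi / incidence graph: one part for vertices, one for hyperedge
    indices, with (x, j) adjacent iff vertex x lies in hyperedge E[j]."""
    left = set(X)
    right = set(range(len(E)))
    adjacency = {(x, j) for j, e in enumerate(E) for x in e}
    return left, right, adjacency

X = {1, 2, 3}
E = [{1, 2}, {2, 3}]
_, _, adj = incidence_graph(X, E)
assert adj == {(1, 0), (2, 0), (2, 1), (3, 1)}
```

Every edge of the bipartite graph crosses between the two parts, and each hyperedge e_j is recovered as the neighbourhood of the node j on the right-hand side.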

32.3 Acyclicity

In contrast with ordinary undirected graphs for which there is a single natural notion of cycles and acyclic graphs, there are multiple natural non-equivalent definitions of acyclicity for hypergraphs which collapse to ordinary graph acyclicity for the special case of ordinary graphs. A first definition of acyclicity for hypergraphs was given by Claude Berge:[3] a hypergraph is Berge-acyclic if its incidence graph (the bipartite graph defined above) is acyclic. This definition is very restrictive: for instance, if a hypergraph has some pair v ≠ v′ of vertices and some pair f ≠ f ′ of hyperedges such that v, v′ ∈ f and v, v′ ∈ f ′ , then it is Berge-cyclic. Berge-cyclicity can obviously be tested in linear time by an exploration of the incidence graph. We can define a weaker notion of hypergraph acyclicity,[4] later termed α-acyclicity. This notion of acyclicity is equivalent to the hypergraph being conformal (every clique of the primal graph is covered by some hyperedge) and its primal graph being chordal; it is also equivalent to reducibility to the empty graph through the GYO algorithm[5][6] (also known as Graham’s algorithm), a confluent iterative process which removes hyperedges using a generalized definition of ears. In the domain of database theory, it is known that a database schema enjoys certain desirable properties if its underlying hypergraph is α-acyclic.[7] Besides, α-acyclicity is also related to the expressiveness of the guarded fragment of first-order logic. We can test in linear time if a hypergraph is α-acyclic.[8] Note that α-acyclicity has the counter-intuitive property that adding hyperedges to an α-cyclic hypergraph may make it α-acyclic (for instance, adding a hyperedge containing all vertices of the hypergraph will always make it α-acyclic). Motivated in part by this perceived shortcoming, Ronald Fagin[9] defined the stronger notions of β-acyclicity and γ-acyclicity. 
We can state β-acyclicity as the requirement that all subhypergraphs of the hypergraph are α-acyclic, which is equivalent[9] to an earlier definition by Graham.[6] The notion of γ-acyclicity is a more restrictive condition which is equivalent to several desirable properties of database schemas and is related to Bachman diagrams. Both β-acyclicity and γ-acyclicity can be tested in polynomial time. Those four notions of acyclicity are comparable: Berge-acyclicity implies γ-acyclicity which implies β-acyclicity which implies α-acyclicity. However, none of the reverse implications hold, so those four notions are different.[9]
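The GYO reduction described above can be sketched as follows. This is a brute-force illustration in Python, not the linear-time algorithm referenced in the text: it repeatedly deletes "ear" vertices lying in only one hyperedge and hyperedges contained in another, and declares the hypergraph α-acyclic exactly when everything can be deleted.

```python
def is_alpha_acyclic(edges):
    """GYO (Graham) reduction: the hypergraph is alpha-acyclic iff
    the reduction ends with no hyperedges left."""
    edges = [set(e) for e in edges]
    changed = True
    while changed:
        changed = False
        # Delete vertices that occur in exactly one hyperedge.
        counts = {}
        for e in edges:
            for v in e:
                counts[v] = counts.get(v, 0) + 1
        for e in edges:
            lonely = {v for v in e if counts[v] == 1}
            if lonely:
                e -= lonely
                changed = True
        # Delete empty hyperedges and hyperedges contained in another
        # (for duplicate edges, keep only the last copy).
        kept = []
        for i, e in enumerate(edges):
            if not e or any(e < f or (e == f and i < j)
                            for j, f in enumerate(edges)):
                changed = True
            else:
                kept.append(e)
        edges = kept
    return not edges

assert is_alpha_acyclic([{1, 2, 3}, {2, 3, 4}, {4, 5}])       # has a join tree
assert not is_alpha_acyclic([{1, 2}, {2, 3}, {1, 3}])         # triangle
assert is_alpha_acyclic([{1, 2}, {2, 3}, {1, 3}, {1, 2, 3}])  # added edge -> acyclic
```

The last assertion exhibits the counter-intuitive property mentioned above: adding the hyperedge {1, 2, 3} to the cyclic triangle makes the hypergraph α-acyclic.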

32.4 Isomorphism and equality

A hypergraph homomorphism is a map from the vertex set of one hypergraph to another such that each edge maps to one other edge.

A hypergraph H = (X,E) is isomorphic to a hypergraph G = (Y,F ) , written as H ≃ G if there exists a bijection

ϕ : X → Y

and a permutation π of I such that

ϕ(ei) = fπ(i)

The bijection ϕ is then called the isomorphism of the graphs. Note that

H ≃ G if and only if H∗ ≃ G∗ .

When the edges of a hypergraph are explicitly labeled, one has the additional notion of strong isomorphism. One says that H is strongly isomorphic to G if the permutation is the identity. One then writes H ≅ G. Note that all strongly isomorphic graphs are isomorphic, but not vice versa. When the vertices of a hypergraph are explicitly labeled, one has the notions of equivalence, and also of equality. One says that H is equivalent to G, and writes H ≡ G, if the isomorphism ϕ has

ϕ(xn) = yn

and

ϕ(ei) = fπ(i)

Note that

H ≡ G if and only if H∗ ≅ G∗

If, in addition, the permutation π is the identity, one says that H equals G , and writes H = G . Note that, with this definition of equality, graphs are self-dual:

(H∗)∗ = H

A hypergraph automorphism is an isomorphism from a vertex set onto itself, that is, a relabeling of vertices. The set of automorphisms of a hypergraph H = (X, E) is a group under composition, called the automorphism group of the hypergraph and written Aut(H).

32.4.1 Examples

Consider the hypergraph H with edges

H = {e1 = {a, b}, e2 = {b, c}, e3 = {c, d}, e4 = {d, a}, e5 = {b, d}, e6 = {a, c}}

and

G = {f1 = {α, β}, f2 = {β, γ}, f3 = {γ, δ}, f4 = {δ, α}, f5 = {α, γ}, f6 = {β, δ}}

Then clearly H and G are isomorphic (with ϕ(a) = α , etc.), but they are not strongly isomorphic. So, for example, in H , vertex a meets edges 1, 4 and 6, so that,

e1 ∩ e4 ∩ e6 = {a}

In graph G , there does not exist any vertex that meets edges 1, 4 and 6:

f1 ∩ f4 ∩ f6 = ∅

In this example, H and G are equivalent, H ≡ G, and the duals are strongly isomorphic: H∗ ≅ G∗.
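The two intersections in this example are easy to verify mechanically, for instance in Python (edges stored in a dict keyed by their labels):

```python
# The edge sets of H and G from the example above.
H = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"},
     4: {"d", "a"}, 5: {"b", "d"}, 6: {"a", "c"}}
G = {1: {"α", "β"}, 2: {"β", "γ"}, 3: {"γ", "δ"},
     4: {"δ", "α"}, 5: {"α", "γ"}, 6: {"β", "δ"}}

assert H[1] & H[4] & H[6] == {"a"}    # vertex a meets edges 1, 4 and 6 in H
assert G[1] & G[4] & G[6] == set()    # no vertex of G meets edges 1, 4 and 6
```

Since a strong isomorphism must preserve edge labels, this mismatch of labeled intersections is exactly what blocks a strong isomorphism while leaving the (unlabeled) isomorphism intact.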

32.5 Symmetric hypergraphs

The rank r(H) of a hypergraph H is the maximum cardinality of any of the edges in the hypergraph. If all edges have the same cardinality k, the hypergraph is said to be uniform or k-uniform, or is called a k-hypergraph. A graph is just a 2-uniform hypergraph. The degree d(v) of a vertex v is the number of edges that contain it. H is k-regular if every vertex has degree k. The dual of a uniform hypergraph is regular and vice versa.

Two vertices x and y of H are called symmetric if there exists an automorphism such that ϕ(x) = y . Two edges ei and ej are said to be symmetric if there exists an automorphism such that ϕ(ei) = ej . A hypergraph is said to be vertex-transitive (or vertex-symmetric) if all of its vertices are symmetric. Similarly, a hypergraph is edge-transitive if all edges are symmetric. If a hypergraph is both edge- and vertex-symmetric, then the hypergraph is simply transitive. Because of hypergraph duality, the study of edge-transitivity is identical to the study of vertex-transitivity.

32.6 Transversals

A transversal (or "hitting set") of a hypergraph H = (X, E) is a set T ⊆ X that has nonempty intersection with every edge. A transversal T is called minimal if no proper subset of T is a transversal. The transversal hypergraph of H is the hypergraph (X, F) whose edge set F consists of all minimal transversals of H. Computing the transversal hypergraph has applications in combinatorial optimization, in game theory, and in several fields of computer science such as machine learning, indexing of databases, the satisfiability problem, data mining, and computer program optimization.
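The transversal hypergraph can be sketched by brute force. The following Python illustration enumerates all hitting sets and keeps the inclusion-minimal ones (exponential time, intended only for tiny examples, not for the optimized algorithms used in practice):

```python
from itertools import combinations

def minimal_transversals(X, E):
    """All inclusion-minimal vertex sets hitting every hyperedge."""
    X = sorted(X)
    hitting = [set(c)
               for r in range(len(X) + 1)
               for c in combinations(X, r)
               if all(set(c) & e for e in E)]
    # Keep only sets with no strictly smaller hitting set inside them.
    return [t for t in hitting if not any(s < t for s in hitting)]

ts = minimal_transversals({1, 2, 3}, [{1, 2}, {2, 3}])
assert {2} in ts and {1, 3} in ts and len(ts) == 2
```

Here {2} hits both edges on its own, while {1, 3} hits one edge with each vertex; every other hitting set properly contains one of these two.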

32.7 Incidence matrix

Let V = {v1, v2, . . . , vn} and E = {e1, e2, . . . em} . Every hypergraph has an n×m incidence matrix A = (aij) where

aij = 1 if vi ∈ ej, and aij = 0 otherwise.

The transpose At of the incidence matrix defines a hypergraph H∗ = (V∗, E∗) called the dual of H, where V∗ is an m-element set and E∗ is an n-element set of subsets of V∗. For vj∗ ∈ V∗ and ei∗ ∈ E∗, vj∗ ∈ ei∗ if and only if aij = 1.
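Both the incidence matrix and the transpose-based dual can be sketched in a few lines of Python (0-based indices are used here for convenience):

```python
def incidence_matrix(V, E):
    """a[i][j] = 1 iff vertex V[i] lies in hyperedge E[j]."""
    return [[1 if v in e else 0 for e in E] for v in V]

def dual(V, E):
    """Dual hypergraph read off the transposed incidence matrix: one new
    vertex per old edge; the new edge for old vertex v collects the
    (indices of the) old edges containing v."""
    return [{j for j, e in enumerate(E) if v in e} for v in V]

V = [1, 2, 3]
E = [{1, 2}, {2, 3}]
assert incidence_matrix(V, E) == [[1, 0], [1, 1], [0, 1]]
assert dual(V, E) == [{0}, {0, 1}, {1}]
```

Reading the matrix column-by-column gives the original edges; reading it row-by-row gives the edges of the dual, which is exactly the transposition described above.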

32.8 Hypergraph coloring

Classic hypergraph coloring is assigning one of the colors from the set {1, 2, 3, ..., λ} to every vertex of a hypergraph in such a way that each hyperedge contains at least two vertices of distinct colors. In other words, there must be no

monochromatic hyperedge with cardinality at least 2. In this sense it is a direct generalization of graph coloring. The minimum number of distinct colors used over all colorings is called the chromatic number of a hypergraph. Hypergraphs for which there exists a coloring using up to k colors are referred to as k-colorable. The 2-colorable hypergraphs are exactly the bipartite ones.

There are many generalizations of classic hypergraph coloring. One of them is the so-called mixed hypergraph coloring, in which monochromatic edges are allowed. Some mixed hypergraphs are uncolorable for any number of colors. A general criterion for uncolorability is unknown. When a mixed hypergraph is colorable, the minimum and maximum numbers of used colors are called the lower and upper chromatic numbers respectively. See http://spectrum.troy.edu/voloshin/mh.html for details.
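A properness check, and with it a brute-force chromatic number, can be sketched directly from the definition (Python; exhaustive search, illustrative only, not an efficient algorithm):

```python
from itertools import product

def is_proper(E, coloring):
    """Proper iff no hyperedge with >= 2 vertices is monochromatic."""
    return all(len({coloring[v] for v in e}) > 1 for e in E if len(e) > 1)

def chromatic_number(X, E):
    """Smallest k admitting a proper coloring (tiny inputs only)."""
    X = sorted(X)
    for k in range(1, len(X) + 1):
        for colors in product(range(k), repeat=len(X)):
            if is_proper(E, dict(zip(X, colors))):
                return k
    return len(X) if X else 0

# A single 3-vertex hyperedge is 2-colorable; the triangle graph needs 3.
assert chromatic_number({1, 2, 3}, [{1, 2, 3}]) == 2
assert chromatic_number({1, 2, 3}, [{1, 2}, {2, 3}, {1, 3}]) == 3
```

The first assertion illustrates how hypergraph coloring is weaker than demanding all-distinct colors per edge: one repeated color inside {1, 2, 3} is fine as long as a second color also appears.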

32.9 Partitions

A partition theorem due to E. Dauber[10] states that, for an edge-transitive hypergraph H = (X, E), there exists a partition

(X1, X2, · · · , XK)

of the vertex set X such that the subhypergraph HXk generated by Xk is transitive for each 1 ≤ k ≤ K, and such that

∑_{k=1}^{K} r(HXk) = r(H)

where r(H) is the rank of H. As a corollary, an edge-transitive hypergraph that is not vertex-transitive is bicolorable. Graph partitioning (and in particular, hypergraph partitioning) has many applications to IC design[11] and parallel computing.[12][13][14]

32.10 Theorems

Many theorems and concepts involving graphs also hold for hypergraphs. Ramsey’s theorem and Line graph of a hypergraph are typical examples. Some methods for studying symmetries of graphs extend to hypergraphs. Two prominent theorems are the Erdős–Ko–Rado theorem and the Kruskal–Katona theorem on uniform hypergraphs.

32.11 Hypergraph drawing

Although hypergraphs are more difficult to draw on paper than graphs, several researchers have studied methods for the visualization of hypergraphs. In one possible visual representation for hypergraphs, similar to the standard graph drawing style in which curves in the plane are used to depict graph edges, a hypergraph’s vertices are depicted as points, disks, or boxes, and its hyperedges are depicted as trees that have the vertices as their leaves.[15][16] If the vertices are represented as points, the hyperedges may also be shown as smooth curves that connect sets of points, or as simple closed curves that enclose sets of points.[17][18] In another style of hypergraph visualization, the subdivision model of hypergraph drawing,[19] the plane is subdivided into regions, each of which represents a single vertex of the hypergraph. The hyperedges of the hypergraph are represented by contiguous subsets of these regions, which may be indicated by coloring, by drawing outlines around them, or both. An order-n Venn diagram, for instance, may be viewed as a subdivision drawing of a hypergraph with n hyperedges (the curves defining the diagram) and 2^n − 1 vertices (represented by the regions into which these curves subdivide the plane). In contrast with the polynomial-time recognition of planar graphs, it is NP-complete to

This circuit diagram can be interpreted as a drawing of a hypergraph in which four vertices (depicted as white rectangles and disks) are connected by three hyperedges drawn as trees.

determine whether a hypergraph has a planar subdivision drawing,[20] but the existence of a drawing of this type may be tested efficiently when the adjacency pattern of the regions is constrained to be a path, cycle, or tree.[21]

32.12 Generalizations

One possible generalization of a hypergraph is to allow edges to point at other edges. There are two variations of this generalization. In one, the edges consist not only of a set of vertices, but may also contain subsets of vertices, subsets of subsets of vertices, and so on ad infinitum. In essence, every edge is just an internal node of a tree or directed acyclic graph, and vertices are the leaf nodes. A hypergraph is then just a collection of trees with common, shared nodes (that is, a given internal node or leaf may occur in several different trees). Conversely, every collection of trees can be understood as this generalized hypergraph. Since trees are widely used throughout computer science and many other branches of mathematics, one could say that hypergraphs appear naturally as well. So, for example, this generalization arises naturally as a model of term algebra; edges correspond to terms and vertices correspond to constants or variables. For such a hypergraph, set membership then provides an ordering, but the ordering is neither a partial order nor a preorder, since it is not transitive. The graph corresponding to the Levi graph of this generalization is a directed acyclic graph. Consider, for example, the generalized hypergraph whose vertex set is V = {a, b} and whose edges are e1 = {a, b} and e2 = {a, e1}. Then, although b ∈ e1 and e1 ∈ e2, it is not true that b ∈ e2. However, the transitive closure of set membership for such hypergraphs does induce a partial order, and “flattens” the hypergraph into a partially ordered set. Alternatively, edges can be allowed to point at other edges irrespective of the requirement that the edges be ordered as directed acyclic graphs. This allows graphs with edge-loops, which need not contain vertices at all. For example, consider the generalized hypergraph consisting of two edges e1 and e2 and zero vertices, so that e1 = {e2} and e2 = {e1}. As this loop is infinitely recursive, the sets that are its edges violate the axiom of foundation.
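The worked example above with V = {a, b}, e1 = {a, b}, and e2 = {a, e1} can be modeled directly with nested frozensets; a small sketch (the helper `closure` is my own name) confirms that membership is not transitive while its transitive closure flattens the structure:

```python
# Generalized hypergraph from the text: edges may contain other edges.
a, b = "a", "b"
e1 = frozenset({a, b})
e2 = frozenset({a, e1})

assert b in e1 and e1 in e2
assert b not in e2          # set membership is not transitive

def closure(x):
    """Transitive closure of set membership below x."""
    if not isinstance(x, frozenset):
        return set()        # plain vertices contain nothing
    out = set(x)
    for y in x:
        out |= closure(y)
    return out

print(b in closure(e2))     # True: b lies below e2 in the flattened order
```

Note that the second variation in the text, with e1 = {e2} and e2 = {e1}, cannot be built this way at all: well-founded set constructions (and `frozenset`) rule out such membership loops, mirroring the axiom-of-foundation remark.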
In particular, there is no transitive closure of set membership for such hypergraphs. Although such structures may seem strange at first, they can be readily understood by noting that the equivalent generalization of their Levi graph is no longer bipartite, but is rather just some general directed graph. The generalized incidence matrix for such hypergraphs is, by definition, a square matrix of a rank equal to the total number of vertices plus edges. Thus, for the above example, the incidence matrix is simply

An order-4 Venn diagram, which can be interpreted as a subdivision drawing of a hypergraph with 15 vertices (the 15 colored regions) and 4 hyperedges (the 4 ellipses).

[ 0 1 ]
[ 1 0 ].

32.13 See also

• Combinatorial design

• P system

• Factor graph

• Greedoid

• Incidence structure

• Matroid

• Multigraph

• Sparse matrix-vector multiplication

32.14 Notes

[1] Haussler, David; Welzl, Emo (1987), "ε-nets and simplex range queries”, Discrete and Computational Geometry 2 (2): 127–151, doi:10.1007/BF02187876, MR 884223.

[2] Judea Pearl, in HEURISTICS Intelligent Search Strategies for Computer Problem Solving, Addison Wesley (1984), p. 25.

[3] Claude Berge, Graphs and Hypergraphs

[4] C. Beeri, R. Fagin, D. Maier, M. Yannakakis, On the Desirability of Acyclic Database Schemes

[5] C. T. Yu and M. Z. Özsoyoğlu. An algorithm for tree-query membership of a distributed query. In Proc. IEEE COMPSAC, pages 306-312, 1979

[6] M. H. Graham. On the universal relation. Technical Report, University of Toronto, Toronto, Ontario, Canada, 1979

[7] S. Abiteboul, R. B. Hull, V. Vianu, Foundations of Databases

[8] R. E. Tarjan, M. Yannakakis. Simple linear-time algorithms to test chordality of graphs, test acyclicity of hypergraphs, and selectively reduce acyclic hypergraphs. SIAM J. on Computing, 13(3):566-579, 1984.

[9] Ronald Fagin, Degrees of Acyclicity for Hypergraphs and Relational Database Schemes

[10] E. Dauber, in Graph theory, ed. F. Harary, Addison Wesley, (1969) p. 172.

[11] Karypis, G., Aggarwal, R., Kumar, V., and Shekhar, S. (March 1999), “Multilevel hypergraph partitioning: applications in VLSI domain”, IEEE Transactions on Very Large Scale Integration (VLSI) Systems 7 (1): 69–79, doi:10.1109/92.748202.

[12] Hendrickson, B., Kolda, T.G. (2000), “Graph partitioning models for parallel computing”, Parallel Computing 26 (12): 1519–1545, doi:10.1016/S0167-8191(00)00048-X.

[13] Catalyurek, U.V.; C. Aykanat (1995). A Hypergraph Model for Mapping Repeated Sparse Matrix-Vector Product Computations onto Multicomputers. Proc. International Conference on High Performance Computing (HiPC'95).

[14] Catalyurek, U.V.; C. Aykanat (1999), “Hypergraph-Partitioning Based Decomposition for Parallel Sparse-Matrix Vector Multiplication”, IEEE Transactions on Parallel and Distributed Systems (IEEE) 10 (7): 673–693, doi:10.1109/71.780863.

[15] Sander, G. (2003), “Layout of directed hypergraphs with orthogonal hyperedges”, Proc. 11th International Symposium on Graph Drawing (GD 2003), Lecture Notes in Computer Science 2912, Springer-Verlag, pp. 381–386.

[16] Eschbach, Thomas; Günther, Wolfgang; Becker, Bernd (2006), “Orthogonal hypergraph drawing for improved visibility” (PDF), Journal of Graph Algorithms and Applications 10 (2): 141–157.

[17] Mäkinen, Erkki (1990), “How to draw a hypergraph”, International Journal of Computer Mathematics 34 (3): 177–185, doi:10.1080/00207169008803875.

[18] Bertault, François; Eades, Peter (2001), “Drawing hypergraphs in the subset standard”, Proc. 8th International Symposium on Graph Drawing (GD 2000), Lecture Notes in Computer Science 1984, Springer-Verlag, pp. 45–76, doi:10.1007/3-540-44541-2_15.

[19] Kaufmann, Michael; van Kreveld, Marc; Speckmann, Bettina (2009), “Subdivision drawings of hypergraphs”, Proc. 16th International Symposium on Graph Drawing (GD 2008), Lecture Notes in Computer Science 5417, Springer-Verlag, pp. 396–407, doi:10.1007/978-3-642-00219-9_39.

[20] Johnson, David S.; Pollak, H. O. (2006), “Hypergraph planarity and the complexity of drawing Venn diagrams”, Journal of graph theory 11 (3): 309–325, doi:10.1002/jgt.3190110306.

[21] Buchin, Kevin; van Kreveld, Marc; Meijer, Henk; Speckmann, Bettina; Verbeek, Kevin (2010), “On planar supports for hypergraphs”, Proc. 17th International Symposium on Graph Drawing (GD 2009), Lecture Notes in Computer Science 5849, Springer-Verlag, pp. 345–356, doi:10.1007/978-3-642-11805-0_33.


32.15 References

• Claude Berge, “Hypergraphs: Combinatorics of finite sets”. North-Holland, 1989.

• Claude Berge, Dijen Ray-Chaudhuri, “Hypergraph Seminar, Ohio State University 1972”, Lecture Notes in Mathematics 411 Springer-Verlag

• Hazewinkel, Michiel, ed. (2001), “Hypergraph”, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

• Alain Bretto, “Hypergraph Theory: an Introduction”, Springer, 2013.

• Vitaly I. Voloshin. “Coloring Mixed Hypergraphs: Theory, Algorithms and Applications”. Fields Institute Monographs, American Mathematical Society, 2002.

• Vitaly I. Voloshin. “Introduction to Graph and Hypergraph Theory”. Nova Science Publishers, Inc., 2009.

• This article incorporates material from hypergraph on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.

[1] Alain Bretto; Hypergraph Theory: An Introduction; Springer

Chapter 33

If and only if

“Iff” redirects here. For other uses, see IFF (disambiguation). "↔" redirects here. It is not to be confused with Bidirectional traffic.

↔ ⇔ ≡ (logical symbols representing iff)

In logic and related fields such as mathematics and philosophy, if and only if (shortened iff) is a biconditional logical connective between statements. In that it is biconditional, the connective can be likened to the standard material conditional (“only if”, equal to “if ... then”) combined with its reverse (“if”); hence the name. The result is that the truth of either one of the connected statements requires the truth of the other (i.e. either both statements are true, or both are false). It is controversial whether the connective thus defined is properly rendered by the English “if and only if”, with its pre-existing meaning. There is nothing to stop one from stipulating that we may read this connective as “only if and if”, although this may lead to confusion. In writing, phrases commonly used, with debatable propriety, as alternatives to “P if and only if Q” include: Q is necessary and sufficient for P; P is equivalent (or materially equivalent) to Q (compare material implication); P precisely if Q; P precisely (or exactly) when Q; P exactly in case Q; and P just in case Q. Many authors regard “iff” as unsuitable in formal writing;[1] others use it freely.[2] In logic formulae, logical symbols are used instead of these phrases; see the discussion of notation.

33.1 Definition

The truth table of p ↔ q is as follows:[3]

p	q	p ↔ q
T	T	T
T	F	F
F	T	F
F	F	T

Note that it is equivalent to that produced by the XNOR gate, and opposite to that produced by the XOR gate.
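The relationship to the XNOR gate can be verified mechanically; the short sketch below just enumerates the four rows:

```python
# p ↔ q is true exactly when p and q have the same truth value,
# i.e. the XNOR of p and q.
for p in (True, False):
    for q in (True, False):
        iff = (p == q)          # p ↔ q
        xnor = not (p != q)     # XNOR gate
        assert iff == xnor
        print(p, q, iff)
```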

33.2 Usage

33.2.1 Notation

The corresponding logical symbols are "↔", "⇔", and "≡", and sometimes “iff”. These are usually treated as equivalent. However, some texts of mathematical logic (particularly those on first-order logic, rather than propositional logic) make a distinction between these, in which the first, ↔, is used as a symbol in logic formulas, while ⇔ is used in reasoning about those logic formulas (e.g., in metalogic). In Łukasiewicz's notation, it is the prefix symbol 'E'. Another term for this logical connective is exclusive nor.


33.2.2 Proofs

In most logical systems, one proves a statement of the form “P iff Q” by proving “if P, then Q” and “if Q, then P”. Proving this pair of statements sometimes leads to a more natural proof, since there are not obvious conditions in which one would infer a biconditional directly. An alternative is to prove the disjunction "(P and Q) or (not-P and not-Q)", which itself can be inferred directly from either of its disjuncts—that is, because “iff” is truth-functional, “P iff Q” follows if P and Q have both been shown true, or both false.
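Because “iff” is truth-functional, both proof strategies described above target the same connective; a quick enumeration over all truth assignments confirms that the two implications together, the disjunction, and the biconditional itself have identical truth tables (the helper `implies` is my own name):

```python
from itertools import product

# Material implication: "if x then y" is false only when x is true
# and y is false.
def implies(x, y):
    return (not x) or y

for P, Q in product((True, False), repeat=2):
    biconditional = (P == Q)                                # P iff Q
    two_implications = implies(P, Q) and implies(Q, P)      # both directions
    disjunction = (P and Q) or (not P and not Q)            # agreement in truth value
    assert biconditional == two_implications == disjunction
print("all four rows agree")
```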

33.2.3 Origin of iff

Usage of the abbreviation “iff” first appeared in print in John L. Kelley's 1955 book General Topology.[4] Its invention is often credited to Paul Halmos, who wrote “I invented 'iff,' for 'if and only if'—but I could never believe I was really its first inventor.”[5]

33.3 Distinction from “if” and “only if”

1. “Madison will eat the fruit if it is an apple.” (equivalent to “Only if Madison will eat the fruit, is it an apple” or “Madison will eat the fruit ← fruit is an apple”)

This states simply that Madison will eat fruits that are apples. It does not, however, exclude the possibility that Madison might also eat bananas or other types of fruit. All that is known for certain is that she will eat any and all apples that she happens upon. That the fruit is an apple is a sufficient condition for Madison to eat the fruit.

2. “Madison will eat the fruit only if it is an apple.” (equivalent to “If Madison will eat the fruit, then it is an apple” or “Madison will eat the fruit → fruit is an apple”)

This states that the only fruit Madison will eat is an apple. It does not, however, exclude the possibility that Madison will refuse an apple if it is made available, in contrast with (1), which requires Madison to eat any available apple. In this case, that a given fruit is an apple is a necessary condition for Madison to be eating it. It is not a sufficient condition since Madison might not eat all the apples she is given.

3. “Madison will eat the fruit if and only if it is an apple.” (equivalent to “Madison will eat the fruit ↔ fruit is an apple”)

This statement makes it clear that Madison will eat all and only those fruits that are apples. She will not leave any apple uneaten, and she will not eat any other type of fruit. That a given fruit is an apple is both a necessary and a sufficient condition for Madison to eat the fruit.

Sufficiency is the inverse of necessity. That is to say, given P→Q (i.e. if P then Q), P would be a sufficient condition for Q, and Q would be a necessary condition for P. Also, given P→Q, it is true that ¬Q→¬P (where ¬ is the negation operator, i.e. “not”). This means that the relationship between P and Q, established by P→Q, can be expressed in the following, all equivalent, ways:

P is sufficient for Q
Q is necessary for P
¬Q is sufficient for ¬P
¬P is necessary for ¬Q

As an example, take (1), above, which states P→Q, where P is “the fruit in question is an apple” and Q is “Madison will eat the fruit in question”. The following are four equivalent ways of expressing this very relationship:

If the fruit in question is an apple, then Madison will eat it.
Only if Madison will eat the fruit in question, is it an apple.
If Madison will not eat the fruit in question, then it is not an apple.
Only if the fruit in question is not an apple, will Madison not eat it.

So we see that (2), above, can be restated in the form of if...then as “If Madison will eat the fruit in question, then it is an apple"; taking this in conjunction with (1), we find that (3) can be stated as “If the fruit in question is an apple, then Madison will eat it; AND if Madison will eat the fruit, then it is an apple”.

33.4 More general usage

Iff is used outside the field of logic, wherever logic is applied, especially in mathematical discussions. It has the same meaning as above: it is an abbreviation for if and only if, indicating that one statement is both necessary and sufficient for the other. This is an example of mathematical jargon. (However, as noted above, if, rather than iff, is more often used in statements of definition.) The elements of X are all and only the elements of Y is used to mean: “for any z in the domain of discourse, z is in X if and only if z is in Y.”

33.5 See also

• Covariance

• Logical biconditional

• Logical equality

• Necessary and sufficient condition

• Polysyllogism

33.6 Footnotes

[1] E.g. Daepp, Ulrich; Gorkin, Pamela (2011), Reading, Writing, and Proving: A Closer Look at Mathematics, Undergraduate Texts in Mathematics, Springer, p. 52, ISBN 9781441994790, While it can be a real time-saver, we don't recommend it in formal writing.

[2] Rothwell, Edward J.; Cloud, Michael J. (2014), Engineering Writing by Design: Creating Formal Documents of Lasting Value, CRC Press, p. 98, ISBN 9781482234312, It is common in mathematical writing.

[3] p <=> q. Wolfram|Alpha

[4] General Topology, reissue ISBN 978-0-387-90125-1

[5] Nicholas J. Higham (1998). Handbook of writing for the mathematical sciences (2nd ed.). SIAM. p. 24. ISBN 978-0-89871-420-3.

33.7 External links

• Language Log: “Just in Case”

• Southern California Philosophy for philosophy graduate students: “Just in Case”

Chapter 34

Isomorphism

This article is about mathematics. For other uses, see Isomorphism (disambiguation).


The group of fifth roots of unity under multiplication is isomorphic to the group of rotations of the regular pentagon under composition.

In mathematics, an isomorphism (from the Ancient Greek: ἴσος isos “equal”, and μορφή morphe “shape”) is a homomorphism (or more generally a morphism) that admits an inverse.[note 1] Two mathematical objects are isomorphic if an isomorphism exists between them. An automorphism is an isomorphism whose source and target coincide. The interest of isomorphisms lies in the fact that two isomorphic objects cannot be distinguished by using only the properties used to define morphisms; thus isomorphic objects may be considered the same as long as one considers only these properties and their consequences. For most algebraic structures, including groups and rings, a homomorphism is an isomorphism if and only if it is bijective. In topology, where the morphisms are continuous functions, isomorphisms are also called homeomorphisms or bicontinuous functions. In mathematical analysis, where the morphisms are differentiable functions, isomorphisms are also called diffeomorphisms.


A canonical isomorphism is a canonical map that is an isomorphism. Two objects are said to be canonically isomorphic if there is a canonical isomorphism between them. For example, the canonical map from a finite-dimensional vector space V to its second dual space is a canonical isomorphism; on the other hand, V is isomorphic to its dual space but not canonically in general. Isomorphisms are formalized using category theory. A morphism f : X → Y in a category is an isomorphism if it admits a two-sided inverse, meaning that there is another morphism g : Y → X in that category such that gf = 1X and fg = 1Y, where 1X and 1Y are the identity morphisms of X and Y, respectively.[1]

34.1 Examples

34.1.1 Logarithm and exponential

Let R>0 be the multiplicative group of positive real numbers, and let R be the additive group of real numbers. The logarithm function log: R>0 → R satisfies log(xy) = log x + log y for all x, y ∈ R>0, so it is a group homomorphism. The exponential function exp: R → R>0 satisfies exp(x + y) = (exp x)(exp y) for all x, y ∈ R, so it too is a homomorphism. The identities log exp x = x and exp log y = y show that log and exp are inverses of each other. Since log is a homomorphism that has an inverse that is also a homomorphism, log is an isomorphism of groups. Because log is an isomorphism, it translates multiplication of positive real numbers into addition of real numbers. This is what makes it possible to multiply real numbers using a ruler and a table of logarithms, or using a slide rule with a logarithmic scale.
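As a quick numerical illustration of these identities (a sketch using Python's standard `math` module; the equalities hold only up to floating-point rounding, hence `isclose`):

```python
import math

# The group isomorphism log: (R_{>0}, ·) → (R, +), with exp as inverse.
x, y = 2.5, 7.0
assert math.isclose(math.log(x * y), math.log(x) + math.log(y))  # homomorphism
assert math.isclose(math.exp(math.log(x)), x)                    # exp ∘ log = id
assert math.isclose(math.log(math.exp(y)), y)                    # log ∘ exp = id
print("log turns multiplication into addition")
```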

34.1.2 Integers modulo 6

Consider the group (Z6, +) , the integers from 0 to 5 with addition modulo 6. Also consider the group (Z2 × Z3, +) , the ordered pairs where the x coordinates can be 0 or 1, and the y coordinates can be 0, 1, or 2, where addition in the x-coordinate is modulo 2 and addition in the y-coordinate is modulo 3. These structures are isomorphic under addition, if you identify them using the following scheme:

(0,0) → 0 (1,1) → 1 (0,2) → 2 (1,0) → 3 (0,1) → 4 (1,2) → 5

or in general (a,b) → (3a + 4b) mod 6. For example note that (1,1) + (1,0) = (0,1), which translates in the other system as 1 + 3 = 4. Even though these two groups “look” different in that the sets contain different elements, they are indeed isomorphic: their structures are exactly the same. More generally, the direct product of two cyclic groups Zm and Zn is isomorphic to (Zmn, +) if and only if m and n are coprime.

34.1.3 Relation-preserving isomorphism

If one object consists of a set X with a binary relation R and the other object consists of a set Y with a binary relation S then an isomorphism from X to Y is a bijective function ƒ: X → Y such that:[2]

S(f(u), f(v)) ⇐⇒ R(u, v)

S is reflexive, irreflexive, symmetric, antisymmetric, asymmetric, transitive, total, trichotomous, a partial order, total order, strict weak order, total preorder (weak order), an equivalence relation, or a relation with any other special properties, if and only if R is. For example, if R is an ordering ≤ and S an ordering ⊑, then an isomorphism from X to Y is a bijective function ƒ: X → Y such that

f(u) ⊑ f(v) ⇐⇒ u ≤ v.

Such an isomorphism is called an order isomorphism or (less commonly) an isotone isomorphism. If X = Y, then this is a relation-preserving automorphism.
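As a toy instance of the displayed condition, the doubling map on a small chain preserves the ordering in both directions (a minimal sketch; the sets and the map are chosen arbitrarily for illustration):

```python
from itertools import combinations

# f(x) = 2x is an order isomorphism from ({1, 2, 3}, ≤) onto ({2, 4, 6}, ≤):
# u ≤ v holds exactly when f(u) ≤ f(v).
X = [1, 2, 3]
f = {x: 2 * x for x in X}
for u, v in combinations(X, 2):
    assert (u <= v) == (f[u] <= f[v])
print("f is an order isomorphism")
```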

34.2 Isomorphism vs. bijective morphism

In a concrete category (that is, roughly speaking, a category whose objects are sets and morphisms are mappings between sets), such as the category of topological spaces or categories of algebraic objects like groups, rings, and modules, an isomorphism must be bijective on the underlying sets. In algebraic categories (specifically, categories of varieties in the sense of universal algebra), an isomorphism is the same as a homomorphism which is bijective on underlying sets. However, there are concrete categories in which bijective morphisms are not necessarily isomorphisms (such as the category of topological spaces), and there are categories in which each object admits an underlying set but in which isomorphisms need not be bijective (such as the homotopy category of CW-complexes).

34.3 Applications

In abstract algebra, two basic isomorphisms are defined:

• Group isomorphism, an isomorphism between groups • Ring isomorphism, an isomorphism between rings. (Note that isomorphisms between fields are actually ring isomorphisms)

Just as the automorphisms of an algebraic structure form a group, the isomorphisms between two algebras sharing a common structure form a heap. Letting a particular isomorphism identify the two structures turns this heap into a group. In mathematical analysis, the Laplace transform is an isomorphism mapping hard differential equations into easier algebraic equations. In category theory, Iet the category C consist of two classes, one of objects and the other of morphisms. Then a general definition of isomorphism that covers the previous and many other cases is: an isomorphism is a morphism ƒ: a → b that has an inverse, i.e. there exists a morphism g: b → a with ƒg = 1b and gƒ = 1a. For example, a bijective linear map is an isomorphism between vector spaces, and a bijective continuous function whose inverse is also continuous is an isomorphism between topological spaces, called a homeomorphism. In graph theory, an isomorphism between two graphs G and H is a bijective map f from the vertices of G to the vertices of H that preserves the “edge structure” in the sense that there is an edge from vertex u to vertex v in G if and only if there is an edge from ƒ(u) to ƒ(v) in H. See graph isomorphism. In mathematical analysis, an isomorphism between two Hilbert spaces is a bijection preserving addition, scalar mul- tiplication, and inner product. In early theories of logical atomism, the formal relationship between facts and true propositions was theorized by Bertrand Russell and Ludwig Wittgenstein to be isomorphic. An example of this line of thinking can be found in Russell’s Introduction to Mathematical Philosophy. In cybernetics, the Good Regulator or Conant-Ashby theorem is stated “Every Good Regulator of a system must be a model of that system”. Whether regulated or self-regulating an isomorphism is required between regulator part and the processing part of the system. 200 CHAPTER 34. ISOMORPHISM

34.4 Relation with equality

See also: Equality (mathematics)

In certain areas of mathematics, notably category theory, it is valuable to distinguish between equality on the one hand and isomorphism on the other.[3] Equality is when two objects are exactly the same, and everything that’s true about one object is true about the other, while an isomorphism implies everything that’s true about a designated part of one object’s structure is true about the other’s. For example, the sets

A = {x ∈ Z | x^2 < 2} and B = {−1, 0, 1} are equal; they are merely different presentations—the first an intensional one (in set builder notation), and the second extensional (by explicit enumeration)—of the same subset of the integers. By contrast, the sets {A,B,C} and {1,2,3} are not equal—the first has elements that are letters, while the second has elements that are numbers. These are isomorphic as sets, since finite sets are determined up to isomorphism by their cardinality (number of elements) and these both have three elements, but there are many choices of isomorphism—one isomorphism is

A ↦ 1, B ↦ 2, C ↦ 3, while another is A ↦ 3, B ↦ 2, C ↦ 1,

and no one isomorphism is intrinsically better than any other.[note 2][note 3] On this view and in this sense, these two sets are not equal because one cannot consider them identical: one can choose an isomorphism between them, but that is a weaker claim than identity—and valid only in the context of the chosen isomorphism. Sometimes the isomorphisms can seem obvious and compelling, but are still not equalities. As a simple example, the genealogical relationships among Joe, John, and Bobby Kennedy are, in a real sense, the same as those among the American football quarterbacks in the Manning family: Archie, Peyton, and Eli. The father-son pairings and the elder-brother-younger-brother pairings correspond perfectly. That similarity between the two family structures illustrates the origin of the word isomorphism (Greek iso-, “same,” and -morph, “form” or “shape”). But because the Kennedys are not the same people as the Mannings, the two genealogical structures are merely isomorphic and not equal. Another example is more formal and more directly illustrates the motivation for distinguishing equality from isomorphism: the distinction between a finite-dimensional vector space V and its dual space V* = {φ: V → K} of linear maps from V to its field of scalars K. These spaces have the same dimension, and thus are isomorphic as abstract vector spaces (since algebraically, vector spaces are classified by dimension, just as sets are classified by cardinality), but there is no “natural” choice of isomorphism V ≅ V*. If one chooses a basis for V, then this yields an isomorphism: for all u, v ∈ V,

v ↦ φ_v ∈ V* such that φ_v(u) = v^T u.

This corresponds to transforming a column vector (element of V) to a row vector (element of V*) by transpose, but a different choice of basis gives a different isomorphism: the isomorphism “depends on the choice of basis”. More subtly, there is a map from a vector space V to its double dual V** = { x: V* → K} that does not depend on the choice of basis: For all v ∈ V and φ ∈ V*,

v ↦ x_v ∈ V** such that x_v(φ) = φ(v).

This leads to a third notion, that of a natural isomorphism: while V and V** are different sets, there is a “natural” choice of isomorphism between them. This intuitive notion of “an isomorphism that does not depend on an arbitrary choice” is formalized in the notion of a natural transformation; briefly, that one may consistently identify, or more generally map from, a vector space to its double dual, V ≅ V**, for any vector space in a consistent way. Formalizing this intuition is a motivation for the development of category theory. However, there is a case where the distinction between natural isomorphism and equality is usually not made. That is for the objects that may be characterized by a universal property. In fact, there is a unique isomorphism, necessarily natural, between two objects sharing the same universal property. A typical example is the set of real numbers, which may be defined through infinite decimal expansion, infinite binary expansion, Cauchy sequences, Dedekind

cuts, and many other ways. Formally, these constructions define different objects, which are all solutions of the same universal property. As these objects have exactly the same properties, one may forget the method of construction and consider them as equal. This is what everybody does when talking of "the set of the real numbers”. The same occurs with quotient spaces: they are commonly constructed as sets of equivalence classes. However, talking of sets of sets may be counterintuitive, and quotient spaces are commonly considered as a pair of a set of undetermined objects, often called “points”, and a surjective map onto this set. If one wishes to draw a distinction between an arbitrary isomorphism (one that depends on a choice) and a natural isomorphism (one that can be done consistently), one may write ≈ for an unnatural isomorphism and ≅ for a natural isomorphism, as in V ≈ V* and V ≅ V**. This convention is not universally followed, and authors who wish to distinguish between unnatural isomorphisms and natural isomorphisms will generally explicitly state the distinction. Generally, saying that two objects are equal is reserved for when there is a notion of a larger (ambient) space that these objects live in. Most often, one speaks of equality of two subsets of a given set (as in the integer set example above), but not of two objects abstractly presented. For example, the 2-dimensional unit sphere in 3-dimensional space

S^2 := {(x, y, z) ∈ R^3 | x^2 + y^2 + z^2 = 1}

and the Riemann sphere Ĉ,

which can be presented as the one-point compactification of the complex plane C ∪ {∞} or as the complex projective line (a quotient space)

P^1(C) := (C^2 \ {(0, 0)})/(C^*)

are three different descriptions for a mathematical object, all of which are isomorphic, but not equal, because they are not all subsets of a single space: the first is a subset of R^3, the second is C ≅ R^2[note 4] plus an additional point, and the third is a subquotient of C^2. In the context of category theory, objects are usually at most isomorphic—indeed, a motivation for the development of category theory was showing that different constructions in homology theory yielded equivalent (isomorphic) groups. Given maps between two objects X and Y, however, one asks if they are equal or not (they are both elements of the set Hom(X, Y), hence equality is the proper relationship), particularly in commutative diagrams.

34.5 See also

• Bisimulation

• Heap (mathematics)

• Isometry

• Isomorphism class

• Isomorphism theorem

• Universal property

34.6 Notes

[1] For clarity, by inverse is meant inverse homomorphism or inverse morphism respectively, not inverse function.

[2] The careful reader may note that A, B, C have a conventional order, namely alphabetical order, and similarly 1, 2, 3 have the order from the integers, and thus one particular isomorphism is “natural”, namely

A ↦ 1, B ↦ 2, C ↦ 3

More formally, as sets these are isomorphic, but not naturally isomorphic (there are multiple choices of isomorphism), while as ordered sets they are naturally isomorphic (there is a unique isomorphism, given above), since finite total orders are uniquely determined up to unique isomorphism by cardinality. This intuition can be formalized by saying that any two

finite totally ordered sets of the same cardinality have a natural isomorphism, the one that sends the least element of the first to the least element of the second, the least element of what remains in the first to the least element of what remains in the second, and so forth, but in general, pairs of sets of a given finite cardinality are not naturally isomorphic because there is more than one choice of map—except if the cardinality is 0 or 1, where there is a unique choice.

[3] In fact, there are precisely 3! = 6 different isomorphisms between two sets with three elements. This is equal to the number of automorphisms of a given three-element set (which in turn is equal to the order of the symmetric group on three letters), and more generally one has that the set of isomorphisms between two objects, denoted Iso(A, B), is a torsor for the automorphism group of A, Aut(A) and also a torsor for the automorphism group of B. In fact, automorphisms of an object are a key reason to be concerned with the distinction between isomorphism and equality, as demonstrated in the effect of change of basis on the identification of a vector space with its dual or with its double dual, as elaborated in the sequel.

[4] Being precise, the identification of the complex numbers with the real plane,

C = R·1 ⊕ R·i ≅ R²

depends on a choice of i; one can just as easily choose (−i), which yields a different identification—formally, complex conjugation is an automorphism—but in practice one often assumes that one has made such an identification.

34.7 References

[1] Awodey, Steve (2006). “Isomorphisms”. Category theory. Oxford University Press. p. 11. ISBN 9780198568612.

[2] Vinberg, Ėrnest Borisovich (2003). A Course in Algebra. American Mathematical Society. p. 3. ISBN 9780821834138.

[3] Mazur 2007

34.8 Further reading

• Mazur, Barry (12 June 2007), When is one thing equal to some other thing? (PDF)

34.9 External links

• Hazewinkel, Michiel, ed. (2001), “Isomorphism”, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

• Isomorphism at PlanetMath.org.

• Weisstein, Eric W., “Isomorphism”, MathWorld.

Chapter 35

Isomorphism class

An isomorphism class is a collection of mathematical objects isomorphic to each other. Isomorphism classes are often defined when the exact identity of the elements of the set is considered irrelevant and the properties of the structure of the mathematical object are studied. Examples of this are ordinals and graphs. However, there are circumstances in which the isomorphism class of an object conceals vital internal information about it; consider these examples:

• The associative algebras consisting of coquaternions and 2 × 2 real matrices are isomorphic as rings. Yet they appear in different contexts for application (plane mapping and kinematics) so the isomorphism is insufficient to merge the concepts.

• In homotopy theory, the fundamental group of a space X at a point p , though technically denoted π1(X, p) to emphasize the dependence on the base point, is often written lazily as simply π1(X) if X is path connected. The reason for this is that the existence of a path between two points allows one to identify loops at one with loops at the other; however, unless π1(X, p) is abelian this isomorphism is non-unique. Furthermore, the classification of covering spaces makes strict reference to particular subgroups of π1(X, p) , specifically distinguishing between isomorphic but conjugate subgroups, and therefore amalgamating the elements of an isomorphism class into a single featureless object seriously decreases the level of detail provided by the theory.

Chapter 36

Loop (graph theory)


A graph with a loop on vertex 1


In graph theory, a loop (also called a self-loop or a “buckle”) is an edge that connects a vertex to itself. A simple graph contains no loops. Depending on the context, a graph or a multigraph may be defined so as to either allow or disallow the presence of loops (often in concert with allowing or disallowing multiple edges between the same vertices):

• Where graphs are defined so as to allow loops and multiple edges, a graph without loops or multiple edges is often distinguished from other graphs by calling it a “simple graph”.

• Where graphs are defined so as to disallow loops and multiple edges, a graph that does have loops or multiple edges is often distinguished from the graphs that satisfy these constraints by calling it a “multigraph” or “pseudograph”.

36.1 Degree

For an undirected graph, the degree of a vertex is equal to the number of adjacent vertices. A special case is a loop, which adds two to the degree. This can be understood by letting each connection of the loop edge count as its own adjacent vertex. In other words, a vertex with a loop “sees” itself as an adjacent vertex from both ends of the edge, thus adding two, not one, to the degree. For a directed graph, a loop adds one to the in-degree and one to the out-degree.
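The degree rule above can be sketched in a few lines of Python (the edge-list representation and the function name `degrees` are choices of this sketch, not standard notation):

```python
# Sketch: vertex degrees in an undirected multigraph, where a loop
# contributes two to its endpoint's degree, as described above.
def degrees(num_vertices, edges):
    deg = [0] * num_vertices
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1  # for a loop (u == v) these two lines together add 2
    return deg

# A graph with a loop on vertex 0 and an ordinary edge between 0 and 1:
print(degrees(2, [(0, 0), (0, 1)]))  # [3, 1]
```

Counting each edge endpoint once keeps the handshake property (the degrees sum to twice the number of edges) valid even in the presence of loops.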

36.2 Notes

36.3 References

• Balakrishnan, V. K.; Graph Theory, McGraw-Hill; 1st edition (February 1, 1997). ISBN 0-07-005489-4.

• Bollobás, Béla; Modern Graph Theory, Springer; 1st edition (August 12, 2002). ISBN 0-387-98488-7.

• Diestel, Reinhard; Graph Theory, Springer; 2nd edition (February 18, 2000). ISBN 0-387-98976-5.

• Gross, Jonathon L, and Yellen, Jay; Graph Theory and Its Applications, CRC Press (December 30, 1998). ISBN 0-8493-3982-0.

• Gross, Jonathon L, and Yellen, Jay; (eds); Handbook of Graph Theory. CRC (December 29, 2003). ISBN 1-58488-090-2.

• Zwillinger, Daniel; CRC Standard Mathematical Tables and Formulae, Chapman & Hall/CRC; 31st edition (November 27, 2002). ISBN 1-58488-291-3.

36.4 External links

• Black, Paul E. “Self loop”. Dictionary of Algorithms and Data Structures. NIST.

36.5 See also

Loops in Graph Theory

• Cycle (graph theory)

• Graph theory

• Glossary of graph theory

Loops in Topology

• Möbius ladder

• Möbius strip

• Strange loop

• Klein bottle

Chapter 37

Matrix (mathematics)

For other uses, see Matrix. “Matrix theory” redirects here. For the physics topic, see Matrix string theory.

In mathematics, a matrix (plural matrices) is a rectangular array[1] of numbers, symbols, or expressions, arranged in rows and columns[2][3], that is treated in certain prescribed ways. One such way is to state the order of the matrix: for example, the matrix below is a 2 × 3 matrix because it has two rows and three columns. The individual items in a matrix are called its elements or entries.[4]

[Figure: Each element of a matrix is often denoted by a variable with two subscripts. For instance, a2,1 represents the element at the second row and first column of a matrix A.]


[  1   9  −13 ]
[ 20   5   −6 ]

Provided that they are the same size (have the same number of rows and the same number of columns), two matrices can be added or subtracted element by element. The rule for matrix multiplication, however, is that two matrices can be multiplied only when the number of columns in the first equals the number of rows in the second. A major application of matrices is to represent linear transformations, that is, generalizations of linear functions such as f(x) = 4x. For example, the rotation of vectors in three-dimensional space is a linear transformation, which can be represented by a rotation matrix R: if v is a column vector (a matrix with only one column) describing the position of a point in space, the product Rv is a column vector describing the position of that point after a rotation. The product of two transformation matrices is a matrix that represents the composition of two linear transformations. Another application of matrices is in the solution of systems of linear equations. If the matrix is square, it is possible to deduce some of its properties by computing its determinant. For example, a square matrix has an inverse if and only if its determinant is not zero. Insight into the geometry of a linear transformation is obtainable (along with other information) from the matrix’s eigenvalues and eigenvectors. Applications of matrices are found in most scientific fields. In every branch of physics, including classical mechanics, optics, electromagnetism, quantum mechanics, and quantum electrodynamics, they are used to study physical phenomena, such as the motion of rigid bodies. In computer graphics, they are used to project a 3-dimensional image onto a 2-dimensional screen.
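The rotation example can be illustrated with a minimal Python sketch (pure lists, no linear-algebra library; `rotation` and `matvec` are names invented for the sketch), shown in two dimensions for brevity:

```python
import math

# Sketch: a 2-D rotation represented as a matrix acting on a column vector.
def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matvec(A, v):
    # matrix-vector product: each output coordinate is a dot product
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

# Rotating the point (1, 0) by 90 degrees gives (0, 1), up to rounding:
x, y = matvec(rotation(math.pi / 2), [1, 0])
print(round(x, 10), round(y, 10))  # 0.0 1.0
```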
In probability theory and statistics, stochastic matrices are used to describe sets of probabilities; for instance, they are used within the PageRank algorithm that ranks the pages in a Google search.[5] Matrix calculus generalizes classical analytical notions such as derivatives and exponentials to higher dimensions. A major branch of numerical analysis is devoted to the development of efficient algorithms for matrix computations, a subject that is centuries old and is today an expanding area of research. Matrix decomposition methods simplify computations, both theoretically and practically. Algorithms that are tailored to particular matrix structures, such as sparse matrices and near-diagonal matrices, expedite computations in finite element method and other computations. Infinite matrices occur in planetary theory and in atomic theory. A simple example of an infinite matrix is the matrix representing the derivative operator, which acts on the Taylor series of a function.

37.1 Definition

A matrix is a rectangular array of numbers or other mathematical objects, for which operations such as addition and multiplication are defined.[6] Most commonly, a matrix over a field F is a rectangular array of scalars from F.[7][8] Most of this article focuses on real and complex matrices, i.e., matrices whose elements are real numbers or complex numbers, respectively. More general types of entries are discussed below. For instance, this is a real matrix:

  −1.3 0.6 A =  20.4 5.5 . 9.7 −6.2

The numbers, symbols or expressions in the matrix are called its entries or its elements. The horizontal and vertical lines of entries in a matrix are called rows and columns, respectively.

37.1.1 Size

The size of a matrix is defined by the number of rows and columns that it contains. A matrix with m rows and n columns is called an m × n matrix or m-by-n matrix, while m and n are called its dimensions. For example, the matrix A above is a 3 × 2 matrix. Matrices which have a single row are called row vectors, and those which have a single column are called column vectors. A matrix which has the same number of rows and columns is called a square matrix. A matrix with an infinite number of rows or columns (or both) is called an infinite matrix. In some contexts, such as computer algebra programs, it is useful to consider a matrix with no rows or no columns, called an empty matrix.

37.2 Notation

Matrices are commonly written in box brackets; an alternative notation uses large parentheses instead of box brackets:

A = [ a11  a12  ···  a1n ]
    [ a21  a22  ···  a2n ]
    [  ⋮    ⋮    ⋱    ⋮  ]
    [ am1  am2  ···  amn ]  ∈ Rm×n.

The specifics of symbolic matrix notation vary widely, with some prevailing trends. Matrices are usually symbolized using upper-case letters (such as A in the examples above), while the corresponding lower-case letters, with two subscript indices (e.g., a11, or a₁,₁), represent the entries. In addition to using upper-case letters to symbolize matrices, many authors use a special typographical style, commonly boldface upright (non-italic), to further distinguish matrices from other mathematical objects. An alternative notation involves the use of a double-underline with the variable name, with or without boldface style (e.g., A ). The entry in the i-th row and j-th column of a matrix A is sometimes referred to as the i,j entry, the (i,j) entry, or the (i,j)th entry of the matrix, and is most commonly denoted ai,j or aij. Alternative notations for that entry are A[i,j] or Ai,j. For example, the (1,3) entry of the following matrix A is 5 (also denoted a13, a₁,₃, A[1,3] or A1,3):

  4 −7 5 0 A = −2 0 11 8  19 1 −3 12

Sometimes, the entries of a matrix can be defined by a formula such as ai,j = f(i, j). For example, each of the entries of the following matrix A is determined by aij = i − j.

  0 −1 −2 −3 A = 1 0 −1 −2 2 1 0 −1

In this case, the matrix itself is sometimes defined by that formula, within square brackets or double parentheses. For example, the matrix above is defined as A = [i−j], or A = ((i−j)). If matrix size is m × n, the above-mentioned formula f(i, j) is valid for any i = 1, ..., m and any j = 1, ..., n. This can be either specified separately, or using m × n as a subscript. For instance, the matrix A above is 3 × 4 and can be defined as A = [i − j] (i = 1, 2, 3; j = 1, ..., 4), or A = [i − j]3×4. Some programming languages utilize doubly subscripted arrays (or arrays of arrays) to represent an m-by-n matrix. Some programming languages start the numbering of array indexes at zero, in which case the entries of an m-by-n matrix are indexed by 0 ≤ i ≤ m − 1 and 0 ≤ j ≤ n − 1.[9] This article follows the more common convention in mathematical writing where enumeration starts from 1. The set of all m-by-n matrices is denoted 𝕄(m, n).
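The formula-defined matrix above can be reproduced with a short Python sketch (1-based indices as in the text; `from_formula` is a name chosen for this sketch):

```python
# Sketch: build an m-by-n matrix whose (i, j) entry is f(i, j), with
# 1-based indices as in the surrounding text.
def from_formula(m, n, f):
    return [[f(i, j) for j in range(1, n + 1)] for i in range(1, m + 1)]

A = from_formula(3, 4, lambda i, j: i - j)  # a_ij = i - j
print(A)  # [[0, -1, -2, -3], [1, 0, -1, -2], [2, 1, 0, -1]]
```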

37.3 Basic operations

There are a number of basic operations that can be applied to modify matrices, called matrix addition, scalar multi- plication, transposition, matrix multiplication, row operations, and submatrix.[11]

37.3.1 Addition, scalar multiplication and transposition

Main articles: Matrix addition, Scalar multiplication and Transpose

Familiar properties of numbers extend to these operations of matrices: for example, addition is commutative, i.e., the matrix sum does not depend on the order of the summands: A + B = B + A.[12] The transpose is compatible with addition and scalar multiplication, as expressed by (cA)T = c(AT) and (A + B)T = AT + BT. Finally, (AT)T = A.
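These identities are easy to check numerically; here is a Python sketch with list-of-lists matrices (the helper names `add` and `transpose` are this sketch's own):

```python
# Sketch: element-wise addition and transposition, checking
# (A + B)^T = A^T + B^T and (A^T)^T = A from the text.
def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1, 2, 3], [4, 5, 6]]
B = [[6, 5, 4], [3, 2, 1]]
assert transpose(add(A, B)) == add(transpose(A), transpose(B))
assert transpose(transpose(A)) == A
```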

37.3.2 Matrix multiplication

Main article: Matrix multiplication

Multiplication of two matrices is defined if and only if the number of columns of the left matrix is the same as the number of rows of the right matrix. If A is an m-by-n matrix and B is an n-by-p matrix, then their matrix product AB is the m-by-p matrix whose entries are given by the dot product of the corresponding row of A and the corresponding column of B:

[AB]i,j = Ai,1B1,j + Ai,2B2,j + ··· + Ai,nBn,j = Σⁿᵣ₌₁ Ai,rBr,j ,

[Figure: Schematic depiction of the matrix product AB of two matrices A and B.]

where 1 ≤ i ≤ m and 1 ≤ j ≤ p.[13] For example, the underlined entry 2340 in the product is calculated as (2 × 1000) + (3 × 100) + (4 × 10) = 2340:

  [ ] 0 1000 [ ] 2 3 4 3 2340 1 100  = . 1 0 0 0 1000 0 10 37.3. BASIC OPERATIONS 211

Matrix multiplication satisfies the rules (AB)C = A(BC) (associativity), and (A+B)C = AC+BC as well as C(A+B) = CA+CB (left and right distributivity), whenever the size of the matrices is such that the various products are defined.[14] The product AB may be defined without BA being defined, namely if A and B are m-by-n and n-by-k matrices, respectively, and m ≠ k. Even if both products are defined, they need not be equal, i.e., generally

AB ≠ BA,

i.e., matrix multiplication is not commutative, in marked contrast to (rational, real, or complex) numbers whose product is independent of the order of the factors. An example of two matrices not commuting with each other is:

[ 1  2 ] [ 0  1 ]   [ 0  1 ]
[ 3  4 ] [ 0  0 ] = [ 0  3 ] ,

whereas

[ 0  1 ] [ 1  2 ]   [ 3  4 ]
[ 0  0 ] [ 3  4 ] = [ 0  0 ] .

Besides the ordinary matrix multiplication just described, there exist other less frequently used operations on matrices that can be considered forms of multiplication, such as the Hadamard product and the .[15] They arise in solving matrix equations such as the Sylvester equation.

37.3.3 Row operations

Main article: Row operations

There are three types of row operations:

1. row addition, that is, adding a row to another;

2. row multiplication, that is, multiplying all entries of a row by a non-zero constant;

3. row switching, that is, interchanging two rows of a matrix.

These operations are used in a number of ways, including solving linear equations and finding matrix inverses.

37.3.4 Submatrix

A submatrix of a matrix is obtained by deleting any collection of rows and/or columns.[16][17][18] For example, from the following 3-by-4 matrix, we can construct a 2-by-3 submatrix by removing row 3 and column 2:

  1 2 3 4 [ ] 1 3 4 A = 5 6 7 8  → . 5 7 8 9 10 11 12

The minors and cofactors of a matrix are found by computing the determinant of certain submatrices.[18][19] A principal submatrix is a square submatrix obtained by removing certain rows and columns. The definition varies from author to author. According to some authors, a principal submatrix is a submatrix in which the set of row indices that remain is the same as the set of column indices that remain.[20][21] Other authors define a principal submatrix to be one in which the first k rows and columns, for some number k, are the ones that remain;[22] this type of submatrix has also been called a leading principal submatrix.[23]

37.4 Linear equations

Main articles: Linear equation and System of linear equations

Matrices can be used to compactly write and work with multiple linear equations, i.e., systems of linear equations. For example, if A is an m-by-n matrix, x designates a column vector (i.e., n×1-matrix) of n variables x1, x2, ..., xn, and b is an m×1-column vector, then the matrix equation

Ax = b

is equivalent to the system of linear equations

A1,1x1 + A1,2x2 + ... + A1,nxn = b1
⋮
Am,1x1 + Am,2x2 + ... + Am,nxn = bm.[24]
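A system in the matrix form Ax = b can be solved by Gaussian elimination (discussed later in this chapter); a self-contained Python sketch for small dense systems (`solve` is a name chosen here, not a library call):

```python
# Sketch: solve Ax = b by Gaussian elimination with partial pivoting,
# followed by back substitution. Suitable only for small dense systems.
def solve(A, b):
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix [A | b]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))  # choose pivot row
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):  # back substitution
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

# 2x + y = 3 and x + 3y = 5 have the solution x = 0.8, y = 1.4:
print(solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))  # [0.8, 1.4]
```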

37.5 Linear transformations

Main articles: Linear transformation and Transformation matrix

Matrices and matrix multiplication reveal their essential features when related to linear transformations, also known as linear maps. A real m-by-n matrix A gives rise to a linear transformation Rn → Rm mapping each vector x in Rn to the (matrix) product Ax, which is a vector in Rm. Conversely, each linear transformation f: Rn → Rm arises from a unique m-by-n matrix A: explicitly, the (i, j)-entry of A is the ith coordinate of f(ej), where ej = (0,...,0,1,0,...,0) is the unit vector with 1 in the jth position and 0 elsewhere. The matrix A is said to represent the linear map f, and A is called the transformation matrix of f. For example, the 2×2 matrix

A = [ a  c ]
    [ b  d ]

can be viewed as the transform of the unit square into a parallelogram with vertices at (0, 0), (a, b), (a + c, b + d), and (c, d). The parallelogram pictured at the right is obtained by multiplying A with each of the column vectors (0, 0)T, (1, 0)T, (1, 1)T, and (0, 1)T in turn. These vectors define the vertices of the unit square. The following table shows a number of 2-by-2 matrices with the associated linear maps of R². The blue original is mapped to the green grid and shapes. The origin (0, 0) is marked with a black point. Under the 1-to-1 correspondence between matrices and linear maps, matrix multiplication corresponds to composition of maps:[25] if a k-by-m matrix B represents another linear map g : Rm → Rk, then the composition g ∘ f is represented by BA since

(g ∘ f)(x) = g(f(x)) = g(Ax) = B(Ax) = (BA)x.

The last equality follows from the above-mentioned associativity of matrix multiplication. The rank of a matrix A is the maximum number of linearly independent row vectors of the matrix, which is the same as the maximum number of linearly independent column vectors.[26] Equivalently it is the dimension of the image of the linear map represented by A.[27] The rank-nullity theorem states that the dimension of the kernel of a matrix plus the rank equals the number of columns of the matrix.[28]

37.6 Square matrices

Main article: Square matrix


The vectors represented by a 2-by-2 matrix correspond to the sides of a unit square transformed into a parallelogram.

A square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is known as a square matrix of order n. Any two square matrices of the same order can be added and multiplied. The entries aii form the main diagonal of a square matrix. They lie on the imaginary line which runs from the top left corner to the bottom right corner of the matrix.

37.6.1 Main types

Diagonal and triangular matrices

If all entries of A below the main diagonal are zero, A is called an upper triangular matrix. Similarly if all entries of A above the main diagonal are zero, A is called a lower triangular matrix. If all entries outside the main diagonal are zero, A is called a diagonal matrix.

Identity matrix

The identity matrix In of size n is the n-by-n matrix in which all the elements on the main diagonal are equal to 1 and all other elements are equal to 0, e.g.

  1 0 ··· 0 [ ]   [ ] 0 1 ··· 0 1 0 ··· I1 = 1 ,I2 = , ,In = . . . . 0 1 . . .. . 0 0 ··· 1 It is a square matrix of order n, and also a special kind of diagonal matrix. It is called an identity matrix because multiplication with it leaves a matrix unchanged:

AIn = ImA = A for any m-by-n matrix A.

Symmetric or skew-symmetric matrix

A square matrix A that is equal to its transpose, i.e., A = AT, is a symmetric matrix. If instead A is equal to the negative of its transpose, i.e., A = −AT, then A is a skew-symmetric matrix. For complex matrices, symmetry is often replaced by the concept of Hermitian matrices, which satisfy A∗ = A, where the star or asterisk denotes the conjugate transpose of the matrix, i.e., the transpose of the complex conjugate of A. By the spectral theorem, real symmetric matrices and complex Hermitian matrices have an eigenbasis; i.e., every vector is expressible as a linear combination of eigenvectors. In both cases, all eigenvalues are real.[29] This theorem can be generalized to infinite-dimensional situations related to matrices with infinitely many rows and columns; see below.

Invertible matrix and its inverse

A square matrix A is called invertible or non-singular if there exists a matrix B such that

AB = BA = In.[30][31]

If B exists, it is unique and is called the inverse matrix of A, denoted A−1.
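For the 2-by-2 case the inverse has a closed form, A−1 = adj(A)/det(A) (a standard fact, anticipating the Laplace formula quoted later in the chapter); a Python sketch with a name of its own choosing:

```python
# Sketch: inverse of a 2-by-2 matrix [[a, b], [c, d]] via the adjugate,
# valid whenever the determinant ad - bc is nonzero.
def inverse_2x2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular, no inverse exists")
    return [[d / det, -b / det], [-c / det, a / det]]

print(inverse_2x2([[4.0, 7.0], [2.0, 6.0]]))  # [[0.6, -0.7], [-0.2, 0.4]]
```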

Definite matrix

A symmetric n×n-matrix is called positive-definite (respectively negative-definite; indefinite), if for all nonzero vectors x ∈ Rn the associated quadratic form given by

Q(x) = xTAx

takes only positive values (respectively only negative values; both some negative and some positive values).[32] If the quadratic form takes only non-negative (respectively only non-positive) values, the symmetric matrix is called positive-semidefinite (respectively negative-semidefinite); hence the matrix is indefinite precisely when it is neither positive-semidefinite nor negative-semidefinite. A symmetric matrix is positive-definite if and only if all its eigenvalues are positive, i.e., the matrix is positive-semidefinite and it is invertible.[33] The table at the right shows two possibilities for 2-by-2 matrices. Allowing as input two different vectors instead yields the bilinear form associated to A:

B_A(x, y) = xTAy.[34]

Orthogonal matrix

An orthogonal matrix is a square matrix with real entries whose columns and rows are orthogonal unit vectors (i.e., orthonormal vectors). Equivalently, a matrix A is orthogonal if its transpose is equal to its inverse:

AT = A−1, which entails

ATA = AAT = I, where I is the identity matrix. An orthogonal matrix A is necessarily invertible (with inverse A−1 = AT), unitary (A−1 = A*), and normal (A*A = AA*). The determinant of any orthogonal matrix is either +1 or −1. A special orthogonal matrix is an orthogonal matrix with determinant +1. As a linear transformation, every orthogonal matrix with determinant +1 is a pure rotation, while every orthogonal matrix with determinant −1 is either a pure reflection, or a composition of reflection and rotation. The complex analogue of an orthogonal matrix is a unitary matrix.

37.6.2 Main operations

Trace

The trace, tr(A) of a square matrix A is the sum of its diagonal entries. While matrix multiplication is not commutative as mentioned above, the trace of the product of two matrices is independent of the order of the factors:

tr(AB) = tr(BA).

This is immediate from the definition of matrix multiplication:

tr(AB) = Σᵐᵢ₌₁ Σⁿⱼ₌₁ AijBji = tr(BA). Also, the trace of a matrix is equal to that of its transpose, i.e.,

tr(A) = tr(AT).
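Both trace identities can be checked on a small example; a Python sketch with A of size 2 × 3 and B of size 3 × 2, so that AB and BA are square matrices of different sizes yet share the same trace:

```python
# Sketch: verify tr(AB) = tr(BA) even though AB is 2x2 and BA is 3x3.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

A = [[1, 2, 3], [4, 5, 6]]
B = [[7, 8], [9, 10], [11, 12]]
assert trace(matmul(A, B)) == trace(matmul(B, A)) == 212
```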

Determinant

Main article: Determinant

The determinant det(A) or |A| of a square matrix A is a number encoding certain properties of the matrix. A matrix is invertible if and only if its determinant is nonzero. Its absolute value equals the area (in R²) or volume (in R³) of the image of the unit square (or cube), while its sign corresponds to the orientation of the corresponding linear map: the determinant is positive if and only if the orientation is preserved. The determinant of 2-by-2 matrices is given by

det [ a  b ] = ad − bc.
    [ c  d ]

The determinant of 3-by-3 matrices involves 6 terms (rule of Sarrus). The more lengthy Leibniz formula generalises these two formulae to all dimensions.[35] The determinant of a product of square matrices equals the product of their determinants:


A linear transformation on R2 given by the indicated matrix. The determinant of this matrix is −1, as the area of the green parallelogram at the right is 1, but the map reverses the orientation, since it turns the counterclockwise orientation of the vectors to a clockwise one.

det(AB) = det(A) · det(B).[36]

Adding a multiple of any row to another row, or a multiple of any column to another column, does not change the determinant. Interchanging two rows or two columns affects the determinant by multiplying it by −1.[37] Using these operations, any matrix can be transformed to a lower (or upper) triangular matrix, and for such matrices the determinant equals the product of the entries on the main diagonal; this provides a method to calculate the determinant of any matrix. Finally, the Laplace expansion expresses the determinant in terms of minors, i.e., determinants of smaller matrices.[38] This expansion can be used for a recursive definition of determinants (taking as starting case the determinant of a 1-by-1 matrix, which is its unique entry, or even the determinant of a 0-by-0 matrix, which is 1), that can be seen to be equivalent to the Leibniz formula. Determinants can be used to solve linear systems using Cramer’s rule, where the division of the determinants of two related square matrices equates to the value of each of the system’s variables.[39]
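The Laplace expansion described above translates directly into a recursive function; a Python sketch (exponential-time, so for small matrices only; the base case of a 0-by-0 matrix having determinant 1 matches the text):

```python
# Sketch: recursive determinant via Laplace expansion along the first row.
def det(A):
    if not A:          # 0-by-0 matrix: determinant is 1
        return 1
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j + 1:] for row in A[1:]])  # minor
               for j in range(len(A)))

print(det([[1, 2], [3, 4]]))                   # -2, i.e. ad - bc
print(det([[2, 0, 0], [0, 3, 0], [0, 0, 4]]))  # 24, the product of the diagonal
```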

Eigenvalues and eigenvectors

Main article: Eigenvalues and eigenvectors

A number λ and a non-zero vector v satisfying

Av = λv

are called an eigenvalue and an eigenvector of A, respectively.[nb 1][40] The number λ is an eigenvalue of an n×n-matrix A if and only if A−λIn is not invertible, which is equivalent to

det(A − λI) = 0. [41]

The polynomial pA in an indeterminate X, given by evaluating the determinant det(XIn − A), is called the characteristic polynomial of A. It is a monic polynomial of degree n. Therefore the polynomial equation pA(λ) = 0 has at most n different solutions, i.e., eigenvalues of the matrix.[42] They may be complex even if the entries of A are real. According to the Cayley–Hamilton theorem, pA(A) = 0, that is, the result of substituting the matrix itself into its own characteristic polynomial yields the zero matrix.
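For a 2-by-2 matrix the characteristic polynomial is λ² − tr(A)·λ + det(A), so the eigenvalues follow from the quadratic formula; a Python sketch restricted to the case of real eigenvalues:

```python
import math

# Sketch: eigenvalues of a 2-by-2 matrix as the roots of its
# characteristic polynomial lambda^2 - tr(A)*lambda + det(A) = 0
# (real eigenvalues assumed; complex ones would need cmath).
def eigenvalues_2x2(A):
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)  # raises ValueError if complex
    return ((tr - disc) / 2, (tr + disc) / 2)

print(eigenvalues_2x2([[2.0, 0.0], [0.0, 5.0]]))  # (2.0, 5.0)
```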

37.7 Computational aspects

Matrix calculations can often be performed with different techniques. Many problems can be solved by direct algorithms or by iterative approaches. For example, the eigenvectors of a square matrix can be obtained by finding a

sequence of vectors xn converging to an eigenvector when n tends to infinity.[43] To be able to choose the most appropriate algorithm for each specific problem, it is important to determine both the effectiveness and precision of all the available algorithms. The domain studying these matters is called numerical linear algebra.[44] As with other numerical situations, two main aspects are the complexity of algorithms and their numerical stability. Determining the complexity of an algorithm means finding upper bounds or estimates of how many elementary operations such as additions and multiplications of scalars are necessary to perform some algorithm, e.g., multiplication of matrices. For example, calculating the matrix product of two n-by-n matrices using the definition given above needs n³ multiplications, since for any of the n² entries of the product, n multiplications are necessary. The Strassen algorithm outperforms this “naive” algorithm; it needs only n^2.807 multiplications.[45] A refined approach also incorporates specific features of the computing devices. In many practical situations additional information about the matrices involved is known. An important case is that of sparse matrices, i.e., matrices most of whose entries are zero. There are specifically adapted algorithms for, say, solving linear systems Ax = b for sparse matrices A, such as the conjugate gradient method.[46] An algorithm is, roughly speaking, numerically stable if small deviations in the input values do not lead to big deviations in the result. For example, calculating the inverse of a matrix via Laplace’s formula (Adj(A) denotes the adjugate matrix of A)

A−1 = Adj(A) / det(A)

may lead to significant rounding errors if the determinant of the matrix is very small. The norm of a matrix can be used to capture the conditioning of linear algebraic problems, such as computing a matrix’s inverse.[47] Although most computer languages are not designed with commands or libraries for matrices, as early as the 1970s, some engineering desktop computers such as the HP 9830 had ROM cartridges to add BASIC commands for matrices. Some computer languages such as APL were designed to manipulate matrices, and various mathematical programs can be used to aid computing with matrices.[48]

37.8 Decomposition

Main articles: Matrix decomposition, Matrix diagonalization, Gaussian elimination and Montante’s method

There are several methods to render matrices into a more easily accessible form. They are generally referred to as matrix decomposition or matrix factorization techniques. The interest of all these techniques is that they preserve certain properties of the matrices in question, such as determinant, rank or inverse, so that these quantities can be calculated after applying the transformation, or that certain matrix operations are algorithmically easier to carry out for some types of matrices. The LU decomposition factors matrices as a product of a lower triangular matrix (L) and an upper triangular matrix (U).[49] Once this decomposition is calculated, linear systems can be solved more efficiently, by a simple technique called forward and back substitution. Likewise, inverses of triangular matrices are algorithmically easier to calculate. Gaussian elimination is a similar algorithm; it transforms any matrix to row echelon form.[50] Both methods proceed by multiplying the matrix by suitable elementary matrices, which correspond to permuting rows or columns and adding multiples of one row to another row. Singular value decomposition expresses any matrix A as a product UDV∗, where U and V are unitary matrices and D is a diagonal matrix. The eigendecomposition or diagonalization expresses A as a product VDV−1, where D is a diagonal matrix and V is a suitable invertible matrix.[51] If A can be written in this form, it is called diagonalizable. More generally, and applicable to all matrices, the Jordan decomposition transforms a matrix into Jordan normal form, that is to say matrices whose only nonzero entries are the eigenvalues of A, placed on the main diagonal, and possibly entries equal to one directly above the main diagonal, as shown at the right.[52] Given the eigendecomposition, the nth power of A (i.e., n-fold iterated matrix multiplication) can be calculated via

Aⁿ = (VDV⁻¹)ⁿ = VDV⁻¹ VDV⁻¹ ⋯ VDV⁻¹ = VDⁿV⁻¹

and the power of a diagonal matrix can be calculated by taking the corresponding powers of the diagonal entries, which is much easier than doing the exponentiation for A instead. This can be used to compute the matrix exponential e^A, a need frequently arising in solving linear differential equations, matrix logarithms and square roots of matrices.[53] To avoid numerically ill-conditioned situations, further algorithms such as the can be employed.[54]

An example of a matrix in Jordan normal form. The grey blocks are called Jordan blocks.
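The power computation via the eigendecomposition can be sketched numerically. The following NumPy snippet (an illustration, not part of the original text) assumes a diagonalizable matrix and checks the identity Aⁿ = VDⁿV⁻¹ against repeated multiplication:

```python
import numpy as np

# A diagonalizable matrix: compute A^5 via the eigendecomposition A = V D V^-1,
# so that A^5 = V D^5 V^-1 and only the diagonal entries are exponentiated.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, V = np.linalg.eig(A)   # columns of V are eigenvectors
D5 = np.diag(eigvals ** 5)      # powering D acts entrywise on the diagonal
A5 = V @ D5 @ np.linalg.inv(V)

# Compare against n-fold iterated matrix multiplication.
assert np.allclose(A5, np.linalg.matrix_power(A, 5))
```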

37.9 Abstract algebraic aspects and generalizations

Matrices can be generalized in different ways. Abstract algebra uses matrices with entries in more general fields or even rings, while linear algebra codifies properties of matrices in the notion of linear maps. It is possible to consider matrices with infinitely many columns and rows. Another extension is tensors, which can be seen as higher-dimensional arrays of numbers, as opposed to vectors, which can often be realised as sequences of numbers, while matrices are rectangular or two-dimensional arrays of numbers.[55] Matrices, subject to certain requirements, tend to form groups known as matrix groups.

37.9.1 Matrices with more general entries

This article focuses on matrices whose entries are real or complex numbers. However, matrices can be considered with much more general types of entries than real or complex numbers. As a first step of generalization, any field, i.e., a set where addition, subtraction, multiplication and division operations are defined and well-behaved, may be used instead of R or C, for example rational numbers or finite fields. For example, coding theory makes use of matrices over finite fields. Wherever eigenvalues are considered, as these are roots of a polynomial they may exist only in a larger field than that of the entries of the matrix; for instance they may be complex in the case of a matrix with real entries. The possibility to reinterpret the entries of a matrix as elements of a larger field (e.g., to view a real matrix as a complex matrix whose entries happen to be all real) then allows considering each square matrix to possess a full set of eigenvalues. Alternatively one can consider only matrices with entries in an algebraically closed field, such as C, from the outset.

More generally, abstract algebra makes great use of matrices with entries in a ring R.[56] Rings are a more general notion than fields in that a division operation need not exist. The very same addition and multiplication operations of matrices extend to this setting, too. The set M(n, R) of all square n-by-n matrices over R is a ring called the matrix ring, isomorphic to the endomorphism ring of the left R-module Rⁿ.[57] If the ring R is commutative, i.e., its multiplication is commutative, then M(n, R) is a unitary noncommutative (unless n = 1) associative algebra over R.
The determinant of square matrices over a commutative ring R can still be defined using the Leibniz formula; such a matrix is invertible if and only if its determinant is invertible in R, generalising the situation over a field F, where every nonzero element is invertible.[58] Matrices over superrings are called supermatrices.[59]

Matrices do not always have all their entries in the same ring – or even in any ring at all. One special but common case is block matrices, which may be considered as matrices whose entries themselves are matrices. The entries need not be square matrices, and thus need not be members of any ordinary ring; but their sizes must fulfil certain compatibility conditions.
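The same addition and multiplication rules over a more general ring can be illustrated concretely. The following sketch (not part of the original text) works in the finite field Z/5Z by reducing entries modulo 5, and checks the invertibility criterion via the determinant:

```python
import numpy as np

p = 5  # work in the finite field Z/5Z

def matmul_mod(A, B, p):
    """Matrix product with entries reduced modulo p."""
    return (A @ B) % p

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[2, 0],
              [1, 3]])

C = matmul_mod(A, B, p)  # the usual multiplication rule, entries taken in Z/5Z

# Over a commutative ring, A is invertible iff det(A) is invertible in the ring;
# in Z/5Z every nonzero residue is invertible, and det(A) = 1*4 - 2*3 = -2 ≡ 3.
det_mod = int(round(np.linalg.det(A))) % p
assert det_mod == 3
```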

37.9.2 Relationship to linear maps

Linear maps Rn → Rm are equivalent to m-by-n matrices, as described above. More generally, any linear map f: V → W between finite-dimensional vector spaces can be described by a matrix A = (aij), after choosing bases v1, ..., vn of V, and w1, ..., wm of W (so n is the dimension of V and m is the dimension of W), which is such that

f(vⱼ) = ∑ᵢ₌₁ᵐ aᵢ,ⱼ wᵢ,   for j = 1, ..., n.

In other words, column j of A expresses the image of vⱼ in terms of the basis vectors wᵢ of W; thus this relation uniquely determines the entries of the matrix A. Note that the matrix depends on the choice of the bases: different choices of bases give rise to different, but equivalent matrices.[60] Many of the above concrete notions can be reinterpreted in this light; for example, the transpose matrix Aᵀ describes the transpose of the linear map given by A, with respect to the dual bases.[61]

These properties can be restated in a more natural way: the category of all matrices with entries in a field k with multiplication as composition is equivalent to the category of finite-dimensional vector spaces and linear maps over this field. More generally, the set of m×n matrices can be used to represent the R-linear maps between the free modules Rᵐ and Rⁿ for an arbitrary ring R with unity. When n = m, composition of these maps is possible, and this gives rise to the matrix ring of n×n matrices representing the endomorphism ring of Rⁿ.
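The rule "column j of A holds the coordinates of f(vⱼ)" can be sketched directly. The snippet below (an illustration with a made-up linear map, using the standard bases) builds the matrix of a map f: R³ → R² column by column and verifies that the matrix-vector product reproduces the map:

```python
import numpy as np

# An illustrative linear map f: R^3 -> R^2 (not from the text).
def f(v):
    x, y, z = v
    return np.array([x + 2*y, 3*z])

n = 3  # dimension of the source space
# Column j of A is f applied to the j-th standard basis vector e_j.
A = np.column_stack([f(e) for e in np.eye(n)])   # shape (m, n) = (2, 3)

v = np.array([1.0, 1.0, 1.0])
assert np.allclose(A @ v, f(v))   # the matrix represents the map
```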

37.9.3 Matrix groups

Main article: Matrix group

A group is a mathematical structure consisting of a set of objects together with a binary operation, i.e., an operation combining any two objects to a third, subject to certain requirements.[62] A group in which the objects are matrices and the group operation is matrix multiplication is called a matrix group.[nb 2][63] Since in a group every element has to be invertible, the most general matrix groups are the groups of all invertible matrices of a given size, called the general linear groups. Any property of matrices that is preserved under matrix products and inverses can be used to define further matrix groups. For example, matrices with a given size and with a determinant of 1 form a subgroup of (i.e., a smaller group contained in) their general linear group, called a special linear group.[64] Orthogonal matrices, determined by the condition

MᵀM = I,

form the orthogonal group.[65] Every orthogonal matrix has determinant 1 or −1. Orthogonal matrices with determinant 1 form a subgroup called the special orthogonal group. Every finite group is isomorphic to a matrix group, as one can see by considering the regular representation of the symmetric group.[66] General groups can be studied using matrix groups, which are comparatively well understood, by means of representation theory.[67]
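The group properties named above (closure under products and inverses, the defining condition MᵀM = I) can be checked numerically. A minimal sketch, not from the text, using 2-by-2 rotation matrices as elements of the special orthogonal group SO(2):

```python
import numpy as np

def rotation(theta):
    """A 2-by-2 rotation matrix, an element of the special orthogonal group SO(2)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

Q1, Q2 = rotation(0.3), rotation(1.1)
P = Q1 @ Q2   # closure: the product of orthogonal matrices is orthogonal

assert np.allclose(P.T @ P, np.eye(2))        # the defining condition M^T M = I
assert np.isclose(np.linalg.det(P), 1.0)      # determinant 1: special orthogonal
assert np.allclose(np.linalg.inv(Q1), Q1.T)   # inverses stay in the group
```

Rotations compose by adding angles, so the product above equals rotation(1.4); the same checks carry over to orthogonal matrices of any size.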

37.9.4 Infinite matrices

It is also possible to consider matrices with infinitely many rows and/or columns[68] even if, being infinite objects, one cannot write down such matrices explicitly. All that matters is that for every element in the set indexing rows, and every element in the set indexing columns, there is a well-defined entry (these index sets need not even be subsets of the natural numbers). The basic operations of addition, subtraction, scalar multiplication and transposition can still be defined without problem; however, matrix multiplication may involve infinite summations to define the resulting entries, and these are not defined in general.

If R is any ring with unity, then the ring of endomorphisms of M = ⊕ᵢ∈I R as a right R-module is isomorphic to the ring of column-finite matrices CFM_I(R), whose entries are indexed by I × I and whose columns each contain only finitely many nonzero entries. The endomorphisms of M considered as a left R-module result in an analogous object, the row-finite matrices RFM_I(R), whose rows each have only finitely many nonzero entries.

If infinite matrices are used to describe linear maps, then only those matrices all of whose columns have but a finite number of nonzero entries can be used, for the following reason. For a matrix A to describe a linear map f: V → W, bases for both spaces must have been chosen; recall that by definition this means that every vector in the space can be written uniquely as a (finite) linear combination of basis vectors, so that, written as a (column) vector v of coefficients, only finitely many entries vᵢ are nonzero. Now the columns of A describe the images by f of individual basis vectors of V in the basis of W, which is only meaningful if these columns have only finitely many nonzero entries.
There is no restriction on the rows of A, however: in the product A·v there are only finitely many nonzero coefficients of v involved, so every one of its entries, even if it is given as an infinite sum of products, involves only finitely many nonzero terms and is therefore well defined. Moreover, this amounts to forming a linear combination of the columns of A that effectively involves only finitely many of them, whence the result has only finitely many nonzero entries, because each of those columns does. One also sees that the product of two matrices of the given type is well defined (provided as usual that the column-index and row-index sets match), is again of the same type, and corresponds to the composition of linear maps.

If R is a normed ring, then the condition of row or column finiteness can be relaxed. With the norm in place, absolutely convergent series can be used instead of finite sums. For example, the matrices whose column sums are absolutely convergent sequences form a ring. Analogously, the matrices whose row sums are absolutely convergent series also form a ring.

In that vein, infinite matrices can also be used to describe operators on Hilbert spaces, where convergence and continuity questions arise, which again results in certain constraints that have to be imposed. However, the explicit point of view of matrices tends to obfuscate the matter,[nb 3] and the abstract and more powerful tools of functional analysis can be used instead.
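The column-finiteness argument can be made concrete with a small data-structure sketch (entirely illustrative, not from the text): store each column of an "infinite" matrix as a finite dictionary of its nonzero entries, and apply it to a finitely supported vector. Every sum that occurs is then finite, exactly as the passage above argues:

```python
# A sketch of a column-finite "infinite" matrix: each column is a finite
# dict of its nonzero entries, so A·v needs only finitely many terms.
from collections import defaultdict

def apply_column_finite(columns, v):
    """Apply the matrix whose j-th column is columns[j] (a dict i -> entry)
    to a finitely supported vector v (a dict j -> coefficient)."""
    result = defaultdict(int)
    for j, coeff in v.items():                  # finitely many nonzero v_j
        for i, entry in columns.get(j, {}).items():
            result[i] += coeff * entry          # each entry is a finite sum
    return {i: x for i, x in result.items() if x != 0}

# The shift map e_j -> e_{j+1}, described column by column over a finite window.
shift = {j: {j + 1: 1} for j in range(1000)}

assert apply_column_finite(shift, {0: 2, 5: 3}) == {1: 2, 6: 3}
```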

37.9.5 Empty matrices

An empty matrix is a matrix in which the number of rows or columns (or both) is zero.[69][70] Empty matrices help dealing with maps involving the zero vector space. For example, if A is a 3-by-0 matrix and B is a 0-by-3 matrix, then AB is the 3-by-3 zero matrix corresponding to the null map from a 3-dimensional space V to itself, while BA is a 0-by-0 matrix. There is no common notation for empty matrices, but most computer algebra systems allow creating and computing with them. The determinant of the 0-by-0 matrix is 1, as follows from regarding the empty product occurring in the Leibniz formula for the determinant as 1. This value is also consistent with the fact that the identity map from any finite-dimensional space to itself has determinant 1, a fact that is often used as a part of the characterization of determinants.
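NumPy is one of the systems that supports computing with empty matrices, so the example in the passage can be reproduced directly (a sketch assuming NumPy's conventions):

```python
import numpy as np

A = np.zeros((3, 0))   # a 3-by-0 empty matrix
B = np.zeros((0, 3))   # a 0-by-3 empty matrix

AB = A @ B             # the null map on a 3-dimensional space: 3-by-3 zero matrix
BA = B @ A             # a 0-by-0 matrix

assert AB.shape == (3, 3) and np.all(AB == 0)
assert BA.shape == (0, 0)
assert np.linalg.det(BA) == 1.0   # empty-product convention: det of 0-by-0 is 1
```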

37.10 Applications

There are numerous applications of matrices, both in mathematics and other sciences. Some of them merely take advantage of the compact representation of a set of numbers in a matrix. For example, in game theory and economics, the payoff matrix encodes the payoff for two players, depending on which out of a given (finite) set of alternatives the players choose.[71] Text mining and automated thesaurus compilation make use of document-term matrices such as tf-idf to track frequencies of certain words in several documents.[72] Complex numbers can be represented by particular real 2-by-2 matrices via

a + ib ↔ [ a  −b ; b  a ],

under which addition and multiplication of complex numbers and matrices correspond to each other. For example, 2-by-2 rotation matrices represent the multiplication with some complex number of absolute value 1, as above. A similar interpretation is possible for quaternions[73] and Clifford algebras in general. Early encryption techniques such as the Hill cipher also used matrices. However, due to the linear nature of matrices, these codes are comparatively easy to break.[74] Computer graphics uses matrices both to represent objects and to calculate transformations of objects using affine rotation matrices to accomplish tasks such as projecting a three-dimensional object onto a two-dimensional screen, corresponding to a theoretical camera observation.[75] Matrices over a polynomial ring are important in the study of control theory.

Chemistry makes use of matrices in various ways, particularly since the use of quantum theory to discuss molecular bonding and spectroscopy. Examples are the overlap matrix and the Fock matrix used in solving the Roothaan equations to obtain the molecular orbitals of the Hartree–Fock method.
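The correspondence a + ib ↔ [a, −b; b, a] can be checked numerically. A minimal sketch (not from the text) verifying that complex addition and multiplication match matrix addition and matrix multiplication:

```python
import numpy as np

def as_matrix(z):
    """Represent a + ib by the real 2-by-2 matrix [[a, -b], [b, a]]."""
    return np.array([[z.real, -z.imag],
                     [z.imag,  z.real]])

z, w = 2 + 3j, 1 - 1j

# Addition and multiplication of complex numbers correspond to matrix
# addition and matrix multiplication under this representation.
assert np.allclose(as_matrix(z) + as_matrix(w), as_matrix(z + w))
assert np.allclose(as_matrix(z) @ as_matrix(w), as_matrix(z * w))
```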

37.10.1 Graph theory

The adjacency matrix of a finite graph is a basic notion of graph theory.[76] It records which vertices of the graph are connected by an edge. Matrices containing just two different values (1 and 0 meaning for example “yes” and “no”, respectively) are called logical matrices. The distance (or cost) matrix contains information about distances of the edges.[77] These concepts can be applied to websites connected by hyperlinks or cities connected by roads etc., in which case (unless the connection network is extremely dense) the matrices tend to be sparse, i.e., contain few nonzero entries. Therefore, specifically tailored matrix algorithms can be used in network theory.
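One standard use of the adjacency matrix (a small illustration, not from the text) is that the entries of its powers count walks: (Aᵏ)ᵢⱼ is the number of walks of length k from vertex i to vertex j. Using the 3-vertex graph whose adjacency matrix appears in the figure below:

```python
import numpy as np

# Adjacency matrix of a small undirected graph: entry (i, j) is 1 when
# vertices i and j are joined by an edge (vertex 0 also has a loop).
A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])

# Powers of A count walks: (A^k)[i, j] is the number of walks of length k.
A2 = np.linalg.matrix_power(A, 2)

assert A2[0, 0] == 2   # two length-2 walks from vertex 0 to itself
assert A2[2, 2] == 1   # 2 -> 1 -> 2 is the only one from vertex 2
```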

37.10.2 Analysis and geometry

The Hessian matrix of a differentiable function ƒ: Rn → R consists of the second derivatives of ƒ with respect to the several coordinate directions, i.e.[78]

H(f) = [ ∂²f / (∂xᵢ ∂xⱼ) ].

It encodes information about the local growth behaviour of the function: given a critical point x = (x₁, ..., xₙ), i.e., a point where the first partial derivatives ∂f/∂xᵢ of f vanish, the function has a local minimum if the Hessian matrix is positive definite. Quadratic programming can be used to find global minima or maxima of quadratic functions closely related to the ones attached to matrices (see above).[79]

Another matrix frequently used in geometrical situations is the Jacobi matrix of a differentiable map f: Rⁿ → Rᵐ. If f₁, ..., fₘ denote the components of f, then the Jacobi matrix is defined as[80]

J(f) = [ ∂fᵢ / ∂xⱼ ],  1 ≤ i ≤ m, 1 ≤ j ≤ n.

If n > m, and if the rank of the Jacobi matrix attains its maximal value m, f is locally invertible at that point, by the implicit function theorem.[81]

An undirected graph with adjacency matrix [1, 1, 0; 1, 0, 1; 0, 1, 0].

Partial differential equations can be classified by considering the matrix of coefficients of the highest-order differential operators of the equation. For elliptic partial differential equations this matrix is positive definite, which has decisive influence on the set of possible solutions of the equation in question.[82] The finite element method is an important numerical method to solve partial differential equations, widely applied in simulating complex physical systems. It attempts to approximate the solution to some equation by piecewise linear functions, where the pieces are chosen with respect to a sufficiently fine grid, which in turn can be recast as a matrix equation.[83]

37.10.3 Probability theory and statistics

Stochastic matrices are square matrices whose rows are probability vectors, i.e., whose entries are non-negative and sum up to one. Stochastic matrices are used to define Markov chains with finitely many states.[84] A row of the stochastic matrix gives the probability distribution for the next position of some particle currently in the state that corresponds to the row. Properties of the Markov chain like absorbing states, i.e., states that any particle attains eventually, can be read off the eigenvectors of the transition matrices.[85]

At the saddle point (x = 0, y = 0) (red) of the function f(x, y) = x² − y², the Hessian matrix [2, 0; 0, −2] is indefinite.

Statistics also makes use of matrices in many different forms.[86] Descriptive statistics is concerned with describing data sets, which can often be represented as data matrices, which may then be subjected to dimensionality reduction techniques. The covariance matrix encodes the mutual variance of several random variables.[87] Another technique using matrices is linear least squares, a method that approximates a finite set of pairs (x₁, y₁), (x₂, y₂), ..., (x_N, y_N) by a linear function

yᵢ ≈ axᵢ + b,   i = 1, ..., N,

which can be formulated in terms of matrices, related to the singular value decomposition of matrices.[88] Random matrices are matrices whose entries are random numbers, subject to suitable probability distributions, such as the matrix normal distribution. Beyond probability theory, they are applied in domains ranging from number theory to physics.[89][90]
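The least-squares fit yᵢ ≈ axᵢ + b can be phrased as an overdetermined matrix equation X·(a, b) ≈ y and solved with a routine that is internally related to the singular value decomposition. A small sketch with made-up data points:

```python
import numpy as np

# Fit y ≈ a*x + b by least squares: solve X @ [a, b] ≈ y,
# where the design matrix X has one column of x-values and one column of ones.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 6.8])   # illustrative data, roughly linear

X = np.column_stack([x, np.ones_like(x)])
(a, b), *_ = np.linalg.lstsq(X, y, rcond=None)

# The residuals y - (a*x + b) are as small as any line can make them.
assert abs(a - 1.94) < 1e-6
assert abs(b - 1.09) < 1e-6
```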

37.10.4 Symmetries and transformations in physics

Further information: Symmetry in physics

Linear transformations and the associated symmetries play a key role in modern physics. For example, elementary particles in quantum field theory are classified as representations of the Lorentz group of special relativity and, more specifically, by their behavior under the group. Concrete representations involving the Pauli matrices and more general gamma matrices are an integral part of the physical description of fermions, which behave as spinors.[91] For the three lightest quarks, there is a group-theoretical representation involving the special unitary group SU(3); for their calculations, physicists use a convenient matrix representation known as the Gell-Mann matrices, which are also used for the SU(3) gauge group that forms the basis of the modern description of strong nuclear interactions, quantum chromodynamics. The Cabibbo–Kobayashi–Maskawa matrix, in turn, expresses the fact that the basic quark states that are important for weak interactions are not the same as, but linearly related to the basic quark states that define particles with specific and distinct masses.[92]

Two different Markov chains. The chart depicts the number of particles (of a total of 1000) in state “2”. Both limiting values can be determined from the transition matrices, which are given by [.7, 0; .3, 1] (red) and [.7, .2; .3, .8] (black).
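The limiting values in the figure can be recovered numerically. In the sketch below (an illustration; note the figure's matrices are written so that columns sum to one, i.e., they act on column probability vectors), iterating p → Pp drives the distribution toward an eigenvector of P for eigenvalue 1:

```python
import numpy as np

# Transition matrices from the figure, acting on column probability vectors.
P_red   = np.array([[0.7, 0.0],
                    [0.3, 1.0]])   # state 2 is absorbing
P_black = np.array([[0.7, 0.2],
                    [0.3, 0.8]])

p = np.array([1.0, 0.0])           # all 1000 particles start in state 1
for _ in range(200):
    p = P_black @ p                # iterate the chain to its limit
assert np.allclose(p, [0.4, 0.6])  # black chain: 60% of particles end in state 2

q = np.array([1.0, 0.0])
for _ in range(200):
    q = P_red @ q
assert np.allclose(q, [0.0, 1.0])  # red chain: everything is absorbed into state 2
```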

37.10.5 Linear combinations of quantum states

The first model of quantum mechanics (Heisenberg, 1925) represented the theory’s operators by infinite-dimensional matrices acting on quantum states.[93] This is also referred to as matrix mechanics. One particular example is the density matrix that characterizes the “mixed” state of a quantum system as a linear combination of elementary, “pure” eigenstates.[94]

Another matrix serves as a key tool for describing the scattering experiments that form the cornerstone of experimental particle physics: Collision reactions such as occur in particle accelerators, where non-interacting particles head towards each other and collide in a small interaction zone, with a new set of non-interacting particles as the result, can be described as the scalar product of outgoing particle states and a linear combination of ingoing particle states. The linear combination is given by a matrix known as the S-matrix, which encodes all information about the possible interactions between particles.[95]

37.10.6 Normal modes

A general application of matrices in physics is to the description of linearly coupled harmonic systems. The equations of motion of such systems can be described in matrix form, with a mass matrix multiplying a generalized velocity to give the kinetic term, and a force matrix multiplying a displacement vector to characterize the interactions. The best way to obtain solutions is to determine the system’s eigenvectors, its normal modes, by diagonalizing the matrix equation. Techniques like this are crucial when it comes to the internal dynamics of molecules: the internal vibrations of systems consisting of mutually bound component atoms.[96] They are also needed for describing mechanical vibrations, and oscillations in electrical circuits.[97]
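The mass-matrix/force-matrix setup can be sketched for the simplest coupled system (an illustration with made-up unit masses and springs, not from the text): two equal masses between walls, joined by three identical springs. Writing the equations of motion as M x″ = −K x, the squared normal-mode frequencies are the eigenvalues of M⁻¹K:

```python
import numpy as np

# Two unit masses coupled by three unit springs (wall-m-m-wall):
# M x'' = -K x, so squared mode frequencies are eigenvalues of M^{-1} K.
M = np.eye(2)                     # mass matrix
K = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])      # force (stiffness) matrix

omega_sq = np.sort(np.linalg.eigvals(np.linalg.inv(M) @ K).real)

# In-phase mode (masses move together) and out-of-phase mode.
assert np.allclose(omega_sq, [1.0, 3.0])
```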

37.10.7 Geometrical optics

Geometrical optics provides further matrix applications. In this approximative theory, the wave nature of light is neglected. The result is a model in which light rays are indeed geometrical rays. If the deflection of light rays by optical elements is small, the action of a lens or reflective element on a given light ray can be expressed as multiplication of a two-component vector with a two-by-two matrix called ray transfer matrix: the vector’s components are the light ray’s slope and its distance from the optical axis, while the matrix encodes the properties of the optical element. Actually, there are two kinds of matrices, viz. a refraction matrix describing the refraction at a lens surface, and a translation matrix, describing the translation of the plane of reference to the next refracting surface, where another refraction matrix applies. The optical system, consisting of a combination of lenses and/or reflective elements, is simply described by the matrix resulting from the product of the components’ matrices.[98]
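The ray transfer description can be sketched with the two matrix types named above (a minimal illustration with made-up numbers): a translation matrix for free propagation and a thin-lens refraction matrix, multiplied right to left in the order the ray meets the elements:

```python
import numpy as np

# A ray is the vector (height y, slope theta); optical elements are 2x2 matrices.
def translation(d):
    """Free-space propagation over distance d."""
    return np.array([[1.0, d],
                     [0.0, 1.0]])

def thin_lens(f):
    """Refraction by a thin lens of focal length f."""
    return np.array([[ 1.0,   0.0],
                     [-1.0/f, 1.0]])

f = 2.0
# A ray parallel to the axis hits the lens, then travels one focal length;
# the rightmost matrix acts first, as with composition of linear maps.
system = translation(f) @ thin_lens(f)
y_out, theta_out = system @ np.array([1.0, 0.0])

assert np.isclose(y_out, 0.0)   # parallel rays cross the axis at the focal point
```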

37.10.8 Electronics

Traditional mesh analysis in electronics leads to a system of linear equations that can be described with a matrix. The behaviour of many electronic components can be described using matrices. Let A be a 2-dimensional vector with the component’s input voltage v₁ and input current i₁ as its elements, and let B be a 2-dimensional vector with the component’s output voltage v₂ and output current i₂ as its elements. Then the behaviour of the electronic component can be described by B = H · A, where H is a 2 × 2 matrix containing one impedance element (h₁₂), one admittance element (h₂₁) and two dimensionless elements (h₁₁ and h₂₂). Calculating a circuit now reduces to multiplying matrices.
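The two-port relation B = H · A is a single matrix-vector product. A minimal sketch with purely illustrative parameter values (not from any datasheet), following the text's labelling of the H entries:

```python
import numpy as np

# Two-port description B = H · A: A holds the input (voltage, current) pair,
# B the output pair.  H mixes one impedance (h12, ohms), one admittance
# (h21, siemens) and two dimensionless elements.  Values are hypothetical.
H = np.array([[0.5, 100.0],    # h11 (dimensionless), h12 (ohms)
              [0.01, 0.8]])    # h21 (siemens),       h22 (dimensionless)

A = np.array([5.0, 0.02])      # input: v1 = 5 V, i1 = 20 mA
B = H @ A                      # output voltage v2 and output current i2

assert np.allclose(B, [4.5, 0.066])
```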

37.11 History

Matrices have a long history of application in solving linear equations, but they were known as arrays until the 1800s. The Chinese text The Nine Chapters on the Mathematical Art, written between the 10th and 2nd centuries BCE, is the first example of the use of array methods to solve simultaneous equations,[99] including the concept of determinants. In 1545 the Italian mathematician Girolamo Cardano brought the method to Europe when he published Ars Magna.[100] The Japanese mathematician Seki used the same array methods to solve simultaneous equations in 1683.[101] The Dutch mathematician Jan de Witt represented transformations using arrays in his 1659 book Elements of Curves.[102] Between 1700 and 1710 Gottfried Wilhelm Leibniz publicized the use of arrays for recording information or solutions and experimented with over 50 different systems of arrays.[100] Cramer presented his rule in 1750.

The term “matrix” (Latin for “womb”, derived from mater—mother[103]) was coined by James Joseph Sylvester in 1850,[104] who understood a matrix as an object giving rise to a number of determinants today called minors, that is to say, determinants of smaller matrices that derive from the original one by removing columns and rows. In an 1851 paper, Sylvester explains:

I have in previous papers defined a “Matrix” as a rectangular array of terms, out of which different systems of determinants may be engendered as from the womb of a common parent.[105]

Arthur Cayley published a treatise on geometric transformations using matrices that were not rotated versions of the coefficients being investigated as had previously been done. Instead he defined operations such as addition, subtraction, multiplication, and division as transformations of those matrices and showed the associative and distributive properties held true. Cayley investigated and demonstrated the non-commutative property of matrix multiplication as well as the commutative property of matrix addition.[100] Early matrix theory had limited the use of arrays almost exclusively to determinants, and Arthur Cayley’s abstract matrix operations were revolutionary. He was instrumental in proposing a matrix concept independent of equation systems. In 1858 Cayley published his Memoir on the theory of matrices,[106][107] in which he proposed and demonstrated the Cayley–Hamilton theorem.[100]

An English mathematician named Cullis was the first to use modern bracket notation for matrices in 1913, and he simultaneously demonstrated the first significant use of the notation A = [aᵢ,ⱼ] to represent a matrix, where aᵢ,ⱼ refers to the ith row and the jth column.[100]

The study of determinants sprang from several sources.[108] Number-theoretical problems led Gauss to relate coefficients of quadratic forms, i.e., expressions such as x² + xy − 2y², and linear maps in three dimensions to matrices. Eisenstein further developed these notions, including the remark that, in modern parlance, matrix products are non-commutative. Cauchy was the first to prove general statements about determinants, using as definition of the determinant of a matrix A = [aᵢ,ⱼ] the following: replace the powers aⱼᵏ by aⱼₖ in the polynomial

a₁a₂ ··· aₙ ∏_{i<j} (aⱼ − aᵢ)

37.11.1 Other historical usages of the word “matrix” in mathematics

The word has been used in unusual ways by at least two authors of historical importance. Bertrand Russell and Alfred North Whitehead in their Principia Mathematica (1910–1913) use the word “matrix” in the context of their Axiom of reducibility. They proposed this axiom as a means to reduce any function to one of lower type, successively, so that at the “bottom” (0 order) the function is identical to its extension:

“Let us give the name of matrix to any function, of however many variables, which does not involve any apparent variables. Then any possible function other than a matrix is derived from a matrix by means of generalization, i.e., by considering the proposition which asserts that the function in question is true with all possible values or with some value of one of the arguments, the other argument or arguments remaining undetermined”.[114]

For example a function Φ(x, y) of two variables x and y can be reduced to a collection of functions of a single variable, e.g., y, by “considering” the function for all possible values of “individuals” ai substituted in place of variable x. And then the resulting collection of functions of the single variable y, i.e., ∀aᵢ: Φ(ai, y), can be reduced to a “matrix” of values by “considering” the function for all possible values of “individuals” bi substituted in place of variable y:

∀b∀aᵢ: Φ(ai, b).

Alfred Tarski in his 1946 Introduction to Logic used the word “matrix” synonymously with the notion of truth table as used in mathematical logic.[115]

37.12 See also

• Algebraic multiplicity
• Geometric multiplicity
• Gram–Schmidt process
• List of matrices
• Matrix calculus
• Periodic matrix set
• Tensor

37.13 Notes

[1] equivalently, table

[2] Anton (1987, p. 23)

[3] Beauregard & Fraleigh (1973, p. 56)

[4] Young, Cynthia. Precalculus. Laurie Rosatone. p. 727.

[5] K. Bryan and T. Leise. The $25,000,000,000 eigenvector: The linear algebra behind Google. SIAM Review, 48(3):569– 581, 2006.

[6] Lang 2002

[7] Fraleigh (1976, p. 209)

[8] Nering (1970, p. 37)

[9] Oualline 2003, Ch. 5

[10] “How to organize, add and multiply matrices - Bill Shillito”. TED ED. Retrieved April 6, 2013.

[11] Brown 1991, Definition I.2.1 (addition), Definition I.2.4 (scalar multiplication), and Definition I.2.33 (transpose)

[12] Brown 1991, Theorem I.2.6

[13] Brown 1991, Definition I.2.20

[14] Brown 1991, Theorem I.2.24

[15] Horn & Johnson 1985, Ch. 4 and 5

[16] Bronson (1970, p. 16)

[17] Kreyszig (1972, p. 220)

[18] Protter & Morrey (1970, p. 869)

[19] Kreyszig (1972, pp. 241,244)

[20] Schneider, Hans; Barker, George Phillip (2012), Matrices and Linear Algebra, Dover Books on Mathematics, Courier Dover Corporation, p. 251, ISBN 9780486139302.

[21] Perlis, Sam (1991), Theory of Matrices, Dover books on advanced mathematics, Courier Dover Corporation, p. 103, ISBN 9780486668109.

[22] Anton, Howard, Elementary Linear Algebra (10th ed.), John Wiley & Sons, p. 414, ISBN 9780470458211.

[23] Horn, Roger A.; Johnson, Charles R. (2012), Matrix Analysis (2nd ed.), Cambridge University Press, p. 17, ISBN 9780521839402.

[24] Brown 1991, I.2.21 and 22

[25] Greub 1975, Section III.2

[26] Brown 1991, Definition II.3.3

[27] Greub 1975, Section III.1

[28] Brown 1991, Theorem II.3.22

[29] Horn & Johnson 1985, Theorem 2.5.6

[30] Brown 1991, Definition I.2.28

[31] Brown 1991, Definition I.5.13

[32] Horn & Johnson 1985, Chapter 7

[33] Horn & Johnson 1985, Theorem 7.2.1

[34] Horn & Johnson 1985, Example 4.0.6, p. 169

[35] Brown 1991, Definition III.2.1

[36] Brown 1991, Theorem III.2.12

[37] Brown 1991, Corollary III.2.16

[38] Mirsky 1990, Theorem 1.4.1

[39] Brown 1991, Theorem III.3.18

[40] Brown 1991, Definition III.4.1

[41] Brown 1991, Definition III.4.9

[42] Brown 1991, Corollary III.4.10

[43] Householder 1975, Ch. 7

[44] Bau III & Trefethen 1997

[45] Golub & Van Loan 1996, Algorithm 1.3.1

[46] Golub & Van Loan 1996, Chapters 9 and 10, esp. section 10.2

[47] Golub & Van Loan 1996, Chapter 2.3

[48] For example, Mathematica, see Wolfram 2003, Ch. 3.7

[49] Press, Flannery & Teukolsky 1992

[50] Stoer & Bulirsch 2002, Section 4.1

[51] Horn & Johnson 1985, Theorem 2.5.4

[52] Horn & Johnson 1985, Ch. 3.1, 3.2

[53] Arnold & Cooke 1992, Sections 14.5, 7, 8

[54] Bronson 1989, Ch. 15

[55] Coburn 1955, Ch. V

[56] Lang 2002, Chapter XIII

[57] Lang 2002, XVII.1, p. 643

[58] Lang 2002, Proposition XIII.4.16

[59] Reichl 2004, Section L.2

[60] Greub 1975, Section III.3

[61] Greub 1975, Section III.3.13

[62] See any standard reference on group theory.

[63] Baker 2003, Def. 1.30

[64] Baker 2003, Theorem 1.2

[65] Artin 1991, Chapter 4.5

[66] Rowen 2008, Example 19.2, p. 198

[67] See any reference in representation theory or group representation.

[68] See the item “Matrix” in Itô, ed. 1987

[69] “Empty Matrix: A matrix is empty if either its row or column dimension is zero”, Glossary, O-Matrix v6 User Guide

[70] “A matrix having at least one dimension equal to zero is called an empty matrix”, MATLAB Data Structures

[71] Fudenberg & Tirole 1983, Section 1.1.1

[72] Manning 1999, Section 15.3.4

[73] Ward 1997, Ch. 2.8

[74] Stinson 2005, Ch. 1.1.5 and 1.2.4

[75] Association for Computing Machinery 1979, Ch. 7

[76] Godsil & Royle 2004, Ch. 8.1

[77] Punnen 2002

[78] Lang 1987a, Ch. XVI.6

[79] Nocedal 2006, Ch. 16

[80] Lang 1987a, Ch. XVI.1

[81] Lang 1987a, Ch. XVI.5. For a more advanced, and more general statement see Lang 1969, Ch. VI.2

[82] Gilbarg & Trudinger 2001

[83] Šolin 2005, Ch. 2.5. See also stiffness method.

[84] Latouche & Ramaswami 1999

[85] Mehata & Srinivasan 1978, Ch. 2.8

[86] Healy, Michael (1986), Matrices for Statistics, Oxford University Press, ISBN 978-0-19-850702-4

[87] Krzanowski 1988, Ch. 2.2., p. 60

[88] Krzanowski 1988, Ch. 4.1

[89] Conrey 2007

[90] Zabrodin, Brezin & Kazakov et al. 2006

[91] Itzykson & Zuber 1980, Ch. 2

[92] see Burgess & Moore 2007, section 1.6.3. (SU(3)), section 2.4.3.2. (Kobayashi–Maskawa matrix)

[93] Schiff 1968, Ch. 6

[94] Bohm 2001, sections II.4 and II.8

[95] Weinberg 1995, Ch. 3

[96] Wherrett 1987, part II

[97] Riley, Hobson & Bence 1997, 7.17

[98] Guenther 1990, Ch. 5

[99] Shen, Crossley & Lun 1999 cited by Bretscher 2005, p. 1

[100] Dossey, Otto, Spense, Vanden Eynden, Discrete Mathematics (4th ed.), Addison Wesley, October 10, 2001, ISBN 978-0321079121, pp. 564–565.

[101] Needham, Joseph; Wang Ling (1959). Science and Civilisation in China III. Cambridge: Cambridge University Press. p. 117. ISBN 9780521058018.

[102] Dossey, Otto, Spense, Vanden Eynden, Discrete Mathematics (4th ed.), Addison Wesley, October 10, 2001, ISBN 978-0321079121, p. 564.

[103] Merriam–Webster dictionary, Merriam–Webster, retrieved April 20, 2009

[104] Although many sources state that J. J. Sylvester coined the mathematical term “matrix” in 1848, Sylvester published nothing in 1848. (For proof that Sylvester published nothing in 1848, see: J. J. Sylvester with H. F. Baker, ed., The Collected Mathematical Papers of James Joseph Sylvester (Cambridge, England: Cambridge University Press, 1904), vol. 1.) His earliest use of the term “matrix” occurs in 1850 in: J. J. Sylvester (1850) “Additions to the articles in the September number of this journal, “On a new class of theorems,” and on Pascal’s theorem,” The London, Edinburgh and Dublin Philosophical Magazine and Journal of Science, 37 : 363-370. From page 369: “For this purpose we must commence, not with a square, but with an oblong arrangement of terms consisting, suppose, of m lines and n columns. This will not in itself represent a determinant, but is, as it were, a Matrix out of which we may form various systems of determinants … "

[105] The Collected Mathematical Papers of James Joseph Sylvester: 1837–1853, Paper 37, p. 247

[106] Phil.Trans. 1858, vol.148, pp.17-37 Math. Papers II 475-496

[107] Dieudonné, ed. 1978, Vol. 1, Ch. III, p. 96

[108] Knobloch 1994

[109] Hawkins 1975

[110] Kronecker 1897

[111] Weierstrass 1915, pp. 271–286

[112] Bôcher 2004

[113] Mehra & Rechenberg 1987

[114] Whitehead, Alfred North; and Russell, Bertrand (1913) Principia Mathematica to *56, Cambridge at the University Press, Cambridge UK (republished 1962) cf page 162ff.

[115] Tarski, Alfred (1946) Introduction to Logic and the Methodology of Deductive Sciences, Dover Publications, Inc, New York NY, ISBN 0-486-28462-X.

[1] Eigen means “own” in German and in Dutch.

[2] Additionally, the group is required to be closed in the general linear group.

[3] “Not much of matrix theory carries over to infinite-dimensional spaces, and what does is not so useful, but it sometimes helps.” Halmos 1982, p. 23, Chapter 5

37.14 References

• Anton, Howard (1987), Elementary Linear Algebra (5th ed.), New York: Wiley, ISBN 0-471-84819-0

• Arnold, Vladimir I.; Cooke, Roger (1992), Ordinary differential equations, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-3-540-54813-3

• Artin, Michael (1991), Algebra, Prentice Hall, ISBN 978-0-89871-510-1

• Association for Computing Machinery (1979), Computer Graphics, Tata McGraw–Hill, ISBN 978-0-07-059376-3

• Baker, Andrew J. (2003), Matrix Groups: An Introduction to Lie Group Theory, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-1-85233-470-3

• Bau III, David; Trefethen, Lloyd N. (1997), Numerical linear algebra, Philadelphia, PA: Society for Industrial and Applied Mathematics, ISBN 978-0-89871-361-9

• Beauregard, Raymond A.; Fraleigh, John B. (1973), A First Course In Linear Algebra: with Optional Introduction to Groups, Rings, and Fields, Boston: Houghton Mifflin Co., ISBN 0-395-14017-X

• Bretscher, Otto (2005), Linear Algebra with Applications (3rd ed.), Prentice Hall

• Bronson, Richard (1970), Matrix Methods: An Introduction, New York: Academic Press, LCCN 70097490

• Bronson, Richard (1989), Schaum’s outline of theory and problems of matrix operations, New York: McGraw–Hill, ISBN 978-0-07-007978-6

• Brown, William C. (1991), Matrices and vector spaces, New York, NY: Marcel Dekker, ISBN 978-0-8247-8419-5

• Coburn, Nathaniel (1955), Vector and tensor analysis, New York, NY: Macmillan, OCLC 1029828

• Conrey, J. Brian (2007), Ranks of elliptic curves and random matrix theory, Cambridge University Press, ISBN 978-0-521-69964-8

• Fraleigh, John B. (1976), A First Course In Abstract Algebra (2nd ed.), Reading: Addison-Wesley, ISBN 0-201-01984-1

• Fudenberg, Drew; Tirole, Jean (1983), Game Theory, MIT Press

• Gilbarg, David; Trudinger, Neil S. (2001), Elliptic partial differential equations of second order (2nd ed.), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-3-540-41160-4

• Godsil, Chris; Royle, Gordon (2004), Algebraic Graph Theory, Graduate Texts in Mathematics 207, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-95220-8

• Golub, Gene H.; Van Loan, Charles F. (1996), Matrix Computations (3rd ed.), Johns Hopkins, ISBN 978-0-8018-5414-9

• Greub, Werner Hildbert (1975), Linear algebra, Graduate Texts in Mathematics, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-90110-7

• Halmos, Paul Richard (1982), A Hilbert space problem book, Graduate Texts in Mathematics 19 (2nd ed.), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-90685-0, MR 675952

• Horn, Roger A.; Johnson, Charles R. (1985), Matrix Analysis, Cambridge University Press, ISBN 978-0-521-38632-6

• Householder, Alston S. (1975), The theory of matrices in numerical analysis, New York, NY: Dover Publications, MR 0378371

• Kreyszig, Erwin (1972), Advanced Engineering Mathematics (3rd ed.), New York: Wiley, ISBN 0-471-50728-8.

• Krzanowski, Wojtek J. (1988), Principles of multivariate analysis, Oxford Statistical Science Series 3, The Clarendon Press Oxford University Press, ISBN 978-0-19-852211-9, MR 969370

• Itô, Kiyosi, ed. (1987), Encyclopedic dictionary of mathematics. Vol. I–IV (2nd ed.), MIT Press, ISBN 978-0-262-09026-1, MR 901762

• Lang, Serge (1969), Analysis II, Addison-Wesley

• Lang, Serge (1987a), Calculus of several variables (3rd ed.), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-96405-8

• Lang, Serge (1987b), Linear algebra, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-96412-6

• Lang, Serge (2002), Algebra, Graduate Texts in Mathematics 211 (Revised third ed.), New York: Springer-Verlag, ISBN 978-0-387-95385-4, MR 1878556

• Latouche, Guy; Ramaswami, Vaidyanathan (1999), Introduction to matrix analytic methods in stochastic modeling (1st ed.), Philadelphia, PA: Society for Industrial and Applied Mathematics, ISBN 978-0-89871-425-8

• Manning, Christopher D.; Schütze, Hinrich (1999), Foundations of statistical natural language processing, MIT Press, ISBN 978-0-262-13360-9

• Mehata, K. M.; Srinivasan, S. K. (1978), Stochastic processes, New York, NY: McGraw–Hill, ISBN 978-0-07-096612-3

• Mirsky, Leonid (1990), An Introduction to Linear Algebra, Courier Dover Publications, ISBN 978-0-486-66434-7

• Nering, Evar D. (1970), Linear Algebra and Matrix Theory (2nd ed.), New York: Wiley, LCCN 76-91646

• Nocedal, Jorge; Wright, Stephen J. (2006), Numerical Optimization (2nd ed.), Berlin, DE; New York, NY: Springer-Verlag, p. 449, ISBN 978-0-387-30303-1

• Oualline, Steve (2003), Practical C++ programming, O'Reilly, ISBN 978-0-596-00419-4

• Press, William H.; Flannery, Brian P.; Teukolsky, Saul A.; Vetterling, William T. (1992), “LU Decomposition and Its Applications”, Numerical Recipes in FORTRAN: The Art of Scientific Computing (PDF) (2nd ed.), Cambridge University Press, pp. 34–42

• Protter, Murray H.; Morrey, Jr., Charles B. (1970), College Calculus with Analytic Geometry (2nd ed.), Reading: Addison-Wesley, LCCN 76087042

• Punnen, Abraham P.; Gutin, Gregory (2002), The traveling salesman problem and its variations, Boston, MA: Kluwer Academic Publishers, ISBN 978-1-4020-0664-7

• Reichl, Linda E. (2004), The transition to chaos: conservative classical systems and quantum manifestations, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-98788-0

• Rowen, Louis Halle (2008), Graduate Algebra: noncommutative view, Providence, RI: American Mathematical Society, ISBN 978-0-8218-4153-2

• Šolin, Pavel (2005), Partial Differential Equations and the Finite Element Method, Wiley-Interscience, ISBN 978-0-471-76409-0

• Stinson, Douglas R. (2005), Cryptography, Discrete Mathematics and its Applications, Chapman & Hall/CRC, ISBN 978-1-58488-508-5

• Stoer, Josef; Bulirsch, Roland (2002), Introduction to Numerical Analysis (3rd ed.), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-95452-3

• Ward, J. P. (1997), Quaternions and Cayley numbers, Mathematics and its Applications 403, Dordrecht, NL: Kluwer Academic Publishers Group, ISBN 978-0-7923-4513-8, MR 1458894

• Wolfram, Stephen (2003), The Mathematica Book (5th ed.), Champaign, IL: Wolfram Media, ISBN 978-1-57955-022-6

37.14.1 Physics references

• Bohm, Arno (2001), Quantum Mechanics: Foundations and Applications, Springer, ISBN 0-387-95330-2

• Burgess, Cliff; Moore, Guy (2007), The Standard Model. A Primer, Cambridge University Press, ISBN 0-521-86036-9

• Guenther, Robert D. (1990), Modern Optics, John Wiley, ISBN 0-471-60538-7

• Itzykson, Claude; Zuber, Jean-Bernard (1980), Quantum Field Theory, McGraw–Hill, ISBN 0-07-032071-3

• Riley, Kenneth F.; Hobson, Michael P.; Bence, Stephen J. (1997), Mathematical methods for physics and engineering, Cambridge University Press, ISBN 0-521-55506-X

• Schiff, Leonard I. (1968), Quantum Mechanics (3rd ed.), McGraw–Hill

• Weinberg, Steven (1995), The Quantum Theory of Fields. Volume I: Foundations, Cambridge University Press, ISBN 0-521-55001-7

• Wherrett, Brian S. (1987), Group Theory for Atoms, Molecules and Solids, Prentice–Hall International, ISBN 0-13-365461-3

• Zabrodin, Anton; Brezin, Édouard; Kazakov, Vladimir; Serban, Didina; Wiegmann, Paul (2006), Applications of Random Matrices in Physics (NATO Science Series II: Mathematics, Physics and Chemistry), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-1-4020-4530-1

37.14.2 Historical references

• Cayley, Arthur (1858), “A memoir on the theory of matrices”, Phil. Trans. 148: 17–37; Math. Papers II: 475–496

• Bôcher, Maxime (2004), Introduction to higher algebra, New York, NY: Dover Publications, ISBN 978-0-486-49570-5, reprint of the 1907 original edition

• Cayley, Arthur (1889), The collected mathematical papers of Arthur Cayley, I (1841–1853), Cambridge University Press, pp. 123–126

• Dieudonné, Jean, ed. (1978), Abrégé d'histoire des mathématiques 1700–1900, Paris, FR: Hermann

• Hawkins, Thomas (1975), “Cauchy and the spectral theory of matrices”, Historia Mathematica 2: 1–29, doi:10.1016/0315-0860(75)90032-4, ISSN 0315-0860, MR 0469635

• Knobloch, Eberhard (1994), “From Gauss to Weierstrass: determinant theory and its historical evaluations”, The intersection of history and mathematics, Science Networks Historical Studies 15, Basel, Boston, Berlin: Birkhäuser, pp. 51–66, MR 1308079

• Kronecker, Leopold (1897), Hensel, Kurt, ed., Leopold Kronecker’s Werke, Teubner

• Mehra, Jagdish; Rechenberg, Helmut (1987), The Historical Development of Quantum Theory (1st ed.), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-96284-9

• Shen, Kangshen; Crossley, John N.; Lun, Anthony Wah-Cheung (1999), Nine Chapters of the Mathematical Art, Companion and Commentary (2nd ed.), Oxford University Press, ISBN 978-0-19-853936-0

• Weierstrass, Karl (1915), Collected works 3

37.15 External links

Encyclopedic articles

• Hazewinkel, Michiel, ed. (2001), “Matrix”, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

History

• MacTutor: Matrices and determinants

• Matrices and Linear Algebra on the Earliest Uses Pages

• Earliest Uses of Symbols for Matrices and Vectors

Online books

• Kaw, Autar K., Introduction to Matrix Algebra, ISBN 978-0-615-25126-4

• The Matrix Cookbook (PDF), retrieved 24 March 2014

• Brookes, Mike (2005), The Matrix Reference Manual, London: Imperial College, retrieved 10 Dec 2008

Online matrix calculators

• SimplyMath (Matrix Calculator)

• Matrix Calculator (DotNumerics)

• Xiao, Gang, Matrix calculator, retrieved 10 Dec 2008

• Online matrix calculator, retrieved 10 Dec 2008

• Online matrix calculator (ZK framework), retrieved 26 Nov 2009

• Oehlert, Gary W.; Bingham, Christopher, MacAnova, University of Minnesota, School of Statistics, retrieved 10 Dec 2008, a freeware package for matrix algebra and statistics

• Online matrix calculator, retrieved 14 Dec 2009

• Operation with matrices in R (determinant, trace, inverse, adjoint, transpose)

Chapter 38

Natural number

This article is about “positive integers” and “non-negative integers”. For all the numbers ..., −2, −1, 0, 1, 2, ..., see Integer.

In mathematics, the natural numbers (sometimes called the whole numbers)[1][2][3][4] are those used for counting (as in “there are six coins on the table”) and ordering (as in “this is the third largest city in the country”). In common language, words used for counting are "cardinal numbers" and words used for ordering are "ordinal numbers". Another use of natural numbers is for what linguists call nominal numbers, such as the model number of a product, where the “natural number” is used only for naming (as distinct from a serial number, where the order properties of the natural numbers distinguish later uses from earlier uses) and generally lacks any meaning of number as used in mathematics.

The natural numbers are the basis from which many other number sets may be built by extension: the integers, by including an unresolved negation operation; the rational numbers, by including with the integers an unresolved division operation; the real numbers, by including with the rationals the limits of Cauchy sequences; the complex numbers, by including with the real numbers the unresolved square root of minus one; the hyperreal numbers, by including with the real numbers the infinitesimal value epsilon; vectors, by including a vector structure with reals; matrices, by having vectors of vectors; the nonstandard integers; and so on.[5][6] Therefore, the natural numbers are canonically embedded (identified) in the other number systems.

Properties of the natural numbers, such as divisibility and the distribution of prime numbers, are studied in number theory. Problems concerning counting and ordering, such as partitioning and enumerations, are studied in combinatorics.

There is no universal agreement about whether to include zero in the set of natural numbers.
Some authors begin the natural numbers with 0, corresponding to the non-negative integers 0, 1, 2, 3, ..., whereas others start with 1, corresponding to the positive integers 1, 2, 3, ....[7][8][9][10] This distinction is of no fundamental concern for the natural numbers (even when viewed via additional axioms as a semigroup with respect to addition and a monoid for multiplication). Including the number 0 just supplies an identity element for the former (binary) operation to achieve a monoid structure for both, and a (trivial) zero divisor for the multiplication.

In common language, for example in primary school, natural numbers may be called counting numbers[11] to distinguish them from the real numbers, which are used for measurement.

38.1 History

The most primitive method of representing a natural number is to put down a mark for each object. Later, a set of objects could be tested for equality, excess or shortage, by striking out a mark and removing an object from the set.

The first major advance in abstraction was the use of numerals to represent numbers. This allowed systems to be developed for recording large numbers. The ancient Egyptians developed a powerful system of numerals with distinct hieroglyphs for 1, 10, and all the powers of 10 up to over 1 million. A stone carving from Karnak, dating from around 1500 BC and now at the Louvre in Paris, depicts 276 as 2 hundreds, 7 tens, and 6 ones; and similarly for the number 4,622. The Babylonians had a place-value system based essentially on the numerals for 1 and 10, using base sixty, so that the symbol for sixty was the same as the symbol for one, its value being determined from context.[15]

[Figure: Natural numbers can be used for counting (one apple, two apples, three apples, ...).]

A much later advance was the development of the idea that 0 can be considered as a number, with its own numeral. The use of a 0 digit in place-value notation (within other numbers) dates back as early as 700 BC by the Babylonians, but they omitted such a digit when it would have been the last symbol in the number.[16] The Olmec and Maya civilizations used 0 as a separate number as early as the 1st century BC, but this usage did not spread beyond Mesoamerica.[17][18] The use of a numeral 0 in modern times originated with the Indian mathematician Brahmagupta in 628. However, 0 had been used as a number in the medieval computus (the calculation of the date of Easter), beginning with Dionysius Exiguus in 525, without being denoted by a numeral (standard Roman numerals do not have a symbol for 0); instead nulla (or the genitive form nullae), from nullus, the Latin word for “none”, was employed to denote a 0 value.[19]

The first systematic study of numbers as abstractions is usually credited to the Greek philosophers Pythagoras and Archimedes. Some Greek mathematicians treated the number 1 differently than larger numbers, sometimes even not as a number at all.[20] Independent studies also occurred at around the same time in India, China, and Mesoamerica.[21]

38.1.1 Modern definitions

In 19th century Europe, there was mathematical and philosophical discussion about the exact nature of the natural numbers. A school of Naturalism stated that the natural numbers were a direct consequence of the human psyche. Henri Poincaré was one of its advocates, as was Leopold Kronecker, who summarized “God made the integers, all else is the work of man”. In opposition to the Naturalists, the constructivists saw a need to improve the logical rigor in the foundations of mathematics.[22]

In the 1860s, Hermann Grassmann suggested a recursive definition for natural numbers, thus stating they were not really natural but a consequence of definitions. Later, two classes of such formal definitions were constructed; these were subsequently shown to be equivalent in most practical applications.

Set-theoretical definitions of natural numbers were initiated by Frege. He initially defined a natural number as the class of all sets that are in one-to-one correspondence with a particular set, but this definition turned out to lead to paradoxes, including Russell’s paradox. Therefore, this formalism was modified so that a natural number is defined as a particular set, and any set that can be put into one-to-one correspondence with that set is said to have that number of elements.[23]

The second class of definitions was introduced by Giuseppe Peano and is now called Peano arithmetic. It is based on an axiomatization of the properties of ordinal numbers: each natural number has a successor and every non-zero natural number has a unique predecessor. Peano arithmetic is equiconsistent with several weak systems of set theory. One such system is ZFC with the axiom of infinity replaced by its negation. Theorems that can be proved in ZFC but cannot be proved using the Peano axioms include Goodstein’s theorem.[24]

With all these definitions it is convenient to include 0 (corresponding to the empty set) as a natural number.
Including 0 is now the common convention among set theorists,[25] logicians,[26] and computer scientists. Many other mathematicians also include 0,[10] although some have kept the older tradition and take 1 to be the first natural number.[27]

38.2 Notation

Mathematicians use N or ℕ (an N in blackboard bold, displayed as ℕ in Unicode) to refer to the set of all natural numbers. This set is countably infinite: it is infinite but countable by definition. This is also expressed by saying that the cardinal number of the set is aleph-naught (ℵ0).[28]

To be unambiguous about whether 0 is included or not, sometimes a subscript (or superscript) “0” is added in the former case, and a superscript “∗” or subscript “1” is added in the latter case:

N⁰ = N₀ = {0, 1, 2, ...}
N* = N⁺ = N₁ = N>0 = {1, 2, ...}.

38.3 Properties

38.3.1 Addition

One can recursively define an addition on the natural numbers by setting a + 0 = a and a + S(b) = S(a + b) for all a, b. Here S should be read as “successor”. This turns the natural numbers (N, +) into a commutative monoid with identity element 0, the so-called free object with one generator. This monoid satisfies the cancellation property and can be embedded in a group (in the mathematical sense of the word group). The smallest group containing the natural numbers is the integers. If 1 is defined as S(0), then b + 1 = b + S(0) = S(b + 0) = S(b). That is, b + 1 is simply the successor of b.
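The two recursion equations above translate directly into code. The following Python sketch is illustrative only: the names S and add are ours, and the built-in integers merely stand in for the Peano naturals.

```python
def S(n):
    """Successor function: S(n) is the natural number after n."""
    return n + 1

def add(a, b):
    """Recursive addition: a + 0 = a, and a + S(b) = S(a + b)."""
    if b == 0:
        return a
    return S(add(a, b - 1))  # here b = S(b - 1)

assert add(2, 3) == 5
assert add(0, 7) == 7  # 0 is the identity element
```

Unfolding add(2, 3) gives S(S(S(2))), i.e. three applications of the successor to 2, which is exactly what the recursive definition prescribes.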

38.3.2 Multiplication

Analogously, given that addition has been defined, a multiplication × can be defined via a × 0 = 0 and a × S(b) = (a × b) + a. This turns (N*, ×) into a free commutative monoid with identity element 1; a generator set for this monoid is the set of prime numbers.
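Multiplication can be layered on the same kind of sketch; again the names S, add and mul are ours, and the encoding by built-in integers is only illustrative.

```python
def S(n):
    return n + 1

def add(a, b):
    """a + 0 = a; a + S(b) = S(a + b)."""
    return a if b == 0 else S(add(a, b - 1))

def mul(a, b):
    """Recursive multiplication: a × 0 = 0, and a × S(b) = (a × b) + a."""
    if b == 0:
        return 0
    return add(mul(a, b - 1), a)

assert mul(3, 4) == 12
assert mul(5, 0) == 0
assert mul(9, 1) == 9  # 1 = S(0) acts as the identity
```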

38.3.3 Relationship between addition and multiplication

Addition and multiplication are compatible, which is expressed in the distributive law: a × (b + c) = (a × b) + (a × c). These properties of addition and multiplication make the natural numbers an instance of a commutative semiring. Semirings are an algebraic generalization of the natural numbers where multiplication is not necessarily commutative. The lack of additive inverses, which is equivalent to the fact that N is not closed under subtraction, means that N is not a ring; instead it is a semiring (also known as a rig).

If the natural numbers are taken as “excluding 0”, and “starting at 1”, the definitions of + and × are as above, except that they begin with a + 1 = S(a) and a × 1 = a.

38.3.4 Order

In this section, juxtaposed variables such as ab indicate the product a × b, and the standard order of operations is assumed. A total order on the natural numbers is defined by letting a ≤ b if and only if there exists another natural number c with a + c = b. This order is compatible with the arithmetical operations in the following sense: if a, b and c are natural numbers and a ≤ b, then a + c ≤ b + c and ac ≤ bc. An important property of the natural numbers is that they are well-ordered: every non-empty set of natural numbers has a least element. The rank among well-ordered sets is expressed by an ordinal number; for the natural numbers this is expressed as ω.
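The definition of ≤ and its compatibility with the operations can be spot-checked with a short sketch. The helper name leq is ours; the bounded search suffices because any witness c with a + c = b satisfies c ≤ b.

```python
def leq(a, b):
    """a <= b iff there exists a natural number c with a + c = b."""
    return any(a + c == b for c in range(b + 1))

assert leq(3, 7) and not leq(7, 3)
assert leq(4, 4)  # c = 0 witnesses a <= a

# Compatibility with the operations: if a <= b, then a + c <= b + c and ac <= bc.
a, b, c = 2, 5, 4
assert leq(a, b) and leq(a + c, b + c) and leq(a * c, b * c)
```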

38.3.5 Division

In this section, juxtaposed variables such as ab indicate the product a × b, and the standard order of operations is assumed. While it is in general not possible to divide one natural number by another and get a natural number as result, the procedure of division with remainder is available as a substitute: for any two natural numbers a and b with b ≠ 0 there are natural numbers q and r such that

a = bq + r and r < b.

The number q is called the quotient and r is called the remainder of division of a by b. The numbers q and r are uniquely determined by a and b. This Euclidean division is key to several other properties (divisibility), algorithms (such as the Euclidean algorithm), and ideas in number theory.
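Division with remainder can be realized by repeated subtraction, which stays inside the natural numbers at every step; the function name below is ours, not part of the article's development.

```python
def divide_with_remainder(a, b):
    """Return (q, r) with a = b*q + r and 0 <= r < b (b must be nonzero)."""
    if b == 0:
        raise ValueError("b must be nonzero")
    q = 0
    while a >= b:
        a -= b   # subtract b once more ...
        q += 1   # ... and count it in the quotient
    return q, a

assert divide_with_remainder(17, 5) == (3, 2)  # 17 = 5*3 + 2
assert divide_with_remainder(12, 4) == (3, 0)
```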

38.3.6 Algebraic properties satisfied by the natural numbers

The addition (+) and multiplication (×) operations on natural numbers as defined above have several algebraic properties:

• Closure under addition and multiplication: for all natural numbers a and b, both a + b and a × b are natural numbers.

• Associativity: for all natural numbers a, b, and c, a + (b + c) = (a + b) + c and a × (b × c) = (a × b) × c.

• Commutativity: for all natural numbers a and b, a + b = b + a and a × b = b × a.

• Existence of identity elements: for every natural number a, a + 0 = a and a × 1 = a.

• Distributivity of multiplication over addition: for all natural numbers a, b, and c, a × (b + c) = (a × b) + (a × c).

• No nonzero zero divisors: if a and b are natural numbers such that a × b = 0, then a = 0 or b = 0.
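The identities above can be spot-checked mechanically over a small range of naturals. A check over finitely many cases is of course not a proof, only a sanity test:

```python
import itertools

# Verify each listed identity for all triples (a, b, c) with entries in 0..5.
for a, b, c in itertools.product(range(6), repeat=3):
    assert a + (b + c) == (a + b) + c          # associativity of +
    assert a * (b * c) == (a * b) * c          # associativity of ×
    assert a + b == b + a and a * b == b * a   # commutativity
    assert a + 0 == a and a * 1 == a           # identity elements
    assert a * (b + c) == a * b + a * c        # distributivity
    if a * b == 0:
        assert a == 0 or b == 0                # no nonzero zero divisors
```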

38.4 Generalizations

Two generalizations of natural numbers arise from the two uses:

• A natural number can be used to express the size of a finite set; more generally, a cardinal number is a measure for the size of a set also suitable for infinite sets; this refers to a concept of “size” such that if there is a bijection between two sets, they have the same size. The set of natural numbers itself, and any other countably infinite set, has cardinality aleph-null (ℵ0).

• Linguistic ordinal numbers “first”, “second”, “third” can be assigned to the elements of a totally ordered finite set, and also to the elements of well-ordered countably infinite sets like the set of natural numbers itself. This can be generalized to ordinal numbers, which describe the position of an element in a well-ordered set in general. An ordinal number is also used to describe the “size” of a well-ordered set, in a sense different from cardinality: if there is an order isomorphism between two well-ordered sets, they have the same ordinal number. The first ordinal number that is not a natural number is expressed as ω; this is also the ordinal number of the set of natural numbers itself.

Many well-ordered sets with cardinal number ℵ0 have an ordinal number greater than ω (the latter is the lowest possible). The least ordinal of cardinality ℵ0 (i.e., the initial ordinal) is ω. For finite well-ordered sets, there is a one-to-one correspondence between ordinal and cardinal numbers; therefore they can both be expressed by the same natural number, the number of elements of the set. This number can also be used to describe the position of an element in a larger finite, or an infinite, sequence.

A countable non-standard model of arithmetic satisfying Peano arithmetic (i.e., the first-order Peano axioms) was developed by Skolem in 1933. The hypernatural numbers are an uncountable model that can be constructed from the ordinary natural numbers via the ultrapower construction. Georges Reeb used to claim provocatively that “the naïve integers don't fill up ℕ”. Other generalizations are discussed in the article on numbers.

38.5 Formal definitions

38.5.1 Peano axioms

Main article: Peano axioms

Many properties of the natural numbers can be derived from the Peano axioms.[29][30]

• Axiom One: 0 is a natural number.

• Axiom Two: Every natural number has a successor.

• Axiom Three: 0 is not the successor of any natural number.

• Axiom Four: If the successor of x equals the successor of y, then x equals y.

• Axiom Five (the Axiom of Induction): If a statement is true of 0, and if the truth of that statement for a number implies its truth for the successor of that number, then the statement is true for every natural number.

These are not the original axioms published by Peano, but are named in his honor. Some forms of the Peano axioms have 1 in place of 0. In ordinary arithmetic, the successor of x is x + 1. Replacing Axiom Five by an axiom schema one obtains a (weaker) first-order theory called Peano Arithmetic.

38.5.2 Constructions based on set theory

Main article: Set-theoretic definition of natural numbers

In the area of mathematics called set theory, a special case of the von Neumann ordinal construction [31] defines the natural numbers as follows:

Set 0 := { }, the empty set, and define S(a) = a ∪ {a} for every set a. S(a) is the successor of a, and S is called the successor function. By the axiom of infinity, there exists a set which contains 0 and is closed under the successor function. (Such sets are said to be “inductive”.) Then the intersection of all inductive sets is defined to be the set of natural numbers. It can be checked that the set of natural numbers satisfies the Peano axioms. Each natural number is then equal to the set of all natural numbers less than it, so that

• 0 = { }

• 1 = {0} = {{ }}

• 2 = {0, 1} = {0, {0}} = {{ }, {{ }}}

• 3 = {0, 1, 2} = {0, {0}, {0, {0}}} = {{ }, {{ }}, {{ }, {{ }}}}

• n = {0, 1, 2, ..., n−2, n−1} = {0, 1, 2, ..., n−2} ∪ {n−1} = (n−1) ∪ {n−1} = S(n−1) and so on.

With this definition, a natural number n is a particular set with n elements, and n ≤ m if and only if n is a subset of m. Also, with this definition, different possible interpretations of notations like Rn (n-tuples versus mappings of n into R) coincide. Even if one does not accept the axiom of infinity and therefore cannot accept that the set of all natural numbers exists, it is still possible to define any one of these sets.
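The von Neumann construction is easy to mirror with Python's immutable frozenset. The function name is ours, and this is only a sketch of the definition, not part of the formal development:

```python
def von_neumann(n):
    """Return the von Neumann encoding of n: 0 = { } and S(a) = a ∪ {a}."""
    s = frozenset()            # 0 is the empty set
    for _ in range(n):
        s = s | {s}            # successor: S(a) = a ∪ {a}
    return s

zero, one, two = (von_neumann(k) for k in range(3))
assert one == frozenset({zero})         # 1 = {0}
assert two == frozenset({zero, one})    # 2 = {0, 1}
assert len(von_neumann(5)) == 5         # n has exactly n elements
assert von_neumann(2) < von_neumann(4)  # n ≤ m iff n ⊆ m (proper here)
```

The last two assertions illustrate the properties noted in the text: the number n is a set with n elements, and the order relation coincides with set inclusion.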

Other constructions

Although the standard construction is useful, it is not the only possible construction. Zermelo's construction goes as follows:

one defines 0 = { } and S(a) = {a}, producing

• 0 = { }

• 1 = {0} = {{ }}

• 2 = {1} = {{{ }}}, etc.

Each natural number is then equal to the set of the natural number preceding it.
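Zermelo's encoding can be sketched the same way; each nonzero number is a singleton wrapping its predecessor (the function name is ours):

```python
def zermelo(n):
    """Return the Zermelo encoding of n: 0 = { } and S(a) = {a}."""
    s = frozenset()
    for _ in range(n):
        s = frozenset({s})     # successor wraps the predecessor in a singleton
    return s

assert zermelo(0) == frozenset()
assert zermelo(2) == frozenset({frozenset({frozenset()})})   # 2 = {{{ }}}
assert all(len(zermelo(n)) == 1 for n in range(1, 6))        # every nonzero number is a singleton
```

Unlike the von Neumann encoding, the number of elements of zermelo(n) does not recover n, which is one reason the von Neumann construction is the standard one.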

It is also possible to define 0 = {{ }} and S(a) = a ∪ {a}, producing

• 0 = {{ }}

• 1 = {{ }, 0} = {{ }, {{ }}}

• 2 = {{ }, 0, 1}, etc.

38.6 See also

• Integer

• Set-theoretic definition of natural numbers

• Peano axioms

• Canonical representation of a positive integer

• Countable set

• Number#Classification for other number systems (rational, real, complex etc.)

38.7 Notes

[1] Weisstein, Eric W., “Whole Number”, MathWorld.

[2] Clapham & Nicholson (2014): "whole number An integer, though sometimes it is taken to mean only non-negative integers, or just the positive integers.”

[3] James & James (1992) give definitions of “whole number” under several headwords: INTEGER … Syn. whole number. NUMBER … whole number. A nonnegative integer. WHOLE … whole number. (1) One of the integers 0, 1, 2, 3, … . (2) A positive integer; i.e., a natural number. (3) An integer, positive, negative, or zero.

[4] The Common Core State Standards for Mathematics say: “Whole numbers. The numbers 0, 1, 2, 3, ....” (Glossary, p. 87) (PDF) Definitions from The Ontario Curriculum, Grades 1-8: Mathematics, Ontario Ministry of Education (2005) (PDF) "natural numbers. The counting numbers 1, 2, 3, 4, ....” (Glossary, p. 128) "whole number. Any one of the numbers 0, 1, 2, 3, 4, ....” (Glossary, p. 134) Musser, Peterson & Burger (2013, p. 57): “As mentioned earlier, the study of the set of whole numbers, W = {0, 1, 2, 3, 4, ...}, is the foundation of elementary school mathematics.” These pre-algebra books define the whole numbers:

• Szczepanski & Kositsky (2008): “Another important collection of numbers is the whole numbers, the natural numbers together with zero.” (Chapter 1: The Whole Story, p. 4). On the inside front cover, the authors say: “We based this book on the state standards for pre-algebra in California, Florida, New York, and Texas, ...”

• Bluman (2010): “When 0 is added to the set of natural numbers, the set is called the whole numbers.” (Chapter 1: Whole Numbers, p. 1)

Both books define the natural numbers to be: “1, 2, 3, …".

[5] Mendelson (2008) says: “The whole fantastic hierarchy of number systems is built up by purely set-theoretic means from a few simple assumptions about natural numbers.” (Preface, p. x)

[6] Bluman (2010): “Numbers make up the foundation of mathematics.” (p. 1)

[7] Weisstein, Eric W., “Natural Number”, MathWorld.

[8] “natural number”, Merriam-Webster.com (Merriam-Webster), retrieved 4 October 2014

[9] Carothers (2000) says: "ℕ is the set of natural numbers (positive integers)" (p. 3)

[10] Mac Lane & Birkhoff (1999) include zero in the natural numbers: 'Intuitively, the set ℕ = {0, 1, 2, ... } of all natural numbers may be described as follows: ℕ contains an “initial” number 0; ...'. They follow that with their version of the Peano Postulates. (p. 15)

[11] Weisstein, Eric W., “Counting Number”, MathWorld.

[12] Introduction, Royal Belgian Institute of Natural Sciences, Brussels, Belgium.

[13] Flash presentation, Royal Belgian Institute of Natural Sciences, Brussels, Belgium.

[14] The Ishango Bone, Democratic Republic of the Congo, on permanent display at the Royal Belgian Institute of Natural Sciences, Brussels, Belgium. UNESCO's Portal to the Heritage of Astronomy

[15] Georges Ifrah, The Universal History of Numbers, Wiley, 2000, ISBN 0-471-37568-3

[16] “A history of Zero”. MacTutor History of Mathematics. Retrieved 2013-01-23. ... a tablet found at Kish ... thought to date from around 700 BC, uses three hooks to denote an empty place in the positional notation. Other tablets dated from around the same time use a single hook for an empty place

[17] Mann, Charles C. (2005), 1491: New Revelations Of The Americas Before Columbus, Knopf, p. 19, ISBN 9781400040063.

[18] Evans, Brian (2014), “Chapter 10. Pre-Columbian Mathematics: The Olmec, Maya, and Inca Civilizations”, The Development of Mathematics Throughout the Centuries: A Brief History in a Cultural Context, John Wiley & Sons, ISBN 9781118853979.

[19] Michael L. Gorodetsky (2003-08-25). “Cyclus Decemnovennalis Dionysii – Nineteen year cycle of Dionysius”. Hbar.phys.msu.ru. Retrieved 2012-02-13.

[20] This convention is used, for example, in Euclid’s Elements, see Book VII, definitions 1 and 2.

[21] Morris Kline, Mathematical Thought From Ancient to Modern Times, Oxford University Press, 1990 [1972], ISBN 0-19-506135-7

[22] “Much of the mathematical work of the twentieth century has been devoted to examining the logical foundations and structure of the subject.” (Eves 1990, p. 606)

[23] Eves 1990, Chapter 15

[24] L. Kirby; J. Paris, Accessible Independence Results for Peano Arithmetic, Bulletin of the London Mathematical Society 14 (4): 285. doi:10.1112/blms/14.4.285, 1982.

[25] Bagaria, Joan. “Set Theory”. The Stanford Encyclopedia of Philosophy (Winter 2014 Edition).

[26] Goldrei, Derek (1998). “3”. Classic set theory: a guided independent study (1. ed., 1. print ed.). Boca Raton, Fla. [u.a.]: Chapman & Hall/CRC. p. 33. ISBN 0-412-60610-0.

[27] This is common in texts about Real analysis. See, for example, Carothers (2000, p. 3) or Thomson, Bruckner & Bruckner (2000, p. 2).

[28] Weisstein, Eric W., “Cardinal Number”, MathWorld.

[29] G.E. Mints (originator), “Peano axioms”, Encyclopedia of Mathematics (Springer, in cooperation with the European Mathematical Society), retrieved 8 October 2014

[30] Hamilton (1988) calls them “Peano’s Postulates” and begins with “1. 0 is a natural number.” (p. 117f) Halmos (1960) uses the language of set theory instead of the language of arithmetic for his five axioms. He begins with "(I) 0 ∈ ω (where, of course, 0 = ∅ )" ( ω is the set of all natural numbers). (p. 46) Morash (1991) gives “a two-part axiom” in which the natural numbers begin with 1. (Section 10.1: An Axiomatization for the System of Positive Integers)

[31] Von Neumann 1923

38.8 References

• Bluman, Allan (2010), Pre-Algebra DeMYSTiFieD (Second ed.), McGraw-Hill Professional

• Carothers, N.L. (2000), Real analysis, Cambridge University Press, ISBN 0-521-49756-6

• Clapham, Christopher; Nicholson, James (2014), The Concise Oxford Dictionary of Mathematics (Fifth ed.), Oxford University Press

• Dedekind, Richard (1963), Essays on the Theory of Numbers, Dover, ISBN 0-486-21010-3

• Dedekind, Richard (2007), Essays on the Theory of Numbers, Kessinger Publishing, LLC, ISBN 0-548-08985-X

• Eves, Howard (1990), An Introduction to the History of Mathematics (6th ed.), Thomson, ISBN 978-0-03-029558-4

• Halmos, Paul (1960), Naive Set Theory, Springer Science & Business Media

• Hamilton, A. G. (1988), Logic for Mathematicians (Revised ed.), Cambridge University Press

• James, Robert C.; James, Glenn (1992), Mathematics Dictionary (Fifth ed.), Chapman & Hall

• Landau, Edmund (1966), Foundations of Analysis (Third ed.), Chelsea Pub Co, ISBN 0-8218-2693-X

• Mac Lane, Saunders; Birkhoff, Garrett (1999), Algebra (3rd ed.), American Mathematical Society

• Mendelson, Elliott (2008) [1973], Number Systems and the Foundations of Analysis, Dover Publications

• Morash, Ronald P. (1991), Bridge to Abstract Mathematics: Mathematical Proof and Structures (Second ed.), Mcgraw-Hill College

• Musser, Gary L.; Peterson, Blake E.; Burger, William F. (2013), Mathematics for Elementary Teachers: A Contemporary Approach (10th ed.), Wiley Global Education, ISBN 978-1118457443

• Szczepanski, Amy F.; Kositsky, Andrew P. (2008), The Complete Idiot’s Guide to Pre-algebra, Penguin Group

• Thomson, Brian S.; Bruckner, Judith B.; Bruckner, Andrew M. (2008), Elementary Real Analysis (Second ed.), ClassicalRealAnalysis.com, ISBN 9781434843678

• Von Neumann, Johann (1923), “Zur Einführung der transfiniten Zahlen”, Acta litterarum ac scientiarum Ragiae Universitatis Hungaricae Francisco-Josephinae, Sectio scientiarum mathematicarum 1: 199–208

• Von Neumann, John (January 2002) [1923], “On the introduction of transfinite numbers”, in Jean van Heijenoort, From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931 (3rd ed.), Harvard University Press, pp. 346–354, ISBN 0-674-32449-8 - English translation of von Neumann 1923.

38.9 External links

• Hazewinkel, Michiel, ed. (2001), “Natural number”, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

• Axioms and Construction of Natural Numbers

• Essays on the Theory of Numbers by Richard Dedekind at Project Gutenberg

The Ishango bone (on exhibition at the Royal Belgian Institute of Natural Sciences)[12][13][14] is believed to have been used 20,000 years ago for natural number arithmetic.

The double-struck capital N symbol, often used to denote the set of all natural numbers (see List of mathematical symbols).

Chapter 39

Network science

Network science is an interdisciplinary academic field which studies complex networks such as telecommunication networks, computer networks, biological networks, cognitive and semantic networks, and social networks. The field draws on theories and methods including graph theory from mathematics, statistical mechanics from physics, data mining and information visualization from computer science, inferential modeling from statistics, and social structure from sociology. The United States National Research Council defines network science as “the study of network representations of physical, biological, and social phenomena leading to predictive models of these phenomena.”[1]

39.1 Background and history

The study of networks has emerged in diverse disciplines as a means of analyzing complex relational data. The earliest known paper in this field is the famous Seven Bridges of Königsberg, written by Leonhard Euler in 1736. Euler’s mathematical description of vertices and edges was the foundation of graph theory, a branch of mathematics that studies the properties of pairwise relations in a network structure. The field of graph theory continued to develop and found applications in chemistry (Sylvester, 1878).

In the 1930s Jacob Moreno, a psychologist in the Gestalt tradition, arrived in the United States. He developed the sociogram and presented it to the public in April 1933 at a convention of medical scholars. Moreno claimed that “before the advent of sociometry no one knew what the interpersonal structure of a group 'precisely' looked like” (Moreno, 1953). The sociogram was a representation of the social structure of a group of elementary school students. The boys were friends of boys and the girls were friends of girls, with the exception of one boy who said he liked a single girl. The feeling was not reciprocated. This network representation of social structure was found so intriguing that it was printed in The New York Times (April 3, 1933, page 17). The sociogram has found many applications and has grown into the field of social network analysis.

Probabilistic theory in network science developed as an offshoot of graph theory with Paul Erdős and Alfréd Rényi's eight famous papers on random graphs. For social networks, the exponential random graph model or p* is a notational framework used to represent the probability space of a tie occurring in a social network. An alternate approach to network probability structures is the network probability matrix, which models the probability of edges occurring in a network, based on the historic presence or absence of the edge in a sample of networks.
In 1998, David Krackhardt and Kathleen Carley introduced the idea of a meta-network with the PCANS Model. They suggest that “all organizations are structured along these three domains, Individuals, Tasks, and Resources”. Their paper introduced the concept that networks occur across multiple domains and that they are interrelated. This field has grown into another sub-discipline of network science called dynamic network analysis.

More recently, other network science efforts have focused on mathematically describing different network topologies. Duncan Watts reconciled empirical data on networks with mathematical representation, describing the small-world network. Albert-László Barabási and Réka Albert developed the scale-free network, a loosely defined network topology that contains hub vertices with many connections, which grow in a way that maintains a constant ratio in the number of connections versus all other nodes. Although many networks, such as the internet, appear to maintain this aspect, other networks have long-tailed distributions of nodes that only approximate scale-free ratios.


Moreno’s sociogram of a 1st grade class

39.1.1 Department of Defense Initiatives

The U.S. military first became interested in network-centric warfare as an operational concept based on network science in 1996. John A. Parmentola, the U.S. Army Director for Research and Laboratory Management, proposed to the Army’s Board on Science and Technology (BAST) on December 1, 2003 that Network Science become a new Army research area. The BAST, the Division on Engineering and Physical Sciences for the National Research Council (NRC) of the National Academies, serves as a convening authority for the discussion of science and technology issues of importance to the Army and oversees independent Army-related studies conducted by the National Academies. The BAST conducted a study to find out whether identifying and funding a new field of investigation in basic research, Network Science, could help close the gap between what is needed to realize Network-Centric Operations and the current primitive state of fundamental knowledge of networks.

As a result, the BAST issued the NRC study in 2005 titled Network Science (referenced above) that defined a new field of basic research in Network Science for the Army. Based on the findings and recommendations of that study and the subsequent 2007 NRC report titled Strategy for an Army Center for Network Science, Technology, and Experimentation, Army basic research resources were redirected to initiate a new basic research program in Network Science. To build a new theoretical foundation for complex networks, some of the key Network Science research efforts now ongoing in Army laboratories address:

• Mathematical models of network behavior to predict performance with network size, complexity, and envi- ronment

• Optimized human performance required for network-enabled warfare

• Networking within ecosystems and at the molecular level in cells.

As initiated in 2004 by Frederick I. Moxley with support he solicited from David S. Alberts, the Department of Defense helped to establish the first Network Science Center in conjunction with the U.S. Army at the United States Military Academy (USMA). Under the tutelage of Dr. Moxley and the faculty of the USMA, the first interdisciplinary undergraduate courses in Network Science were taught to cadets at West Point. In order to better instill the tenets of network science among its cadre of future leaders, the USMA has also instituted a five-course undergraduate minor in Network Science.

In 2006, the U.S. Army and the United Kingdom (UK) formed the Network and Information Science International Technology Alliance, a collaborative partnership among the Army Research Laboratory, UK Ministry of Defense and a consortium of industries and universities in the U.S. and UK. The goal of the alliance is to perform basic research in support of Network-Centric Operations across the needs of both nations.

In 2009, the U.S. Army formed the Network Science CTA, a collaborative research alliance among the Army Research Laboratory, CERDEC, and a consortium of about 30 industrial R&D labs and universities in the U.S. The goal of the alliance is to develop a deep understanding of the underlying commonalities among intertwined social/cognitive, information, and communications networks, and as a result improve our ability to analyze, predict, design, and influence complex systems interweaving many kinds of networks.

Subsequently, as a result of these efforts, the U.S. Department of Defense has sponsored numerous research projects that support Network Science.

39.2 Network properties

Often, networks have certain attributes that can be calculated to analyze the properties and characteristics of the network. These network properties often define network models and can be used to analyze how certain models contrast with each other. Many of the definitions for other terms used in network science can be found in Glossary of graph theory.

39.2.1 Density

The density D of a network is defined as the ratio of the number of edges E to the number of possible edges, given by the binomial coefficient (N choose 2), giving D = 2E / (N(N − 1)). Another possible equation is D = T / (N(N − 1)), where the ties T are unidirectional (Wasserman & Faust 1994).[2] This gives a better overview of the network density, because unidirectional relationships can be measured.
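Both densities can be computed directly from an edge list. A minimal sketch in plain Python (the function names and the edge-list format are illustrative, not from any particular library):

```python
def density_undirected(n, edges):
    """D = 2E / (N(N - 1)): edges E against the (N choose 2) possible pairs."""
    return 2 * len(edges) / (n * (n - 1))

def density_directed(n, ties):
    """D = T / (N(N - 1)): unidirectional ties T against all ordered pairs."""
    return len(ties) / (n * (n - 1))

# A 4-node path graph has 3 of the 6 possible undirected edges.
print(density_undirected(4, [(0, 1), (1, 2), (2, 3)]))  # 0.5
```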

39.2.2 Size

The size of a network can refer to the number of nodes N or, less commonly, the number of edges E which can range from N − 1 (a tree) to Emax (a complete graph).

39.2.3 Average degree

The degree k of a node is the number of edges connected to it. Closely related to the density of a network is the average degree, ⟨k⟩ = 2E / N. In the ER random graph model, we can compute ⟨k⟩ = p(N − 1), where p is the probability of two nodes being connected.

39.2.4 Average path length

Average path length is calculated by finding the shortest path between all pairs of nodes, adding them up, and then dividing by the total number of pairs. This shows us, on average, the number of steps it takes to get from one member of the network to another.
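For an unweighted graph, the shortest paths can be found with breadth-first search and averaged over all ordered pairs. A sketch (the adjacency-list dict format is an assumption for illustration); the diameter discussed in the next subsection is simply the largest of the same distances:

```python
from collections import deque

def bfs_distances(adj, source):
    """Hop counts from `source` to every reachable node (breadth-first search)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def average_path_length(adj):
    """Mean shortest-path length over all ordered pairs of distinct nodes."""
    total = pairs = 0
    for s in adj:
        for t, d in bfs_distances(adj, s).items():
            if t != s:
                total += d
                pairs += 1
    return total / pairs

# Path graph 0-1-2-3: average distance 5/3; the longest distance (3) is the diameter.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```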

39.2.5 Diameter of a network

As another means of measuring network graphs, we can define the diameter of a network as the longest of all the calculated shortest paths in a network. In other words, once the shortest path length from every node to all other nodes is calculated, the diameter is the longest of all the calculated path lengths. The diameter is representative of the linear size of a network.

39.2.6 Clustering coefficient

The clustering coefficient is a measure of an “all-my-friends-know-each-other” property. This is sometimes described as the friends of my friends are my friends. More precisely, the clustering coefficient of a node is the ratio of existing links connecting a node’s neighbors to each other to the maximum possible number of such links. The clustering coefficient for the entire network is the average of the clustering coefficients of all the nodes. A high clustering coefficient for a network is another indication of a small world. The clustering coefficient of the i 'th node is

Ci = 2ei / (ki(ki − 1)),

where ki is the number of neighbours of the i'th node, and ei is the number of connections between these neighbours. The maximum possible number of connections between neighbors is, of course,

(k choose 2) = k(k − 1) / 2.
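The per-node and whole-network coefficients translate directly into code. A sketch (plain Python; an undirected graph stored as a dict of neighbour sets, which is an illustrative choice):

```python
def clustering_coefficient(adj, i):
    """C_i = 2 e_i / (k_i (k_i - 1)) for node i of an undirected graph."""
    nbrs = sorted(adj[i])
    k = len(nbrs)
    if k < 2:
        return 0.0
    # e_i: links actually present among the neighbours of i
    e = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
    return 2 * e / (k * (k - 1))

def network_clustering(adj):
    """Whole-network coefficient: the average of the per-node coefficients."""
    return sum(clustering_coefficient(adj, i) for i in adj) / len(adj)

# A triangle 0-1-2 with a pendant node 3: node 0's two neighbours are
# linked to each other, so C_0 = 1.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
```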

39.2.7 Connectedness

The way in which a network is connected plays a large part in how networks are analyzed and interpreted. Networks are classified into four different categories:

• Clique/Complete Graph: a completely connected network, where all nodes are connected to every other node. These networks are symmetric in that all nodes have in-links and out-links from all others.

• Giant Component: a single connected component which contains most of the nodes in the network.

• Weakly Connected Component: a collection of nodes in which there exists a path from any node to any other, ignoring directionality of the edges.

• Strongly Connected Component: a collection of nodes in which there exists a directed path from any node to any other.

39.2.8 Node centrality

Main article: Centrality (graph theory)

Centrality indices produce rankings which seek to identify the most important nodes in a network model. Different centrality indices encode different contexts for the word “importance.” The betweenness centrality, for example, considers a node highly important if it forms bridges between many other nodes. The eigenvalue centrality, in contrast,

considers a node highly important if many other highly important nodes link to it. Hundreds of such measures have been proposed in the literature. It is important to remember that centrality indices are only accurate for identifying the most central nodes. The measures are seldom, if ever, meaningful for the remainder of network nodes.[3] [4] Also, their indications are only accurate within their assumed context for importance, and tend to “get it wrong” for other contexts.[5] For example, imagine two separate communities whose only link is an edge between the most junior member of each community. Since any transfer from one community to the other must go over this link, the two junior members will have high betweenness centrality. But since they are (presumably) junior, they have few connections to the “important” nodes in their community, meaning their eigenvalue centrality would be quite low. The concept of centrality in the context of static networks was extended, based on empirical and theoretical research, to dynamic centrality[6] in the context of time-dependent and temporal networks.[7][8][9]

39.3 Network models

Network models serve as a foundation to understanding interactions within empirical complex networks. Various random graph generation models produce network structures that may be used in comparison to real-world complex networks.

39.3.1 Erdős–Rényi Random Graph model

This Erdős–Rényi model is generated with N = 4 nodes. For each edge in the complete graph formed by all N nodes, a random number is generated and compared to a given probability. If the random number is less than p, an edge is formed on the model.

The Erdős–Rényi model, named for Paul Erdős and Alfréd Rényi, is used for generating random graphs in which edges are set between nodes with equal probabilities. It can be used in the probabilistic method to prove the existence of graphs satisfying various properties, or to provide a rigorous definition of what it means for a property to hold for almost all graphs. To generate an Erdős–Rényi model, two parameters must be specified: the number of nodes in the graph generated as N and the probability that a link should be formed between any two nodes as p. A constant ⟨k⟩ may be derived from these two components with the formula ⟨k⟩ = 2 ⋅ E / N = p ⋅ (N − 1), where E is the expected number of edges. The Erdős–Rényi model has several interesting characteristics in comparison to other graphs. Because the model is generated without bias to particular nodes, the degree distribution is binomial in nature with regards to the formula:

P(deg(v) = k) = (n − 1 choose k) p^k (1 − p)^(n−1−k)

Also as a result of this characteristic, the clustering coefficient tends to 0. The model tends to form a giant component in situations where ⟨k⟩ > 1 in a process called percolation. The average path length is relatively short in this model and tends to log N.
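A G(N, p) graph can be generated exactly as described: visit each of the (N choose 2) possible edges once and keep it when a uniform draw falls below p. A sketch, comparing the realized mean degree against ⟨k⟩ = p(N − 1):

```python
import random

def erdos_renyi(n, p, seed=None):
    """G(n, p): each of the n(n-1)/2 possible edges is kept independently
    when a uniform draw in [0, 1) falls below p."""
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

n, p = 1000, 0.01
edges = erdos_renyi(n, p, seed=1)
mean_degree = 2 * len(edges) / n  # should land close to p * (n - 1) = 9.99
```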

39.3.2 Watts-Strogatz Small World model

The Watts and Strogatz model is a random graph generation model that produces graphs with small-world properties.

The Watts and Strogatz model uses the concept of rewiring to achieve its structure. The model generator will iterate through each edge in the original lattice structure. An edge may change its connected vertices according to a given rewiring probability. ⟨k⟩ = 4 in this example.

An initial lattice structure is used to generate a Watts-Strogatz model. Each node in the network is initially linked to its ⟨k⟩ closest neighbors. Another parameter is specified as the rewiring probability. Each edge has a probability p that it will be rewired to the graph as a random edge. The expected number of rewired links in the model is pE = pN⟨k⟩/2.

As the Watts-Strogatz model begins as a non-random lattice structure, it has a very high clustering coefficient along with a high average path length. Each rewire is likely to create a shortcut between highly connected clusters. As the rewiring probability increases, the clustering coefficient decreases more slowly than the average path length. In effect, this allows the average path length of the network to decrease significantly with only slight decreases in the clustering coefficient. Higher values of p force more rewired edges, which in effect makes the Watts-Strogatz model a random network.
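The construction above can be sketched as follows: build the ring lattice, then revisit each original edge and, with probability p, swap one endpoint for a random node while avoiding self-loops and duplicate edges. This rewiring scheme is one common variant; implementations differ in the details:

```python
import random

def watts_strogatz(n, k, p, seed=None):
    """Ring lattice of n nodes (each tied to its k nearest neighbours, k even),
    then each edge rewired with probability p. The edge count is preserved."""
    rng = random.Random(seed)
    edges = set()
    for i in range(n):
        for off in range(1, k // 2 + 1):
            edges.add(frozenset((i, (i + off) % n)))
    for e in list(edges):
        if rng.random() < p:
            u, v = tuple(e)
            # Candidate endpoints that create neither a self-loop nor a duplicate
            candidates = [w for w in range(n)
                          if w != u and frozenset((u, w)) not in edges]
            if candidates:
                edges.remove(e)
                edges.add(frozenset((u, rng.choice(candidates))))
    return edges

edges = watts_strogatz(20, 4, 0.1, seed=42)  # n*k/2 = 40 edges before and after
```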

39.3.3 Barabási–Albert (BA) Preferential Attachment model

The Barabási–Albert model is a random network model used to demonstrate a preferential attachment or a “rich- get-richer” effect. In this model, an edge is most likely to attach to nodes with higher degrees. The network begins with an initial network of m0 nodes. m0 ≥ 2 and the degree of each node in the initial network should be at least 1, otherwise it will always remain disconnected from the rest of the network. In the BA model, new nodes are added to the network one at a time. Each new node is connected to m existing nodes with a probability that is proportional to the number of links that the existing nodes already have. Formally, the probability pi that the new node is connected to node i is[10]

pi = ki / Σj kj,

where ki is the degree of node i. Heavily linked nodes (“hubs”) tend to quickly accumulate even more links, while nodes with only a few links are unlikely to be chosen as the destination for a new link. The new nodes have a “preference” to attach themselves to the already heavily linked nodes. The degree distribution resulting from the BA model is scale free; in particular, it is a power law of the form:

P (k) ∼ k−3

Hubs exhibit high betweenness centrality which allows short paths to exist between nodes. As a result, the BA model tends to have very short average path lengths. The clustering coefficient of this model also tends to 0. While the diameter, D, of many models including the Erdős–Rényi random graph model and several small world networks is proportional to log N, the BA model exhibits D ~ log log N (ultra-small world).[12] Note that the average path length scales with N as the diameter.
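Preferential attachment is commonly implemented with the trick of keeping a list containing each node once per link endpoint, so that uniform sampling from it is sampling proportional to degree, i.e. with probability ki / Σj kj. A sketch (the complete initial core is one common choice of growth details, not the only one):

```python
import random

def barabasi_albert(n, m, seed=None):
    """Grow a BA network: start from a complete core of m nodes, then attach
    each new node to m existing nodes chosen proportionally to their degree."""
    rng = random.Random(seed)
    edges = [(i, j) for i in range(m) for j in range(i + 1, m)]
    # One entry per edge endpoint: a uniform choice here is a degree-proportional choice.
    endpoints = [v for e in edges for v in e]
    for new in range(m, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(endpoints) if endpoints else rng.choice(range(m)))
        for t in chosen:
            edges.append((new, t))
            endpoints += [new, t]
    return edges

edges = barabasi_albert(100, 2, seed=0)  # 1 core edge + 98 * 2 attachments
```

Counting degrees over the result shows the expected hub behaviour: a few nodes accumulate far more than the minimum m links.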

39.4 Network analysis

The degree distribution of the BA Model, which follows a power law. On a log-log scale the power-law function is a straight line.[11]

39.4.1 Social network analysis

Social network analysis examines the structure of relationships between social entities.[13] These entities are often persons, but may also be groups, organizations, nation states, web sites, or scholarly publications. Since the 1970s, the empirical study of networks has played a central role in social science, and many of the mathematical and statistical tools used for studying networks were first developed in sociology.[14]

Amongst many other applications, social network analysis has been used to understand the diffusion of innovations, news and rumors. Similarly, it has been used to examine the spread of both diseases and health-related behaviors. It has also been applied to the study of markets, where it has been used to examine the role of trust in exchange relationships and of social mechanisms in setting prices. Similarly, it has been used to study recruitment into political movements and social organizations. It has also been used to conceptualize scientific disagreements as well as academic prestige. More recently, network analysis (and its close cousin traffic analysis) has gained significant use in military intelligence, for uncovering insurgent networks of both hierarchical and leaderless nature.[15][16]

39.4.2 Dynamic network analysis

Dynamic Network Analysis examines the shifting structure of relationships among different classes of entities in complex socio-technical systems, and reflects social stability and changes such as the emergence of new groups, topics, and leaders.[6][7][8][9] Dynamic Network Analysis focuses on meta-networks composed of multiple types of nodes (entities) and multiple types of links. These entities can be highly varied.[6] Examples include people, organizations, topics, resources, tasks, events, locations, and beliefs.

Dynamic network techniques are particularly useful for assessing trends and changes in networks over time, identification of emergent leaders, and examining the co-evolution of people and ideas.

39.4.3 Biological network analysis

With the recent explosion of publicly available high-throughput biological data, the analysis of molecular networks has gained significant interest. The type of analysis in this context is closely related to social network analysis, but often focuses on local patterns in the network. For example, network motifs are small subgraphs that are over-represented in the network. Similarly, activity motifs are patterns in the attributes of nodes and edges in the network that are over-represented given the network structure. The analysis of biological networks has led to the development of network medicine, which looks at the effect of diseases in the interactome.[17]

39.4.4 Link analysis

Link analysis is a subset of network analysis, exploring associations between objects. An example may be examining the addresses of suspects and victims, the telephone numbers they have dialed, and the financial transactions that they have partaken in during a given timeframe, and the familial relationships between these subjects as a part of police investigation. Link analysis here provides the crucial relationships and associations between very many objects of different types that are not apparent from isolated pieces of information. Computer-assisted or fully automatic computer-based link analysis is increasingly employed by banks and insurance agencies in fraud detection, by telecommunication operators in telecommunication network analysis, by the medical sector in epidemiology and pharmacology, in law enforcement investigations, by search engines for relevance rating (and conversely by spammers for spamdexing and by business owners for search engine optimization), and everywhere else where relationships between many objects have to be analyzed.

Network robustness

The structural robustness of networks[18] is studied using percolation theory. When a critical fraction of nodes is removed, the network becomes fragmented into small clusters. This phenomenon is called percolation,[19] and it represents an order-disorder type of phase transition with critical exponents.

Pandemic Analysis

The SIR model is one of the best-known algorithms for predicting the spread of global pandemics within an infectious population.

Susceptible to Infected The “force” of infection for each susceptible unit in an infectious population is β(I/N), where β is equivalent to the transmission rate of said disease. To track the change of those susceptible in an infectious population:

ΔS = −βS(I/N)Δt

Infected to Recovered Over time, the number of those infected changes according to the specified rate of recovery, represented by μ (equivalent to one over the average infectious period, τ), the number of infectious individuals, I, and the change in time, Δt:

ΔI = −μIΔt

Infectious Period Whether a population will be overcome by a pandemic, with regards to the SIR model, is dependent on the value of R0, the “average number of people infected by an infected individual.”

R0 = βτ = β/μ
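These SIR flows can be iterated numerically. A forward-Euler sketch in plain Python; the parameter values are hypothetical, chosen only so that R0 = β/μ = 1.5 > 1 and the epidemic takes off:

```python
def sir_step(s, i, r, beta, mu, n, dt):
    """One step of the SIR difference equations:
    dS = -beta*S*(I/N)*dt, dR = mu*I*dt, and I changes by their difference."""
    new_infections = beta * s * (i / n) * dt
    recoveries = mu * i * dt
    return s - new_infections, i + new_infections - recoveries, r + recoveries

# Hypothetical parameters: beta = 0.3/day, infectious period tau = 5 days,
# so mu = 1/tau = 0.2/day and R0 = beta * tau = 1.5.
n = 10000
s, i, r = n - 10.0, 10.0, 0.0
for _ in range(1000):  # 500 simulated days at dt = 0.5
    s, i, r = sir_step(s, i, r, beta=0.3, mu=0.2, n=n, dt=0.5)
# The population S + I + R is conserved; since R0 > 1, most of it ends up recovered.
```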

Web Link Analysis

Several Web search ranking algorithms use link-based centrality metrics, including (in order of appearance) Marchiori's Hyper Search, Google's PageRank, Kleinberg’s HITS algorithm, the CheiRank and TrustRank algorithms. Link analysis is also conducted in information science and communication science in order to understand and extract information from the structure of collections of web pages. For example, the analysis might be of the interlinking between politicians’ web sites or blogs.

PageRank PageRank works by randomly picking “nodes” or websites and then with a certain probability, “ran- domly jumping” to other nodes. By randomly jumping to these other nodes, it helps PageRank completely traverse the network as some webpages exist on the periphery and would not as readily be assessed.

Each node, xi, has a PageRank defined by the sum, over the pages j that link to i, of one over the outlinks or “out-degree” of j, times the “importance” or PageRank of j:

xi = Σ_{j→i} (1/Nj) xj^(k)

Random Jumping As explained above, PageRank enlists random jumps in attempts to assign PageRank to every website on the internet. These random jumps find websites that might not be found during normal search methodologies such as Breadth-First Search and Depth-First Search. An improvement over the aforementioned formula for determining PageRank is the addition of these random jump components: without the random jumps, some pages would receive a PageRank of 0, which would not be good. The first component is α, the probability that a random jump will occur; contrasting with it is the “damping factor”, 1 − α.

R(p) = α(1/N) + (1 − α) Σ_{j→i} (1/Nj) xj^(k)

Another way of looking at it:

R(A) = RB/B(outlinks) + ... + Rn/n(outlinks)
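The jump-augmented update can be evaluated by power iteration. A minimal sketch in plain Python (it assumes every page has at least one outlink, since dangling pages need special handling; the link-dict format is illustrative):

```python
def pagerank(links, alpha=0.15, iters=100):
    """Iterate x_i <- alpha/N + (1 - alpha) * sum over j->i of x_j / N_j,
    where N_j is the out-degree of page j and alpha is the jump probability."""
    pages = list(links)
    n = len(pages)
    x = {p: 1 / n for p in pages}
    for _ in range(iters):
        x = {p: alpha / n + (1 - alpha) *
                sum(x[j] / len(links[j]) for j in pages if p in links[j])
             for p in pages}
    return x

# Three pages: A and B link to each other and to C; C links back to A only.
ranks = pagerank({"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A"}})
```

The ranks sum to 1, and A, which collects C's entire outflow plus half of B's, comes out on top.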

39.4.5 Centrality measures

Information about the relative importance of nodes and edges in a graph can be obtained through centrality measures, widely used in disciplines like sociology. Centrality measures are essential when a network analysis has to answer questions such as: “Which nodes in the network should be targeted to ensure that a message or information spreads to all or most nodes in the network?” or conversely, “Which nodes should be targeted to curtail the spread of a disease?” Formally established measures of centrality are degree centrality, closeness centrality, betweenness centrality, eigenvector centrality, and Katz centrality. The objective of network analysis generally determines the type of centrality measure(s) to be used.[13]

• Degree centrality of a node in a network is the number of links (edges) incident on the node.

• Closeness centrality determines how “close” a node is to other nodes in a network by measuring the sum of the shortest distances (geodesic paths) between that node and all other nodes in the network.

• Betweenness centrality determines the relative importance of a node by measuring the amount of traffic flowing through that node to other nodes in the network. This is done by measuring the fraction of paths connecting all pairs of nodes and containing the node of interest. Group Betweenness centrality measures the amount of traffic flowing through a group of nodes.[20]

• Eigenvector centrality is a more sophisticated version of degree centrality where the centrality of a node not only depends on the number of links incident on the node but also the quality of those links. This quality factor is determined by the eigenvectors of the adjacency matrix of the network.

• Katz centrality of a node is measured by summing the geodesic paths between that node and all (reachable) nodes in the network. These paths are weighted, paths connecting the node with its immediate neighbors carry higher weights than those which connect with nodes farther away from the immediate neighbors.
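The first two measures in the list above are straightforward to compute. A sketch for degree and closeness centrality on an undirected graph (plain Python; the adjacency-list format is an assumption for illustration):

```python
from collections import deque

def degree_centrality(adj, v):
    """Number of links incident on v."""
    return len(adj[v])

def closeness_centrality(adj, v):
    """Reciprocal of the summed geodesic (shortest-path) distances from v
    to every other node, found here by breadth-first search."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return 1 / sum(d for node, d in dist.items() if node != v)

# Star graph: the hub 0 is the most central node on both measures.
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
```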

39.5 Spread of content in networks

Content in a complex network can spread via two major methods: conserved spread and non-conserved spread.[21] In conserved spread, the total amount of content that enters a complex network remains constant as it passes through. 39.5. SPREAD OF CONTENT IN NETWORKS 255

The model of conserved spread can best be represented by a pitcher containing a fixed amount of water being poured into a series of funnels connected by tubes. Here, the pitcher represents the original source and the water is the content being spread. The funnels and connecting tubing represent the nodes and the connections between nodes, respectively. As the water passes from one funnel into another, it disappears instantly from the funnel that was previously exposed to it. In non-conserved spread, the amount of content changes as it enters and passes through a complex network. The model of non-conserved spread can best be represented by a continuously running faucet pouring water through a series of funnels connected by tubes. Here, the amount of water from the original source is infinite. Also, any funnels that have been exposed to the water continue to experience the water even as it passes into successive funnels. The non-conserved model is the most suitable for explaining the transmission of most infectious diseases.

39.5.1 The SIR Model

In 1927, W. O. Kermack and A. G. McKendrick created a model in which they considered a fixed population with only three compartments: susceptible, S(t); infected, I(t); and recovered, R(t). The three compartments are defined as follows:

• S(t) is used to represent the number of individuals not yet infected with the disease at time t, or those susceptible to the disease

• I(t) denotes the number of individuals who have been infected with the disease and are capable of spreading the disease to those in the susceptible category

• R(t) is the compartment used for those individuals who have been infected and then recovered from the disease. Those in this category are not able to be infected again or to transmit the infection to others.

The flow of this model may be considered as follows:

S → I → R

Using a fixed population, N = S(t) + I(t) + R(t) , Kermack and McKendrick derived the following equations:

dS/dt = −βSI
dI/dt = βSI − γI
dR/dt = γI

Several assumptions were made in the formulation of these equations: First, an individual in the population must be considered as having an equal probability as every other individual of contracting the disease with a rate of β, which is considered the contact or infection rate of the disease. Therefore, an infected individual makes contact and is able to transmit the disease to βN others per unit time, and the fraction of contacts by an infected individual with a susceptible one is S/N. The number of new infections in unit time per infective then is βN(S/N), giving the rate of new infections (or those leaving the susceptible category) as βN(S/N)I = βSI (Brauer & Castillo-Chavez, 2001). For the second and third equations, consider the population leaving the susceptible class as equal to the number entering the infected class. However, a fraction γ of infectives (where γ represents the mean recovery rate, so that 1/γ is the mean infective period) leave this class per unit time to enter the removed class. These processes, which occur simultaneously, are referred to as the Law of Mass Action, a widely accepted idea that the rate of contact between two groups in a population is proportional to the size of each of the groups concerned (Daley & Gani, 2005). Finally, it is assumed that the rate of infection and recovery is much faster than the time scale of births and deaths, and therefore these factors are ignored in this model. More can be read on this model on the Epidemic model page.
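The SIR equations above can be integrated numerically. The following is a minimal sketch using a simple forward-Euler scheme; the values of β, γ, the step size, and the initial compartment fractions are illustrative assumptions, not taken from the source:

```python
# Forward-Euler integration of the Kermack–McKendrick SIR equations
#   dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I
# with compartments expressed as fractions of a fixed population (N = 1).
def simulate_sir(s, i, r, beta, gamma, dt, steps):
    for _ in range(steps):
        new_infections = beta * s * i * dt   # leave S, enter I
        recoveries = gamma * i * dt          # leave I, enter R
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
    return s, i, r

# Illustrative parameters: contact rate 0.5, recovery rate 0.1.
s, i, r = simulate_sir(s=0.99, i=0.01, r=0.0,
                       beta=0.5, gamma=0.1, dt=0.1, steps=1000)
# The population is fixed, so S + I + R stays (numerically) constant.
print(round(s + i + r, 6))
```

Because every term that leaves one compartment enters another, the scheme conserves S + I + R exactly up to floating-point rounding, mirroring the fixed-population assumption N = S(t) + I(t) + R(t).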

39.6 Interdependent networks

Main article: Interdependent networks

An interdependent network is a system of coupled networks where nodes of one or more networks depend on nodes in other networks. Such dependencies are enhanced by the developments in modern technology. Dependencies may lead to cascading failures between the networks, and a relatively small failure can lead to a catastrophic breakdown of the system. Blackouts are a fascinating demonstration of the important role played by the dependencies between networks. A recent study developed a framework to study the cascading failures in a system of interdependent networks.[22][23]

39.7 Network optimization

Network problems that involve finding an optimal way of doing something are studied under the name of combinatorial optimization. Examples include network flow, shortest path problem, transport problem, transshipment problem, location problem, matching problem, assignment problem, packing problem, routing problem, Critical Path Analysis and PERT (Program Evaluation & Review Technique).
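As an illustration of the shortest path problem named above, here is a minimal sketch of Dijkstra's algorithm on a hypothetical weighted directed graph (the graph data and names are invented for the example):

```python
import heapq

def dijkstra(graph, source):
    # graph: node -> list of (neighbor, edge_weight) pairs.
    # Returns the shortest-path distance from `source` to each reachable node.
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry, a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"s": [("a", 2), ("b", 5)], "a": [("b", 1), ("t", 4)], "b": [("t", 1)]}
print(dijkstra(g, "s")["t"])  # s -> a -> b -> t costs 2 + 1 + 1 = 4
```

Most of the other problems listed (flow, matching, assignment) have similarly classical polynomial-time algorithms, while several (packing, general routing) are NP-hard and are usually attacked with heuristics.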

39.8 Network science research centers

• Duke Network Analysis Center[24]

• IBM’s Network Science Research Center (NSRC)[25]

• Network Science Collaborative Technology Alliance (US Army Research Laboratory)[26]

• Network Science and Technology (NEST) Center (Rensselaer Polytechnic Institute)[27]

• CEU Center for Network Science (Central European University, founded in 2009)

• Center for Networks and Relational Analysis (University of California-Irvine)[28]

• Interdisciplinary Center for Network Science and Applications (iCeNSA) (University of Notre Dame)[29]

• Technology Center for Networks & Pathways (Johns Hopkins University)[30]

• Yale Institute of Network Science (YINS)[31]

• Social Cognitive Networks Academic Research Center (SCNARC) at RPI[32]

• Warren Center for Network and Data Sciences at Penn[33]

39.9 Network analysis and visualization tools

• Graph-tool and NetworkX, free and efficient Python modules for manipulation and statistical analysis of networks.

• igraph, an open source C library for the analysis of large-scale complex networks, with interfaces to R, Python and Ruby.

• Orange, a free data mining software suite, module orngNetwork

• Pajek, program for (large) network analysis and visualization.

• Tulip, a free data mining and visualization software dedicated to the analysis and visualization of relational data.

• SEMOSS, an RDF-based open source context-aware analytics tool written in Java leveraging the SPARQL query language.

• ORA, a tool for Dynamic Network Analysis and network visualization.[34]

39.10 See also

• Collaborative innovation network

• Communicative ecology

• Complex network

• Dual-phase evolution

• Quantum complex network

• Glossary of graph theory

• Higher category theory

• Immune network theory

• Irregular warfare

• Polytely

• Systems theory

• Service network

• Erdős–Rényi model

• Random networks

• Non-Linear Preferential Attachment

• Constructal law[35]

• Percolation

• Network theory in risk assessment

• Network topology

• Network analyzer

• Network formation

• Networks in labor economics

• Small-world networks

• Scale-free networks

• Network dynamics

• Sequential dynamical system

• Climate as complex networks

• Structural cut-off

• Rumor spread in social network

39.11 Further reading

• “Network Science Center,” http://www.dodccrp.org/files/Network_Science_Center.asf

• “Connected: The Power of Six Degrees,” http://ivl.slis.indiana.edu/km/movies/2008-talas-connected.mov

• Cohen, R.; Erez, K.; Havlin, S. (2000). “Resilience of the Internet to random breakdown”. Phys. Rev. Lett 85: 4626. doi:10.1103/physrevlett.85.4626.

• Pu, Cun-Lai; Wen-; Pei, Jiang; Michaelson, Andrew (2012). “Robustness analysis of network controllability” (PDF). Physica A: Statistical Mechanics and its Applications 391 (18): 4420–4425. doi:10.1016/j.physa.2012.04.019.

• “The Burgeoning Field of Network Science,” http://themilitaryengineer.com/index.php/tme-articles/tme-past-articles/item/160-leader-profile-the-burgeoning-field-of-network-science

• S.N. Dorogovtsev and J.F.F. Mendes, Evolution of Networks: From biological networks to the Internet and WWW, Oxford University Press, 2003, ISBN 0-19-851590-1

• Linked: The New Science of Networks, A.-L. Barabási (Perseus Publishing, Cambridge)

• Network Science, Committee on Network Science for Future Army Applications, National Research Council, The National Academies Press (2005) ISBN 0-309-10026-7

• Network Science Bulletin, USMA (2007) ISBN 978-1-934808-00-9

• The Structure and Dynamics of Networks, Mark Newman, Albert-László Barabási, & Duncan J. Watts (Princeton University Press, 2006) ISBN 0-691-11357-2

• Dynamical processes on complex networks, Alain Barrat, Marc Barthelemy, Alessandro Vespignani (Cambridge University Press, 2008) ISBN 978-0-521-87950-7

• Network Science: Theory and Applications, Ted G. Lewis (Wiley, March 11, 2009) ISBN 0-470-33188-7

• Nexus: Small Worlds and the Groundbreaking Theory of Networks, Mark Buchanan (W. W. Norton & Company, June 2003) ISBN 0-393-32442-7

• Six Degrees: The Science of a Connected Age, Duncan J. Watts (W. W. Norton & Company, February 17, 2004) ISBN 0-393-32542-3

• netwiki Scientific wiki dedicated to network theory

• New Network Theory International Conference on 'New Network Theory'

• Network Workbench: A Large-Scale Network Analysis, Modeling and Visualization Toolkit

• Network analysis of computer networks

• Network analysis of organizational networks

• Network analysis of terrorist networks

• Network analysis of a disease outbreak

• Link Analysis: An Information Science Approach (book)

• Connected: The Power of Six Degrees (documentary)

• Influential Spreaders in Networks, M. Kitsak, L. K. Gallos, S. Havlin, F. Liljeros, L. Muchnik, H. E. Stanley, H.A. Makse, Nature Physics 6, 888 (2010)

• A short course on complex networks

• A course on complex network analysis by Albert-László Barabási

39.12 External links

• Network Science Center at the U.S. Military Academy at West Point, NY

• http://press.princeton.edu/titles/8114.html

• http://www.cra.org/ccc/NSE.ppt.pdf

• http://www.ifr.ac.uk/netsci08/

• GNET — Group of Complex Systems & Random Networks

• http://www.netsci09.net/

• Cyberinfrastructure

• Prof. Nicholas A Christakis’ introduction to network science in Prospect magazine

• Video Lectures on complex networks by Prof. Shlomo Havlin

39.13 Notes

[1] Committee on Network Science for Future Army Applications (2006). Network Science. National Research Council. ISBN 0309653886.

[2] http://psycnet.apa.org/journals/prs/9/4/172/

[3] Lawyer, Glenn (2014). “Understanding the spreading power of all nodes in a network: a continuous-time perspective”. arXiv. Retrieved July 11, 2014.

[4] Sikic, Mile; Lancic, Alen; Antulov-Fantulin, Nino; Stefancic, Hrvoje (October 2013). “Epidemic centrality -- is there an underestimated epidemic impact of network peripheral nodes?". The European Physical Journal B 86 (10): 1–13. doi:10.1140/epjb/e2013-31025-5.

[5] Borgatti, Stephen P. (2005). “Centrality and Network Flow”. Social Networks (Elsevier) 27: 55–71. doi:10.1016/j.socnet.2004.11.008.

[6] Braha, D. and Bar-Yam, Y. 2006. “From Centrality to Temporary Fame: Dynamic Centrality in Complex Networks.” Complexity 12: 59-63.

[7] Hill,S.A. and Braha, D. 2010. “Dynamic Model of Time-Dependent Complex Networks.” Physical Review E 82, 046105.

[8] Gross, T. and Sayama, H. (Eds.). 2009. Adaptive Networks: Theory, Models and Applications. Springer.

[9] Holme, P. and Saramäki, J. 2013. Temporal Networks. Springer.

[10] R. Albert; A.-L. Barabási (2002). “Statistical mechanics of complex networks” (PDF). Reviews of Modern Physics 74: 47–97. arXiv:cond-mat/0106096. Bibcode:2002RvMP...74...47A. doi:10.1103/RevModPhys.74.47.

[11] Albert-László Barabási & Réka Albert (October 1999). “Emergence of scaling in random networks” (PDF). Science 286 (5439): 509–512. arXiv:cond-mat/9910332. Bibcode:1999Sci...286..509B. doi:10.1126/science.286.5439.509. PMID 10521342.

[12] R. Cohen, S. Havlin (2003). “Scale-free networks are ultrasmall”. Phys. Rev. Lett 90 (5): 058701. doi:10.1103/PhysRevLett.90.058701. PMID 12633404.

[13] Wasserman, Stanley and Katherine Faust. 1994. Social Network Analysis: Methods and Applications. Cambridge: Cambridge University Press.

[14] Newman, M.E.J. Networks: An Introduction. Oxford University Press. 2010, ISBN 978-0199206650

[15] “Toward a Complex Adaptive Intelligence Community The Wiki and the Blog”. D. Calvin Andrus. cia.gov. Retrieved 25 August 2012.

[16] Network analysis of terrorist networks

[17] Barabási, A. L., Gulbahce, N., & Loscalzo, J. (2011). Network medicine: a network-based approach to human disease. Nature Reviews Genetics, 12(1), 56-68.

[18] R. Cohen, S. Havlin (2010). Complex Networks: Structure, Robustness and Function. Cambridge University Press.

[19] A. Bunde, S. Havlin (1996). Fractals and Disordered Systems. Springer.

[20] Puzis, R., Yagil, D., Elovici, Y., Braha, D. (2009) Collaborative attack on Internet users’ anonymity, Internet Research 19(1)

[21] Newman, M., Barabási, A.-L., Watts, D.J. [eds.] (2006) The Structure and Dynamics of Networks. Princeton, N.J.: Princeton University Press.

[22] S. V. Buldyrev, R. Parshani, G. Paul, H. E. Stanley, S. Havlin (2010). “Catastrophic cascade of failures in interdependent networks”. Nature 464 (7291): 1025–28. arXiv:0907.1182. Bibcode:2010Natur.464.1025B. doi:10.1038/nature08932. PMID 20393559.

[23] Jianxi Gao, Sergey V. Buldyrev, Shlomo Havlin, and H. Eugene Stanley (2011). “Robustness of a Network of Networks”. Phys. Rev. Lett 107 (19): 195701. arXiv:1010.5829. Bibcode:2011PhRvL.107s5701G. doi:10.1103/PhysRevLett.107.195701. PMID 22181627.

[24] https://dnac.ssri.duke.edu/about.php

[25] http://www-304.ibm.com/industries/publicsector/us/en/rep/!!/xmlid=229952

[26] http://www.ns-cta.org/ns-cta-blog/

[27] http://www.nest.rpi.edu/

[28] http://lakshmi.calit2.uci.edu/cnra/

[29] http://www.icensa.com/

http://www.hopkinsmedicine.org/institute_basic_biomedical_sciences/research_centers/high_throughput_biology_hit/technology_center_networks_pathways/

[31] http://yins.yale.edu/

[32] http://scnarc.rpi.edu/

[33] http://warrencenter.upenn.edu/

[34] Kathleen M. Carley, 2014, ORA: A Toolkit for Dynamic Network Analysis and Visualization, In Reda Alhajj and Jon Rokne (Eds.) Encyclopedia of Social Network Analysis and Mining, Springer.

[35] Bejan A., Lorente S., The Constructal Law of Design and Evolution in Nature. Philosophical Transactions of the Royal Society B, Biological Science, Vol. 365, 2010, pp. 1335-1347.

Chapter 40

Ordinal number

This article is about the mathematical concept. For number words denoting a position in a sequence (“first”, “second”, “third”, etc.), see Ordinal number (linguistics).

In set theory, an ordinal number, or ordinal, is the order type of a well-ordered set. They are usually identified with hereditarily transitive sets. Ordinals are an extension of the natural numbers different from integers and from cardinals. Like other kinds of numbers, ordinals can be added, multiplied, and exponentiated. Ordinals were introduced by Georg Cantor in 1883[1] to accommodate infinite sequences and to classify derived sets, which he had previously introduced in 1872 while studying the uniqueness of trigonometric series.[2] Two sets S and S' have the same cardinality if there is a bijection between them (i.e. there exists a function f that is both injective and surjective; that is, it maps each element x of S to a unique element y = f(x) of S', and each element y of S' comes from exactly one such element x of S). If a partial order < is defined on set S, and a partial order <' is defined on set S', then the posets (S,<) and (S',<') are order isomorphic if there is a bijection f that preserves the ordering. That is, f(a) <' f(b) if and only if a < b. Every well-ordered set (S,<) is order isomorphic to the set of ordinals less than one specific ordinal number [the order type of (S,<)] under their natural ordering. The finite ordinals (and the finite cardinals) are the natural numbers: 0, 1, 2, …, since any two total orderings of a finite set are order isomorphic. The least infinite ordinal is ω, which is identified with the cardinal number ℵ0. However, in the transfinite case, beyond ω, ordinals draw a finer distinction than cardinals on account of their order information. Whereas there is only one countably infinite cardinal, namely ℵ0 itself, there are uncountably many countably infinite ordinals, namely

ω, ω + 1, ω + 2, …, ω·2, ω·2 + 1, …, ω², …, ω³, …, ω^ω, …, ω^(ω^ω), …, ε0, ….

Here addition and multiplication are not commutative: in particular 1 + ω is ω rather than ω + 1 and likewise, 2·ω is ω rather than ω·2. The set of all countable ordinals constitutes the first uncountable ordinal ω1, which is identified with the cardinal ℵ1 (the next cardinal after ℵ0). Well-ordered cardinals are identified with their initial ordinals, i.e. the smallest ordinal of that cardinality. The cardinality of an ordinal defines a many-to-one association from ordinals to cardinals. In general, each ordinal α is the order type of the set of ordinals strictly less than the ordinal α itself. This property permits every ordinal to be represented as the set of all ordinals less than it. Ordinals may be categorized as: zero, successor ordinals, and limit ordinals (of various cofinalities). Given a class of ordinals, one can identify the α-th member of that class, i.e. one can index (count) them. Such a class is closed and unbounded if its indexing function is continuous and never stops. The Cantor normal form uniquely represents each ordinal as a finite sum of ordinal powers of ω. However, this cannot form the basis of a universal ordinal notation due to such self-referential representations as ε0 = ω^ε0. Larger and larger ordinals can be defined, but they become more and more difficult to describe.

Any ordinal number can be made into a topological space by endowing it with the order topology; this topology is discrete if and only if the ordinal is a countable cardinal, i.e. at most ω. A subset of ω + 1 is open in the order topology if and only if either it is cofinite or it does not contain ω as an element.


Representation of the ordinal numbers up to ω^ω. Each turn of the spiral represents one power of ω.

40.1 Ordinals extend the natural numbers

A natural number (which, in this context, includes the number 0) can be used for two purposes: to describe the size of a set, or to describe the position of an element in a sequence. When restricted to finite sets these two concepts coincide; there is only one way to put a finite set into a linear sequence, up to isomorphism. When dealing with infinite sets one has to distinguish between the notion of size, which leads to cardinal numbers, and the notion of position, which is generalized by the ordinal numbers described here. This is because, while any set has only one size (its cardinality), there are many nonisomorphic well-orderings of any infinite set, as explained below.

Whereas the notion of cardinal number is associated with a set with no particular structure on it, the ordinals are intimately linked with the special kind of sets that are called well-ordered (so intimately linked, in fact, that some mathematicians make no distinction between the two concepts). A well-ordered set is a totally ordered set (given any two elements one defines a smaller and a larger one in a coherent way) in which there is no infinite decreasing sequence (however, there may be infinite increasing sequences); equivalently, every non-empty subset of the set has a least element. Ordinals may be used to label the elements of any given well-ordered set (the smallest element being labelled 0, the one after that 1, the next one 2, “and so on”) and to measure the “length” of the whole set by the least ordinal that is not a label for an element of the set. This “length” is called the order type of the set.

Any ordinal is defined by the set of ordinals that precede it: in fact, the most common definition of ordinals identifies each ordinal as the set of ordinals that precede it. For example, the ordinal 42 is the order type of the ordinals less than it, i.e., the ordinals from 0 (the smallest of all ordinals) to 41 (the immediate predecessor of 42), and it is generally identified as the set {0,1,2,…,41}. Conversely, any set (S) of ordinals that is downward-closed—meaning that for any ordinal α in S and any ordinal β < α, β is also in S—is (or can be identified with) an ordinal. There are infinite ordinals as well: the smallest infinite ordinal is ω, which is the order type of the natural numbers (finite ordinals) and that can even be identified with the set of natural numbers (indeed, the set of natural numbers is well-ordered—as is any set of ordinals—and since it is downward closed it can be identified with the ordinal associated with it, which is exactly how ω is defined).

A graphical “matchstick” representation of the ordinal ω². Each stick corresponds to an ordinal of the form ω·m+n where m and n are natural numbers.

Perhaps a clearer intuition of ordinals can be formed by examining the first few of them: as mentioned above, they start with the natural numbers, 0, 1, 2, 3, 4, 5, … After all natural numbers comes the first infinite ordinal, ω, and after that come ω+1, ω+2, ω+3, and so on. (Exactly what addition means will be defined later on: just consider them as names.) After all of these come ω·2 (which is ω+ω), ω·2+1, ω·2+2, and so on, then ω·3, and then later on ω·4. Now the set of ordinals formed in this way (the ω·m+n, where m and n are natural numbers) must itself have an ordinal associated with it: and that is ω². Further on, there will be ω³, then ω⁴, and so on, and ω^ω, then ω^(ω²), and much later on ε0 (epsilon nought) (to give a few examples of relatively small countable ordinals). This can be continued indefinitely far (“indefinitely far” is exactly what ordinals are good at: basically every time one says “and so on” when enumerating ordinals, it defines a larger ordinal). The smallest uncountable ordinal is the set of all countable ordinals, expressed as ω1.

40.2 Definitions

40.2.1 Well-ordered sets

Further information: Ordered set

In a well-ordered set, every non-empty subset contains a distinct smallest element. Given the axiom of dependent choice, this is equivalent to just saying that the set is totally ordered and there is no infinite decreasing sequence, something perhaps easier to visualize. In practice, the importance of well-ordering is justified by the possibility of applying transfinite induction, which says, essentially, that any property that passes on from the predecessors of an element to that element itself must be true of all elements (of the given well-ordered set). If the states of a computation (computer program or game) can be well-ordered in such a way that each step is followed by a “lower” step, then the computation will terminate.

It is inappropriate to distinguish between two well-ordered sets if they only differ in the “labeling of their elements”, or more formally: if the elements of the first set can be paired off with the elements of the second set such that if one element is smaller than another in the first set, then the partner of the first element is smaller than the partner of the second element in the second set, and vice versa. Such a one-to-one correspondence is called an order isomorphism and the two well-ordered sets are said to be order-isomorphic, or similar (obviously this is an equivalence relation). Provided there exists an order isomorphism between two well-ordered sets, the order isomorphism is unique: this makes it quite justifiable to consider the two sets as essentially identical, and to seek a “canonical” representative of the isomorphism type (class). This is exactly what the ordinals provide, and it also provides a canonical labeling of the elements of any well-ordered set. Essentially, an ordinal is intended to be defined as an isomorphism class of well-ordered sets: that is, as an equivalence class for the equivalence relation of “being order-isomorphic”.
There is a technical difficulty involved, however, in the fact that the equivalence class is too large to be a set in the usual Zermelo–Fraenkel (ZF) formalization of set theory. But this is not a serious difficulty. The ordinal can be said to be the order type of any set in the class.

40.2.2 Definition of an ordinal as an equivalence class

The original definition of ordinal number, found for example in Principia Mathematica, defines the order type of a well-ordering as the set of all well-orderings similar (order-isomorphic) to that well-ordering: in other words, an ordinal number is genuinely an equivalence class of well-ordered sets. This definition must be abandoned in ZF and related systems of axiomatic set theory because these equivalence classes are too large to form a set. However, this definition still can be used in type theory and in Quine’s axiomatic set theory New Foundations and related systems (where it affords a rather surprising alternative solution to the Burali-Forti paradox of the largest ordinal).

40.2.3 Von Neumann definition of ordinals

Rather than defining an ordinal as an equivalence class of well-ordered sets, it will be defined as a particular well- ordered set that (canonically) represents the class. Thus, an ordinal number will be a well-ordered set; and every well-ordered set will be order-isomorphic to exactly one ordinal number. The standard definition, suggested by John von Neumann, is: each ordinal is the well-ordered set of all smaller ordinals. In symbols, λ = [0,λ).[3][4] Formally:

A set S is an ordinal if and only if S is strictly well-ordered with respect to set membership and every element of S is also a subset of S.

Note that the natural numbers are ordinals by this definition. For instance, 2 is an element of 4 = {0, 1, 2, 3}, and 2 is equal to {0, 1} and so it is a subset of {0, 1, 2, 3}. It can be shown by transfinite induction that every well-ordered set is order-isomorphic to exactly one of these ordinals, that is, there is an order-preserving bijective function between them. Furthermore, the elements of every ordinal are ordinals themselves. Given two ordinals S and T, S is an element of T if and only if S is a proper subset of T. Moreover, either S is an element of T, or T is an element of S, or they are equal. So every set of ordinals is totally ordered. Further, every set of ordinals is well-ordered. This generalizes the fact that every set of natural numbers is well-ordered. Consequently, every ordinal S is a set having as elements precisely the ordinals smaller than S. Also, every set of ordinals has a supremum, the ordinal obtained by taking the union of all the ordinals in the set. This union exists regardless of the set’s size, by the axiom of union. The class of all ordinals is not a set. If it were a set, one could show that it was an ordinal and thus a member of itself, which would contradict its strict ordering by membership. This is the Burali-Forti paradox. The class of all ordinals is variously called “Ord”, “ON”, or "∞". An ordinal is finite if and only if the opposite order is also well-ordered, which is the case if and only if each of its subsets has a maximum.
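For finite ordinals, the von Neumann definition can be modeled directly with nested sets. The sketch below is an illustration only (not part of the formal theory): it represents each natural number as the frozenset of all smaller ones, and checks that set membership coincides with proper inclusion, exactly as stated above:

```python
# Finite von Neumann ordinals as nested frozensets: each ordinal is the
# set of all smaller ordinals, so 0 = {}, 1 = {0}, 2 = {0, 1}, ...
def von_neumann(n):
    ordinal = frozenset()
    for _ in range(n):
        ordinal = ordinal | {ordinal}   # successor step: alpha ∪ {alpha}
    return ordinal

two, four = von_neumann(2), von_neumann(4)
# For ordinals, being an element coincides with being a proper subset:
print(two in four)          # 2 is an element of 4 = {0, 1, 2, 3}
print(two < four)           # `<` on frozensets means proper subset
print(len(von_neumann(5)))  # the ordinal n has exactly n elements
```

The successor step mirrors the definition given later for ordinal succession, α + 1 = α ∪ {α}; only finite ordinals can be built this way in a program, since ω would require an infinite set.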

40.2.4 Other definitions

There are other modern formulations of the definition of ordinal. For example, assuming the axiom of regularity, the following are equivalent for a set x:

• x is an ordinal,

• x is a transitive set, and set membership is trichotomous on x,

• x is a transitive set totally ordered by set inclusion,

• x is a transitive set of transitive sets.

These definitions cannot be used in non-well-founded set theories. In set theories with urelements, one has to further make sure that the definition excludes urelements from appearing in ordinals.

40.3 Transfinite sequence

If α is a limit ordinal and X is a set, an α-indexed sequence of elements of X is a function from α to X. This concept, a transfinite sequence or ordinal-indexed sequence, is a generalization of the concept of a sequence. An ordinary sequence corresponds to the case α = ω.

40.4 Transfinite induction

Main article: Transfinite induction

40.4.1 What is transfinite induction?

Transfinite induction holds in any well-ordered set, but it is so important in relation to ordinals that it is worth restating here.

Any property that passes from the set of ordinals smaller than a given ordinal α to α itself, is true of all ordinals.

That is, if P(α) is true whenever P(β) is true for all β<α, then P(α) is true for all α. Or, more practically: in order to prove a property P for all ordinals α, one can assume that it is already known for all smaller β<α.

40.4.2 Transfinite recursion

Transfinite induction can be used not only to prove things, but also to define them. Such a definition is normally said to be by transfinite recursion – the proof that the result is well-defined uses transfinite induction. Let F denote a (class) function to be defined on the ordinals. The idea now is that, in defining F(α) for an unspecified ordinal α, one may assume that F(β) is already defined for all β < α and thus give a formula for F(α) in terms of these F(β). It then follows by transfinite induction that there is one and only one function satisfying the recursion formula up to and including α. Here is an example of definition by transfinite recursion on the ordinals (more will be given later): define function F by letting F(α) be the smallest ordinal not in the set {F(β) | β < α}, that is, the set consisting of all F(β) for β < α. This definition assumes the F(β) known in the very process of defining F; this apparent vicious circle is exactly what definition by transfinite recursion permits. In fact, F(0) makes sense since there is no ordinal β < 0, and the set {F(β) | β < 0} is empty. So F(0) is equal to 0 (the smallest ordinal of all). Now that F(0) is known, the definition applied to F(1) makes sense (it is the smallest ordinal not in the singleton set {F(0)} = {0}), and so on (the and so on is exactly transfinite induction). It turns out that this example is not very exciting, since provably F(α) = α for all ordinals α, which can be shown, precisely, by transfinite induction.
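Restricted to the finite ordinals, the recursion in this example can be checked directly. The sketch below computes F(n) as the smallest value not among {F(m) : m < n} and confirms that F(n) = n, as the text claims:

```python
# F(alpha) = smallest ordinal not in {F(beta) : beta < alpha},
# restricted to finite ordinals (natural numbers).
def F(n):
    previous = {F(m) for m in range(n)}  # the set {F(beta) : beta < alpha}
    candidate = 0
    while candidate in previous:         # find the smallest unused ordinal
        candidate += 1
    return candidate

print([F(n) for n in range(6)])  # → [0, 1, 2, 3, 4, 5]
```

The "apparent vicious circle" described above shows up here as plain recursion: F(n) calls F on all smaller arguments, and the base case F(0) works because the set of previous values is empty.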

40.4.3 Successor and limit ordinals

Any nonzero ordinal has a minimum element, zero. It may or may not have a maximum element. For example, 42 has maximum 41 and ω+6 has maximum ω+5. On the other hand, ω does not have a maximum since there is no largest natural number. If an ordinal has a maximum α, then it is the next ordinal after α, and it is called a successor ordinal, namely the successor of α, written α+1. In the von Neumann definition of ordinals, the successor of α is α ∪ {α} since its elements are those of α and α itself.[3] A nonzero ordinal that is not a successor is called a limit ordinal. One justification for this term is that a limit ordinal is indeed the limit in a topological sense of all smaller ordinals (under the order topology).

When ⟨αι | ι < γ⟩ is an ordinal-indexed sequence, indexed by a limit γ, and the sequence is increasing, i.e. αι < αρ whenever ι < ρ, its limit is defined as the least upper bound of the set {αι | ι < γ}, that is, the smallest ordinal (it always exists) greater than any term of the sequence. In this sense, a limit ordinal is the limit of all smaller ordinals (indexed by itself). Put more directly, it is the supremum of the set of smaller ordinals. Another way of defining a limit ordinal is to say that α is a limit ordinal if and only if:

There is an ordinal less than α and whenever ζ is an ordinal less than α, then there exists an ordinal ξ such that ζ < ξ < α.

So in the following sequence:

0, 1, 2, ... , ω, ω+1

ω is a limit ordinal because for any smaller ordinal (in this example, a natural number) there is another ordinal (natural number) larger than it, but still less than ω. Thus, every ordinal is either zero, or a successor (of a well-defined predecessor), or a limit. This distinction is important, because many definitions by transfinite induction rely upon it. Very often, when defining a function F by transfinite induction on all ordinals, one defines F(0), and F(α+1) assuming F(α) is defined, and then, for limit ordinals δ one defines F(δ) as the limit of the F(β) for all β<δ (either in the sense of ordinal limits, as previously explained, or for some other notion of limit if F does not take ordinal values). Thus, the interesting step in the definition is the successor step, not the limit ordinals. Such functions (especially for F nondecreasing and taking ordinal values) are called continuous. Ordinal addition, multiplication and exponentiation are continuous as functions of their second argument.

40.4.4 Indexing classes of ordinals

Any well-ordered set is similar (order-isomorphic) to a unique ordinal number α; in other words, its elements can be indexed in increasing fashion by the ordinals less than α. This applies, in particular, to any set of ordinals:

any set of ordinals is naturally indexed by the ordinals less than some α. The same holds, with a slight modification, for classes of ordinals (a collection of ordinals, possibly too large to form a set, defined by some property): any class of ordinals can be indexed by ordinals (and, when the class is unbounded in the class of all ordinals, this puts it in class-bijection with the class of all ordinals). So the γ-th element in the class (with the convention that the “0-th” is the smallest, the “1-th” is the next smallest, and so on) can be freely spoken of. Formally, the definition is by transfinite induction: the γ-th element of the class is defined (provided it has already been defined for all β < γ) as the smallest element greater than the β-th element for all β < γ. This could be applied, for example, to the class of limit ordinals: the γ-th ordinal which is either a limit or zero is ω · γ (see ordinal arithmetic for the definition of multiplication of ordinals). Similarly, one can consider additively indecomposable ordinals (meaning a nonzero ordinal that is not the sum of two strictly smaller ordinals): the γ-th additively indecomposable ordinal is indexed as ω^γ. The technique of indexing classes of ordinals is often useful in the context of fixed points: for example, the γ-th ordinal α such that ω^α = α is written εγ. These are called the “epsilon numbers”.

40.4.5 Closed unbounded sets and classes

A class C of ordinals is said to be unbounded, or cofinal, when given any ordinal α , there is a β in C such that α < β (then the class must be a proper class, i.e., it cannot be a set). It is said to be closed when the limit of a sequence of ordinals in the class is again in the class: or, equivalently, when the indexing (class-)function F is continuous in the sense that, for δ a limit ordinal, F (δ) (the δ -th ordinal in the class) is the limit of all F (γ) for γ < δ ; this is also the same as being closed, in the topological sense, for the order topology (to avoid talking of topology on proper classes, one can demand that the intersection of the class with any given ordinal is closed for the order topology on that ordinal, this is again equivalent). Of particular importance are those classes of ordinals that are closed and unbounded, sometimes called clubs. For example, the class of all limit ordinals is closed and unbounded: this translates the fact that there is always a limit ordinal greater than a given ordinal, and that a limit of limit ordinals is a limit ordinal (a fortunate fact if the terminology is to make any sense at all!). The class of additively indecomposable ordinals, or the class of epsilon numbers, or the class of cardinals, are all closed unbounded; the set of regular cardinals, however, is unbounded but not closed, and any finite set of ordinals is closed but not unbounded. A class is stationary if it has a nonempty intersection with every closed unbounded class. All superclasses of closed unbounded classes are stationary, and stationary classes are unbounded, but there are stationary classes that are not closed and stationary classes that have no closed unbounded subclass (such as the class of all limit ordinals with countable cofinality). Since the intersection of two closed unbounded classes is closed and unbounded, the intersection of a stationary class and a closed unbounded class is stationary.
But the intersection of two stationary classes may be empty, e.g. the class of ordinals with cofinality ω with the class of ordinals with uncountable cofinality. Rather than formulating these definitions for (proper) classes of ordinals, one can formulate them for sets of ordinals below a given ordinal α : A subset of a limit ordinal α is said to be unbounded (or cofinal) under α provided any ordinal less than α is less than some ordinal in the set. More generally, we can call a subset of any ordinal α cofinal in α provided every ordinal less than α is less than or equal to some ordinal in the set. The subset is said to be closed under α provided it is closed for the order topology in α , i.e. a limit of ordinals in the set is either in the set or equal to α itself.

40.5 Arithmetic of ordinals

Main article: Ordinal arithmetic

There are three usual operations on ordinals: addition, multiplication, and (ordinal) exponentiation. Each can be de- fined in essentially two different ways: either by constructing an explicit well-ordered set that represents the operation or by using transfinite recursion. Cantor normal form provides a standardized way of writing ordinals. The so-called “natural” arithmetical operations retain commutativity at the expense of continuity.

40.6 Ordinals and cardinals

40.6.1 Initial ordinal of a cardinal

Each ordinal has an associated cardinal, its cardinality, obtained by simply forgetting the order. Any well-ordered set having that ordinal as its order-type has the same cardinality. The smallest ordinal having a given cardinal as its cardinality is called the initial ordinal of that cardinal. Every finite ordinal (natural number) is initial, but most infinite ordinals are not initial. The axiom of choice is equivalent to the statement that every set can be well-ordered, i.e. that every cardinal has an initial ordinal. In this case, it is traditional to identify the cardinal number with its initial ordinal, and we say that the initial ordinal is a cardinal. Cantor used the cardinality to partition ordinals into classes. He referred to the natural numbers as the first number class, the ordinals with cardinality ℵ0 (the countably infinite ordinals) as the second number class and, generally, the ordinals with cardinality ℵn−2 as the n-th number class.[5]

The α-th infinite initial ordinal is written ωα. Its cardinality is written ℵα. For example, the cardinality of ω0 = ω is ℵ0, which is also the cardinality of ω² or ε0 (all are countable ordinals). So (assuming the axiom of choice) we identify ω with ℵ0, except that the notation ℵ0 is used when writing cardinals, and ω when writing ordinals (this is important since, for example, ℵ0² = ℵ0 whereas ω² > ω). Also, ω1 is the smallest uncountable ordinal (to see that it exists, consider the set of equivalence classes of well-orderings of the natural numbers: each such well-ordering defines a countable ordinal, and ω1 is the order type of that set), ω2 is the smallest ordinal whose cardinality is greater than ℵ1, and so on, and ωω is the limit of the ωn for natural numbers n (any limit of cardinals is a cardinal, so this limit is indeed the first cardinal after all the ωn). See also Von Neumann cardinal assignment.

40.6.2 Cofinality

The cofinality of an ordinal α is the smallest ordinal δ that is the order type of a cofinal subset of α . Notice that a number of authors define cofinality or use it only for limit ordinals. The cofinality of a set of ordinals or any other well-ordered set is the cofinality of the order type of that set. Thus for a limit ordinal, there exists a δ -indexed strictly increasing sequence with limit α . For example, the cofinality of ω² is ω, because the sequence ω·m (where m ranges over the natural numbers) tends to ω²; but, more generally, any countable limit ordinal has cofinality ω. An uncountable limit ordinal may have either cofinality ω as does ωω or an uncountable cofinality. The cofinality of 0 is 0. And the cofinality of any successor ordinal is 1. The cofinality of any limit ordinal is at least ω . An ordinal that is equal to its cofinality is called regular and it is always an initial ordinal. Any limit of regular ordinals is a limit of initial ordinals and thus is also initial even if it is not regular, which it usually is not. If the axiom of choice holds, then ωα+1 is regular for each α. In this case, the ordinals 0, 1, ω , ω1 , and ω2 are regular, whereas 2, 3, ωω , and ωω·₂ are initial ordinals that are not regular. The cofinality of any ordinal α is a regular ordinal, i.e. the cofinality of the cofinality of α is the same as the cofinality of α. So the cofinality operation is idempotent.

40.7 Some “large” countable ordinals

For more details on this topic, see Large countable ordinal.

We have already mentioned (see Cantor normal form) the ordinal ε0, which is the smallest satisfying the equation ω^α = α, so it is the limit of the sequence 0, 1, ω, ω^ω, ω^(ω^ω), etc. Many ordinals can be defined in such a manner as fixed points of certain ordinal functions (the ι-th ordinal α such that ω^α = α is called ει, then we could go on trying to find the ι-th ordinal α such that εα = α, “and so on”, but all the subtlety lies in the “and so on”). We can try to do this systematically, but no matter what system is used to define and construct ordinals, there is always an ordinal that lies just above all the ordinals constructed by the system. Perhaps the most important ordinal that limits a system of construction in this manner is the Church–Kleene ordinal, ω1^CK (despite the ω1 in the name, this ordinal is countable), which is the smallest ordinal that cannot in any way be represented by a computable function (this can be made rigorous, of course). Considerably large ordinals can be defined below ω1^CK, however, which measure the

“proof-theoretic strength” of certain formal systems (for example, ε0 measures the strength of Peano arithmetic). Large ordinals can also be defined above the Church-Kleene ordinal, which are of interest in various parts of logic.

40.8 Topology and ordinals

For more details on this topic, see Order topology.

Any ordinal can be made into a topological space in a natural way by endowing it with the order topology. See the Topology and ordinals section of the “Order topology” article.

40.9 Downward closed sets of ordinals

A set is downward closed if anything less than an element of the set is also in the set. If a set of ordinals is downward closed, then that set is an ordinal—the least ordinal not in the set. Examples:

• The set of ordinals less than 3 is 3 = { 0, 1, 2 }, the smallest ordinal not less than 3.

• The set of finite ordinals is infinite, the smallest infinite ordinal: ω.

• The set of countable ordinals is uncountable, the smallest uncountable ordinal: ω1.
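For finite sets of natural numbers, the claim that a downward-closed set is itself an ordinal (an initial segment) can be checked mechanically. A small sketch (Python; the helper name is ours, not standard terminology):

```python
def is_downward_closed(s):
    """True if every natural number below an element of s is also in s."""
    return all(m in s for n in s for m in range(n))

# A downward-closed set of naturals is an initial segment {0, ..., k-1},
# i.e. (in the von Neumann encoding) the least ordinal not in the set.
assert is_downward_closed({0, 1, 2})       # this set is the ordinal 3
assert not is_downward_closed({0, 2})      # 1 is missing, so not an ordinal
assert min(set(range(10)) - {0, 1, 2}) == 3  # the least ordinal not in the set
```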

40.10 See also

• Counting

• Ordinal space

40.11 Notes

[1] Thorough introductions are given by Levy (1979) and Jech (2003).

[2] Hallett, Michael (1979), “Towards a theory of mathematical research programmes. I”, The British Journal for the Philosophy of Science 30 (1): 1–25, doi:10.1093/bjps/30.1.1, MR 532548. See the footnote on p. 12.

[3] von Neumann 1923

[4] Levy (1979, p. 52) attributes the idea to unpublished work of Zermelo in 1916 and several papers by von Neumann in the 1920s.

[5] Dauben (1990:97)

40.12 References

• Cantor, G. (1897), “Beiträge zur Begründung der transfiniten Mengenlehre. II” (tr.: Contributions to the Founding of the Theory of Transfinite Numbers II), Mathematische Annalen 49, pp. 207–246. English translation.

• Conway, J. H. and Guy, R. K. “Cantor’s Ordinal Numbers.” In The Book of Numbers. New York: Springer- Verlag, pp. 266–267 and 274, 1996.

• Dauben, Joseph Warren, (1990), Georg Cantor: his mathematics and philosophy of the infinite. Chapter 5: The Mathematics of Cantor’s Grundlagen. ISBN 0-691-02447-2

• Hamilton, A. G. (1982), Numbers, Sets, and Axioms: the Apparatus of Mathematics, New York: Cambridge University Press, ISBN 0-521-24509-5. See Ch. 6, “Ordinal and cardinal numbers”.

• Kanamori, A., Set Theory from Cantor to Cohen, to appear in: Andrew Irvine and John H. Woods (editors), The Handbook of the Philosophy of Science, volume 4, Mathematics, Cambridge University Press.

• Levy, A. (1979), Basic Set Theory, Berlin, New York: Springer-Verlag. Reprinted 2002, Dover, ISBN 0-486-42079-5.

• Jech, Thomas (2003), Set Theory, Springer Monographs in Mathematics, Berlin, New York: Springer-Verlag

• Sierpiński, W. (1965). Cardinal and Ordinal Numbers (2nd ed.). Warszawa: Państwowe Wydawnictwo Naukowe. Also defines ordinal operations in terms of the Cantor Normal Form.

• Suppes, P. (1960), Axiomatic Set Theory, D. Van Nostrand Company Inc., ISBN 0-486-61630-4

• von Neumann, Johann (1923), “Zur Einführung der transfiniten Zahlen”, Acta litterarum ac scientiarum Regiae Universitatis Hungaricae Francisco-Josephinae, Sectio scientiarum mathematicarum 1: 199–208

• von Neumann, John (January 2002) [1923], “On the introduction of transfinite numbers”, in Jean van Heijenoort, From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931 (3rd ed.), Harvard University Press, pp. 346–354, ISBN 0-674-32449-8. English translation of von Neumann 1923.

40.13 External links

• Hazewinkel, Michiel, ed. (2001), “Ordinal number”, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

• Weisstein, Eric W., “Ordinal Number”, MathWorld.

• Ordinals at ProvenMath

• Beitraege zur Begruendung der transfiniten Mengenlehre – Cantor’s original paper published in Mathematische Annalen 49(2), 1897

• Ordinal calculator – GPL'd free software for computing with ordinals and ordinal notations

• Chapter 4 of Don Monk’s lecture notes on set theory is an introduction to ordinals.

Chapter 41

Power set

For the search engine developer, see Powerset (company).

In mathematics, the power set (or powerset) of any set S, written P(S), ℘(S), 𝒫(S), ℙ(S) or 2^S, is the set of all subsets of S, including the empty set and S itself. In axiomatic set theory (as developed, for example, in the ZFC axioms), the existence of the power set of any set is postulated by the axiom of power set.[1] Any subset of P(S) is called a family of sets over S.

[Figure: the elements of the power set of the set {x, y, z} ordered with respect to inclusion – {x, y, z} on top; {x, y}, {x, z}, {y, z} below it; {x}, {y}, {z} below those; Ø at the bottom.]

41.1 Example

If S is the set {x, y, z}, then the subsets of S are:


• {} (also denoted ∅ , the empty set)
• {x}
• {y}
• {z}
• {x, y}
• {x, z}
• {y, z}
• {x, y, z}

and hence the power set of S is {{}, {x}, {y}, {z}, {x, y}, {x, z}, {y, z}, {x, y, z}}.[2]

41.2 Properties

If S is a finite set with |S| = n elements, then the number of subsets of S is |P(S)| = 2^n. This fact, which is the motivation for the notation 2^S, may be demonstrated simply as follows:

We write any subset of S in the format {ω1, ω2, . . . , ωn} where ωi , 1 ≤ i ≤ n , can take the value of 0 or 1 . If ωi = 1 , the i -th element of S is in the subset; otherwise, the i -th element is not in the subset. Clearly the number of distinct subsets that can be constructed this way is 2^n .

Cantor’s diagonal argument shows that the power set of a set (whether infinite or not) always has strictly higher cardinality than the set itself (informally the power set must be larger than the original set). In particular, Cantor’s theorem shows that the power set of a countably infinite set is uncountably infinite. For example, the power set of the set of natural numbers can be put in a one-to-one correspondence with the set of real numbers (see cardinality of the continuum). The power set of a set S, together with the operations of union, intersection and complement can be viewed as the prototypical example of a Boolean algebra. In fact, one can show that any finite Boolean algebra is isomorphic to the Boolean algebra of the power set of a finite set. For infinite Boolean algebras this is no longer true, but every infinite Boolean algebra can be represented as a subalgebra of a power set Boolean algebra (see Stone’s representation theorem). The power set of a set S forms an abelian group when considered with the operation of symmetric difference (with the empty set as the identity element and each set being its own inverse) and a commutative monoid when considered with the operation of intersection. It can hence be shown (by proving the distributive laws) that the power set considered together with both of these operations forms a Boolean ring.
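These algebraic facts are easy to spot-check on a small example. The sketch below (Python; frozensets model the subsets so they can be collected and compared) verifies the abelian-group laws for symmetric difference and one distributive law of the Boolean ring on the power set of {x, y, z}:

```python
from itertools import combinations

S = {'x', 'y', 'z'}
# The power set of S, as a list of frozensets (frozensets so sets can nest)
P = [frozenset(c) for r in range(len(S) + 1) for c in combinations(sorted(S), r)]
assert len(P) == 8

empty = frozenset()
# Abelian group under symmetric difference (^ on Python sets):
assert all(a ^ empty == a for a in P)             # the empty set is the identity
assert all(a ^ a == empty for a in P)             # each set is its own inverse
assert all(a ^ b == b ^ a for a in P for b in P)  # commutativity
# Intersection distributes over symmetric difference (a Boolean-ring law):
assert all(a & (b ^ c) == (a & b) ^ (a & c) for a in P for b in P for c in P)
```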

41.3 Representing subsets as functions

In set theory, X^Y is the set of all functions from Y to X. As “2” can be defined as {0,1} (see natural number), 2^S (i.e., {0,1}^S) is the set of all functions from S to {0,1}. By identifying a function in 2^S with the corresponding preimage of 1, we see that there is a bijection between 2^S and P(S) , where each function is the characteristic function of the subset in P(S) with which it is identified. Hence 2^S and P(S) could be considered identical set-theoretically. (Thus there are two distinct notational motivations for denoting the power set by 2^S: the fact that this function-representation of subsets makes it a special case of the X^Y notation and the property, mentioned above, that |2^S| = 2^|S|.) This notion can be applied to the example above in which S = {x, y, z} to see the isomorphism with the binary numbers from 0 to 2^n − 1 with n being the number of elements in the set. In S, a 1 in the position corresponding to the location in the set indicates the presence of the element. So {x, y} = 110. For the whole power set of S we get:

• { } = 000 (Binary) = 0 (Decimal)

• {x} = 100 = 4

• {y} = 010 = 2

• {z} = 001 = 1

• {x, y} = 110 = 6

• {x, z} = 101 = 5

• {y, z} = 011 = 3

• {x, y, z} = 111 = 7
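The correspondence in the table above can be sketched as a bitmask decoder (Python; the convention that the leftmost bit stands for x follows the table):

```python
elements = ['x', 'y', 'z']
n = len(elements)

def subset_of(mask):
    """Decode a bitmask into a subset; bit i (counted from the left) selects elements[i]."""
    return {elements[i] for i in range(n) if mask & (1 << (n - 1 - i))}

# Reproduces the table above: {x, y} = 110 (binary) = 6 (decimal), etc.
assert subset_of(0b110) == {'x', 'y'}
assert subset_of(0b101) == {'x', 'z'}
# Running the mask from 0 to 2^n - 1 enumerates the whole power set exactly once.
all_subsets = [subset_of(m) for m in range(2 ** n)]
assert len(all_subsets) == 8
assert all_subsets.count({'x', 'y', 'z'}) == 1
```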

41.4 Relation to binomial theorem

The power set is closely related to the binomial theorem. The number of sets with k elements in the power set of a set with n elements is the number of combinations C(n, k), also called a binomial coefficient. For example, the power set of a set with three elements has:

• C(3, 0) = 1 set with 0 elements

• C(3, 1) = 3 sets with 1 element

• C(3, 2) = 3 sets with 2 elements

• C(3, 3) = 1 set with 3 elements.
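A one-line check of these counts, using the standard-library `math.comb` (Python):

```python
from math import comb

n = 3
counts = [comb(n, k) for k in range(n + 1)]
assert counts == [1, 3, 3, 1]          # the four counts listed above
# Summing over all subset sizes recovers the total number of subsets, 2^n:
assert sum(counts) == 2 ** n == 8
```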

41.5 Algorithms

If S is a finite set, there is a recursive algorithm to calculate P(S) . Define the operation F(e, T) = {X ∪ {e} | X ∈ T}. In English: return the set obtained by adding the element e to each set X in T .

• If S = {} , then P(S) = {{}} is returned.

• Otherwise:

• Let e be any single element of S .
• Let T = S \ {e} , where S \ {e} denotes the relative complement of {e} in S .
• And the result: P(S) = P(T) ∪ F(e, P(T)) is returned.

In other words, the power set of the empty set is the set containing the empty set and the power set of any other set is all the subsets of the set containing some specific element and all the subsets of the set not containing that specific element.
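The recursion above translates almost line for line into code. A sketch (Python; subsets are collected in a list because Python sets cannot contain mutable sets):

```python
def power_set(s):
    """Recursive power set, following the algorithm above directly."""
    s = set(s)
    if not s:
        return [set()]                  # P({}) = {{}}
    e = next(iter(s))                   # any single element e of S
    t = s - {e}                         # relative complement S \ {e}
    pt = power_set(t)
    # F(e, P(T)): add e to each subset in P(T), then take the union with P(T)
    return pt + [x | {e} for x in pt]

result = power_set({'x', 'y', 'z'})
assert len(result) == 8
assert set() in result and {'x', 'y', 'z'} in result
```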

41.6 Subsets of limited cardinality

The set of subsets of S of cardinality less than κ is denoted by Pκ(S) or P<κ(S) . Similarly, the set of non-empty subsets of S might be denoted by P≥1(S) .

41.7 Power object

A set can be regarded as an algebra having no nontrivial operations or defining equations. From this perspective the idea of the power set of X as the set of subsets of X generalizes naturally to the subalgebras of an algebraic structure or algebra. Now the power set of a set, when ordered by inclusion, is always a complete atomic Boolean algebra, and every complete atomic Boolean algebra arises as the lattice of all subsets of some set. The generalization to arbitrary algebras is that the set of subalgebras of an algebra, again ordered by inclusion, is always an algebraic lattice, and every algebraic lattice arises as the lattice of subalgebras of some algebra. So in that regard subalgebras behave analogously to subsets. However there are two important properties of subsets that do not carry over to subalgebras in general. First, although the subsets of a set form a set (as well as a lattice), in some classes it may not be possible to organize the subalgebras of an algebra as itself an algebra in that class, although they can always be organized as a lattice. Secondly, whereas the subsets of a set are in bijection with the functions from that set to the set {0,1} = 2, there is no guarantee that a class of algebras contains an algebra that can play the role of 2 in this way. Certain classes of algebras enjoy both of these properties. The first property is more common, the case of having both is relatively rare. One class that does have both is that of multigraphs. Given two multigraphs G and H, a homomorphism h: G → H consists of two functions, one mapping vertices to vertices and the other mapping edges to edges. The set HG of homomorphisms from G to H can then be organized as the graph whose vertices and edges are respectively the vertex and edge functions appearing in that set. 
Furthermore the subgraphs of a multigraph G are in bijection with the graph homomorphisms from G to the multigraph Ω definable as the complete directed graph on two vertices (hence four edges, namely two self-loops and two more edges forming a cycle) augmented with a fifth edge, namely a second self-loop at one of the vertices. We can therefore organize the subgraphs of G as the multigraph ΩG, called the power object of G. What is special about a multigraph as an algebra is that its operations are unary. A multigraph has two sorts of elements forming a set V of vertices and E of edges, and has two unary operations s,t: E → V giving the source (start) and target (end) vertices of each edge. An algebra all of whose operations are unary is called a presheaf. Every class of presheaves contains a presheaf Ω that plays the role for subalgebras that 2 plays for subsets. Such a class is a special case of the more general notion of elementary topos as a category that is closed (and moreover cartesian closed) and has an object Ω, called a subobject classifier. Although the term “power object” is sometimes used synonymously with exponential object YX, in topos theory Y is required to be Ω.

41.8 Functors and quantifiers

In category theory and the theory of elementary topoi, the universal quantifier can be understood as the right adjoint of a functor between power sets, the inverse image functor of a function between sets; likewise, the existential quantifier is the left adjoint.[3]

41.9 See also

• Set theory
• Axiomatic set theory
• Family of sets
• Field of sets

41.10 Notes

[1] Devlin (1979) p.50

[2] Puntambekar, A.A. (2007). Theory Of Automata And Formal Languages. Technical Publications. pp. 1–2. ISBN 978-81-8431-193-8.

[3] Saunders Mac Lane, Ieke Moerdijk, (1992) Sheaves in Geometry and Logic Springer-Verlag. ISBN 0-387-97710-4 See page 58

41.11 References

• Devlin, Keith J. (1979). Fundamentals of contemporary set theory. Universitext. Springer-Verlag. ISBN 0-387-90441-7. Zbl 0407.04003.

• Halmos, Paul R. (1960). Naive set theory. The University Series in Undergraduate Mathematics. van Nostrand Company. Zbl 0087.04403.

• Puntambekar, A.A. (2007). Theory Of Automata And Formal Languages. Technical Publications. ISBN 978-81-8431-193-8.

41.12 External links

• Weisstein, Eric W., “Power Set”, MathWorld.

• Power set at PlanetMath.org.
• Power set in nLab

• Power object in nLab

Chapter 42

Quantum graph

In mathematics and physics, a quantum graph is a linear, network-shaped structure of vertices connected by bonds (or edges) with a differential or pseudo-differential operator acting on functions defined on the bonds. Such systems were first studied by Linus Pauling as models of free electrons in organic molecules in the 1930s. They arise in a variety of mathematical contexts, e.g. as model systems in quantum chaos, in the study of waveguides, in photonic crystals and in Anderson localization, or as the limit of shrinking thin wires. Quantum graphs have become prominent models in mesoscopic physics used to obtain a theoretical understanding of nanotechnology. Another, simpler notion of quantum graphs was introduced by Freedman et al.[1]

42.1 Metric graphs

A metric graph embedded in the plane with three open edges. The dashed line denotes the metric distance between two points x and y .

A metric graph is a graph consisting of a set V of vertices and a set E of edges where each edge e = (v1, v2) ∈ E has been associated with an interval [0, Le] so that xe is the coordinate on the interval, the vertex v1 corresponds to xe = 0 and v2 to xe = Le, or vice versa. The choice of which vertex lies at zero is arbitrary, with the alternative corresponding to a change of coordinate on the edge. The graph has a natural metric: for two points x, y on the graph, ρ(x, y) is the shortest distance between them, where distance is measured along the edges of the graph. Open graphs: in the combinatorial graph model edges always join pairs of vertices; however, in a quantum graph one may also consider semi-infinite edges. These are edges associated with the interval [0, ∞) attached to a single vertex at xe = 0 . A graph with one or more such open edges is referred to as an open graph.
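Between vertices, the natural metric ρ is an ordinary shortest-path distance, which can be sketched with Dijkstra's algorithm (Python; the edge-dictionary representation is a choice of this example, and distances between interior points of edges would additionally need the coordinates xe):

```python
import heapq

def graph_distance(edges, u, v):
    """Shortest-path distance between two *vertices* of a metric graph.
    `edges` maps (a, b) vertex pairs to edge lengths L_e."""
    adj = {}
    for (a, b), length in edges.items():
        adj.setdefault(a, []).append((b, length))
        adj.setdefault(b, []).append((a, length))
    dist = {u: 0.0}
    heap = [(0.0, u)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == v:
            return d
        if d > dist.get(node, float('inf')):
            continue  # stale heap entry
        for nbr, length in adj.get(node, []):
            nd = d + length
            if nd < dist.get(nbr, float('inf')):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return float('inf')

# Triangle with lengths 1, 1, 3: the two-edge route beats the direct edge.
edges = {('a', 'b'): 1.0, ('b', 'c'): 1.0, ('a', 'c'): 3.0}
assert graph_distance(edges, 'a', 'c') == 2.0
```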


42.2 Quantum graphs

Quantum graphs are metric graphs equipped with a differential (or pseudo-differential) operator acting on functions on the graph. A function f on a metric graph is defined as the |E|-tuple of functions fe(xe) on the intervals. The Hilbert space of the graph is ⊕_{e∈E} L²([0, Le]) , where the inner product of two functions is

⟨f, g⟩ = Σ_{e∈E} ∫_0^{Le} fe*(xe) ge(xe) dxe ,

Le may be infinite in the case of an open edge. The simplest example of an operator on a metric graph is the Laplace operator. The operator on an edge is −d²/dxe² , where xe is the coordinate on the edge. To make the operator self-adjoint a suitable domain must be specified. This is typically achieved by taking the Sobolev space H² of functions on the edges of the graph and specifying matching conditions at the vertices. The trivial example of matching conditions that make the operator self-adjoint are the Dirichlet boundary conditions, fe(0) = fe(Le) = 0 for every edge. An eigenfunction on a finite edge may be written as

fe(xe) = sin(nπxe/Le) for integer n . If the graph is closed with no infinite edges and the lengths of the edges of the graph are rationally independent then an eigenfunction is supported on a single graph edge and the eigenvalues are n²π²/Le² . The Dirichlet conditions don't allow interaction between the intervals so the spectrum is the same as that of the set of disconnected edges. More interesting self-adjoint matching conditions that allow interaction between edges are the Neumann or natural matching conditions. A function f in the domain of the operator is continuous everywhere on the graph and the sum of the outgoing derivatives at a vertex is zero,
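The Dirichlet spectrum n²π²/Le² on a single edge can be checked numerically by discretizing −d²/dxe² with second differences (Python with NumPy; the grid size is an arbitrary choice of this sketch):

```python
import numpy as np

# Discretize -d^2/dx^2 on one edge [0, L] with Dirichlet ends.
L = 1.0
N = 400                        # number of interior grid points (sketch parameter)
h = L / (N + 1)
main = np.full(N, 2.0 / h**2)  # second-difference stencil (-1, 2, -1) / h^2
off = np.full(N - 1, -1.0 / h**2)
T = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
eigs = np.sort(np.linalg.eigvalsh(T))

# The lowest eigenvalues should approach n^2 pi^2 / L^2.
for n in (1, 2, 3):
    exact = (n * np.pi / L) ** 2
    assert abs(eigs[n - 1] - exact) / exact < 1e-3
```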

Σ_{e∼v} fe′(v) = 0 ,

where fe′(v) = fe′(0) if the vertex v is at xe = 0 and fe′(v) = −fe′(Le) if v is at xe = Le . The properties of other operators on metric graphs have also been studied.

• These include the more general class of Schrödinger operators, (i d/dxe + Ae(xe))² + Ve(xe) ,

where Ae is a “magnetic vector potential” on the edge and Ve is a scalar potential.

• Another example is the Dirac operator on a graph which is a matrix valued operator acting on vector valued functions that describe the quantum mechanics of particles with an intrinsic angular momentum of one half such as the electron.

• The Dirichlet-to-Neumann operator on a graph is a pseudo-differential operator that arises in the study of photonic crystals.

42.3 Theorems

All self-adjoint matching conditions of the Laplace operator on a graph can be classified according to a scheme of Kostrykin and Schrader. In practice, it is often more convenient to adopt a formalism introduced by Kuchment, see,[2] which automatically yields an operator in variational form.

Let v be a vertex with d edges emanating from it. For simplicity we choose the coordinates on the edges so that v lies at xe = 0 for each edge meeting at v . For a function f on the graph let

f = (fe1(0), fe2(0), . . . , fed(0))ᵀ , f′ = (fe1′(0), fe2′(0), . . . , fed′(0))ᵀ . Matching conditions at v can be specified by a pair of matrices A and B through the linear equation,

Af + Bf′ = 0.

The matching conditions define a self-adjoint operator if (A, B) has the maximal rank d and AB∗ = BA∗ . The spectrum of the Laplace operator on a finite graph can be conveniently described using a scattering matrix approach introduced by Kottos and Smilansky.[3][4] The eigenvalue problem on an edge is,

−d²fe(xe)/dxe² = k² fe(xe) . So a solution on the edge can be written as a linear combination of plane waves,

fe(xe) = ce e^{ikxe} + ĉe e^{−ikxe} ,

where, in a time-dependent Schrödinger equation, c is the coefficient of the outgoing plane wave at 0 and ĉ the coefficient of the incoming plane wave at 0 . The matching conditions at v define a scattering matrix

S(k) = −(A + ikB)^{−1}(A − ikB) .

The scattering matrix relates the vectors of incoming and outgoing plane-wave coefficients at v , c = S(k)ĉ . For self-adjoint matching conditions S is unitary. An element σ(uv)(vw) of S is a complex transition amplitude from a directed edge (uv) to the edge (vw) , which in general depends on k . However, for a large class of matching conditions the S-matrix is independent of k . With Neumann matching conditions, for example,

  1 −1 0 0 ...    −  0 0 ... 0  0 1 1 0 ...   . . .   . .   . . .  A =  .. ..  ,B =   .    0 0 ... 0   0 ... 0 1 −1  1 1 ... 1 0 ... 0 0 0

Substituting in the equation for S produces k -independent transition amplitudes

σ(uv)(vw) = 2/d − δuw ,

where δuw is the Kronecker delta function that is one if u = w and zero otherwise. From the transition amplitudes we may define a 2|E| × 2|E| matrix

U(uv)(lm)(k) = δvl σ(uv)(vm)(k) e^{ikL(uv)} .

U is called the bond scattering matrix and can be thought of as a quantum evolution operator on the graph. It is unitary and acts on the vector of 2|E| plane-wave coefficients for the graph, where c(uv) is the coefficient of the plane wave traveling from u to v . The phase e^{ikL(uv)} is the phase acquired by the plane wave when propagating from vertex u to vertex v .
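The claim that the Neumann S-matrix is k-independent, with entries 2/d − δuw, can be verified directly from S(k) = −(A + ikB)⁻¹(A − ikB) (Python with NumPy; d = 3 and the value of k are arbitrary choices of this sketch):

```python
import numpy as np

d = 3                          # number of edges meeting at the vertex
k = 1.7                        # any k > 0; the result should not depend on it
# Neumann matching: continuity rows in A, a derivative-sum row in B.
A = np.zeros((d, d))
for i in range(d - 1):
    A[i, i], A[i, i + 1] = 1.0, -1.0
B = np.zeros((d, d))
B[d - 1, :] = 1.0

S = -np.linalg.inv(A + 1j * k * B) @ (A - 1j * k * B)
expected = (2.0 / d) * np.ones((d, d)) - np.eye(d)   # sigma = 2/d - delta
assert np.allclose(S, expected)
assert np.allclose(S @ S.conj().T, np.eye(d))        # unitarity of S
```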

Quantization condition: An eigenfunction on the graph can be defined through its associated 2|E| plane-wave coefficients. As the eigenfunction is stationary under the quantum evolution a quantization condition for the graph can be written using the evolution operator.

|U(k) − I| = 0.

Eigenvalues kj occur at values of k where the matrix U(k) has an eigenvalue one. We will order the spectrum with 0 ⩽ k0 ⩽ k1 ⩽ . . . . The first trace formula for a graph was derived by Roth (1983). In 1997 Kottos and Smilansky used the quantization condition above to obtain the following trace formula for the Laplace operator on a graph when the transition amplitudes are independent of k . The trace formula links the spectrum with periodic orbits on the graph.

d(k) := \sum_{j=0}^{\infty} \delta(k - k_j) = \frac{L}{\pi} + \frac{1}{\pi} \sum_{p} \frac{L_p}{r_p} A_p \cos(k L_p).

d(k) is called the density of states. The right-hand side of the trace formula is made up of two terms: the Weyl term L/π, which gives the mean density of eigenvalues, and the oscillating part, a sum over all periodic orbits p = (e1, e2, ..., en) on the graph. Lp = Σ_{e∈p} Le is the length of the orbit and L = Σ_{e∈E} Le is the total length of the graph. For an orbit generated by repeating a shorter primitive orbit, rp counts the number of repetitions. Ap = σe1e2 σe2e3 ... σene1 is the product of the transition amplitudes at the vertices of the graph around the orbit.
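As a concrete check on the quantization condition, the following sketch (our own construction, not taken from the literature cited above) builds the 2×2 bond scattering matrix for a single edge of length L with Neumann conditions at both degree-1 endpoints. There σ = 2/d − δ = 1, and det(U(k) − I) vanishes exactly at k = nπ/L, which is indeed the Neumann spectrum of an interval of length L.

```python
import cmath
import math

L = 1.0  # edge length

def U(k):
    # directed edges: 0 = (u->v), 1 = (v->u); at a degree-1 Neumann vertex
    # the only allowed transition reverses the edge, with amplitude sigma = 1
    ph = cmath.exp(1j * k * L)
    return [[0, ph], [ph, 0]]

def det_U_minus_I(k):
    # 2x2 determinant of U(k) - I, computed by hand
    m = U(k)
    return (m[0][0] - 1) * (m[1][1] - 1) - m[0][1] * m[1][0]

# quantization condition det(U(k) - I) = 0 holds at k = n*pi/L
for n_ in range(1, 4):
    assert abs(det_U_minus_I(n_ * math.pi / L)) < 1e-9
```

Analytically det(U(k) − I) = 1 − e^{2ikL}, so the zeros at k = nπ/L can be read off directly; the point of the sketch is that they emerge from the general σ and U machinery.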

42.4 Applications

Naphthalene molecule

Quantum graphs were first employed in the 1930s to model the spectrum of free electrons in organic molecules like naphthalene, see figure. As a first approximation the atoms are taken to be vertices, while the σ-electrons form bonds that fix a frame in the shape of the molecule on which the free electrons are confined.

A similar problem appears when considering quantum waveguides. These are mesoscopic systems: systems built with a width on the scale of nanometers. A quantum waveguide can be thought of as a fattened graph where the edges are thin tubes. The spectrum of the Laplace operator on this domain converges to the spectrum of the Laplace operator on the graph under certain conditions. Understanding mesoscopic systems plays an important role in the field of nanotechnology.

In 1997 Kottos and Smilansky proposed quantum graphs as a model to study quantum chaos, the quantum mechanics of systems that are classically chaotic. Classical motion on the graph can be defined as a probabilistic Markov chain where the probability of scattering from edge e to edge f is given by the absolute value of the quantum transition amplitude squared, |σef|². For almost all finite connected quantum graphs the probabilistic dynamics is ergodic and mixing, in other words chaotic.

Quantum graphs embedded in two or three dimensions appear in the study of photonic crystals. In two dimensions a simple model of a photonic crystal consists of polygonal cells of a dense dielectric with narrow interfaces between the cells filled with air. Studying dielectric modes that stay mostly in the dielectric gives rise to a pseudo-differential operator on the graph that follows the narrow interfaces.

Periodic quantum graphs like the lattice in ℝ² are common models of periodic systems, and quantum graphs have been applied to study the phenomenon of Anderson localization, where localized states occur at the edge of spectral bands in the presence of disorder.

42.5 See also

• Event symmetry

• Schild’s Ladder, for fictional quantum graph theory

• Feynman diagram

42.6 References

[1] M. Freedman, L. Lovász & A. Schrijver, Reflection positivity, rank connectivity, and homomorphism of graphs, J. Amer. Math. Soc. 20, 37-51 (2007); MR2257396

[2] P. Kuchment, Quantum graphs I. Some basic structures, Waves in Random Media 14, S107-S128 (2004)

[3] T. Kottos & U. Smilansky, Periodic Orbit Theory and Spectral Statistics for Quantum Graphs, Annals of Physics 274 76-124 (1999)

[4] S. Gnutzman & U. Smilansky, Quantum graphs: applications to quantum chaos and universal spectral statistics, Adv. Phys. 55 527-625 (2006)

• Quantum graphs on arxiv.org

Chapter 43

Quiver (mathematics)

In mathematics, a quiver is a directed graph in which loops and multiple arrows between two vertices are allowed, i.e. a multidigraph. Quivers are commonly used in representation theory: a representation V of a quiver assigns a vector space V(x) to each vertex x of the quiver and a linear map V(a) to each arrow a.

In category theory, a quiver can be understood as the underlying structure of a category, without identity morphisms and composition. That is, there is a forgetful functor from Cat to Quiv. Its left adjoint is the free functor which takes a quiver to the corresponding free category.

43.1 Definition

A quiver Γ consists of:

• The set V of vertices of Γ

• The set E of edges of Γ

• Two functions: s: E → V giving the start or source of each edge, and t: E → V giving its target.

This definition is identical to that of a multidigraph.

A morphism of quivers is defined as follows. If Γ = (V, E, s, t) and Γ′ = (V′, E′, s′, t′) are two quivers, then a morphism m = (mv, me) of quivers consists of two functions mv: V → V′ and me: E → E′ such that the following diagrams commute:

mv ◦ s = s′ ◦ me

and

mv ◦ t = t′ ◦ me
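The two commuting conditions above are easy to check mechanically. A minimal sketch (the quivers and names are ours, purely illustrative, no library assumed): a quiver stored as (V, E, s, t) and a candidate morphism verified against both conditions.

```python
# two hypothetical quivers, each with one arrow
Q1 = dict(V={"x", "y"}, E={"a"}, s={"a": "x"}, t={"a": "y"})
Q2 = dict(V={"u", "v"}, E={"e"}, s={"e": "u"}, t={"e": "v"})

def is_morphism(mv, me, Q, Qp):
    # check mv∘s = s'∘me and mv∘t = t'∘me on every arrow
    return all(
        mv[Q["s"][a]] == Qp["s"][me[a]] and mv[Q["t"][a]] == Qp["t"][me[a]]
        for a in Q["E"]
    )

mv = {"x": "u", "y": "v"}
me = {"a": "e"}
assert is_morphism(mv, me, Q1, Q2)
```

Mapping both x and y to u would fail, since mv would then send the head of a to u while me sends a to an arrow whose head is v.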

43.2 Category-theoretic definition

The above definition is based on set theory; the category-theoretic definition generalizes this into a functor from the free quiver to the category of sets.

The free quiver (also called the walking quiver, Kronecker quiver, 2-Kronecker quiver or Kronecker category) Q is a category with two objects and four morphisms: the objects are V and E, and the four morphisms are s: E → V, t: E → V, and the identity morphisms idV: V → V and idE: E → E. That is, the free quiver is


s, t : E ⇒ V

A quiver is then a functor Γ: Q → Set. More generally, a quiver in a category C is a functor Γ: Q → C. The category Quiv(C) of quivers in C is the functor category where:

• objects are functors Γ: Q → C, • morphisms are natural transformations between functors.

Note that Quiv is the category of presheaves on the opposite category Q^op.

43.3 Path algebra

If Γ is a quiver, then a path in Γ is a sequence of arrows an an₋₁ ... a3 a2 a1 such that the head of ai₊₁ is the tail of ai, using the convention of concatenating paths from right to left.

If K is a field then the quiver algebra or path algebra KΓ is defined as a vector space having all the paths (of length ≥ 0) in the quiver as basis (including, for each vertex i of the quiver Γ, a trivial path ei of length 0; these paths are not assumed to be equal for different i), with multiplication given by concatenation of paths. If two paths cannot be concatenated because the end vertex of the first is not equal to the starting vertex of the second, their product is defined to be zero. This defines an associative algebra over K. This algebra has a unit element if and only if the quiver has only finitely many vertices. In this case, the modules over KΓ are naturally identified with the representations of Γ. If the quiver has infinitely many vertices, then KΓ has an approximate identity given by eE := Σ_{v∈E} 1v, where E ranges over finite subsets of the vertex set of Γ.

If the quiver has finitely many vertices and arrows, and the end vertex and starting vertex of any path are always distinct (i.e. Γ has no oriented cycles), then KΓ is a finite-dimensional hereditary algebra over K, and conversely any such finite-dimensional hereditary algebra over K is isomorphic to the path algebra of its Ext quiver.
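The basis-path multiplication just described can be sketched in a few lines. This is an assumed toy representation, not a library API: a path is a pair (start vertex, tuple of arrows), trivial paths have an empty arrow tuple, and the product of non-composable paths is None, standing in for zero of KΓ.

```python
# hypothetical quiver 1 --a--> 2 --b--> 3; arrow: (tail, head)
arrows = {"a": ("1", "2"), "b": ("2", "3")}

def source(path):
    start, seq = path
    return start

def target(path):
    start, seq = path
    return arrows[seq[-1]][1] if seq else start

def mul(p, q):
    # right-to-left convention: p*q traverses q first, then p
    if source(p) != target(q):
        return None  # zero of the path algebra
    return (source(q), q[1] + p[1])

e1, e2 = ("1", ()), ("2", ())          # trivial paths
a, b = ("1", ("a",)), ("2", ("b",))    # length-1 paths
assert mul(b, a) == ("1", ("a", "b"))  # composable: the path "a then b"
assert mul(a, b) is None               # not composable: zero
assert mul(a, e1) == a and mul(e2, a) == a  # trivial paths act as local units
```

The last line illustrates why the trivial paths e_i form a complete set of orthogonal idempotents, and why KΓ has a unit (the sum of all e_i) exactly when there are finitely many vertices.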

43.4 Representations of quivers

A representation of a quiver Q is an association of an R-module to each vertex of Q, and a morphism between each module for each arrow.

A representation V of a quiver Q is said to be trivial if V(x) = 0 for all vertices x in Q.

A morphism f: V → V′ between representations of the quiver Q is a collection of linear maps f(x): V(x) → V′(x) such that for every arrow a in Q from x to y, V′(a)f(x) = f(y)V(a), i.e. the squares that f forms with the arrows of V and V′ all commute. A morphism f is an isomorphism if f(x) is invertible for all vertices x in the quiver. With these definitions the representations of a quiver form a category.

If V and W are representations of a quiver Q, then the direct sum of these representations, V ⊕ W, is defined by (V ⊕ W)(x) = V(x) ⊕ W(x) for all vertices x in Q, with (V ⊕ W)(a) the direct sum of the linear mappings V(a) and W(a).

A representation is said to be decomposable if it is isomorphic to a direct sum of non-zero representations.

A categorical definition of a quiver representation can also be given. The quiver itself can be considered a category, where the vertices are objects and paths are morphisms. Then a representation of Q is just a covariant functor from this category to the category of finite-dimensional vector spaces. Morphisms of representations of Q are precisely natural transformations between the corresponding functors.

For a finite quiver Γ (a quiver with finitely many vertices and edges), let KΓ be its path algebra. Let ei denote the trivial path at vertex i. Then we can associate to the vertex i the projective KΓ-module KΓei consisting of linear combinations of paths which have starting vertex i. This corresponds to the representation of Γ obtained by putting a copy of K at each vertex which lies on a path starting at i and 0 on each other vertex. To each edge joining two copies of K we associate the identity map.
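The commuting-square condition for a morphism of representations can be verified concretely. A minimal sketch (the matrices are made up for illustration, no library assumed): two representations of the one-arrow quiver x → a → y, with f(x) = f(y) = F invertible, so f is an isomorphism of representations.

```python
def matmul(A, B):
    # naive matrix product over the rationals/integers
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

V_a = [[1, 0], [0, 2]]   # V(a): V(x) = K^2 -> V(y) = K^2
F = [[1, 1], [0, 1]]     # f(x) = f(y) = F, invertible
W_a = [[1, 1], [0, 2]]   # W(a), chosen as F V(a) F^{-1}

# the defining condition: W(a) ∘ f(x) == f(y) ∘ V(a)
assert matmul(W_a, F) == matmul(F, V_a)
```

Since F is invertible at every vertex, V and W are isomorphic representations even though V(a) and W(a) are different matrices: representations are classified only up to such conjugation.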

43.5 Gabriel’s theorem

Main article: Gabriel’s theorem

A quiver is of finite type if it has only finitely many isomorphism classes of indecomposable representations. Gabriel (1972) classified all quivers of finite type, and also their indecomposable representations. More precisely, Gabriel’s theorem states that:

1. A (connected) quiver is of finite type if and only if its underlying graph (when the directions of the arrows are ignored) is one of the ADE Dynkin diagrams: An, Dn, E6, E7, E8.

2. The indecomposable representations are in a one-to-one correspondence with the positive roots of the root system of the Dynkin diagram.

Dlab & Ringel (1973) found a generalization of Gabriel’s theorem in which all Dynkin diagrams of finite dimensional semisimple Lie algebras occur.

43.6 See also

• ADE classification

• Adhesive category

• Graph algebra

• Group algebra

• Incidence algebra

• Quiver diagram

• Semi-invariant of a quiver

43.7 References

• Derksen, Harm; Weyman, Jerzy (February 2005), “Quiver Representations” (PDF), Notices of the American Mathematical Society 52 (2)

• Dlab, Vlastimil; Ringel, Claus Michael (1973), On algebras of finite representation type, Carleton Mathematical Lecture Notes 2, Department of Mathematics, Carleton Univ., Ottawa, Ont., MR 0347907

• Crawley-Boevey, William (1992), Notes on Quiver Representations (PDF), Oxford University

• Gabriel, Peter (1972), “Unzerlegbare Darstellungen. I”, Manuscripta Mathematica 6 (1): 71–103, doi:10.1007/BF01298413, ISSN 0025-2611, MR 0332887. Errata.

• Savage, Alistair (2006), “Finite-dimensional algebras and quivers”, in Francoise, J.-P.; Naber, G. L.; Tsou, S.T., Encyclopedia of Mathematical Physics 2, Elsevier, pp. 313–320, arXiv:math/0505082

• Simson, Daniel; Skowronski, Andrzej; Assem, Ibrahim (2007), Elements of the Representation Theory of Associative Algebras, Cambridge University Press, ISBN 978-0-521-88218-7

• Quiver in nLab

Chapter 44

Random graph

For the countably-infinite random graph, see Rado graph.

In mathematics, random graph is the general term for probability distributions over graphs. Random graphs may be described simply by a probability distribution, or by a random process which generates them.[1] The theory of random graphs lies at the intersection between graph theory and probability theory. From a mathematical perspective, random graphs are used to answer questions about the properties of typical graphs. Their practical applications are found in all areas in which complex networks need to be modeled – a large number of random graph models are thus known, mirroring the diverse types of complex networks encountered in different areas. In a mathematical context, random graph refers almost exclusively to the Erdős–Rényi random graph model. In other contexts, any graph model may be referred to as a random graph.

44.1 Random graph models

A random graph is obtained by starting with a set of n isolated vertices and adding successive edges between them at random. The aim of the study in this field is to determine at what stage a particular property of the graph is likely to arise.[2] Different random graph models produce different probability distributions on graphs. Most commonly studied is the one proposed by Edgar Gilbert, denoted G(n,p), in which every possible edge occurs independently with probability 0 < p < 1. The probability of obtaining any one particular random graph with m edges is p^m (1 − p)^{N−m}, with the notation N = \binom{n}{2}.[3]

A closely related model, the Erdős–Rényi model denoted G(n,M), assigns equal probability to all graphs with exactly M edges. With 0 ≤ M ≤ N, G(n,M) has \binom{N}{M} elements and every element occurs with probability 1/\binom{N}{M}.[2] The latter model can be viewed as a snapshot at a particular time (M) of the random graph process \tilde{G}_n, a stochastic process that starts with n vertices and no edges, and at each step adds one new edge chosen uniformly from the set of missing edges.

If instead we start with an infinite set of vertices, and again let every possible edge occur independently with probability 0 < p < 1, then we get an object G called an infinite random graph. Except in the trivial cases when p is 0 or 1, such a G almost surely has the following property:

Given any n + m elements a1, . . . , an, b1, . . . , bm ∈ V , there is a vertex c in V that is adjacent to each of a1, . . . , an and is not adjacent to any of b1, . . . , bm .

It turns out that if the vertex set is countable then there is, up to isomorphism, only a single graph with this property, namely the Rado graph. Thus any countably infinite random graph is almost surely the Rado graph, which for this reason is sometimes called simply the random graph. However, the analogous result is not true for uncountable graphs: there are many (nonisomorphic) uncountable graphs satisfying the above property.

Another model, which generalizes Gilbert’s random graph model, is the random dot-product model. A random dot-product graph associates with each vertex a real vector. The probability of an edge uv between any vertices u and v is some function of the dot product u • v of their respective vectors.
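The two basic models above differ only in whether the number of edges is random or fixed. A minimal sampling sketch (helper names gnp/gnm are ours, not from any library): G(n,p) keeps each of the N = n(n−1)/2 possible edges independently with probability p, while G(n,M) draws exactly M edges uniformly without replacement.

```python
import random
from itertools import combinations

def gnp(n, p, rng):
    # Gilbert's model: each possible edge appears independently
    return [e for e in combinations(range(n), 2) if rng.random() < p]

def gnm(n, m, rng):
    # Erdős–Rényi model: a uniform choice of exactly m edges
    return rng.sample(list(combinations(range(n), 2)), m)

rng = random.Random(1)
n = 10
N = n * (n - 1) // 2
assert len(gnp(n, 1.0, rng)) == N   # p = 1: every edge present
assert len(gnp(n, 0.0, rng)) == 0   # p = 0: no edges
assert len(gnm(n, 7, rng)) == 7     # G(n,M) has exactly M edges
```

For M ≈ pN the two samplers produce statistically similar graphs, which is the interchangeability noted later in the text.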


The network probability matrix models random graphs through edge probabilities, which represent the probability pi,j that a given edge ei,j exists for a specified time period. This model is extensible to directed and undirected, weighted and unweighted, and static or dynamic graph structures.

For M ≃ pN, where N is the maximal number of edges possible, the two most widely used models, G(n,M) and G(n,p), are almost interchangeable.[4]

Random regular graphs form a special case, with properties that may differ from random graphs in general.

Once we have a model of random graphs, every function on graphs becomes a random variable. The study of this model is to determine whether, or at least to estimate the probability that, a property may occur.[3]

44.2 Terminology

The term 'almost every' in the context of random graphs refers to a sequence of spaces and probabilities, such that the error probabilities tend to zero.[3]

44.3 Properties of random graphs

The theory of random graphs studies typical properties of random graphs, those that hold with high probability for graphs drawn from a particular distribution. For example, we might ask for a given value of n and p what the probability is that G(n,p) is connected. In studying such questions, researchers often concentrate on the asymptotic behavior of random graphs—the values that various probabilities converge to as n grows very large.

Percolation theory characterizes the connectedness of random graphs, especially infinitely large ones. Percolation is related to the robustness of the graph (also called a network). Given a random graph of n nodes and an average degree ⟨k⟩, suppose we remove a randomly chosen fraction 1−p of the nodes and leave only a fraction p. There exists a critical percolation threshold pc = 1/⟨k⟩ below which the network becomes fragmented, while above pc a giant connected component exists.[1][4][5][6][7][8]

Random graphs are widely used in the probabilistic method, where one tries to prove the existence of graphs with certain properties. The existence of a property on a random graph can often imply, via the Szemerédi regularity lemma, the existence of that property on almost all graphs.

In random regular graphs, G(n,r-reg) is the set of r-regular graphs with r = r(n) such that n and m are natural numbers, 3 ≤ r < n, and rn = 2m is even.[2] The degree sequence of a graph G in Gn depends only on the number of edges in the sets[2]

V_i^{(2)} = \{ ij : 1 \le j \le n,\; i \ne j \} \subset V^{(2)}, \qquad i = 1, \dots, n.
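The percolation transition described above can be illustrated with a seeded simulation. This sketch (parameters and helper names are ours, purely illustrative): build G(n,p) with mean degree ⟨k⟩ = 4, delete a random fraction 1−q of the nodes, and measure the largest surviving component with union-find; the critical threshold is q_c = 1/⟨k⟩ = 0.25.

```python
import random
from collections import Counter

def largest_component(n, edges, alive):
    # union-find over the surviving nodes
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v in edges:
        if alive[u] and alive[v]:
            parent[find(u)] = find(v)
    sizes = Counter(find(v) for v in range(n) if alive[v])
    return max(sizes.values(), default=0)

rng = random.Random(42)
n, k_mean = 2000, 4.0
p = k_mean / (n - 1)
edges = [(u, v) for u in range(n) for v in range(u + 1, n)
         if rng.random() < p]

# keep fraction q of nodes: well above / below q_c = 0.25 the giant
# component should appear / vanish
for q, giant_expected in [(0.9, True), (0.05, False)]:
    alive = [rng.random() < q for _ in range(n)]
    assert (largest_component(n, edges, alive) > 0.3 * n) == giant_expected
```

The margins are deliberately wide: at q = 0.9 the effective mean degree is 3.6 and the giant component covers most surviving nodes, while at q = 0.05 the effective mean degree is 0.2 and all components are tiny.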

If the number of edges M in a random graph GM is large enough to ensure that almost every GM has minimum degree at least 1, then almost every GM is connected and, if n is even, almost every GM has a perfect matching. In particular, the moment the last isolated vertex vanishes, almost every random graph becomes connected.[2]

For almost every graph process on an even number of vertices, the edge raising the minimum degree to 1 (or a random graph with slightly more than (n/4)log(n) edges) ensures, with probability close to 1, that the graph has a complete matching, with the exception of at most one vertex.

For some constant c, almost every labelled graph with n vertices and at least cn log(n) edges is Hamiltonian. With probability tending to 1, the particular edge that increases the minimum degree to 2 makes the graph Hamiltonian.

44.4 Coloring of Random Graphs

Given a random graph G of order n with vertex set V(G) = {1, ..., n}, by the greedy algorithm on the number of colors, the vertices can be colored with colors 1, 2, ... (vertex 1 is colored 1; vertex 2 is colored 1 if it is not adjacent to vertex 1, otherwise it is colored 2; and so on).[2] The number of proper colorings of random graphs given a number of q colors, called its chromatic polynomial, remains unknown so far. The scaling of zeros of the chromatic polynomial of random graphs with parameters n and the number of edges m, or the connection probability p, has been studied empirically using an algorithm based on symbolic pattern matching.[9]
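The greedy algorithm just described is a few lines of code. A seeded sketch (our own helper names, run here on a sampled G(n,p)): each vertex in turn receives the smallest color not used by its already-colored neighbors, which always yields a proper coloring.

```python
import random

def greedy_color(n, adj):
    # vertices considered in the fixed order 0, 1, 2, ...
    color = {}
    for v in range(n):
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

rng = random.Random(0)
n, p = 30, 0.2
adj = {v: set() for v in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if rng.random() < p:
            adj[u].add(v)
            adj[v].add(u)

coloring = greedy_color(n, adj)
# proper coloring: adjacent vertices never share a color
assert all(coloring[u] != coloring[v] for u in range(n) for v in adj[u])
```

The number of colors the greedy algorithm uses depends on the vertex order, which is why it gives only an upper bound on the chromatic number.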

44.5 Random trees

Main article: random tree

A random tree is a tree or arborescence that is formed by a stochastic process. In a large range of random graphs of order n and size M(n) the distribution of the number of tree components of order k is asymptotically Poisson. Types of random trees include uniform spanning tree, random minimal spanning tree, random binary tree, treap, rapidly exploring random tree, Brownian tree, and random forest.

44.6 Conditionally uniform random graphs

Main article: Conditionally uniform random graphs

Conditionally uniform random graphs assign equal probability to all the graphs having a specified set of properties. They can be seen as a generalization of the Erdős–Rényi model G(n,M), where the conditioning information is not necessarily the number of edges M but some other arbitrary network property. In this case very few analytical results are available, and simulation is required to obtain empirical distributions of average properties. Recently, a general and exact methodology for random graph simulation has been proposed by Stefano Nasini and Jordi Castro.[10]

44.7 History

Random graphs were first defined by Paul Erdős and Alfréd Rényi in their 1959 paper “On Random Graphs”[8] and independently by Gilbert in his paper “Random graphs”.[5]

44.8 See also

• Bose–Einstein condensation: a network theory approach

• Cavity method

• Complex networks

• Dual-phase evolution

• Erdős–Rényi model

• Exponential random graph model

• Graph theory

• Network science

• Percolation

• Semilinear response

44.9 References

[1] Béla Bollobás, Random Graphs, 2nd Edition, 2001, Cambridge University Press

[2] Béla Bollobás, Random Graphs, 1985, Academic Press Inc., London Ltd.

[3] Béla Bollobás, Probabilistic Combinatorics and Its Applications, 1991, Providence, RI: American Mathematical Society.

[4] Bollobas, B. and Riordan, O.M. “Mathematical results on scale-free random graphs” in “Handbook of Graphs and Networks” (S. Bornholdt and H.G. Schuster (eds)), Wiley VCH, Weinheim, 1st ed., 2003

[5] Gilbert, E. N. (1959), “Random graphs”, Annals of Mathematical Statistics 30: 1141–1144, doi:10.1214/aoms/1177706098.

[6] Newman, M. E. J. (2010). Networks: An Introduction. Oxford.

[7] Reuven Cohen and Shlomo Havlin (2010). Complex Networks: Structure, Robustness and Function. Cambridge University Press.

[8] Erdős, P.; Rényi, A. (1959), “On Random Graphs I”, Publ. Math. Debrecen 6, pp. 290–297

[9] Frank Van Bussel, Christoph Ehrlich, Denny Fliegner, Sebastian Stolzenberg and Marc Timme, Chromatic Polynomials of Random Graphs, J. Phys. A: Math. Theor. 43, 175002 (2010) | doi:10.1088/1751-8113/43/17/175002

[10] Stefano Nasini and Jordi Castro (2015), “Mathematical programming approaches for classes of random network problems”, European Journal of Operational Research

Chapter 45

Ring theory

In abstract algebra, ring theory is the study of rings—algebraic structures in which addition and multiplication are defined and have similar properties to those operations defined for the integers. Ring theory studies the structure of rings, their representations, or, in different language, modules, special classes of rings (group rings, division rings, universal enveloping algebras), as well as an array of properties that have proved to be of interest both within the theory itself and for its applications, such as homological properties and polynomial identities.

Commutative rings are much better understood than noncommutative ones. Algebraic geometry and algebraic number theory, which provide many natural examples of commutative rings, have driven much of the development of commutative ring theory, which is now, under the name of commutative algebra, a major area of modern mathematics. Because these three fields (algebraic geometry, algebraic number theory, and commutative algebra) are so intimately connected it is usually difficult and meaningless to decide which field a particular result belongs to. For example, Hilbert’s Nullstellensatz is a theorem which is fundamental for algebraic geometry, and is stated and proved in terms of commutative algebra. Similarly, Fermat’s Last Theorem is stated in terms of elementary arithmetic, which is a part of commutative algebra, but its proof involves deep results of both algebraic number theory and algebraic geometry.

Noncommutative rings are quite different in flavour, since more unusual behavior can arise. While the theory has developed in its own right, a fairly recent trend has sought to parallel the commutative development by building the theory of certain classes of noncommutative rings in a geometric fashion as if they were rings of functions on (non-existent) 'noncommutative spaces’. This trend started in the 1980s with the development of noncommutative geometry and with the discovery of quantum groups.
It has led to a better understanding of noncommutative rings, especially noncommutative Noetherian rings (Goodearl 1989). For the definitions of a ring and basic concepts and their properties, see ring (mathematics). The definitions of terms used throughout ring theory may be found in the glossary of ring theory.

45.1 History

Commutative ring theory originated in algebraic number theory, algebraic geometry, and invariant theory. Central to the development of these subjects were the rings of integers in algebraic number fields and algebraic function fields, and the rings of polynomials in two or more variables. Noncommutative ring theory began with attempts to extend the complex numbers to various hypercomplex number systems. The genesis of the theories of commutative and noncommutative rings dates back to the early 19th century, while their maturity was achieved only in the third decade of the 20th century. More precisely, William Rowan Hamilton put forth the quaternions and biquaternions; James Cockle presented tessarines and coquaternions; and William Kingdon Clifford was an enthusiast of split-biquaternions, which he called algebraic motors. These noncommutative algebras, and the non-associative Lie algebras, were studied within universal algebra before the subject was divided into particular mathematical structure types. One sign of re-organization was the use of direct sums to describe algebraic structure. The various hypercomplex numbers were identified with matrix rings by Joseph Wedderburn (1908) and Emil Artin (1928). Wedderburn’s structure theorems were formulated for finite-dimensional algebras over a field while Artin

288 45.2. COMMUTATIVE RINGS 289

generalized them to Artinian rings. In 1920, Emmy Noether, in collaboration with W. Schmeidler, published a paper about the theory of ideals in which they defined left and right ideals in a ring. The following year she published a landmark paper called Idealtheorie in Ringbereichen, analyzing ascending chain conditions with regard to (mathematical) ideals. Noted algebraist Irving Kaplansky called this work “revolutionary";[1] the publication gave rise to the term "Noetherian ring", and several other mathematical objects being called Noetherian.[1][2][3]

45.2 Commutative rings

Main article: Commutative algebra

A ring is called commutative if its multiplication is commutative. Commutative rings resemble familiar number systems, and various definitions for commutative rings are designed to formalize properties of the integers. Commutative rings are also important in algebraic geometry. In commutative ring theory, numbers are often replaced by ideals, and the definition of the prime ideal tries to capture the essence of prime numbers. Integral domains, non-trivial commutative rings where no two non-zero elements multiply to give zero, generalize another property of the integers and serve as the proper realm to study divisibility. Principal ideal domains are integral domains in which every ideal can be generated by a single element, another property shared by the integers. Euclidean domains are integral domains in which the Euclidean algorithm can be carried out. Important examples of commutative rings can be constructed as rings of polynomials and their factor rings. Summary: Euclidean domain ⇒ principal ideal domain ⇒ unique factorization domain ⇒ integral domain ⇒ commutative ring.
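The defining feature of a Euclidean domain is division with remainder, which makes the Euclidean algorithm terminate. A minimal sketch in the motivating example, the integers (our own function name, not from any library):

```python
def euclid_gcd(a, b):
    # repeated division with remainder: at each step |r| < |b|,
    # so the loop terminates; this is the Euclidean-domain property of Z
    while b:
        a, b = b, a % b
    return abs(a)

assert euclid_gcd(252, 198) == 18
```

The same loop works verbatim over any Euclidean domain once `%` is replaced by that domain's division with remainder, e.g. polynomial division in k[x]; existence of such a gcd is what makes every Euclidean domain a principal ideal domain.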

45.2.1 Algebraic geometry

Main article: Algebraic geometry

Algebraic geometry is in many ways the mirror image of commutative algebra. A scheme is built up out of rings in some sense. Alexander Grothendieck gave the decisive definitions of the objects used in algebraic geometry. He defined the spectrum of a commutative ring as the space of prime ideals with the Zariski topology, but augmented it with a sheaf of rings: to every Zariski-open set he assigned a commutative ring, thought of as the ring of “polynomial functions” defined on that set. These objects are the “affine schemes"; a general scheme is then obtained by “gluing together” several such affine schemes, in analogy to the fact that general varieties can be obtained by gluing together affine varieties.

45.3 Noncommutative rings

Main articles: Noncommutative ring, noncommutative algebraic geometry and noncommutative geometry

Noncommutative rings resemble rings of matrices in many respects. Following the model of algebraic geometry, attempts have been made recently at defining noncommutative geometry based on noncommutative rings. Noncom- mutative rings and associative algebras (rings that are also vector spaces) are often studied via their categories of modules. A module over a ring is an Abelian group that the ring acts on as a ring of endomorphisms, very much akin to the way fields (integral domains in which every non-zero element is invertible) act on vector spaces. Examples of noncommutative rings are given by rings of square matrices or more generally by rings of endomorphisms of Abelian groups or modules, and by monoid rings.

45.3.1 Representation theory

Main article: Representation theory

Representation theory is a branch of mathematics that draws heavily on non-commutative rings. It studies abstract algebraic structures by representing their elements as linear transformations of vector spaces, and studies modules over 290 CHAPTER 45. RING THEORY

these abstract algebraic structures. In essence, a representation makes an abstract algebraic object more concrete by describing its elements by matrices and the algebraic operations in terms of matrix addition and matrix multiplication, which is non-commutative. The algebraic objects amenable to such a description include groups, associative algebras and Lie algebras. The most prominent of these (and historically the first) is the representation theory of groups, in which elements of a group are represented by invertible matrices in such a way that the group operation is matrix multiplication.

45.4 Some useful theorems

General:

• Isomorphism theorems for rings • Nakayama’s lemma

Structure theorems:

• The Artin–Wedderburn theorem determines the structure of semisimple rings.

• The Jacobson density theorem determines the structure of primitive rings.

• Goldie’s theorem determines the structure of semiprime Goldie rings.

• The Zariski–Samuel theorem determines the structure of commutative principal ideal rings.

• The Hopkins–Levitzki theorem gives necessary and sufficient conditions for a Noetherian ring to be an Artinian ring.

• Morita theory consists of theorems determining when two rings have “equivalent” module categories.

• Wedderburn’s little theorem states that finite domains are fields.

45.5 Structures and invariants of rings

45.5.1 Dimension of a commutative ring

Main article: Dimension theory (algebra)

The Krull dimension of a commutative ring R is the supremum of the lengths n of all increasing chains of prime ideals p0 ⊊ p1 ⊊ ··· ⊊ pn. For example, the polynomial ring k[t1, ..., tn] over a field k has dimension n. The fundamental theorem of dimension theory states that the following numbers coincide for a noetherian local ring (R, m):[4]

• The Krull dimension of R.

• The minimum number of generators of the m-primary ideals.

• The dimension of the graded ring gr_m(R) = ⊕_{k≥0} m^k/m^{k+1} (equivalently, one plus the degree of its Hilbert polynomial).
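The claim above that k[t1, ..., tn] has Krull dimension n is witnessed by an explicit chain of primes; the following display is a standard example, not taken from the source text:

\[
(0) \subsetneq (t_1) \subsetneq (t_1, t_2) \subsetneq \cdots \subsetneq (t_1, \dots, t_n),
\]

a strictly increasing chain of prime ideals of length n. That no strictly longer chain exists is the nontrivial half of the dimension computation.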

A commutative ring R is said to be catenary if any pair of prime ideals p ⊂ p′ can be extended to a chain of prime ideals p = p0 ⊊ ··· ⊊ pn = p′ of the same finite length such that there is no prime ideal strictly contained between two consecutive terms. Practically all noetherian rings that appear in applications are catenary. If (R, m) is a catenary local integral domain, then, by definition,

dim R = ht p + dim R/p

where ht p = dim Rp is the height of p. It is a deep theorem of Ratliff that the converse is also true.[5]

If R is an integral domain that is a finitely generated k-algebra, then its dimension is the transcendence degree of its field of fractions over k. If S is an integral extension of a commutative ring R, then S and R have the same dimension.

Closely related concepts are those of depth and global dimension. In general, if R is a noetherian local ring, then the depth of R is less than or equal to the dimension of R. When equality holds, R is called a Cohen–Macaulay ring. A regular local ring is an example of a Cohen–Macaulay ring. It is a theorem of Serre that R is a regular local ring if and only if it has finite global dimension, and in that case the global dimension is the Krull dimension of R. The significance of this is that global dimension is a homological notion.

45.5.2 Morita equivalence

Main article: Morita equivalence

Two rings R, S are said to be Morita equivalent if the category of left modules over R is equivalent to the category of left modules over S. In fact, two commutative rings which are Morita equivalent must be isomorphic, so the notion does not add anything new to the category of commutative rings. However, commutative rings can be Morita equivalent to noncommutative rings, so Morita equivalence is coarser than isomorphism. Morita equivalence is especially important in algebraic topology and functional analysis.

45.5.3 Finitely generated projective module over a ring and Picard group

Let R be a commutative ring and P(R) the set of isomorphism classes of finitely generated projective modules over R; let also Pn(R) be the subset consisting of those of constant rank n. (The rank of a module M is the continuous function Spec R → Z, p ↦ dim M ⊗R k(p).[6]) P1(R) is usually denoted by Pic(R); it is an abelian group called the Picard group of R.[7] If R is an integral domain with field of fractions F, then there is an exact sequence of groups:[8]

1 → R∗ → F∗ → Cart(R) → Pic(R) → 1,

where the map F∗ → Cart(R) sends f to the fractional ideal fR, and Cart(R) is the group of fractional ideals of R. If R is a regular domain (i.e., regular at every prime ideal), then Pic(R) is precisely the divisor class group of R.[9] For example, if R is a principal ideal domain, then Pic(R) vanishes. In algebraic number theory, R will be taken to be the ring of integers of a number field, which is Dedekind and thus regular. It follows that Pic(R) is a finite group (finiteness of the class number) that measures the deviation of the ring of integers from being a PID.

One can also consider the group completion of P(R) ; this results in a commutative ring K0(R). Note that K0(R) = K0(S) if two commutative rings R, S are Morita equivalent. See also: Algebraic K-theory

45.5.4 Structure of noncommutative rings

Main article: Noncommutative ring

The structure of a noncommutative ring is more complicated than that of a commutative ring. For example, there exist simple rings, containing no non-trivial proper (two-sided) ideals, which nonetheless contain non-trivial proper left or right ideals. Various invariants exist for commutative rings, whereas invariants of noncommutative rings are difficult to find. As an example, the nilradical of a ring, the set of all nilpotent elements, need not be an ideal unless the ring is commutative. Specifically, the set of all nilpotent elements in the ring of all n × n matrices over a division ring never forms an ideal, irrespective of the division ring chosen. There are, however, analogues of the nilradical defined for noncommutative rings that coincide with the nilradical when commutativity is assumed. One example is the Jacobson radical of a ring, that is, the intersection of all right/left annihilators of simple right/left modules over the ring. The fact that the Jacobson radical can be viewed as the intersection of all maximal right/left ideals in the ring shows how the internal structure of the ring is reflected by its modules. It is also a fact that, for any ring, commutative or not, the intersection of all maximal right ideals coincides with the intersection of all maximal left ideals. Noncommutative rings are an active area of research due to their ubiquity in mathematics. For instance, the ring of n-by-n matrices over a field is noncommutative despite its natural occurrence in geometry, physics and many parts of mathematics. More generally, endomorphism rings of abelian groups are rarely commutative, the simplest example being the endomorphism ring of the Klein four-group. One of the best known noncommutative rings is the division ring of quaternions.

45.6 Applications

45.6.1 The ring of integers of a number field

Main article: Ring of integers

45.6.2 The coordinate ring of an algebraic variety

If X is an affine algebraic variety, then the set of all regular functions on X forms a ring called the coordinate ring of X. For a projective variety, there is an analogous ring called the homogeneous coordinate ring. These rings are essentially the same things as varieties: they correspond in an essentially unique way. This may be seen via either Hilbert’s Nullstellensatz or scheme-theoretic constructions (i.e., Spec and Proj).

45.6.3 Ring of invariants

A basic (and perhaps the most fundamental) question in classical invariant theory is to find and study polynomials in the polynomial ring k[V] that are invariant under the action of a finite (or, more generally, reductive) group G on V. The main example is the ring of symmetric polynomials: symmetric polynomials are polynomials that are invariant under permutation of the variables. The fundamental theorem of symmetric polynomials states that this ring is k[σ1, . . . , σn], where the σi are the elementary symmetric polynomials.[10]
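A small numerical sanity check of the fundamental theorem in two variables (an illustrative sketch; the helper name `check` is mine): every symmetric polynomial in x, y is a polynomial in σ1 = x + y and σ2 = xy, e.g. x² + y² = σ1² − 2σ2 and x³ + y³ = σ1³ − 3σ1σ2.

```python
import random

def check(x, y):
    # elementary symmetric polynomials in two variables
    s1, s2 = x + y, x * y
    # two classical identities expressing symmetric polynomials in s1, s2
    assert x**2 + y**2 == s1**2 - 2 * s2
    assert x**3 + y**3 == s1**3 - 3 * s1 * s2

for _ in range(100):
    check(random.randint(-50, 50), random.randint(-50, 50))
print("identities hold")
```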

45.7 Notes

[1] Kimberling 1981, p. 18.

[2] Dick 1981, pp. 44–45.

[3] Osen 1974, pp. 145–46.

[4] Matsumura 1980, Theorem 13.4

[5] Matsumura 1980, Theorem 31.4

[6] Weibel, Ch I, Definition 2.2.3

[7] Weibel, Definition preceding Proposition 3.2 in Ch I

[8] Weibel, Ch I, Proposition 3.5

[9] Weibel, Ch I, Corollary 3.8.1

[10] Springer 1970, Theorem 1.5.7

45.8 References

• History of ring theory at the MacTutor Archive

• R.B.J.T. Allenby (1991). Rings, Fields and Groups. Butterworth-Heinemann. ISBN 0-340-54440-6.

• Atiyah, M. F.; Macdonald, I. G., Introduction to commutative algebra. Addison-Wesley Publishing Co., Reading, Mass.-London-Don Mills, Ont. 1969. ix+128 pp.

• T.S. Blyth and E.F. Robertson (1985). Groups, rings and fields: Algebra through practice, Book 3. Cambridge university Press. ISBN 0-521-27288-2.

• Faith, Carl, Rings and things and a fine array of twentieth century associative algebra. Mathematical Surveys and Monographs, 65. American Mathematical Society, Providence, RI, 1999. xxxiv+422 pp. ISBN 0-8218-0993-8

• Goodearl, K. R.; Warfield, R. B., Jr., An introduction to noncommutative Noetherian rings. London Mathematical Society Student Texts, 16. Cambridge University Press, Cambridge, 1989. xviii+303 pp. ISBN 0-521-36086-2

• Herstein, I. N., Noncommutative rings. Reprint of the 1968 original. With an afterword by Lance W. Small. Carus Mathematical Monographs, 15. Mathematical Association of America, Washington, DC, 1994. xii+202 pp. ISBN 0-88385-015-X

• Nathan Jacobson, Structure of rings. American Mathematical Society Colloquium Publications, Vol. 37. Revised edition. American Mathematical Society, Providence, R.I. 1964. ix+299 pp.

• Nathan Jacobson, The Theory of Rings. American Mathematical Society Mathematical Surveys, vol. I. American Mathematical Society, New York, 1943. vi+150 pp.

• Judson, Thomas W. (1997). “Abstract Algebra: Theory and Applications”. An introductory undergraduate text in the spirit of texts by Gallian or Herstein, covering groups, rings, integral domains, fields and Galois theory. Free downloadable PDF with open-source GFDL license.

• Lam, T. Y., A first course in noncommutative rings. Second edition. Graduate Texts in Mathematics, 131. Springer-Verlag, New York, 2001. xx+385 pp. ISBN 0-387-95183-0

• Lam, T. Y., Exercises in classical ring theory. Second edition. Problem Books in Mathematics. Springer- Verlag, New York, 2003. xx+359 pp. ISBN 0-387-00500-5

• Lam, T. Y., Lectures on modules and rings. Graduate Texts in Mathematics, 189. Springer-Verlag, New York, 1999. xxiv+557 pp. ISBN 0-387-98428-3

• McConnell, J. C.; Robson, J. C., Noncommutative Noetherian rings. Revised edition. Graduate Studies in Mathematics, 30. American Mathematical Society, Providence, RI, 2001. xx+636 pp. ISBN 0-8218-2169-5

• Pierce, Richard S., Associative algebras. Graduate Texts in Mathematics, 88. Studies in the History of Modern Science, 9. Springer-Verlag, New York-Berlin, 1982. xii+436 pp. ISBN 0-387-90693-2

• Rowen, Louis H., Ring theory. Vol. I, II. Pure and Applied Mathematics, 127, 128. Academic Press, Inc., Boston, MA, 1988. ISBN 0-12-599841-4, ISBN 0-12-599842-2

• Springer, Tonny A. (1977), Invariant theory, Lecture Notes in Mathematics 585, Springer-Verlag

• Weibel, Charles, The K-book: An introduction to algebraic K-theory

• Connell, Edwin, Free Online Textbook, http://www.math.miami.edu/~{}ec/book/

Chapter 46

Robertson–Seymour theorem

In graph theory, the Robertson–Seymour theorem (also called the graph minor theorem[1]) states that the undirected graphs, partially ordered by the graph minor relationship, form a well-quasi-ordering.[2] Equivalently, every family of graphs that is closed under minors can be defined by a finite set of forbidden minors, in the same way that Wagner’s theorem characterizes the planar graphs as being the graphs that do not have the complete graph K5 and the complete bipartite graph K₃,₃ as minors. The Robertson–Seymour theorem is named after mathematicians Neil Robertson and Paul D. Seymour, who proved it in a series of twenty papers spanning over 500 pages from 1983 to 2004.[3] Before its proof, the statement of the theorem was known as Wagner’s conjecture after the German mathematician Klaus Wagner, although Wagner said he never conjectured it.[4] A weaker result for trees is implied by Kruskal’s tree theorem, which was conjectured in 1937 by Andrew Vázsonyi and proved in 1960 independently by Joseph Kruskal and S. Tarkowski.[5]

46.1 Statement

A minor of an undirected graph G is any graph that may be obtained from G by a sequence of zero or more contractions of edges of G and deletions of edges and vertices of G. The minor relationship forms a partial order on the set of all distinct finite undirected graphs, as it obeys the three axioms of partial orders: it is reflexive (every graph is a minor of itself), transitive (a minor of a minor of G is itself a minor of G), and antisymmetric (if two graphs G and H are minors of each other, then they must be isomorphic). However, if graphs that are isomorphic may nonetheless be considered as distinct objects, then the minor ordering on graphs forms a preorder, a relation that is reflexive and transitive but not necessarily antisymmetric.[6] A preorder is said to form a well-quasi-ordering if it contains neither an infinite descending chain nor an infinite antichain.[7] For instance, the usual ordering on the non-negative integers is a well-quasi-ordering, but the same ordering on the set of all integers is not, because it contains the infinite descending chain 0, −1, −2, −3... The Robertson–Seymour theorem states that finite undirected graphs and graph minors form a well-quasi-ordering. It is obvious that the graph minor relationship does not contain any infinite descending chain, because each contraction or deletion reduces the number of edges and vertices of the graph (a non-negative integer).[8] The nontrivial part of the theorem is that there are no infinite antichains, infinite sets of graphs that are all unrelated to each other by the minor ordering. If S is a set of graphs, and M is a subset of S containing one representative graph for each equivalence class of minimal elements (graphs that belong to S but for which no proper minor belongs to S), then M forms an antichain; therefore, an equivalent way of stating the theorem is that, in any infinite set S of graphs, there must be only a finite number of non-isomorphic minimal elements. 
Another equivalent form of the theorem is that, in any infinite set S of graphs, there must be a pair of graphs one of which is a minor of the other.[8] The statement that every infinite set has finitely many minimal elements implies this form of the theorem, for if there are only finitely many minimal elements, then each of the remaining graphs must belong to a pair of this type with one of the minimal elements. And in the other direction, this form of the theorem implies the statement that there can be no infinite antichains, because an infinite antichain is a set that does not contain any pair related by the minor relation.


46.2 Forbidden minor characterizations

A family F of graphs is said to be closed under the operation of taking minors if every minor of a graph in F also belongs to F. If F is a minor-closed family, then let S be the set of graphs that are not in F (the complement of F). According to the Robertson–Seymour theorem, there exists a finite set H of minimal elements in S. These minimal elements form a forbidden graph characterization of F: the graphs in F are exactly the graphs that do not have any graph in H as a minor.[9] The members of H are called the excluded minors (or forbidden minors, or minor-minimal obstructions) for the family F. For example, the planar graphs are closed under taking minors: contracting an edge in a planar graph, or removing edges or vertices from the graph, cannot destroy its planarity. Therefore, the planar graphs have a forbidden minor characterization, which in this case is given by Wagner’s theorem: the set H of minor-minimal nonplanar graphs contains exactly two graphs, the complete graph K5 and the complete bipartite graph K₃,₃, and the planar graphs are exactly the graphs that do not have a minor in the set {K5, K₃,₃}. The existence of forbidden minor characterizations for all minor-closed graph families is an equivalent way of stating the Robertson–Seymour theorem. For, suppose that every minor-closed family F has a finite set H of minimal forbidden minors, and let S be any infinite set of graphs. Define F from S as the family of graphs that do not have a minor in S. Then F is minor-closed and has a finite set H of minimal forbidden minors. Let C be the complement of F. S is a subset of C since S and F are disjoint, and H is the set of minimal graphs in C. Consider a graph G in H. G cannot have a proper minor in S since G is minimal in C. At the same time, G must have a minor in S, since otherwise G would be an element in F.
Therefore, G is an element in S, i.e., H is a subset of S, and all other graphs in S have a minor among the graphs in H, so H is the finite set of minimal elements of S. For the other implication, assume that every set of graphs has a finite subset of minimal graphs and let a minor-closed set F be given. We want to find a set H of graphs such that a graph is in F if and only if it does not have a minor in H. Let E be the graphs which are not minors of any graph in F, and let H be the finite set of minimal graphs in E. Now, let an arbitrary graph G be given. Assume first that G is in F. G cannot have a minor in H since G is in F and H is a subset of E. Now assume that G is not in F. Then G is not a minor of any graph in F, since F is minor-closed. Therefore, G is in E, so G has a minor in H.

46.3 Examples of minor-closed families

Main article: Forbidden graph characterization

The following sets of finite graphs are minor-closed, and therefore (by the Robertson–Seymour theorem) have forbidden minor characterizations:

• forests, linear forests (disjoint unions of path graphs), pseudoforests, and cactus graphs;

• planar graphs, outerplanar graphs, apex graphs (formed by adding a single vertex to a planar graph), toroidal graphs, and the graphs that can be embedded on any fixed two-dimensional manifold;[10]

• graphs that are linklessly embeddable in Euclidean 3-space, and graphs that are knotlessly embeddable in Euclidean 3-space;[10]

• graphs with a feedback vertex set of size bounded by some fixed constant; graphs with Colin de Verdière graph invariant bounded by some fixed constant; graphs with treewidth, pathwidth, or branchwidth bounded by some fixed constant.

46.4 Obstruction sets

Some examples of finite obstruction sets were already known for specific classes of graphs before the Robertson–Seymour theorem was proved. For example, the obstruction for the set of all forests is the loop graph (or, if one restricts to simple graphs, the cycle with three vertices). This means that a graph is a forest if and only if none of its minors is the loop (or, the cycle with three vertices, respectively). The sole obstruction for the set of paths is the tree with four vertices, one of which has degree 3. In these cases, the obstruction set contains a single element, but

The Petersen family, the obstruction set for linkless embedding.

in general this is not the case. Wagner’s theorem states that a graph is planar if and only if it has neither K5 nor K₃,₃ as a minor. In other words, the set {K5, K₃,₃} is an obstruction set for the set of all planar graphs, and in fact the unique minimal obstruction set. A similar theorem states that K4 and K₂,₃ are the forbidden minors for the set of outerplanar graphs. Although the Robertson–Seymour theorem extends these results to arbitrary minor-closed graph families, it is not a complete substitute for these results, because it does not provide an explicit description of the obstruction set for any family. For example, it tells us that the set of toroidal graphs has a finite obstruction set, but it does not provide any such set. The complete set of forbidden minors for toroidal graphs remains unknown, but contains at least 16000 graphs.[11]
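For tiny graphs, minor containment can be tested by naive exponential-time search (an illustrative sketch only, not the Robertson–Seymour algorithm; the helper names `has_minor`, `contract`, `isomorphic` are mine). The check below confirms the forest obstruction mentioned above: a 4-cycle has a triangle (K3) minor, while a 4-vertex path, being a forest, does not.

```python
from itertools import permutations

# A graph is a pair (frozenset of vertices, frozenset of 2-element
# frozenset edges).  H is a minor of G iff H is isomorphic to G, or to
# the result of deleting a vertex, deleting an edge, or contracting an
# edge of G, recursively.

def isomorphic(g, h):
    (gv, ge), (hv, he) = g, h
    if len(gv) != len(hv) or len(ge) != len(he):
        return False
    gv = sorted(gv)
    for perm in permutations(sorted(hv)):
        m = dict(zip(gv, perm))
        if {frozenset(m[v] for v in e) for e in ge} == set(he):
            return True
    return False

def contract(g, e):
    v, edges = g
    u, w = sorted(e)                      # merge w into u
    new_edges = set()
    for a, b in (sorted(f) for f in edges):
        a = u if a == w else a
        b = u if b == w else b
        if a != b:                        # drop the resulting self-loop
            new_edges.add(frozenset((a, b)))
    return (frozenset(x for x in v if x != w), frozenset(new_edges))

def has_minor(g, h):
    v, e = g
    if len(v) < len(h[0]) or len(e) < len(h[1]):
        return False                      # too small to contain h
    if isomorphic(g, h):
        return True
    for edge in e:                        # edge deletion / contraction
        if has_minor((v, e - {edge}), h) or has_minor(contract(g, edge), h):
            return True
    for x in v:                           # vertex deletion
        if has_minor((frozenset(v - {x}),
                      frozenset(f for f in e if x not in f)), h):
            return True
    return False

def graph(n, edge_list):
    return (frozenset(range(n)),
            frozenset(frozenset(e) for e in edge_list))

K3 = graph(3, [(0, 1), (1, 2), (0, 2)])          # obstruction for forests
C4 = graph(4, [(0, 1), (1, 2), (2, 3), (3, 0)])  # 4-cycle: not a forest
P4 = graph(4, [(0, 1), (1, 2), (2, 3)])          # path: a forest
print(has_minor(C4, K3), has_minor(P4, K3))       # True False
```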

46.5 Polynomial time recognition

The Robertson–Seymour theorem has an important consequence in computational complexity, due to the proof by Robertson and Seymour that, for each fixed graph G, there is a polynomial time algorithm for testing whether larger

graphs have G as a minor. The running time of this algorithm can be expressed as a cubic polynomial in the size of the larger graph (although there is a constant factor in this polynomial that depends superpolynomially on the size of G), which has been improved to quadratic time by Kawarabayashi, Kobayashi, and Reed.[12] As a result, for every minor-closed family F, there is a polynomial time algorithm for testing whether a graph belongs to F: simply check, for each of the forbidden minors for F, whether the given graph contains that forbidden minor.[13] However, this method requires a specific finite obstruction set to work, and the theorem does not provide one: it proves that such a finite obstruction set exists, and therefore that the problem can be solved in polynomial time, but it does not exhibit a concrete polynomial-time algorithm for solving it. Such proofs of polynomiality are non-constructive: they prove polynomiality of problems without providing an explicit polynomial-time algorithm.[14] In many specific cases, checking whether a graph is in a given minor-closed family can be done more efficiently: for example, checking whether a graph is planar can be done in linear time.

46.6 Fixed-parameter tractability

For graph invariants with the property that, for each k, the graphs with invariant at most k are minor-closed, the same method applies. For instance, by this result, treewidth, branchwidth, and pathwidth, vertex cover, and the minimum genus of an embedding are all amenable to this approach, and for any fixed k there is a polynomial time algorithm for testing whether these invariants are at most k, in which the exponent in the running time of the algorithm does not depend on k. A problem with this property, that it can be solved in polynomial time for any fixed k with an exponent that does not depend on k, is known as fixed-parameter tractable. However, this method does not directly provide a single fixed-parameter-tractable algorithm for computing the parameter value for a given graph with unknown k, because of the difficulty of determining the set of forbidden minors. Additionally, the large constant factors involved in these results make them highly impractical. Therefore, the development of explicit fixed-parameter algorithms for these problems, with improved dependence on k, has continued to be an important line of research.

46.7 Finite form of the graph minor theorem

Friedman, Robertson & Seymour (1987) showed that the following theorem exhibits the independence phenomenon by being unprovable in various formal systems that are much stronger than Peano arithmetic, yet being provable in systems much weaker than ZFC:

Theorem: For every positive integer n, there is an integer m so large that if G1, ..., Gm is a sequence of finite undirected graphs, where each Gi has size at most n+i, then Gj ≤ Gk for some j < k.

(Here, the size of a graph is the total number of its nodes and edges, and ≤ denotes the minor ordering.)

46.8 See also

• Graph structure theorem

46.9 Notes

[1] Bienstock & Langston (1995).

[2] Robertson & Seymour (2004).

[3] Robertson and Seymour (1983, 2004); Diestel (2005, p. 333).

[4] Diestel (2005, p. 355).

[5] Diestel (2005, pp. 335–336); Lovász (2005), Section 3.3, pp. 78–79.

[6] E.g., see Bienstock & Langston (1995), Section 2, “well-quasi-orders”.

[7] Diestel (2005, p. 334).

[8] Lovász (2005, p. 78).

[9] Bienstock & Langston (1995), Corollary 2.1.1; Lovász (2005), Theorem 4, p. 78.

[10] Lovász (2005, pp. 76–77).

[11] Chambers (2002).

[12] Kawarabayashi, Kobayashi & Reed (2012)

[13] Robertson & Seymour (1995); Bienstock & Langston (1995), Theorem 2.1.4 and Corollary 2.1.5; Lovász (2005), Theorem 11, p. 83.

[14] Fellows & Langston (1988); Bienstock & Langston (1995), Section 6.

46.10 References

• Bienstock, Daniel; Langston, Michael A. (1995), “Algorithmic implications of the graph minor theorem”, Network Models (PDF), Handbooks in Operations Research and Management Science 7, pp. 481–502, doi:10.1016/S0927-0507(05)80125-2.

• Chambers, J. (2002), Hunting for torus obstructions, M.Sc. thesis, Department of Computer Science, University of Victoria.

• Diestel, Reinhard (2005), “Minors, Trees, and WQO”, Graph Theory (PDF) (Electronic Edition 2005 ed.), Springer, pp. 326–367.

• Fellows, Michael R.; Langston, Michael A. (1988), “Nonconstructive tools for proving polynomial-time decidability”, Journal of the ACM 35 (3): 727–739, doi:10.1145/44483.44491.

• Friedman, Harvey; Robertson, Neil; Seymour, Paul (1987), “The metamathematics of the graph minor theorem”, in Simpson, S., Logic and Combinatorics, Contemporary Mathematics 65, American Mathematical Society, pp. 229–261.

• Kawarabayashi, Ken-ichi; Kobayashi, Yusuke; Reed, Bruce (2012), “The disjoint paths problem in quadratic time” (PDF), Journal of Combinatorial Theory, Series B 102 (2): 424–435, doi:10.1016/j.jctb.2011.07.004.

• Lovász, László (2005), “Graph Minor Theory”, Bulletin of the American Mathematical Society (New Series) 43 (1): 75–86, doi:10.1090/S0273-0979-05-01088-8.

• Robertson, Neil; Seymour, Paul (1983), “Graph Minors. I. Excluding a forest”, Journal of Combinatorial Theory, Series B 35 (1): 39–61, doi:10.1016/0095-8956(83)90079-5.

• Robertson, Neil; Seymour, Paul (1995), “Graph Minors. XIII. The disjoint paths problem”, Journal of Combinatorial Theory, Series B 63 (1): 65–110, doi:10.1006/jctb.1995.1006.

• Robertson, Neil; Seymour, Paul (2004), “Graph Minors. XX. Wagner’s conjecture”, Journal of Combinatorial Theory, Series B 92 (2): 325–357, doi:10.1016/j.jctb.2004.08.001.

46.11 External links

• Weisstein, Eric W., “Robertson-Seymour Theorem”, MathWorld.

Chapter 47

Split-quaternion

In abstract algebra, the split-quaternions or coquaternions are elements of a 4-dimensional associative algebra introduced by James Cockle in 1849 under the latter name. Like the quaternions introduced by Hamilton in 1843, they form a four-dimensional real vector space equipped with a multiplicative operation. Unlike the quaternion algebra, the split-quaternions contain zero divisors, nilpotent elements, and nontrivial idempotents. As a mathematical structure, they form an algebra over the real numbers, which is isomorphic to the algebra of 2 × 2 real matrices. The coquaternions came to be called split-quaternions due to the division into positive and negative terms in the modulus function. For other names for split-quaternions see the Synonyms section below. The set {1, i, j, k} forms a basis. The products of these elements are

ij = k = −ji,  jk = −i = −kj,  ki = j = −ik,
i² = −1,  j² = +1,  k² = +1,

and hence ijk = 1. It follows from the defining relations that the set {1, i, j, k, −1, −i, −j, −k} is a group under coquaternion multiplication; it is isomorphic to the dihedral group of a square. A coquaternion

q = w + xi + yj + zk, has a conjugate

q* = w − xi − yj − zk,

and multiplicative modulus

qq* = w² + x² − y² − z².

This quadratic form is split into positive and negative parts, in contrast to the positive definite form on the algebra of quaternions. When the modulus is non-zero, then q has a multiplicative inverse, namely q*/qq*. The set

U = {q : qq* ≠ 0}


is the set of units. The set P of all coquaternions forms a ring (P, +, •) with group of units (U, •). The coquaternions with modulus qq* = 1 form a non-compact topological group SU(1,1), shown below to be isomorphic to SL(2, R).
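The defining relations pin down the whole arithmetic. A minimal sketch (the helper names `mul`, `conj`, `modulus`, `inverse` are mine) represents a coquaternion w + xi + yj + zk as a tuple (w, x, y, z); the product formula below is obtained by expanding with i² = −1, j² = k² = +1, ij = k = −ji, jk = −i = −kj, ki = j = −ik:

```python
def mul(p, q):
    # coquaternion product on components (w, x, y, z)
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 + y1*y2 + z1*z2,
            w1*x2 + x1*w2 - y1*z2 + z1*y2,
            w1*y2 + y1*w2 + z1*x2 - x1*z2,
            w1*z2 + z1*w2 + x1*y2 - y1*x2)

def conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def modulus(q):                       # qq* = w^2 + x^2 - y^2 - z^2
    w, x, y, z = q
    return q[0]*q[0] + q[1]*q[1] - q[2]*q[2] - q[3]*q[3]

def inverse(q):                       # defined only when qq* != 0
    n = modulus(q)
    return tuple(c / n for c in conj(q))

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert mul(i, j) == k and mul(j, i) == (0, 0, 0, -1)   # ij = k = -ji
assert mul(j, j) == (1, 0, 0, 0)                       # j^2 = +1
assert mul(mul(i, j), k) == (1, 0, 0, 0)               # ijk = 1
# the modulus is multiplicative, as expected of a composition algebra:
p, q = (1, 2, 3, 4), (5, -1, 2, 0)
assert modulus(mul(p, q)) == modulus(p) * modulus(q)
print("coquaternion relations verified")
```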

The split-quaternion basis can be identified as the basis elements of either the Clifford algebra Cℓ₁,₁(R), with {1, e1 = i, e2 = j, e1e2 = k}, or the algebra Cℓ₂,₀(R), with {1, e1 = j, e2 = k, e1e2 = i}. Historically, coquaternions preceded Cayley’s matrix algebra; coquaternions (along with quaternions and tessarines) evoked the broader field of linear algebra.

47.1 Matrix representations

Let

q = w + xi + yj + zk, and consider u = w + xi and v = y + zi as ordinary complex numbers with complex conjugates denoted by u* = w − xi, v* = y − zi. Then the complex matrix

( u   v  )
( v*  u* )

represents q in the ring of matrices: the multiplication of split-quaternions behaves the same way as matrix multiplication. For example, the determinant of this matrix is

uu* − vv* = qq*.

The appearance of the minus sign, where there is a plus in H, distinguishes coquaternions from quaternions. The use of the split-quaternions of modulus one (qq* = 1) for hyperbolic motions of the Poincaré disk model of hyperbolic geometry is one of the great utilities of the algebra. Besides the complex matrix representation, another linear representation associates coquaternions with 2 × 2 real matrices. This isomorphism can be made explicit as follows: Note first the product

( 0 1 ) ( 1  0 )   ( 0 −1 )
( 1 0 ) ( 0 −1 ) = ( 1  0 )

and that the square of each factor on the left is the identity matrix, while the square of the right hand side is the negative of the identity matrix. Furthermore, note that these three matrices, together with the identity matrix, form a basis for M(2, R). One can make the matrix product above correspond to jk = −i in the coquaternion ring. Then for an arbitrary matrix there is the bijection

( a c )
( b d )  ↔  q = [ (a + d) + (c − b)i + (b + c)j + (a − d)k ] / 2,

which is in fact a ring isomorphism. Furthermore, computing squares of components and gathering terms shows that qq* = ad − bc, which is the determinant of the matrix. Consequently there is a group isomorphism between the unit quasi-sphere of coquaternions and SL(2, R) = {g ∈ M(2, R) : det g = 1}, and hence also with SU(1, 1): the latter can be seen in the complex representation above. For instance, see Karzel and Kist[1] for the hyperbolic motion group representation with 2 × 2 real matrices. In both of these linear representations the modulus is given by the determinant function. Since the determinant is a multiplicative mapping, the modulus of the product of two coquaternions is equal to the product of the two separate moduli. Thus coquaternions form a composition algebra. As an algebra over the field of real numbers, it is one of only seven such algebras.
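The bijection can be checked numerically (an illustrative sketch; the helper names `to_matrix` and `matmul` are mine). Inverting the correspondence gives a = w + z, b = y − x, c = y + x, d = w − z, and then matrix multiplication matches the coquaternion product while the determinant recovers qq*:

```python
def mul(p, q):
    # coquaternion product on components (w, x, y, z)
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 + y1*y2 + z1*z2,
            w1*x2 + x1*w2 - y1*z2 + z1*y2,
            w1*y2 + y1*w2 + z1*x2 - x1*z2,
            w1*z2 + z1*w2 + x1*y2 - y1*x2)

def to_matrix(q):
    # rows (a, c) and (b, d) of the 2x2 real matrix representing q
    w, x, y, z = q
    return ((w + z, y + x),
            (y - x, w - z))

def matmul(m, n):
    return tuple(tuple(sum(m[i][t] * n[t][j] for t in range(2))
                       for j in range(2)) for i in range(2))

p, q = (1, 2, 3, 4), (5, -1, 2, 0)
# the representation is multiplicative ...
assert matmul(to_matrix(p), to_matrix(q)) == to_matrix(mul(p, q))
# ... and its determinant is the modulus qq* = w^2 + x^2 - y^2 - z^2
(a, c), (b, d) = to_matrix(p)
w, x, y, z = p
assert a * d - b * c == w*w + x*x - y*y - z*z
print("matrix representation verified")
```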

The circle E lies in the plane z = 0. Elements of J are square roots of +1.

47.2 Profile

The subalgebras of P may be seen by first noting the nature of the subspace {zi + xj + yk : x, y, z ∈ R}. Let

r(θ) = j cos(θ) + k sin(θ)

The parameters z and r(θ) are the basis of a cylindrical coordinate system in the subspace. Parameter θ denotes azimuth. Next let a denote any real number and consider the coquaternions

p(a, r) = i sinh a + r cosh a
v(a, r) = i cosh a + r sinh a.

These are the equilateral-hyperboloidal coordinates described by Alexander Macfarlane and Carmody.[2] Next, form three foundational sets in the vector-subspace of the ring:

E = {r ∈ P: r = r(θ), 0 ≤ θ < 2π} J = {p(a, r) ∈ P: a ∈ R, r ∈ E}, hyperboloid of one sheet I = {v(a, r) ∈ P: a ∈ R, r ∈ E}, hyperboloid of two sheets.

Now it is easy to verify that

Elements of I are square roots of −1

{q ∈ P: q² = 1} = J ∪ {1, −1}

and that

{q ∈ P: q² = −1} = I.

These set equalities mean that when p ∈ J then the plane

{x + yp: x, y ∈ R} = Dp is a subring of P that is isomorphic to the plane of split-complex numbers just as when v is in I then

{x + yv: x, y ∈ R} = Cv is a planar subring of P that is isomorphic to the ordinary complex plane C. Note that for every r ∈ E, (r + i)² = 0 = (r − i)², so that r + i and r − i are nilpotents. The plane N = {x + y(r + i): x, y ∈ R} is a subring of P that is isomorphic to the dual numbers. Since every coquaternion must lie in a Dp, a Cv, or an N plane, these planes profile P. For example, the unit quasi-sphere
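The nilpotency claim can be confirmed numerically (an illustrative sketch; `mul` is my name for the coquaternion product on (w, x, y, z) components): for any r = j cos θ + k sin θ in E, both r + i and r − i square to zero.

```python
import math

def mul(p, q):
    # coquaternion product on components (w, x, y, z)
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 + y1*y2 + z1*z2,
            w1*x2 + x1*w2 - y1*z2 + z1*y2,
            w1*y2 + y1*w2 + z1*x2 - x1*z2,
            w1*z2 + z1*w2 + x1*y2 - y1*x2)

for t in (0.0, 0.7, 2.5):
    r_plus_i  = (0.0,  1.0, math.cos(t), math.sin(t))   # r + i
    r_minus_i = (0.0, -1.0, math.cos(t), math.sin(t))   # r - i
    for q in (r_plus_i, r_minus_i):
        sq = mul(q, q)
        assert all(abs(c) < 1e-12 for c in sq)           # (r ± i)^2 = 0
print("nilpotents confirmed")
```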

SU(1, 1) = {q ∈ P: qq* = 1} consists of the “unit circles” in the constituent planes of P: in Dp it is a unit hyperbola, in N the “unit circle” is a pair of parallel lines, while in Cv it is indeed a circle (though it appears elliptical due to v-stretching). These ellipses/circles found in each Cv are like the illusion of the Rubin vase which “presents the viewer with a mental choice of two interpretations, each of which is valid”.

47.3 Pan-orthogonality

For a coquaternion q = w + xi + yj + zk, the scalar part of q is w. Definition. For non-zero coquaternions q and t we write q ⊥ t when the scalar part of the product q(t*) is zero.

• For every v ∈ I, if q, t ∈ Cv, then q ⊥ t means the rays from 0 to q and t are perpendicular.

• For every p ∈ J, if q, t ∈ Dp, then q ⊥ t means these two points are hyperbolic-orthogonal.

• For every r ∈ E and every a ∈ R, p = p(a, r) and v = v(a, r) satisfy p ⊥ v.

• If u is a unit in the coquaternion ring, then q ⊥ t implies qu ⊥ tu.

Proof: (qu)(tu)* = (uu*)q(t*) follows from (tu)* = u*t*, which can be established using the anticommutativity property of vector cross products.

47.4 Counter-sphere geometry

The quadratic form qq* is positive definite on the planes Cv and N. Consider the counter-sphere {q: qq* = −1}. Take m = x + yi + zr where r = j cos(θ) + k sin(θ). Fix θ and suppose

mm* = −1 = x² + y² − z².

Since points on the counter-sphere must lie on the conjugate of the unit hyperbola in some plane Dp ⊂ P, m can be written, for some p ∈ J:

m = p exp(bp) = sinh b + p cosh b = sinh b + i sinh a cosh b + r cosh a cosh b.

Let φ be the angle between the hyperbolas from r to p and m. This angle can be viewed, in the plane tangent to the counter-sphere at r, by projection:

tan ϕ = x/y = sinh b / (sinh a cosh b) = tanh b / sinh a,

lim_{b→∞} tan ϕ = 1 / sinh a,

as in the expression of the angle of parallelism in the hyperbolic plane H². The parameter θ determining the meridian varies over the S¹. Thus the counter-sphere appears as the manifold S¹ × H².
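A quick numerical sketch (variable names mine) confirms that the parametrization above really lands on the counter-sphere, using mm* = x² + y² − z² for m = x + yi + zr, and that x/y agrees with tanh b / sinh a:

```python
import math

for a in (0.5, 1.3, -0.8):
    for b in (0.0, -0.4, 2.0):
        # components of m = sinh b + i sinh a cosh b + r cosh a cosh b
        x = math.sinh(b)
        y = math.sinh(a) * math.cosh(b)
        z = math.cosh(a) * math.cosh(b)
        # mm* = x^2 + y^2 - z^2 collapses to sinh^2 b - cosh^2 b = -1
        assert abs(x*x + y*y - z*z + 1.0) < 1e-9
        # tan(phi) = x/y = tanh(b)/sinh(a)
        assert abs(x / y - math.tanh(b) / math.sinh(a)) < 1e-9
print("counter-sphere parametrization verified")
```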

47.5 Application to kinematics

By using the foundations given above, one can show that the mapping

q ↦ u⁻¹qu

is an ordinary or hyperbolic rotation according as

u = exp(av), v ∈ I  or  u = exp(ap), p ∈ J.

The collection of these mappings bears some relation to the Lorentz group since it is also composed of ordinary and hyperbolic rotations. Among the peculiarities of this approach to relativistic kinematics is the anisotropic profile, say as compared to hyperbolic quaternions.

Reluctance to use coquaternions for kinematic models may stem from the (2, 2) signature when spacetime is presumed to have signature (1, 3) or (3, 1). Nevertheless, a transparently relativistic kinematics appears when a point of the counter-sphere is used to represent an inertial frame of reference. Indeed, if tt* = −1, then there is a p = i sinh(a) + r cosh(a) ∈ J such that t ∈ Dp, and a b ∈ R such that t = p exp(bp). Then if u = exp(bp), v = i cosh(a) + r sinh(a), and s = ir, the set {t, u, v, s} is a pan-orthogonal basis stemming from t, and the orthogonalities persist through applications of the ordinary or hyperbolic rotations.

47.6 Historical notes

The coquaternions were initially introduced (under that name)[3] in 1849 by James Cockle in the London–Edinburgh–Dublin Philosophical Magazine. The introductory papers by Cockle were recalled in the 1904 Bibliography[4] of the Quaternion Society. Alexander Macfarlane called the structure of coquaternion vectors an exspherical system when he was speaking at the International Congress of Mathematicians in Paris in 1900.[5] The unit sphere was considered in 1910 by Hans Beck.[6] For example, the dihedral group appears on page 419. The coquaternion structure has also been mentioned briefly in the Annals of Mathematics.[7][8]

47.7 Synonyms

• Para-quaternions (Ivanov and Zamkovoy 2005, Mohaupt 2006). Manifolds with para-quaternionic structures are studied in differential geometry and string theory. In the para-quaternionic literature k is replaced with −k.

• Musean hyperbolic quaternions

• Exspherical system (Macfarlane 1900)

• Split-quaternions (Rosenfeld 1988)[9]

• Antiquaternions (Rosenfeld 1988)

• Pseudoquaternions (Yaglom 1968,[10] Rosenfeld 1988)

47.8 See also

• Split-biquaternions

• Split-octonions

• Hypercomplex numbers

47.9 Notes

[1] Karzel, Helmut & Günter Kist (1985) “Kinematic Algebras and their Geometries”, in Rings and Geometry, R. Kaya, P. Plaumann, and K. Strambach editors, pp 437–509, esp. 449–450, D. Reidel ISBN 90-277-2112-2

[2] Carmody, Kevin (1997) “Circular and hyperbolic quaternions, octonions, sedenions”, Applied Mathematics and Computation 84(1): 27–47, esp. 38

[3] James Cockle (1849), On Systems of Algebra involving more than one Imaginary, Philosophical Magazine (series 3) 35: 434–435, link from Biodiversity Heritage Library

[4] A. Macfarlane (1904) Bibliography of Quaternions and Allied Systems of Mathematics, from Cornell University Historical Math Monographs, entries for James Cockle, pp. 17–18

[5] Alexander Macfarlane (1900) Application of space analysis to curvilinear coordinates, Proceedings of the International Congress of Mathematicians, Paris, page 306, from International Mathematical Union

[6] Hans Beck (1910) Ein Seitenstück zur Möbius’schen Geometrie der Kreisverwandtschaften, Transactions of the American Mathematical Society 11

[7] A. A. Albert (1942), “Quadratic Forms permitting Composition”, Annals of Mathematics 43: 161–177

[8] Valentine Bargmann (1947), “Irreducible unitary representations of the Lorentz Group”, Annals of Mathematics 48: 568– 640

[9] Rosenfeld, B.A. (1988) A History of Non-Euclidean Geometry, page 389, Springer-Verlag ISBN 0-387-96458-4

[10] Isaak Yaglom (1968) Complex Numbers in Geometry, page 24, Academic Press

47.10 Further reading

• Brody, Dorje C., and Eva-Maria Graefe. “On complexified mechanics and coquaternions.” Journal of Physics A: Mathematical and Theoretical 44.7 (2011): 072001. doi:10.1088/1751-8113/44/7/072001

• Ivanov, Stefan; Zamkovoy, Simeon (2005), “Parahermitian and paraquaternionic manifolds”, Differential Geometry and its Applications 23, pp. 205–234, math.DG/0310415 MR 2006d:53025.

• Mohaupt, Thomas (2006), “New developments in special geometry”, hep-th/0602171.

• Özdemir, M. (2009) “The roots of a split quaternion”, Applied Mathematics Letters 22: 258–63.

• Özdemir, M. & A.A. Ergin (2006) “Rotations with timelike quaternions in Minkowski 3-space”, Journal of Geometry and Physics 56: 322–36.

• Pogoruy, Anatoliy & Ramon M Rodrigues-Dagnino (2008) Some algebraic and analytical properties of coquaternion algebra, Advances in Applied Clifford Algebras.

Chapter 48

Statistical model

A statistical model embodies a set of assumptions concerning the generation of the observed data, and similar data from a larger population. A model represents, often in considerably idealized form, the data-generating process. The model assumptions describe a set of probability distributions, some of which are assumed to adequately approximate the distribution from which a particular data set is sampled. A model is usually specified by mathematical equations that relate one or more random variables and possibly other non-random variables. As such, “a model is a formal representation of a theory” (Herman Adèr quoting Kenneth Bollen).[1]

All statistical hypothesis tests and all statistical estimators are derived from statistical models. More generally, statistical models are part of the foundation of statistical inference.

48.1 Formal definition

In mathematical terms, a statistical model is usually thought of as a pair ( S, P ), where S is the set of possible observations, i.e. the sample space, and P is a set of probability distributions on S .[2] The intuition behind this definition is as follows. It is assumed that there is a “true” probability distribution that generates the observed data. We choose P to represent a set (of distributions) which contains a distribution that adequately approximates the true distribution. Note that we do not require that P contains the true distribution, and in practice that is rarely the case. Indeed, as Burnham & Anderson state, “A model is a simplification or approximation of reality and hence will not reflect all of reality”[3]—whence the saying "all models are wrong".

The set P is almost always parameterized: P = {Pθ : θ ∈ Θ}. The set Θ defines the parameters of the model. A parameterization is generally required to have distinct parameter values give rise to distinct distributions, i.e. to meet this condition: Pθ₁ = Pθ₂ ⇒ θ₁ = θ₂.[2] A parameterization that meets the condition is said to be identifiable.

48.2 An example

Height and age are each probabilistically distributed over humans. They are stochastically related: when we know that a person is of age 10, this influences the chance of the person being 6 feet tall. We could formalize that relationship in a linear regression model with the following form: heighti = b0 + b1agei + εi, where b0 is the intercept, b1 is a parameter that age is multiplied by to get a prediction of height, ε is the error term, and i identifies the person. This implies that height is predicted by age, with some error.

An admissible model must be consistent with all the data points. Thus, the straight line (heighti = b0 + b1agei) is not a model of the data. The line cannot be a model, unless it exactly fits all the data points—i.e. all the data points lie perfectly on a straight line. The error term, εi, must be included in the model, so that the model is consistent with all the data points. To do statistical inference, we would first need to assume some probability distributions for the εi. For instance, we might assume that the εi distributions are i.i.d. Gaussian, with zero mean. In this instance, the model would have 3 parameters: b0, b1, and the variance of the Gaussian distribution.
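As a sketch of the model just described (not from the article; the parameter values, units, and sample size below are made up for illustration), one can simulate data from the linear-Gaussian model and recover its three parameters with closed-form least squares:

```python
import math
import random

random.seed(0)
b0_true, b1_true, sigma_true = 90.0, 6.0, 5.0   # hypothetical intercept, slope, error sd

# Simulate (age, height) pairs from height_i = b0 + b1*age_i + eps_i, eps_i ~ N(0, sigma^2)
ages = [random.uniform(2, 15) for _ in range(500)]
heights = [b0_true + b1_true * a + random.gauss(0, sigma_true) for a in ages]

n = len(ages)
mean_a = sum(ages) / n
mean_h = sum(heights) / n
# Ordinary least squares for simple linear regression (closed form):
b1_hat = (sum((a - mean_a) * (h - mean_h) for a, h in zip(ages, heights))
          / sum((a - mean_a) ** 2 for a in ages))
b0_hat = mean_h - b1_hat * mean_a
residuals = [h - (b0_hat + b1_hat * a) for a, h in zip(ages, heights)]
sigma2_hat = sum(e * e for e in residuals) / n   # estimate of the error variance
```

The three fitted quantities correspond exactly to the three parameters of the model: b0, b1, and the variance of the Gaussian error distribution.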


We can formally specify the model in the form (S, P) as follows. The sample space, S, of our model comprises the set of all possible pairs (age, height). Each possible value of θ = (b0, b1, σ²) determines a distribution on S; denote that distribution by Pθ. If Θ is the set of all possible values of θ, then P = {Pθ : θ ∈ Θ}. (The parameterization is identifiable, and this is easy to check.) In this example, the model is determined by (1) specifying S and (2) making some assumptions relevant to P. There are two assumptions: that height can be approximated by a linear function of age; that errors in the approximation are distributed as i.i.d. Gaussian. The assumptions are sufficient to specify P—as they are required to do.

48.3 General remarks

A statistical model is a special type of mathematical model. What distinguishes a statistical model from other mathematical models is that a statistical model is non-deterministic. Thus, in a statistical model specified via mathematical equations, some of the variables do not have specific values, but instead have probability distributions; i.e. some of the variables are stochastic. In the example above, ε is a stochastic variable; without that variable, the model would be deterministic.

Statistical models are often used even when the physical process being modeled is deterministic. For instance, coin tossing is, in principle, a deterministic process; yet it is commonly modeled as stochastic (via a Bernoulli process).

There are three purposes for a statistical model, according to Konishi & Kitagawa:[4]

• Predictions

• Extraction of information

• Description of stochastic structures

48.4 Dimension of a model

Suppose that we have a statistical model (S, P) with P = {Pθ : θ ∈ Θ}. The model is said to be parametric if Θ has a finite dimension. In notation, we write that Θ ⊆ R^d where d is a positive integer (R denotes the real numbers; other sets can be used, in principle). Here, d is called the dimension of the model. As an example, if we assume that data arise from a univariate Gaussian distribution, then we are assuming that

P = { Pµ,σ(x) ≡ (1/(√(2π) σ)) exp(−(x − µ)²/(2σ²)) : µ ∈ R, σ > 0 }.

In this example, the dimension, d, equals 2.

As another example, suppose that the data consists of points (x, y) that we assume are distributed according to a straight line with i.i.d. Gaussian residuals (with zero mean). Then the dimension of the statistical model is 3: the intercept of the line, the slope of the line, and the variance of the distribution of the residuals. (Note that in geometry, a straight line has dimension 1.)

A statistical model is nonparametric if the parameter set Θ is infinite dimensional. A statistical model is semiparametric if it has both finite-dimensional and infinite-dimensional parameters. Formally, if d is the dimension of Θ and n is the number of samples, both semiparametric and nonparametric models have d → ∞ as n → ∞. If d/n → 0 as n → ∞, then the model is semiparametric; otherwise, the model is nonparametric.

Parametric models are by far the most commonly used statistical models. Regarding semiparametric and nonparametric models, Sir David Cox has said, “These typically involve fewer assumptions of structure and distributional form but usually contain strong assumptions about independencies”.[5]

48.5 Nested models

Two statistical models are nested if the first model can be transformed into the second model by imposing constraints on the parameters of the first model. For example, the set of all Gaussian distributions has, nested within it, the set of zero-mean Gaussian distributions: we constrain the mean in the set of all Gaussian distributions to get the zero-mean distributions. In that example, the first model has a higher dimension than the second model (the zero-mean model has dimension 1). Such is usually, but not always, the case. As a different example, the set of positive-mean Gaussian distributions, which has dimension 2, is nested within the set of all Gaussian distributions.

48.6 Comparing models

Main article: Model selection

It is assumed that there is a “true” probability distribution that generates the observed data. The main goal of model selection is to make statements about which elements of P are most likely to adequately approximate the true distribution.

Models can be compared to each other. This can either be done when we have done an exploratory data analysis or a confirmatory data analysis. In an exploratory analysis, we formulate all models we can think of, and see which describes our data best. In a confirmatory analysis we check which of the models that we have described before the data was collected best fits the data, or test if our only model fits the data.

Common criteria for comparing models include R², Bayes factor, and the likelihood-ratio test together with its generalization, relative likelihood.

Konishi & Kitagawa state: “The majority of the problems in statistical inference can be considered to be problems related to statistical modeling. They are typically formulated as comparisons of several statistical models.”[6] Relatedly, Sir David Cox has said, “How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis”.[7]

48.7 See also

• Deterministic system

• Econometric model

• Graphical model

• Identifiability

• Regression analysis

• Scientific modelling

• Statistical inference

• Statistical theory

• Stochastic process

• System identification

48.8 Notes

[1] Adèr 2008

[2] McCullagh 2002

[3] Burnham & Anderson 2002, §1.2.5

[4] Konishi & Kitagawa 2008, §1.1

[5] Cox 2006, p. 2

[6] Konishi & Kitagawa 2008, p.75

[7] Cox 2006, p.197

48.9 References

• Adèr, H.J. (2008), “Modelling”, in Adèr, H.J.; Mellenbergh, G.J., Advising on Research Methods: a consultant’s companion, Huizen, The Netherlands: Johannes van Kessel Publishing, pp. 271–304.

• Burnham, K. P.; Anderson, D. R. (2002), Model Selection and Multimodel Inference (2nd ed.), Springer-Verlag, ISBN 0-387-95364-7.

• Cox, D.R. (2006), Principles of Statistical Inference, Cambridge University Press.

• Konishi, S.; Kitagawa, G. (2008), Information Criteria and Statistical Modeling, Springer.

• McCullagh, P. (2002), “What is a statistical model?”, Annals of Statistics 30: 1225–1310.

48.10 Further reading

• Davison A.C. (2008), Statistical Models, Cambridge University Press.

• Freedman D.A. (2009), Statistical Models, Cambridge University Press.

• Helland I.S. (2010), Steps Towards a Unified Basis for Scientific Models and Methods, World Scientific.

• Kroese D.P., Chan J.C.C. (2014), Statistical Modeling and Computation, Springer.

• Stapleton J.H. (2007), Models for Probability and Statistical Inference, Wiley-Interscience.

Chapter 49

Topological graph theory

This article is about the study of graph embeddings. For graphs in the plane with crossings, see topological graph.

In mathematics, topological graph theory is a branch of graph theory. It studies the embedding of graphs in surfaces, spatial embeddings of graphs, and graphs as topological spaces.[1] It also studies immersions of graphs. Embedding a graph in a surface means that we want to draw the graph on a surface, a sphere for example, without two edges intersecting. A basic embedding problem often presented as a mathematical puzzle is the three-cottage problem. More important applications can be found in printing electronic circuits, where the aim is to print (embed) a circuit (the graph) on a circuit board (the surface) without two connections crossing each other and resulting in a short circuit.

49.1 Graphs as topological spaces

An undirected graph can be viewed as an abstract simplicial complex C with a single-element set per vertex and a two-element set per edge.[2] The geometric realization |C| of the complex consists of a copy of the unit interval [0,1] per edge, with the endpoints of these intervals glued together at vertices. In this view, embeddings of graphs into a surface or as subdivisions of other graphs are both instances of topological embedding, homeomorphism of graphs is just the specialization of topological homeomorphism, the notion of a connected graph coincides with topological connectedness, and a connected graph is a tree if and only if its fundamental group is trivial. Other simplicial complexes associated with graphs include the Whitney complex or clique complex, with a set per clique of the graph, and the matching complex, with a set per matching of the graph (equivalently, the clique complex of the complement of the line graph). The matching complex of a complete bipartite graph is called a chessboard complex, as it can be also described as the complex of sets of nonattacking rooks on a chessboard.[3]

49.2 Example studies

John Hopcroft and Robert Tarjan[4] derived a means of testing the planarity of a graph in time linear in the number of edges. Their algorithm does this by constructing a graph embedding which they term a “palm tree”. Efficient planarity testing is fundamental to graph drawing.

Fan Chung et al.[5] studied the problem of embedding a graph into a book with the graph’s vertices in a line along the spine of the book. Its edges are drawn on separate pages in such a way that edges residing on the same page do not cross. This problem abstracts layout problems arising in the routing of multilayer printed circuit boards.

Graph embeddings are also used to prove structural results about graphs, via graph minor theory and the graph structure theorem.
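The Hopcroft–Tarjan linear-time test is intricate, but a cheap necessary condition follows from Euler's formula: a simple planar graph on n ≥ 3 vertices has at most 3n − 6 edges, and at most 2n − 4 edges if it is triangle-free. The sketch below (names are illustrative; it can reject K5 and K3,3 but cannot certify that a graph is planar) checks only these bounds:

```python
from itertools import combinations

def may_be_planar(n, edges):
    """Necessary (not sufficient) planarity conditions from Euler's formula:
    a simple planar graph with n >= 3 vertices has at most 3n - 6 edges,
    and at most 2n - 4 edges if it contains no triangle."""
    m = len(edges)
    if n < 3:
        return True
    if m > 3 * n - 6:
        return False
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # A triangle exists iff the endpoints of some edge share a neighbor.
    triangle_free = not any(adj[u] & adj[v] for u, v in edges)
    if triangle_free and m > 2 * n - 4:
        return False
    return True

k5 = list(combinations(range(5), 2))                   # K5: 10 edges > 3*5 - 6
k33 = [(a, b) for a in range(3) for b in range(3, 6)]  # K3,3: 9 edges > 2*6 - 4
```

Both forbidden graphs of Kuratowski–Wagner type are rejected by the edge-count bounds alone, which is why they appear so often as the canonical non-planar examples.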


49.3 See also

• Crossing number (graph theory)

• Genus

• Planar graph

• Topological combinatorics

• Voltage graph

49.4 Notes

[1] J.L. Gross and T.W. Tucker, Topological graph theory, Wiley Interscience, 1987

[2] Graph topology, from PlanetMath.

[3] Shareshian, John; Wachs, Michelle L. (2004). “Torsion in the matching complex and chessboard complex”. arXiv:math.CO/0409054.

[4] Hopcroft, John; Tarjan, Robert E. (1974). “Efficient Planarity Testing”. Journal of the ACM 21 (4): 549–568. doi:10.1145/321850.321852.

[5] Chung, F. R. K.; Leighton, F. T.; Rosenberg, A. L. (1987). “Embedding Graphs in Books: A Layout Problem with Applications to VLSI Design”. SIAM Journal on Algebraic and Discrete Methods 8 (1).

Chapter 50

Triangle graph

In the mathematical field of graph theory, the triangle graph is a planar undirected graph with 3 vertices and 3 edges, in the form of a triangle.[1]

The triangle graph is also known as the cycle graph C3 and the complete graph K3 .

50.1 Properties

The triangle graph has chromatic number 3, chromatic index 3, radius 1, diameter 1 and girth 3. It is also a 2-vertex-connected graph and a 2-edge-connected graph.

Its chromatic polynomial is x(x − 1)(x − 2).
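The chromatic polynomial can be confirmed by brute force: for each small k, count the proper k-colorings of the triangle and compare with k(k − 1)(k − 2). A short sketch (not from the article; names are illustrative):

```python
from itertools import product

def proper_colorings(n, edges, k):
    """Count proper k-colorings of an n-vertex graph by brute force:
    enumerate all k^n color assignments, keep those with no monochromatic edge."""
    return sum(
        all(c[u] != c[v] for u, v in edges)
        for c in product(range(k), repeat=n)
    )

triangle = [(0, 1), (1, 2), (0, 2)]
# chromatic polynomial of K3: P(x) = x(x - 1)(x - 2)
for k in range(6):
    assert proper_colorings(3, triangle, k) == k * (k - 1) * (k - 2)
```

In particular P(3) = 6: with three colors, every assignment of distinct colors to the three vertices is proper, and there are 3! of them.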

50.2 See also

• Triangle-free graph

50.3 References

[1] Weisstein, Eric W., “Triangle Graph”, MathWorld.

Chapter 51

Triangle-free graph

In the mathematical area of graph theory, a triangle-free graph is an undirected graph in which no three vertices form a triangle of edges. Triangle-free graphs may be equivalently defined as graphs with clique number ≤ 2, graphs with girth ≥ 4, graphs with no induced 3-cycle, or locally independent graphs. By Turán’s theorem, the n-vertex triangle-free graph with the maximum number of edges is a complete bipartite graph in which the numbers of vertices on each side of the bipartition are as equal as possible.
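The extremal statement above can be verified exhaustively for small n. The sketch below (illustrative, not from the article) enumerates all 2^10 graphs on 5 vertices and confirms that the triangle-free maximum is ⌊5²/4⌋ = 6 edges, attained by the complete bipartite graph K2,3:

```python
from itertools import combinations

N = 5
pairs = list(combinations(range(N), 2))   # the 10 possible edges on 5 vertices

def has_triangle(edge_set):
    # Edges are stored as sorted pairs, so (a, b), (b, c), (a, c) with a < b < c
    # covers every potential triangle exactly once.
    return any(
        (a, b) in edge_set and (b, c) in edge_set and (a, c) in edge_set
        for a, b, c in combinations(range(N), 3)
    )

best = 0
for mask in range(1 << len(pairs)):       # enumerate all graphs on 5 labeled vertices
    edge_set = {p for i, p in enumerate(pairs) if mask >> i & 1}
    if not has_triangle(edge_set):
        best = max(best, len(edge_set))
# Mantel/Turán bound for n = 5: floor(25/4) = 6 edges, attained by K_{2,3}
```

The same exhaustive check is feasible up to n = 7 or so; beyond that one relies on the theorem itself.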

51.1 Triangle finding problem

The triangle finding problem is the problem of determining whether a graph is triangle-free or not. When the graph does contain a triangle, algorithms are often required to output three vertices which form a triangle in the graph.

It is possible to test whether a graph with m edges is triangle-free in time O(m^1.41).[1] Another approach is to find the trace of A³, where A is the adjacency matrix of the graph. The trace is zero if and only if the graph is triangle-free. For dense graphs, it is more efficient to use this simple algorithm which relies on matrix multiplication, since it gets the time complexity down to O(n^2.373), where n is the number of vertices.

As Imrich, Klavžar & Mulder (1999) show, triangle-free graph recognition is equivalent in complexity to median graph recognition; however, the current best algorithms for median graph recognition use triangle detection as a subroutine rather than vice versa.

The decision tree complexity or query complexity of the problem, where the queries are to an oracle which stores the adjacency matrix of a graph, is Θ(n²). However, for quantum algorithms, the best known lower bound is Ω(n), but the best known algorithm is O(n^(35/27)).[2]
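The trace-based test can be sketched in a few lines (illustrative names; this uses naive cubic-time matrix products rather than fast matrix multiplication): each diagonal entry of A³ counts closed walks of length 3 at that vertex, and every triangle contributes 6 such walks in total.

```python
def count_triangles(adj):
    """Number of triangles = trace(A^3) / 6 for a simple undirected graph
    given by its adjacency matrix A (each triangle yields 6 closed 3-walks:
    3 starting vertices times 2 orientations)."""
    n = len(adj)

    def matmul(a, b):
        return [[sum(a[i][t] * b[t][j] for t in range(n)) for j in range(n)]
                for i in range(n)]

    a3 = matmul(matmul(adj, adj), adj)
    trace = sum(a3[i][i] for i in range(n))
    return trace // 6

k4 = [[0 if i == j else 1 for j in range(4)] for i in range(4)]              # K4: 4 triangles
c4 = [[1 if abs(i - j) in (1, 3) else 0 for j in range(4)] for i in range(4)] # 4-cycle: none
```

A graph is triangle-free exactly when this count (equivalently, the trace itself) is zero.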

51.2 Independence number and Ramsey theory

An independent set of √n vertices in an n-vertex triangle-free graph is easy to find: either there is a vertex with greater than √n neighbors (in which case those neighbors are an independent set) or all vertices have fewer than √n neighbors (in which case any maximal independent set must have at least √n vertices).[3] This bound can be tightened slightly: in every triangle-free graph there exists an independent set of Ω(√(n log n)) vertices, and in some triangle-free graphs every independent set has O(√(n log n)) vertices.[4]

One way to generate triangle-free graphs in which all independent sets are small is the triangle-free process,[5] in which one generates a maximal triangle-free graph by repeatedly adding randomly chosen edges that do not complete a triangle. With high probability, this process produces a graph with independence number O(√(n log n)). It is also possible to find regular graphs with the same properties.[6]

These results may also be interpreted as giving asymptotic bounds on the Ramsey numbers R(3, t) of the form Θ(t²/log t): if the edges of a complete graph on Ω(t²/log t) vertices are colored red and blue, then either the red graph contains a triangle or, if it is triangle-free, then it must have an independent set of size t corresponding to a clique of the same size in the blue graph.
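The √n argument translates directly into a procedure. The sketch below (not from the article; names are illustrative, and the integer square root is used as the degree threshold) returns an independent set in a triangle-free graph given as an adjacency dictionary:

```python
import math

def independent_set(adj):
    """Sketch of the sqrt(n) argument for triangle-free graphs: if some vertex
    has more than sqrt(n) neighbors, that neighborhood is an independent set
    (an edge between two neighbors would close a triangle); otherwise every
    degree is small and a greedy maximal independent set is already large."""
    n = len(adj)
    for v in adj:
        if len(adj[v]) > math.isqrt(n):
            return set(adj[v])          # neighborhood branch
    taken, blocked = set(), set()
    for v in adj:                       # greedy maximal independent set
        if v not in blocked:
            taken.add(v)
            blocked |= adj[v] | {v}
    return taken

# 5-cycle: triangle-free, every degree is 2 <= isqrt(5), so the greedy branch runs
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
ind_c5 = independent_set(c5)
# star K_{1,5}: the center has 5 > isqrt(6) neighbors, so its neighborhood is returned
star = {0: {1, 2, 3, 4, 5}, **{i: {0} for i in range(1, 6)}}
ind_star = independent_set(star)
```

The correctness of the neighborhood branch is exactly the triangle-free hypothesis; the greedy branch needs only the low-degree bound.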


51.3 Coloring triangle-free graphs

Much research about triangle-free graphs has focused on graph coloring. Every bipartite graph (that is, every 2-colorable graph) is triangle-free, and Grötzsch’s theorem states that every triangle-free planar graph may be 3-colored.[7] However, nonplanar triangle-free graphs may require many more than three colors.

Mycielski (1955) defined a construction, now called the Mycielskian, for forming a new triangle-free graph from another triangle-free graph. If a graph has chromatic number k, its Mycielskian has chromatic number k + 1, so this construction may be used to show that arbitrarily large numbers of colors may be needed to color nonplanar triangle-free graphs. In particular the Grötzsch graph, an 11-vertex graph formed by repeated application of Mycielski’s construction, is a triangle-free graph that cannot be colored with fewer than four colors, and is the smallest graph with this property.[8] Gimbel & Thomassen (2000) and Nilli (2000) showed that the number of colors needed to color any m-edge triangle-free graph is

O(m^(1/3)/(log m)^(2/3)),

and that there exist triangle-free graphs that have chromatic numbers proportional to this bound.

There have also been several results relating coloring to minimum degree in triangle-free graphs. Andrásfai, Erdős & Sós (1974) proved that any n-vertex triangle-free graph in which each vertex has more than 2n/5 neighbors must be bipartite. This is the best possible result of this type, as the 5-cycle requires three colors but has exactly 2n/5 neighbors per vertex. Motivated by this result, Erdős & Simonovits (1973) conjectured that any n-vertex triangle-free graph in which each vertex has at least n/3 neighbors can be colored with only three colors; however, Häggkvist (1981) disproved this conjecture by finding a counterexample in which each vertex of the Grötzsch graph is replaced by an independent set of a carefully chosen size. Jin (1995) showed that any n-vertex triangle-free graph in which each vertex has more than 10n/29 neighbors must be 3-colorable; this is the best possible result of this type, because Häggkvist’s graph requires four colors and has exactly 10n/29 neighbors per vertex. Finally, Brandt & Thomassé (2006) proved that any n-vertex triangle-free graph in which each vertex has more than n/3 neighbors must be 4-colorable. Additional results of this type are not possible, as Hajnal[9] found examples of triangle-free graphs with arbitrarily large chromatic number and minimum degree (1/3 − ε)n for any ε > 0.
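Mycielski's construction is easy to carry out explicitly. The sketch below (not from the article; names are illustrative) builds the Mycielskian of a graph given as an edge list; applied to a single edge K2 it yields the 5-cycle, and applied once more it would give the 11-vertex Grötzsch graph.

```python
def mycielskian(n, edges):
    """Mycielski's construction: from a graph G on vertices 0..n-1, build a
    graph on 2n + 1 vertices (originals u_i = i, shadows v_i = n + i, and an
    apex w = 2n) that is triangle-free whenever G is, with chromatic number
    one higher than G's."""
    new_edges = list(edges)                          # keep the original edges u_i u_j
    for i, j in edges:
        new_edges += [(n + i, j), (n + j, i)]        # each shadow sees its twin's neighbors
    new_edges += [(n + i, 2 * n) for i in range(n)]  # apex joined to every shadow
    return 2 * n + 1, new_edges

# Mycielskian of K2 (a single edge) is the 5-cycle C5
n2, e2 = mycielskian(2, [(0, 1)])
```

The shadow vertices copy adjacencies without copying edges among themselves, which is what blocks new triangles while forcing the extra color.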

51.4 See also

• Monochromatic triangle problem, the problem of partitioning the edges of a given graph into two triangle-free graphs

51.5 References

Notes

[1] Alon, Yuster & Zwick (1994).

[2] Lee, Magniez & Santha (2013), improving a previous algorithm by Belovs (2012).

[3] Boppana & Halldórsson (1992) p. 184, based on an idea from an earlier coloring approximation algorithm of Avi Wigderson.

[4] Kim (1995).

[5] Erdős, Suen & Winkler (1995); Bohman (2009).

[6] Alon, Ben-Shimon & Krivelevich (2010).

[7] Grötzsch (1959); Thomassen (1994).

[8] Chvátal (1974).

[9] see Erdős & Simonovits (1973).

Sources

• Alon, Noga; Ben-Shimon, Sonny; Krivelevich, Michael (2010), “A note on regular Ramsey graphs”, Journal of Graph Theory 64 (3): 244–249, arXiv:0812.2386, doi:10.1002/jgt.20453, MR 2674496.

• Alon, N.; Yuster, R.; Zwick, U. (1994), “Finding and counting given length cycles”, Proceedings of the 2nd European Symposium on Algorithms, Utrecht, The Netherlands, pp. 354–364.

• Andrásfai, B.; Erdős, P.; Sós, V. T. (1974), “On the connection between chromatic number, maximal clique and minimal degree of a graph”, Discrete Mathematics 8 (3): 205–218, doi:10.1016/0012-365X(74)90133-2.

• Belovs, Aleksandrs (2012), “Span programs for functions with constant-sized 1-certificates”, Proceedings of the Forty-fourth Annual ACM Symposium on Theory of Computing (STOC '12), New York, NY, USA: ACM, pp. 77–84, doi:10.1145/2213977.2213985, ISBN 978-1-4503-1245-5.

• Bohman, Tom (2009), “The triangle-free process”, Advances in Mathematics 221 (5): 1653–1677, arXiv:0806.4375, doi:10.1016/j.aim.2009.02.018, MR 2522430.

• Boppana, Ravi; Halldórsson, Magnús M. (1992), “Approximating maximum independent sets by excluding subgraphs”, BIT 32 (2): 180–196, doi:10.1007/BF01994876, MR 1172185.

• Brandt, S.; Thomassé, S. (2006), Dense triangle-free graphs are four-colorable: a solution to the Erdős-Simonovits problem.

• Chiba, N.; Nishizeki, T. (1985), “Arboricity and subgraph listing algorithms”, SIAM Journal on Computing 14 (1): 210–223, doi:10.1137/0214017.

• Chvátal, Vašek (1974), “The minimality of the Mycielski graph”, Graphs and combinatorics (Proc. Capital Conf., George Washington Univ., Washington, D.C., 1973), Lecture Notes in Mathematics 406, Springer-Verlag, pp. 243–246.

• Erdős, P.; Simonovits, M. (1973), “On a valence problem in extremal graph theory”, Discrete Mathematics 5 (4): 323–334, doi:10.1016/0012-365X(73)90126-X.

• Erdős, P.; Suen, S.; Winkler, P. (1995), “On the size of a random maximal graph”, Random Structures and Algorithms 6 (2–3): 309–318, doi:10.1002/rsa.3240060217.

• Gimbel, John; Thomassen, Carsten (2000), “Coloring triangle-free graphs with fixed size”, Discrete Mathematics 219 (1–3): 275–277, doi:10.1016/S0012-365X(00)00087-X.

• Grötzsch, H. (1959), “Zur Theorie der diskreten Gebilde, VII: Ein Dreifarbensatz für dreikreisfreie Netze auf der Kugel”, Wiss. Z. Martin-Luther-U., Halle-Wittenberg, Math.-Nat. Reihe 8: 109–120.

• Häggkvist, R. (1981), “Odd cycles of specified length in nonbipartite graphs”, Graph Theory (Cambridge, 1981), pp. 89–99.

• Imrich, Wilfried; Klavžar, Sandi; Mulder, Henry Martyn (1999), “Median graphs and triangle-free graphs”, SIAM Journal on Discrete Mathematics 12 (1): 111–118, doi:10.1137/S0895480197323494, MR 1666073.

• Itai, A.; Rodeh, M. (1978), “Finding a minimum circuit in a graph”, SIAM Journal on Computing 7 (4): 413– 423, doi:10.1137/0207033.

• Jin, G. (1995), “Triangle-free four-chromatic graphs”, Discrete Mathematics 145 (1–3): 151–170, doi:10.1016/0012-365X(94)00063-O.

• Kim, J. H. (1995), “The Ramsey number R(3, t) has order of magnitude t²/log t”, Random Structures and Algorithms 7 (3): 173–207, doi:10.1002/rsa.3240070302.

• Lee, Troy; Magniez, Frédéric; Santha, Miklos (2013), “Improved quantum query algorithms for triangle finding and associativity testing”, Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2013), New Orleans, Louisiana, pp. 1486–1502, ISBN 978-1-611972-51-1.

• Mycielski, J. (1955), “Sur le coloriage des graphes”, Colloq. Math. 3: 161–162.

• Nilli, A. (2000), “Triangle-free graphs with large chromatic numbers”, Discrete Mathematics 211 (1–3): 261–262, doi:10.1016/S0012-365X(99)00109-0.

• Shearer, J. B. (1983), “Note on the independence number of triangle-free graphs”, Discrete Mathematics 46 (1): 83–87, doi:10.1016/0012-365X(83)90273-X.

• Thomassen, C. (1994), “Grötzsch’s 3-color theorem”, Journal of Combinatorial Theory, Series B 62 (2): 268–279, doi:10.1006/jctb.1994.1069.

51.6 External links

• “triangle-free”, Information System on Graph Classes and their Inclusions

Chapter 52

Vertex (graph theory)

For other uses, see Vertex (disambiguation).

In mathematics, and more specifically in graph theory, a vertex (plural vertices) or node is the fundamental unit of which graphs are formed: an undirected graph consists of a set of vertices and a set of edges (unordered pairs of vertices), while a directed graph consists of a set of vertices and a set of arcs (ordered pairs of vertices). In a diagram of a graph, a vertex is usually represented by a circle with a label, and an edge is represented by a line or arrow extending from one vertex to another.

[Figure: a graph with 6 vertices and 7 edges, where the vertex numbered 6 on the far left is a leaf vertex or a pendant vertex.]

From the point of view of graph theory, vertices are treated as featureless and indivisible objects, although they may have additional structure depending on the application from which the graph arises; for instance, a semantic network is a graph in which the vertices represent concepts or classes of objects.

The two vertices forming an edge are said to be the endpoints of this edge, and the edge is said to be incident to the vertices. A vertex w is said to be adjacent to another vertex v if the graph contains an edge (v, w). The neighborhood of a vertex v is an induced subgraph of the graph, formed by all vertices adjacent to v.


52.1 Types of vertices

The degree of a vertex in a graph is the number of edges incident to it. An isolated vertex is a vertex with degree zero; that is, a vertex that is not an endpoint of any edge. A leaf vertex (also pendant vertex) is a vertex with degree one. In a directed graph, one can distinguish the outdegree (number of outgoing edges) from the indegree (number of incoming edges); a source vertex is a vertex with indegree zero, while a sink vertex is a vertex with outdegree zero.

A cut vertex is a vertex the removal of which would disconnect the remaining graph; a vertex separator is a collection of vertices the removal of which would disconnect the remaining graph into small pieces. A k-vertex-connected graph is a graph in which removing fewer than k vertices always leaves the remaining graph connected.

An independent set is a set of vertices no two of which are adjacent, and a vertex cover is a set of vertices that includes at least one endpoint of each edge in the graph. The vertex space of a graph is a vector space having a set of basis vectors corresponding with the graph’s vertices. A graph is vertex-transitive if it has symmetries that map any vertex to any other vertex.

In the context of graph enumeration and graph isomorphism it is important to distinguish between labeled vertices and unlabeled vertices. A labeled vertex is a vertex that is associated with extra information that enables it to be distinguished from other labeled vertices; two graphs can be considered isomorphic only if the correspondence between their vertices pairs up vertices with equal labels. An unlabeled vertex is one that can be substituted for any other vertex based only on its adjacencies in the graph and not based on any additional information.
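The degree-based classifications above can be computed mechanically. A small sketch (not from the article; names are illustrative) for a graph given as an edge list:

```python
def classify_vertices(n, edges, directed=False):
    """Classify vertices by degree: isolated (degree 0) and leaf/pendant
    (degree 1); for directed graphs, also sources (indegree 0) and
    sinks (outdegree 0)."""
    outdeg = [0] * n
    indeg = [0] * n
    for u, v in edges:
        outdeg[u] += 1
        indeg[v] += 1
        if not directed:          # undirected: an edge contributes to both endpoints
            outdeg[v] += 1
            indeg[u] += 1
    # Undirected degree is just the (symmetrized) count; directed degree is in + out.
    deg = outdeg if not directed else [i + o for i, o in zip(indeg, outdeg)]
    return {
        "isolated": [v for v in range(n) if deg[v] == 0],
        "leaves":   [v for v in range(n) if deg[v] == 1],
        "sources":  [v for v in range(n) if directed and indeg[v] == 0],
        "sinks":    [v for v in range(n) if directed and outdeg[v] == 0],
    }

# path 0-1-2 plus an isolated vertex 3: the leaves are the path's endpoints
info = classify_vertices(4, [(0, 1), (1, 2)])
```

Interpreting the same edge list as directed arcs instead yields vertex 0 as the unique source and vertex 2 as the unique sink.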
Vertices in graphs are analogous to, but not the same as, vertices of polyhedra: the skeleton of a polyhedron forms a graph, the vertices of which are the vertices of the polyhedron, but polyhedron vertices have additional structure (their geometric location) that is not assumed to be present in graph theory. The vertex figure of a vertex in a polyhedron is analogous to the neighborhood of a vertex in a graph.

52.2 See also

• Node (computer science)

• Graph theory

• Glossary of graph theory

52.3 References

• Gallo, Giorgio; Pallottino, Stefano (1988). “Shortest path algorithms”. Annals of Operations Research 13 (1): 1–79. doi:10.1007/BF02288320.

• Berge, Claude, Théorie des graphes et ses applications. Collection Universitaire de Mathématiques, II, Dunod, Paris 1958, viii+277 pp. (English edition, Wiley 1961; Methuen & Co, New York 1962; Russian, Moscow 1961; Spanish, Mexico 1962; Roumanian, Bucharest 1969; Chinese, Shanghai 1963; Second printing of the 1962 first English edition, Dover, New York 2001)

• Chartrand, Gary (1985). Introductory graph theory. New York: Dover. ISBN 0-486-24775-9.

• Biggs, Norman; Lloyd, E. H.; Wilson, Robin J. (1986). Graph theory, 1736–1936. Oxford [Oxfordshire]: Clarendon Press. ISBN 0-19-853916-9.

• Harary, Frank (1969). Graph theory. Reading, Mass.: Addison-Wesley Publishing. ISBN 0-201-41033-8.

• Harary, Frank; Palmer, Edgar M. (1973). Graphical enumeration. New York: Academic Press. ISBN 0-12-324245-2.

52.4 External links

• Weisstein, Eric W., “Graph Vertex”, MathWorld.

Chapter 53

Wagner’s theorem

K5 (left) and K3,3 (right) as minors of the nonplanar Petersen graph (shown as the small colored circles and solid black edges). The minors may be formed by deleting the red vertex and contracting edges that lie within a single yellow circle in the figure.

In graph theory, Wagner’s theorem is a mathematical forbidden graph characterization of planar graphs, named after Klaus Wagner, stating that a finite graph is planar if and only if its minors include neither K5 (the complete graph on five vertices) nor K3,3 (the utility graph, a complete bipartite graph on six vertices). This was one of the earliest results in the theory of graph minors and can be seen as a forerunner of the Robertson–Seymour theorem.

53.1 Definitions and theorem statement

A planar embedding of a given graph is a drawing of the graph in the Euclidean plane, with points for its vertices and curves for its edges, in such a way that the only intersections between pairs of edges are at a common endpoint of the two edges. A minor of a given graph is another graph formed by deleting vertices, deleting edges, and contracting edges. When an edge is contracted, its two endpoints are merged to form a single vertex. In some versions of graph minor theory the graph resulting from a contraction is simplified by removing self-loops and multiple adjacencies, while in other versions multigraphs are allowed; this variation makes no difference to Wagner’s theorem.

Wagner’s theorem states that every graph has either a planar embedding, or a minor of one of two types: the complete graph K5 or the complete bipartite graph K3,3. (It is also possible for a single graph to have both types of minor.) If a given graph is planar, so are all its minors: vertex and edge deletion obviously preserve planarity, and edge contraction can also be done in a planarity-preserving way, by leaving one of the two endpoints of the contracted edge in place and routing all of the edges that were incident to the other endpoint along the path of the contracted edge.

A minor-minimal non-planar graph is a graph that is not planar, but in which all proper minors (minors formed by at least one deletion or contraction) are planar. Another way of stating Wagner’s theorem is that there are only two minor-minimal non-planar graphs, K5 and K3,3.

A clique-sum of two planar graphs and the Wagner graph, forming a K5-free graph.

Another result also sometimes known as Wagner’s theorem states that a four-connected graph is planar if and only if it has no K5 minor. That is, by assuming a higher level of connectivity, the graph K3,3 can be made unnecessary in the characterization, leaving only a single forbidden minor, K5.
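The edge-contraction step in the definition of a minor can be sketched as follows for a simple graph stored as an adjacency dictionary; this is an illustrative implementation, not taken from the article, and the names `contract_edge` and `adj` are hypothetical. It follows the simplification convention mentioned above: self-loops and multiple adjacencies created by the contraction are discarded.

```python
def contract_edge(adj, u, v):
    """Contract edge {u, v} in a simple graph given as {vertex: set_of_neighbors}.
    The merged vertex keeps the label u; self-loops and parallel edges
    arising from the merge are removed."""
    merged = (adj[u] | adj[v]) - {u, v}  # union of neighborhoods, minus the loop
    new_adj = {}
    for w, nbrs in adj.items():
        if w in (u, v):
            continue
        nbrs = set(nbrs)
        if v in nbrs:                    # redirect edges that pointed at v
            nbrs.discard(v)
            nbrs.add(u)
        new_adj[w] = nbrs
    new_adj[u] = merged
    return new_adj

# Contracting one edge of a 4-cycle yields a triangle.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
t = contract_edge(c4, 0, 1)
```

Repeated vertex deletions, edge deletions, and calls to `contract_edge` produce exactly the minors of the starting graph.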

53.2 History and relation to Kuratowski’s theorem

Wagner published both theorems in 1937,[1] subsequent to the 1930 publication of Kuratowski’s theorem,[2] according to which a graph is planar if and only if it does not contain as a subgraph a subdivision of one of the same two forbidden graphs K5 and K3,3. In a sense, Kuratowski’s theorem is stronger than Wagner’s theorem: a subdivision can be converted into a minor of the same type by contracting all but one edge in each path formed by the subdivision process, but converting a minor into a subdivision of the same type is not always possible. However, in the case of the two graphs K5 and K3,3, it is straightforward to prove that a graph that has at least one of these two graphs as a minor also has at least one of them as a subdivision, so the two theorems are equivalent.[3]
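The non-planarity of the two forbidden graphs themselves can be certified without either theorem, via the edge-count consequences of Euler’s formula: a simple planar graph on v ≥ 3 vertices has at most 3v − 6 edges, and at most 2v − 4 edges if it is triangle-free. The sketch below (not from the article; all names are hypothetical) applies these bounds to K5 and K3,3.

```python
from itertools import combinations

def complete_graph_edges(n):
    """Edge list of the complete graph K_n."""
    return list(combinations(range(n), 2))

def complete_bipartite_edges(a, b):
    """Edge list of K_{a,b}: parts {0..a-1} and {a..a+b-1}."""
    return [(i, a + j) for i in range(a) for j in range(b)]

def violates_planar_bound(v, e, triangle_free=False):
    """Euler-formula necessary condition for planarity of simple graphs
    with v >= 3 vertices: e <= 3v - 6, or e <= 2v - 4 if triangle-free
    (bipartite graphs such as K_{3,3} are triangle-free)."""
    bound = (2 * v - 4) if triangle_free else (3 * v - 6)
    return e > bound

# K5 has 10 edges but 3*5 - 6 = 9; K3,3 has 9 edges but 2*6 - 4 = 8.
k5_nonplanar = violates_planar_bound(5, len(complete_graph_edges(5)))
k33_nonplanar = violates_planar_bound(6, len(complete_bipartite_edges(3, 3)),
                                      triangle_free=True)
```

The bound is only a necessary condition for planarity, so it can certify non-planarity (as here) but cannot establish planarity; that is exactly the gap Wagner’s and Kuratowski’s characterizations close.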

53.3 Implications

One consequence of the stronger version of Wagner’s theorem for four-connected graphs is to characterize the graphs that do not have a K5 minor. The theorem can be rephrased as stating that every such graph is either planar or it can be decomposed into simpler pieces. Using this idea, the K5-minor-free graphs may be characterized as the graphs that can be formed as combinations of planar graphs and the eight-vertex Wagner graph, glued together by clique-sum operations. For instance, K3,3 can be formed in this way as a clique-sum of three planar graphs, each of which is a copy of the tetrahedral graph K4.

Wagner’s theorem is an important precursor to the theory of graph minors, which culminated in the proofs of two deep and far-reaching results: the graph structure theorem (a generalization of Wagner’s clique-sum decomposition of K5-minor-free graphs)[4] and the Robertson–Seymour theorem (a generalization of the forbidden minor characterization of planar graphs, stating that every graph family closed under the operation of taking minors has a characterization by a finite number of forbidden minors).[5]

Analogues of Wagner’s theorem can also be extended to the theory of matroids: in particular, the same two graphs K5 and K3,3 (along with three other forbidden configurations) appear in a characterization of the graphic matroids by forbidden matroid minors.[6]
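The clique-sum gluing referenced above can be sketched in a few lines; this is an illustrative simplification (names hypothetical, not from the article), in which the two operand graphs are assumed to already use the same labels on their shared clique, and some edges of that clique may be deleted after the gluing.

```python
def clique_sum(edges1, edges2, clique, drop=()):
    """Clique-sum of two simple graphs given as edge lists.  Vertices of the
    shared clique are assumed to carry identical labels in both inputs;
    edges listed in `drop` (which must lie within the clique) are removed
    after the union, as the clique-sum operation permits."""
    norm = lambda e: tuple(sorted(e))
    for e in drop:
        assert set(e) <= set(clique), "only clique edges may be dropped"
    union = {norm(e) for e in edges1} | {norm(e) for e in edges2}
    return union - {norm(e) for e in drop}

# Glue two triangles along the shared edge (a 2-clique) {0, 1}, then drop
# that edge: the result is the 4-cycle 0-2-1-3-0.
cs = clique_sum([(0, 1), (0, 2), (1, 2)],
                [(0, 1), (0, 3), (1, 3)],
                clique=[0, 1], drop=[(0, 1)])
```

Iterating this operation over planar pieces and copies of the Wagner graph builds up exactly the K5-minor-free graphs described in the text.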

53.4 References

[1] Wagner, K. (1937), “Über eine Eigenschaft der ebenen Komplexe”, Math. Ann. 114: 570–590, doi:10.1007/BF01594196.

[2] Kuratowski, Kazimierz (1930), “Sur le problème des courbes gauches en topologie”, Fund. Math. (in French) 15: 271–283.

[3] Bondy, J. A.; Murty, U.S.R. (2008), Graph Theory, Graduate Texts in Mathematics 244, Springer, p. 269, ISBN 9781846289699.

[4] Lovász, László (2006), “Graph minor theory”, Bulletin of the American Mathematical Society 43 (1): 75–86, doi:10.1090/S0273-0979-05-01088-8, MR 2188176.

[5] Chartrand, Gary; Lesniak, Linda; Zhang, Ping (2010), Graphs & Digraphs (5th ed.), CRC Press, p. 307, ISBN 9781439826270.

[6] Seymour, P. D. (1980), “On Tutte’s characterization of graphic matroids”, Annals of Discrete Mathematics 8: 83–90, doi:10.1016/S0167-5060(08)70855-0, MR 597159.

53.5 Text and image sources, contributors, and licenses

53.5.1 Text

• 2 × 2 real matrices Source: https://en.wikipedia.org/wiki/2_%C3%97_2_real_matrices?oldid=660100486 Contributors: Gareth Owen, Toby Bartels, Michael Hardy, TakuyaMurata, Giftlite, BenFrantzDale, Rgdboer, Jheald, SmackBot, Incnis Mrsi, Lambiam, Jim.belk, STBot, It Is Me Here, Haseldon, DavidCBryant, Geometry guy, Anchor Link Bot, Sun Creator, Marc van Leeuwen, TutterMouse, Yobot, AnomieBOT, BG19bot, Mark L MacDonald, Loraof and Anonymous: 12 • Abelian group Source: https://en.wikipedia.org/wiki/Abelian_group?oldid=668578401 Contributors: AxelBoldt, Bryan Derksen, Zun- dark, Stevertigo, Patrick, Chas zzz brown, Michael Hardy, Tango, TakuyaMurata, Karada, Theresa knott, Jdforrester, Poor Yorick, Andres, Schneelocke, Revolver, Charles Matthews, Dcoetzee, Dysprosia, Jitse Niesen, Doradus, Fibonacci, SirJective, Pakaran, Robbot, Romanm, Gandalf61, DHN, Nikitadanilov, Tobias Bergemann, Giftlite, Lethe, Fropuff, Brona, Dratman, Jorend, Waltpohl, Leonard G., Chow- bok, Zzo38, Pmanderson, Guanabot, ArnoldReinhold, Gauge, Vipul, Shenme, Kundor, Geschichte, Jumbuck, Trhaynes, Diego Moya, Keenan Pepper, Drbreznjev, Oleg Alexandrov, Imaginatorium, Isnow, Lovro, Rjwilmsi, Amire80, Salix alba, Philosophygeek, R.e.b., Brighterorange, FlaBot, Mathbot, YurikBot, Michael Slone, Grubber, Bota47, GrinBot~enwiki, SmackBot, KocjoBot~enwiki, Stifle, Gilliam, Oli Filth, Silly rabbit, DHN-bot~enwiki, SundarBot, SashatoBot, Jonathans, Mets501, Noleander, Saxbryn, Rschwieb, Mikael V, Madmath789, Newone, Aeons, Vaughan Pratt, CRGreathouse, CmdrObot, WeggeBot, Gregbard, Cydebot, Namwob0, Thijs!bot, Konradek, Headbomb, Escarbot, Mathisreallycool, Ste4k, Magic in the night, Vanish2, , Rickterp, Dr Caligari, Andy- parkerson, Warut, Policron, Joe Campbell, Steel1943, VolkovBot, LokiClock, TXiKiBoT, Synthebot, Arcfrk, AlleborgoBot, DL144, JackSchmidt, ClueBot, Razimantv, SetaLyas, DragonBot, He7d3r, Bender2k14, Zabadooken, Drgruppenpest, Johnuniq, DumZiBoT, Darkicebot, Kwjbot, Addbot, Topology 
Expert, Legobot, Luckas-bot, Yobot, AnomieBOT, Ciphers, Citation bot, GB fan, Drilnoth, BotPuppet, RibotBOT, SassoBot, ViolaPlayer, Kaoru Itou, Lagelspeil, Recognizance, Negi(afk), Jlaire, I dream of horses, Tom.Reding, Quotient group, EmausBot, Chricho, Quondum, D.Lazard, Holmerr, RockMagnetist, Anita5192, Coleegu, Hebert Peró, CsDix, Green- Keeper17, GeoffreyT2000, KasparBot and Anonymous: 79 • Associative algebra Source: https://en.wikipedia.org/wiki/Associative_algebra?oldid=666217098 Contributors: AxelBoldt, Bryan Derk- sen, Zundark, Michael Hardy, TakuyaMurata, Poor Yorick, Dysprosia, Jitse Niesen, Rvollmert, Mattblack82, Giftlite, Lethe, Fropuff, DefLog~enwiki, Gauss, Klemen Kocjancic, Rich Farmbrough, Guanabot, Mazi, Paul August, Rgdboer, HasharBot~enwiki, Sligocki, Oleg Alexandrov, Linas, MFH, Graham87, BD2412, Bgohla, Eubot, John Baez, YurikBot, Crasshopper, Reyk, SmackBot, HalfShadow, Silly rabbit, Mets501, Rschwieb, Kilva, VoABot II, Bogey97, Sirtrebuchet, TXiKiBoT, Anonymous Dissident, Hesam7, PaulTanenbaum, Geometry guy, Arcfrk, AlleborgoBot, Cenarium, Marc van Leeuwen, Legobot, Yobot, AnomieBOT, Darij, ZéroBot, ClueBot NG, Mark viking, CsDix, Herbmuell, Daniel ranard and Anonymous: 36 • Bijection Source: https://en.wikipedia.org/wiki/Bijection?oldid=669090030 Contributors: Damian Yerrick, AxelBoldt, Tarquin, Jan Hid- ders, XJaM, Toby Bartels, Michael Hardy, Wshun, TakuyaMurata, GTBacchus, Karada, Александър, Glenn, Poor Yorick, Rob Hooft, Pizza Puzzle, Hashar, Hawthorn, Charles Matthews, Dcoetzee, Dysprosia, Hyacinth, David Shay, Ed g2s, Bevo, Robbot, Fredrik, Benwing, Bkell, Salty-horse, Tobias Bergemann, Giftlite, Jorge Stolfi, Alberto da Calvairate~enwiki, MarkSweep, Tsemii, Vivacissamamente, Gua- nabot, Guanabot2, Quistnix, Paul August, Ignignot, MisterSheik, Nickj, Kevin Lamoreau, Obradovic Goran, Pearle, HasharBot~enwiki, Dallashan~enwiki, ABCD, Schapel, Palica, MarSch, Salix alba, FlaBot, VKokielov, RexNL, Chobot, YurikBot, Michael Slone, 
Member, SmackBot, RDBury, Mmernex, Octahedron80, Mhym, Bwoodacre, Dreadstar, Davipo, Loadmaster, Mets501, Dreftymac, Hilverd, John- fuhrmann, Bill Malloy, Domitori, JRSpriggs, CmdrObot, Gregbard, Yaris678, Sam Staton, Panzer raccoon!, Kilva, AbcXyz, Escarbot, Salgueiro~enwiki, JAnDbot, David Eppstein, Martynas Patasius, Paulnwatts, Cpiral, GaborLajos, Policron, Diegovb, UnicornTapestry, Yomcat, Wykypydya, Bongoman666, SieBot, Paradoctor, Paolo.dL, Smaug123, MiNombreDeGuerra, JackSchmidt, I Spel Good~enwiki, Peiresc~enwiki, Classicalecon, Adrianwn, Biagioli, Watchduck, Hans Adler, Humanengr, Neuralwarp, Baudway, FactChecker1199, Kal- El-Bot, Subversive.sound, Tanhabot, Glane23, PV=nRT, Meisam, Legobot, Luckas-bot, Yobot, Ash4Math, Shvahabi, RibotBOT, The- helpfulbot, FrescoBot, MarcelB612, CodeBlock, MastiBot, FoxBot, Duoduoduo, Xnn, EmausBot, Hikaslap, TuHan-Bot, Cobaltcigs, Wikfr, Karthikndr, Anita5192, Wcherowi, Widr, Strike Eagle, PhnomPencil, Knwlgc, Dhoke sanket, Victor Yus, Cerabot~enwiki, JPaest- preornJeolhlna, Yardimsever, CasaNostra, KoriganStone, Whamsicore, JMP EAX, Sweepy and Anonymous: 89 • Category (mathematics) Source: https://en.wikipedia.org/wiki/Category_(mathematics)?oldid=667767264 Contributors: AxelBoldt, The Anome, TakuyaMurata, Charles Matthews, Jitse Niesen, COGDEN, Tobias Bergemann, Fadmmatt, Giftlite, Jao, Fropuff, Dratman, Chris Howard, Smimram, Paul August, Ben Standeven, Lycurgus, Bobo192, Diego Moya, Linas, OdedSchramm, BD2412, SixWinged- Seraph, Salix alba, RexNL, Cat-oh, Reetep, Michael Slone, Weppens, Netrapt, Mario23, Nbarth, Go for it!, Ninte, George100, Vaughan Pratt, ShoobyD, Sam Staton, Thijs!bot, Kilva, Konradek, JAnDbot, Albmont, Sullivan.t.j, David Eppstein, N4nojohn, Maurice Carbonaro, TheSeven, Policron, VolkovBot, Shinju, Anonymous Dissident, Geometry guy, Classicalecon, Wikijens, He7d3r, Cenarium, Beroal, D.M. 
from Ukraine, Addbot, Мыша, Topology Expert, Alberthilbert, Legobot, Yobot, MarioS, AnomieBOT, Citation bot, TheAMmollusc, De- passp, J04n, VladimirReshetnikov, FrescoBot, ComputScientist, I dream of horses, RedBot, Updatehelper, EmausBot, John of Reading, ZéroBot, Quondum, Aramiannerses, MaximalIdeal, MerlIwBot, Helpful Pixie Bot, Beaumont877, IkamusumeFan, Freeze S, Hamoudafg, Joseph120206, Forgetfulfunctor00 and Anonymous: 41 • Complete bipartite graph Source: https://en.wikipedia.org/wiki/Complete_bipartite_graph?oldid=650669309 Contributors: PierreAb- bat, Nonenmac, Booyabazooka, Yaronf, Dcoetzee, Adoarns, McKay, Jaredwf, MathMartin, Giftlite, Dbenbenn, Tom harrison, Jeff- BobFrank, Andris, Shahab, Paul August, Keenan Pepper, FlaBot, Quuxplusone, Chobot, Michael Slone, Evilbu, J. Finkelstein, Nong- Bot~enwiki, David Eppstein, Koko90, Robert Illes, VolkovBot, Jamelan, Robert Samal, Justin W Smith, Alexbot, Bender2k14, Addbot, Numbo3-bot, PV=nRT, Calle, Twri, DSisyphBot, Miym, Erik9bot, LucienBOT, X7q, RobinK and Anonymous: 33 • Complete graph Source: https://en.wikipedia.org/wiki/Complete_graph?oldid=633886725 Contributors: AxelBoldt, Andre Engels, XJaM, PierreAbbat, Imran, Tomo, Michael Hardy, Booyabazooka, Dominus, Gabbe, Ejrh, Dcoetzee, Dysprosia, McKay, AnonMoos, Math- Martin, DHN, HaeB, Giftlite, Dbenbenn, Lupin, Bact, Knutux, Tomruen, Kelson, Zaslav, Obradovic Goran, Mailer diablo, Wojciech- Swiderski~enwiki, Mindmatrix, Isnow, Salix alba, Fred Bradstadt, FlaBot, DVdm, Alpt, Gustavb, Bota47, Arthur Rubin, SmackBot, Maksim-e~enwiki, Incnis Mrsi, Mgreenbe, BiT, Cool3, Kostmo, 16@r, CRGreathouse, Coe McSweet, Thijs!bot, Gcm, Magioladitis, Nyttend, David Eppstein, GermanX, Anton Khorev, Koko90, Fridemar, Yecril, VolkovBot, Broadbot, Radagast3, Da Joe, KoenDelaere, Thehotelambush, ClueBot, Bender2k14, Addbot, EdPeggJr, Luckas-bot, TaBOT-zerem, Twri, DSisyphBot, Omnipaedista, Howard Mc- Cay, DoostdarWKP, MastiBot, Professor Fiendish, 4, R. J. 
Mathar, Wild Lion, El Roih, ChrisGualtieri, JYBot, நெடுஞ்செழியன் and Anonymous: 27 53.5. TEXT AND IMAGE SOURCES, CONTRIBUTORS, AND LICENSES 323

• Complete metric space Source: https://en.wikipedia.org/wiki/Complete_metric_space?oldid=633813459 Contributors: AxelBoldt, LC~enwiki, Zundark, Toby Bartels, Miguel~enwiki, Patrick, Michael Hardy, TakuyaMurata, Dori, Snoyes, Shoecream, Hashar, Jitse Niesen, An- drewKepert, SirJective, Aenar, Robbot, Romanm, MathMartin, Robinh, Tobias Bergemann, Tosha, Giftlite, Lethe, Vasile, Hellisp, Avihu, Noisy, Rich Farmbrough, Guanabot, Sligocki, Oleg Alexandrov, Isnow, Salix alba, Mike Segal, Eubot, Chobot, Gwaihir, Trovatore, DYLAN LENNON~enwiki, Weppens, That Guy, From That Show!, SmackBot, Incnis Mrsi, Zeycus, Bluebot, Kurykh, Silly rabbit, Tekhnofiend, Sct72, Danpovey, Juan Daniel López, Khattab01~enwiki, Thijs!bot, BehnamFarid, Dugwiki, Salgueiro~enwiki, JAnDbot, Magioladitis, Sullivan.t.j, Patstuart, VolkovBot, A4bot, Picojeff, Spinningspark, Stca74, BotMultichill, Messagetolove, Msrasnw, Cheese- fondue, Chaley67, Jaan Vajakas, Loewepeter, Oboylej10, SoSaysChappy, CarsracBot, Legobot, Yobot, Erel Segal, Bdmy, Druiffic, The- helpfulbot, Tcnuk, EmausBot, Chharvey, Zephyrus Tavvier, MerlIwBot, Helpful Pixie Bot, Beaumont877, Sboosali, Dexbot and Anony- mous: 50 • Conjugacy class Source: https://en.wikipedia.org/wiki/Conjugacy_class?oldid=666425206 Contributors: AxelBoldt, Derek Ross, Zun- dark, XJaM, Nd12345, Patrick, Chas zzz brown, Michael Hardy, Dominus, TakuyaMurata, Charles Matthews, Dysprosia, Schutz, Math- Martin, Giftlite, Fropuff, Dratman, Jiy, Pyrop, Pjacobi, Rgdboer, Oleg Alexandrov, Juan Marquez, VKokielov, Quuxplusone, YurikBot, Archelon, Tong~enwiki, Welsh, Tyomitch, SmackBot, Adam majewski, Bluebot, Nbarth, SundarBot, Jim.belk, Wesley1610, Pietro KC, AutomaticWriting, JRSpriggs, Ricardogpn, Leyo, Adavidb, LokiClock, TXiKiBoT, Katzmik, Sandeepr.murthy, JackSchmidt, Anchor Link Bot, Wpoely86, Mpd1989, He7d3r, Ncsinger, Marc van Leeuwen, Addbot, Yobot, Xqbot, Howard McCay, FrescoBot, SantoBugito, Nsalwen, ZéroBot, Quondum, Helpful Pixie Bot and Anonymous: 38 • 
Connected space Source: https://en.wikipedia.org/wiki/Connected_space?oldid=668856336 Contributors: AxelBoldt, Zundark, Toby Bartels, Miguel~enwiki, Youandme, Michael Hardy, Wshun, Dante Alighieri, Dominus, SGBailey, Dineshjk, TakuyaMurata, Poor Yorick, Zhaoway~enwiki, Dcoetzee, Dysprosia, Jitse Niesen, Robbot, MathMartin, Henrygb, Tobias Bergemann, Tosha, Giftlite, Graeme Bartlett, Fropuff, Abdull, Rich Farmbrough, Guanabot, Yuval madar, Luqui, Paul August, Brian0918, Vipul, Kevin Lamoreau, Schissel, Msh210, Eric Kvaalen, Caesura, Fiedorow, SteinbDJ, Oleg Alexandrov, Mindmatrix, Graham87, BD2412, Ligulem, Chobot, Algebraist, Yurik- Bot, Cheesus, Crasshopper, SmackBot, Adam majewski, GraemeMcRae, Bluebot, Silly rabbit, Nbarth, DHN-bot~enwiki, Acepectif, Dreadstar, Unco, Lambiam, Breno, Cbuckley, Olivierd, Johnfuhrmann, CBM, Sopoforic, Cydebot, Salgueiro~enwiki, Wayiran, JAnD- bot, Turgidson, Gazilion, Magioladitis, Jakob.scholbach, JohnBlackburne, Lynxmb, Hqb, Plclark, Jesin, Kmhkmh, SieBot, Tommyjs, Anchor Link Bot, Beastinwith, Curtdbz, Bernie12345, Vsage, Bozo19, PCHS-NJROTC, Marc van Leeuwen, Addbot, Roentgenium111, Topology Expert, Cuaxdon, LaaknorBot, Ozob, Zorrobot, TotientDragooned, Luckas-bot, Yobot, TaBOT-zerem, Nallimbot, Erel Segal, Ciphers, Citation bot, Druiffic, Point-set topologist, LQST, Devnullnor, Rb0ne, Citation bot 1, Adlerbot, Mathtyke, Tgoodwil, Fly by Night, Qniemiec, Maschen, CountMacula, Thatguy wright, Wcherowi, An onlooker, Helpful Pixie Bot, Celestialmm, J58660, Chris- Gualtieri, Mathmensch, YiFeiBot, Mgkrupa and Anonymous: 68 • Connectivity (graph theory) Source: https://en.wikipedia.org/wiki/Connectivity_(graph_theory)?oldid=668565397 Contributors: Booy- abazooka, Kku, Meekohi, Pablo Mayrgundter, Dcoetzee, Zero0000, McKay, Kuszi, MathMartin, Bkell, Giftlite, Topaz, Paul August, Za- slav, Andygainey, Bkkbrad, Jwanders, Rjwilmsi, YurikBot, Mike.aizatsky, Bota47, RobertBorgersen, Wzhao553, Ignacioerrico, Chris the speller, JLeander, 
SMasters, JAnDbot, Turgidson, David Eppstein, JoergenB, Mihirpmehta, Maurice Carbonaro, Maproom, Potatoswat- ter, VolkovBot, PaulTanenbaum, Radagast3, SieBot, Wantnot, The Thing That Should Not Be, Watchduck, Bender2k14, C. lorenz, Ad- dbot, Cooldev.iitkgp, Vrajesh1989, Alex.mccarthy, Netzwerkerin, PV=nRT, Matěj Grabovský, Blaubner, Luckas-bot, Yobot, Tohd8BohaithuGh1, AnomieBOT, Ziyuang, Materialscientist, Twri, ArthurBot, Omnipaedista, Sophus Bie, Robykiwi~enwiki, Schmittz, Citation bot 1, JeromeGaltier, Itfollows, StraMic1000, ZéroBot, Sonicyouth86, El Roih, Not a similar name, BG19bot, Brad7777, Justincheng12345-bot, Makecat-bot, Lovegoodscience and Anonymous: 43 • Continuous function Source: https://en.wikipedia.org/wiki/Continuous_function?oldid=669043751 Contributors: AxelBoldt, Zundark, Ap, Toby~enwiki, Edemaine, Youandme, Michael Hardy, Wshun, Isomorphic, Ellywa, Ams80, Iulianu, Stevenj, Glenn, BenKovitz, Pizza Puzzle, Schneelocke, Charles Matthews, Stan Lioubomoudrov, Dcoetzee, Dysprosia, Jitse Niesen, Zoicon5, Hyacinth, Sabbut, Joseaperez, Bloodshedder, Phil Boswell, Robbot, MathMartin, Yacht, Bkell, Intangir, Aetheling, Tobias Bergemann, Tosha, Giftlite, Markus Krötzsch, Mikez, Lupin, MSGJ, Jason Quinn, Nayuki, Mormegil, Felix Wiemann, Rich Farmbrough, TedPavlic, Guanabot, Har- riv, Paul August, Bender235, BenjBot, Kwamikagami, Army1987, NetBot, Robotje, Andywall, Delius, Monkey 32606, Mdd, Msh210, Dallashan~enwiki, Arthena, ABCD, Sligocki, Fiedorow, Ultramarine, Oleg Alexandrov, Linas, StradivariusTV, Pdn~enwiki, Smmur- phy, Jacj, Graham87, Jshadias, Rjwilmsi, Penumbra2000, FlaBot, Splarka, Ian Pitchford, Mathbot, Jrtayloriv, Fresheneesz, Kri, Chobot, Krishnavedala, YurikBot, Jimp, Fabartus, Musicpvm, NawlinWiki, Rick Norwood, Seb35, Twin Bird, Cheeser1, Klutzy, DomenicDeni- cola, Tlevine, Igiffin, Kompik, Arthur Rubin, Netrapt, JahJah, Sardanaphalus, SmackBot, RDBury, Thierry Caro, Incnis Mrsi, Melchoir, K-UNIT, Eskimbot, MalafayaBot, Nbarth, ZyMOS, 
Darth Panda, AdamSmithee, T00h00, Vina-iwbot~enwiki, Igrant, SashatoBot, Lam- biam, Jim.belk, EdC~enwiki, Dr.K., Noleander, Ashted, Newone, Domitori, Rhetth, Tawkerbot2, CRGreathouse, Sniffnoy, WLior, Greg- bard, MC10, Xantharius, Thijs!bot, LachlanA, Lee Larson, Salgueiro~enwiki, Rbb l181, JAnDbot, Thenub314, 01001, Transcendence, Jakob.scholbach, Cic, Tiagofassoni, Sullivan.t.j, David Eppstein, Error792, R'n'B, Gthb, Gombang, Policron, STBotD, HyDeckar, Dor- ganBot, Izno, Quiet Silent Bob, VolkovBot, Pasixxxx, Larryisgood, Mplourde~enwiki, Leoremy, A4bot, Hqb, Ctmt, Don4of4, Wolfrock, Sapphic, AlleborgoBot, Katzmik, GirasoleDE, SieBot, Stca74, Craigy90, Henry Delforn (old), Iameukarya, Thehotelambush, Svick, Anchor Link Bot, Rinconsoleao, Sbacle, UKoch, Ramzzhakim, Mspraveen, Timhoooey, PixelBot, Johnuniq, Crowsnest, QYV, Silvonen- Bot, Jujubot~enwiki, D.M. from Ukraine, Addbot, Fgnievinski, Leszek Jańczuk, Forich, Favonian, LinkFA-Bot, Kisbesbot, Lightbot, PV=nRT, Zorrobot, Yobot, AnomieBOT, Evilchicken1234, Materialscientist, ArthurBot, Xqbot, Bdmy, RibotBOT, Raulshc, Pillcrow, Grinevitski, Scibuff, Citation bot 1, Tkuvho, JumpDiscont, Reach Out to the Truth, Grumpfel, Faolin42, Vanjka-ivanych, Slawekb, Beth- nim, Tuxedo junction, Mobius Bot, Roman3, Quondum, D.Lazard, EWikist, Ulipaul, ChuispastonBot, ClueBot NG, Wcherowi, Frietjes, ,Mark viking ,ַאְבָרָהם ,Koertefa, BG19bot, Walrus068, HGK745, Deltasun, Felidofractals, IkamusumeFan, Freeze S, APerson, Dexbot CsDix, Ginsuloft, Whikie and Anonymous: 164 • Continuous graph Source: https://en.wikipedia.org/wiki/Continuous_graph?oldid=652122056 Contributors: Phil Boswell, Teorth, Jérôme, Rjwilmsi, Lukas Mach, David Eppstein, Philip Trueman, Radagast3, Marc van Leeuwen, Download, Materialscientist, FrescoBot, Go- ingBatty, Bethnim, ClueBot NG, Mistory, TheJJJunk, HybridMJG, PErdos and Anonymous: 5 • Degree (graph theory) Source: https://en.wikipedia.org/wiki/Degree_(graph_theory)?oldid=620178773 Contributors: 
Michael Hardy, Booyabazooka, Ixfd64, Zero0000, McKay, Altenmann, Gandalf61, MathMartin, Giftlite, Icairns, Mormegil, Sam Derbyshire, Bender235, Obradovic Goran, Cburnett, Oleg Alexandrov, Joriki, Stolee, Maxal, YurikBot, Michael Slone, Kimchi.sg, Nethgirb, Bota47, Masatran, SmackBot, Melchoir, Ixtli, BiT, Tawkerbot2, George100, Shikaga, Quintopia, Hermel, Hut 8.5, David Eppstein, Eloz002, Yecril, Jamelan, Radagast3, Da Joe, Kamyar1, ClueBot, Thingg, SoxBot III, DumZiBoT, Tangi-tamma, Addbot, Tide rolls, Luckas-bot, ThaddeusB, Materialscientist, Twri, ArthurBot, MathsPoetry, Podgy piglet, Status quo not acceptable, RedBot, MastiBot, EmausBot, Pan BMP, Bazuz, Red Stone Arsenal, Wiki13, Deltahedron, Asturius, Athanclark, El Charpi~enwiki, SofjaKovalevskaja and Anonymous: 39 324 CHAPTER 53. WAGNER’S THEOREM

• Duality (mathematics) Source: https://en.wikipedia.org/wiki/Duality_(mathematics)?oldid=662913524 Contributors: AxelBoldt, Ede- maine, Stevertigo, Michael Hardy, Dominus, TakuyaMurata, Charles Matthews, Rvollmert, Altenmann, Gandalf61, Wayland, Tobias Bergemann, Giftlite, BenFrantzDale, Peruvianllama, Almit39, PhotoBox, Chris Howard, C S, Polluks, Tsirel, Shreevatsa, Magister Mathe- maticae, BD2412, Zbxgscqf, MarSch, Juan Marquez, Tardis, Bihzad, Wavelength, Bhny, Trovatore, Robertbyrne, Kompik, Tribaal, That Guy, From That Show!, Melchoir, Bluebot, Colonies Chris, Sina2, Antonielly, 16@r, CRGreathouse, CmdrObot, Gregbard, Ntsimp, Headbomb, Jojan, RobHar, TimVickers, Dekimasu, Jakob.scholbach, JJ Harrison, David Eppstein, Tercer, Bhudson, Jonathanzung, Map- room, Daniel5Ko, JohnBlackburne, PaulTanenbaum, Johngcarlsson, Arcfrk, Katzmik, Fcady2007, JackSchmidt, IsleLaMotte, MenoBot, Pjhenley, Bender2k14, Farisori, Addbot, Roentgenium111, SpBot, AnomieBOT, Isheden, Ringspectrum, FrescoBot, Citation bot 1, Tku- vho, DrilBot, Set theorist, Dualitynature, Quondum, Anita5192, Jochen Burghardt, Limit-theorem, JMP EAX, Loraof and Anonymous: 28 • Equivalence relation Source: https://en.wikipedia.org/wiki/Equivalence_relation?oldid=668422844 Contributors: AxelBoldt, Zundark, Toby Bartels, PierreAbbat, Ryguasu, Stevertigo, Patrick, Michael Hardy, Wshun, Dominus, TakuyaMurata, William M. 
Connolley, AugPi, Silverfish, Ideyal, Revolver, Charles Matthews, Dysprosia, Hyacinth, Fibonacci, Phys, McKay, GPHemsley, Robbot, Fredrik, Romanm, COGDEN, Ashley Y, Bkell, Tobias Bergemann, Tosha, Giftlite, Arved, ShaunMacPherson, Lethe, Herbee, Fropuff, LiDaobing, AlexG, Paul August, Elwikipedista~enwiki, FirstPrinciples, Rgdboer, Spearhead, Smalljim, SpeedyGonsales, Obradovic Goran, Haham hanuka, Kierano, Msh210, Keenan Pepper, PAR, Jopxton, Oleg Alexandrov, Linas, MFH, BD2412, Salix alba, [email protected], Mark J, Epitome83, Chobot, Algebraist, Roboto de Ajvol, YurikBot, Wavelength, RussBot, Nils Grimsmo, BOT-Superzerocool, Googl, Larry- LACa, Arthur Rubin, Pred, Cjfsyntropy, Draicone, RonnieBrown, SmackBot, Adam majewski, Melchoir, Stifle, Srnec, Gilliam, Kurykh, Concerned cynic, Foxjwill, Vanished User 0001, Michael Ross, Jon Awbrey, Jim.belk, Feraudyh, CredoFromStart, Michael Kinyon, JHunterJ, Cikicdragan, Mets501, Rschwieb, Captain Wacky, JForget, CRGreathouse, CBM, 345Kai, Gregbard, Doctormatt, Pepijn- vdG, Tawkerbot4, Xantharius, Hanche, BetacommandBot, Thijs!bot, Egriffin, Rlupsa, WilliamH, Rnealh, Salgueiro~enwiki, JAnDbot, Thenub314, Magioladitis, VoABot II, JamesBWatson, MetsBot, Robin S, Philippe.beaudoin, Pekaje, Pomte, Interwal, Cpiral, GaborLa- jos, Policron, Taifunbrowser, Idioma-bot, Station1, Davehi1, Billinghurst, Geanixx, AlleborgoBot, SieBot, BotMultichill, This, that and the other, Henry Delforn (old), Aspects, OKBot, Bulkroosh, C1wang, Classicalecon, Wmli, Kclchan, Watchduck, Hans Adler, Qwfp, Cdegremo, Palnot, XLinkBot, Gerhardvalentin, Libcub, LaaknorBot, CarsracBot, Dyaa, Legobot, Luckas-bot, Yobot, Ht686rg90, Gyro Copter, Andy.melnikov, ArthurBot, Xqbot, GrouchoBot, Lenore, RibotBOT, Antares5245, Sokbot3000, Anthonystevens2, ARandom- Nicole, Tkuvho, SpaceFlight89, TobeBot, Miracle Pen, EmausBot, ReneGMata, AvicBot, Vanished user fois8fhow3iqf9hsrlgkjw4tus, TyA, Donner60, Gottlob Gödel, ClueBot NG, Bethre, Helpful Pixie Bot, Mark 
Arsten, ChrisGualtieri, Rectipaedia, YFdyh-bot, Noix07, Adammwagner, Damonamc and Anonymous: 108 • Exponential random graph models Source: https://en.wikipedia.org/wiki/Exponential_random_graph_models?oldid=654249043 Con- tributors: Bearcat, Aetheling, Mdd, Aaron McDaid, Rjwilmsi, Madcoverboy, Jrouquie, Inks.LWC, Katharineamy, Vejvančický, Fres- coBot, Changebo, DGaffney, RjwilmsiBot, Deadlyops, Helpful Pixie Bot, Alchames, SolidPhase and Anonymous: 4 • Flow network Source: https://en.wikipedia.org/wiki/Flow_network?oldid=665182453 Contributors: Michael Hardy, Darkwind, Silver- fish, Charles Matthews, Dcoetzee, Dysprosia, Jogloran, Giftlite, BenFrantzDale, Neilc, Sebbe, Andreas Kaufmann, Mormegil, Rich Farmbrough, Paul August, Zaslav, Spoon!, Bfishburne, Mdd, Oleg Alexandrov, LOL, Myleslong, SeventyThree, Ligulem, Bubba73, Mathbot, Vonkje, Jittat~enwiki, YurikBot, Nils Grimsmo, Misza13, Nethgirb, That Guy, From That Show!, SmackBot, Nejko, Cazort, Mcld, Chris the speller, DHN-bot~enwiki, Ssnseawolf, Vina-iwbot~enwiki, Ycl6, Lim Wei Quan, Freederick, Limaner, CmdrObot, Ex- piring frog, Thijs!bot, Heineman, Jirka6, Hermel, Fallschirmjäger, David Eppstein, User A1, Craigyjack, Mange01, Maurice Carbonaro, Cobi, BernardZ, Oddbod7, Alexkorn, Mischling, Leafyplant, Serknap, Djice91, Debamf, Grumpyland, Jjurski, Svick, ClueBot, Vacio, ,AnomieBOT, Womiller99, Citation bot ,ماني ,Bender2k14, Crowsnest, Dthomsen8, Vikas.menon, Addbot, Fgnievinski, EconoPhysicist Jalpar75, FrescoBot, Kiefer.Wolfowitz, Fraxtil, TheArguer, WikitanvirBot, Sembrestels, Wakebrdkid, ClueBot NG, Dstein64, Zetta- phone, Faizan, Annick Robberecht, Dough34, Ssolanki07, F.Raab, JonMarkPerry, Ahaider3 and Anonymous: 77 • Forbidden graph characterization Source: https://en.wikipedia.org/wiki/Forbidden_graph_characterization?oldid=652696590 Con- tributors: Edemaine, Michael Hardy, Populus, Altenmann, Giftlite, Rjwilmsi, Tizio, Michael Slone, SmackBot, Headbomb, Mack2, David Eppstein, R'n'B, Koko90, 
Jdcrutch, PaulTanenbaum, Justin W Smith, Rohan4321, AnomieBOT, Citation bot, Twri, Gilo1969, Luis Goddyn, Citation bot 1, At-par, RobinK, Episcophagus, Helpful Pixie Bot and Anonymous: 6 • Fundamental group Source: https://en.wikipedia.org/wiki/Fundamental_group?oldid=648590804 Contributors: AxelBoldt, Zundark, Patrick, Michael Hardy, TakuyaMurata, Poor Yorick, Raven in Orbit, Charles Matthews, Dysprosia, Jitse Niesen, Phys, Tobias Berge- mann, Tosha, Giftlite, Lethe, Fropuff, Dan Gardner, Sam nead, Gauge, Andi5, Blotwell, Msh210, Linas, Cruccone, OdedSchramm, BD2412, Dpv, Rjwilmsi, Staecker, R.e.b., Bgwhite, YurikBot, RussBot, Archelon, Hirak 99, Orthografer, Finell, RonnieBrown, Sar- danaphalus, SmackBot, Rgrizza, MalafayaBot, Silly rabbit, Nbarth, Akriasas, Jim.belk, Mathsci, Md2perpe, Ranicki, Myasuda, Thijs!bot, Oerjan, Headbomb, Klausness, Turgidson, Magioladitis, Wlod, Jakob.scholbach, JoergenB, STBot, J.delanoy, Cbigorgne, HiDrNick, Haiviet~enwiki, YonaBot, JackSchmidt, ClueBot, The Thing That Should Not Be, Alksentrs, Tonkawa68, Hans Adler, SockPuppet- ForTomruen, Legobot, Luckas-bot, Ht686rg90, AnomieBOT, 9258fahsflkh917fas, Br77rino, Point-set topologist, Senouf, Ringspec- trum, Howard McCay, FrescoBot, ElNuevoEinstein, MidgleyC, EmausBot, WikitanvirBot, KonradVoelkel, Stephan Spahn, ZéroBot, Anita5192, Amulware, Helpful Pixie Bot, HUnTeR4subs, Khazar2, Enyokoyama, Kephir, Mark viking, Hamoudafg and Anonymous: 50 • Geometric graph theory Source: https://en.wikipedia.org/wiki/Geometric_graph_theory?oldid=624115184 Contributors: Tomo, Dcoet- zee, Altenmann, Andreas Kaufmann, Brim, OdedSchramm, Wavelength, Jerome Charles Potts, Mhym, David Eppstein, JohnBlackburne, Libcub, Addbot, Citation bot, Twri, Omnipaedista, EmausBot, El Roih, BG19bot, Brad7777, Monkbot, Hou710 and Anonymous: 3 • Glossary of graph theory Source: https://en.wikipedia.org/wiki/Glossary_of_graph_theory?oldid=666492791 Contributors: Damian Yerrick, XJaM, Nonenmac, Tomo, Edward, Patrick, 
Michael Hardy, Wshun, Booyabazooka, Dcljr, TakuyaMurata, GTBacchus, Eric119, Charles Matthews, Dcoetzee, Dysprosia, Doradus, Reina riemann, Markhurd, Maximus Rex, Hyacinth, Populus, Altenmann, MathMartin, Bkell, Giftlite, Dbenbenn, Brona, Sundar, GGordonWorleyIII, HorsePunchKid, Peter Kwok, D6, Rich Farmbrough, ArnoldReinhold, Paul August, Bender235, Zaslav, Kjoonlee, Elwikipedista~enwiki, El C, Yitzhak, TheSolomon, A1kmm, 3mta3, Jérôme, Ricky81682, Rd- vdijk, Oleg Alexandrov, Joriki, Linas, MattGiuca, Ruud Koot, Jwanders, Xiong, Lasunncty, SixWingedSeraph, Grammarbot, Tizio, Salix alba, Mathbot, Margosbot~enwiki, Sunayana, Pojo, Quuxplusone, Vonkje, N8wilson, Chobot, Algebraist, YurikBot, Me and, Altoid, Grubber, Archelon, Gaius Cornelius, Rick Norwood, Ott2, Closedmouth, SmackBot, Stux, Achab, Brick Thrower, Mgreenbe, Mcld, [email protected], Lansey, Thechao, JLeander, DVanDyck, Quaeler, RekishiEJ, CmdrObot, Csaracho, Citrus538, Jokes Free4Me, Cydebot, Starcrab, Quintopia, Ferris37, Scarpy, Headbomb, Salgueiro~enwiki, Spanningtree, David Eppstein, JoergenB, Kope, Copy- ToWiktionaryBot, R'n'B, Leyo, Mikhail Dvorkin, The Transliterator, Ratfox, MentorMentorum, Skaraoke, SanitySolipsism, Anonymous Dissident, PaulTanenbaum, Ivan Štambuk, Whorush, Eggwadi, Thehotelambush, Doc honcho, Anchor Link Bot, Rsdetsch, Denisarona, 53.5. TEXT AND IMAGE SOURCES, CONTRIBUTORS, AND LICENSES 325

Justin W Smith, Unbuttered Parsnip, Happynomad, Alexey Muranov, Addbot, Aarond144, Jfitzell, Nate Wessel, Yobot, Jalal0, Ian Kelling, Citation bot, Buenasdiaz, Twri, Kinewma, Miym, Prunesqualer, Mzamora2, JZacharyG, Pmq20, Shadowjams, Hobsonlane, DixonD- Bot, Reaper Eternal, EmausBot, John of Reading, Wikipelli, Bethnim, Mastergreg82, ClueBot NG, EmanueleMinotto, Warumwarum, DavidRideout, BG19bot, Andrey.gric, Szebenisz, ChrisGualtieri, Deltahedron, Jw489kent, Jmerm, Morgoth106, SofjaKovalevskaja and Anonymous: 131 • Graph (mathematics) Source: https://en.wikipedia.org/wiki/Graph_(mathematics)?oldid=668424269 Contributors: The Anome, Man- ning Bartlett, XJaM, Tomo, Stevertigo, Patrick, Michael Hardy, W~enwiki, Zocky, Wshun, Booyabazooka, Karada, Ahoerstemeier, Den fjättrade ankan~enwiki, Jiang, Dcoetzee, Dysprosia, Doradus, Zero0000, McKay, BenRG, Robbot, LuckyWizard, Mountain, Altenmann, Mayooranathan, Gandalf61, MathMartin, Timrollpickering, Bkell, Tobias Bergemann, Tosha, Giftlite, Dbenbenn, Harp, Tom harrison, Chinasaur, Jason Quinn, Matt Crypto, Neilc, Erhudy, Knutux, Yath, Joeblakesley, Tomruen, Peter Kwok, Aknorals, Chmod007, Ab- dull, Corti, PhotoBox, Discospinster, Rich Farmbrough, Andros 1337, Paul August, Zaslav, Gauge, Tompw, Crisófilax, Yitzhak, Kine, Bobo192, Jpiw~enwiki, Mdd, Jumbuck, Zachlipton, Sswn, Liao, Rgclegg, Paleorthid, Super-Magician, Mahanga, Joriki, Mindmatrix, Wesley Moy, Oliphaunt, Brentdax, Jwanders, Tbc2, Cbdorsett, Ch'marr, Davidfstr, Xiong, Marudubshinki, Tslocum, Magister Math- ematicae, Ilya, SixWingedSeraph, Sjö, Rjwilmsi, Salix alba, Bhadani, FlaBot, Nowhither, Mathbot, Gurch, MikeBorkowski~enwiki, Chronist~enwiki, Silversmith, Chobot, Peterl, Siddhant, Borgx, Karlscherer3, Hairy Dude, Gene.arboit, Michael Slone, Gaius Cornelius, Shanel, Gwaihir, Dtrebbien, Dureo, Doetoe, Wknight94, Netrapt, RobertBorgersen, Cjfsyntropy, Burnin1134, SmackBot, Nihonjoe, Stux, McGeddon, BiT, Algont, Ohnoitsjamie, Chris the speller, Bluebot, 
TimBentley, Theone256, Cornflake pirate, Zven, Anabus, Can't sleep, clown will eat me, Tamfang, Cybercobra, Jon Awbrey, Kuru, Nat2, Tomhubbard, Dicklyon, Cbuckley, Quaeler, BranStark, Wandrer2, George100, Ylloh, Vaughan Pratt, Repied, CRGreathouse, Citrus538, Jokes Free4Me, Requestion, Myasuda, Danrah, Robertsteadman, Eric Lengyel, Headbomb, Urdutext, AntiVandalBot, Hannes Eder, JAnDbot, MER-C, Dreamster, Struthious Bandersnatch, JNW, Catgut, David Eppstein, JoergenB, MartinBot, Rettetast, R'n'B, J.delanoy, Hans Dunkelberg, Yecril, Pafcu, Ijdejter, Deor, ABF, Maghnus, TXiKiBoT, Sdrucker, Someguy1221, PaulTanenbaum, Lambyte, Ilia Kr., Jpeeling, Falcon8765, RaseaC, Insanity Incarnate, Zenek.k, Radagast3, Debamf, Debeolaurus, SieBot, Minder2k, Dawn Bard, Cwkmail, Jon har, SophomoricPedant, Oxymoron83, Henry Delforn (old), Ddxc, Svick, Phegyi81, Anchor Link Bot, ClueBot, Vacio, Nsk92, JuPitEer, Huynl, JP.Martin-Flatin, Xavexgoem, UKoch, Mitmaro, Editor70, Watchduck, Hans Adler, Suchap, Wikidsp, Muro Bot, 3ICE, Aitias, Versus22, Djk3, Kruusamägi, SoxBot III, XLinkBot, Marc van Leeuwen, Libcub, WikiDao, Tangi-tamma, Addbot, Gutin, Athenray, Willking1979, Royerloic, West.andrew.g, Tyw7, Zorrobot, LuK3, Luckas-bot, Yobot, TaBOT-zerem, THEN WHO WAS PHONE?, E mraedarab, Tempodivalse, Пика Пика, Ulric1313, RandomAct, Materialscientist, Twri, Dockfish, Anand jeyahar, Miym, Prunesqualer, Andyman100, VictorPorton, JonDePlume, Shadowjams, A.amitkumar, Kracekumar, Edgars2007, Citation bot 1, DrilBot, Amintora, Pinethicket, Calmer Waters, RobinK, Barras, Tgv8925, DARTH SIDIOUS 2, Powerthirst123, DRAGON BOOSTER, Mymyhoward16, Kerrick Staley, Ajraddatz, Wgunther, Bethnim, Akutagawa10, White Trillium, Josve05a, D.Lazard, L Kensington, Maschen, Inka 888, Chewings72, ClueBot NG, Wcherowi, MelbourneStar, Kingmash, O.Koslowski, Joel B.
Lewis, Andrewsky00, Timflutre, Helpful Pixie Bot, HMSSolent, Grolmusz, Mrjohncummings, Stevetihi, Канеюку, Void-995, MRG90, Vanischenu, Tman159, Ekren, Lugia2453, Jeff Erickson, CentroBabbage, Nina Cerutti, Chip Wildon Forster, Yloreander, Manul, JaconaFrere, Monkbot, Hou710, Anon124 and Anonymous: 351
• Graph automorphism Source: https://en.wikipedia.org/wiki/Graph_automorphism?oldid=665800464 Contributors: Altenmann, Giftlite, Edcolins, Tomruen, Zaslav, Joriki, LOL, BD2412, Ott2, Mhym, Igor Markov, DeC, Magioladitis, A3nm, David Eppstein, Koko90, Robert Illes, VolkovBot, TXiKiBoT, Radagast3, Hans Adler, SchreiberBike, Addbot, Netzwerkerin, Luckas-bot, Yobot, Twri, Miym, MathsPoetry, FrescoBot, Citation bot 1, Trappist the monk, RjwilmsiBot, Maudgalya, Joel B. Lewis, Helpful Pixie Bot, Канеюку, Mko3, Monkbot and Anonymous: 5
• Graph factorization Source: https://en.wikipedia.org/wiki/Graph_factorization?oldid=661009028 Contributors: Michael Hardy, HorsePunchKid, Andreas Kaufmann, Shahab, Zaslav, Spoon!, Blotwell, Gurch, RobertBorgersen, SmackBot, Hftf, Melchoir, Stannered, David Eppstein, Jwuthe2, Policron, Sdudah, Justin W Smith, Bobbadoc, Miym, Trappist the monk, Graphtheorystudent5 and Anonymous: 4
• Graph homomorphism Source: https://en.wikipedia.org/wiki/Graph_homomorphism?oldid=629400594 Contributors: Charles Matthews, Dcoetzee, JosephBarillari, MathMartin, DHN, Bkell, Giftlite, Dbenbenn, Peter Kwok, Rich Farmbrough, Arthena, Aquae, Mathbot, Ott2, Nekura, RonnieBrown, SmackBot, Taxipom, DVanDyck, CmdrObot, Ntsimp, David Eppstein, R'n'B, Pomte, Da Joe, Philippe Giabbanelli, Addbot, OlEnglish, PV=nRT, Calle, Citation bot, HanielBarbosa, HRoestBot, Jmichayls, John of Reading, DG-on-WP and Anonymous: 12
• Graph isomorphism Source: https://en.wikipedia.org/wiki/Graph_isomorphism?oldid=664144309 Contributors: AxelBoldt, Michael Hardy, Booyabazooka, Dcoetzee, Jitse Niesen, McKay, Altenmann, MathMartin, Bkell, Giftlite, Jason Quinn, Rich Farmbrough, Paul August,
EmilJ, AdamAtlas, Trjumpet, MoraSique, Puzne~enwiki, Linas, Shreevatsa, Oliphaunt, Marudubshinki, BD2412, Rjwilmsi, Maxal, YurikBot, Michael Slone, KSmrq, Arthur Rubin, Iotatau, Itub, SmackBot, Davepape, DHN-bot~enwiki, Blake-, Loopology, J. Finkelstein, Aeons, CBM, Citrus538, SuperMidget, Blaisorblade, Kozuch, Headbomb, Nemnkim, David Eppstein, PierreCA22, Vegasprof, Robert Illes, Daniele.tampieri, AlnoktaBOT, TXiKiBoT, Jamelan, Jludwig, Justin W Smith, Tim32, PixelBot, Sleepinj, MystBot, Addbot, DOI bot, Verbal, Lightbot, PV=nRT, Yobot, Nibbio84, Citation bot, Twri, Kingfishr, RibotBOT, Ricardo Ferreira de Oliveira, Citation bot 1, Thomasp1985, RjwilmsiBot, Norlesh, Igor Yalovecky, El Roih, Aerosprite, Ansatz, Anrnusna, Monkbot and Anonymous: 39
• Graph labeling Source: https://en.wikipedia.org/wiki/Graph_labeling?oldid=645130298 Contributors: McKay, Altenmann, MathMartin, Giftlite, Zaslav, Oleg Alexandrov, Mathbot, Jameshfisher, Quuxplusone, Vonkje, Michael Slone, Arichnad, SmackBot, Davepape, Jhinra, CRGreathouse, CBM, Hermel, David Eppstein, Justin W Smith, Basploeger, Addbot, Gthelleloid, TutterMouse, Amirobot, IRP, Twri, LilHelpa, RjwilmsiBot, WikitanvirBot, Helpful Pixie Bot, Deltahedron and Anonymous: 14
• Graph minor Source: https://en.wikipedia.org/wiki/Graph_minor?oldid=615613527 Contributors: AxelBoldt, Edemaine, Michael Hardy, Dcoetzee, Adoarns, Phil Boswell, MathMartin, Giftlite, Dbenbenn, Shahab, Qutezuce, A1kmm, Rjwilmsi, Tizio, JFromm, Mathbot, Chobot, Hairy Dude, Dkostic, El T, Ilmari Karonen, Eskimbot, Taxipom, Lambiam, JHunterJ, CmdrObot, Sniffnoy, Flamholz, Myasuda, Headbomb, David Eppstein, Wdflake, R'n'B, Karlhendrikse, Addbot, Kilom691, Cmcq, Citation bot, Twri, ArthurBot, Luis Goddyn, Genusfour, Citation bot 1, RobinK, Dr.Chuckster, RjwilmsiBot, EmausBot, GoingBatty, Bethnim, Msuperdock, BG19bot, Khazar2, Monkbot, OldMacDonalds and Anonymous: 23
• Graph theory Source: https://en.wikipedia.org/wiki/Graph_theory?oldid=667682086
Contributors: AxelBoldt, Kpjas, LC~enwiki, Robert Merkel, Zundark, Taw, Jeronimo, BlckKnght, Dze27, Oskar Flordal, Andre Engels, Karl E. V. Palmen, Shd~enwiki, XJaM, JeLuF, Arvindn, Gianfranco, Matusz, PierreAbbat, Miguel~enwiki, Boleslav Bobcik, FvdP, Camembert, Hirzel, Tomo, Patrick, Chas zzz brown, Michael Hardy, Wshun, Booyabazooka, Glinos, Meekohi, Jakob Voss, TakuyaMurata, GTBacchus, Grog~enwiki, Pcb21, Dgrant, CesarB,
Looxix~enwiki, Ellywa, Ams80, Ronz, Nanshu, Gyan, Nichtich~enwiki, Mark Foskey, Александър, Poor Yorick, Caramdir~enwiki, Mxn, Charles Matthews, Berteun, Almi, Hbruhn, Dysprosia, Daniel Quinlan, Gutza, Doradus, Zoicon5, Roachmeister, Populus, Zero0000, Doctorbozzball, McKay, Shizhao, Optim, Robbot, Brent Gulanowski, Fredrik, Altenmann, Dittaeva, Gandalf61, MathMartin, Sverdrup, Puckly, KellyCoinGuy, Thesilverbail, Bkell, Paul Murray, Fuelbottle, ElBenevolente, Aknxy, Dina, Tobias Bergemann, Giftlite, Dbenbenn, Thv, The Cave Troll, Elf, Lupin, Brona, Pashute, Duncharris, Andris, Jorge Stolfi, Tyir, Sundar, GGordonWorleyIII, Alan Au, Bact, Knutux, APH, Tomruen, Tyler McHenry, Naerbnic, Peter Kwok, Robin klein, Ratiocinate, Andreas Kaufmann, Chmod007, Madewokherd, Discospinster, Solitude, Guanabot, Qutezuce, Mani1, Paul August, Bender235, Zaslav, Tompw, Diego UFCG~enwiki, Chalst, Shanes, Renice, C S, Csl77, Jojit fb, Photonique, Jonsafari, Obradovic Goran, Jumbuck, Msh210, Alansohn, Liao, Mailer diablo, Marianocecowski, Aquae, Blair Azzopardi, Oleg Alexandrov, Youngster68, Linas, LOL, Ruud Koot, Tckma, Astrophil, Davidfstr, GregorB, SCEhardt, Stochata, Xiong, Graham87, Magister Mathematicae, SixWingedSeraph, Rjwilmsi, Gmelli, George Burgess, Eugeneiiim, Arbor, Kalogeropoulos, Fred Bradstadt, FayssalF, FlaBot, PaulHoadley, RexNL, Vonkje, Chobot, Jinma, YurikBot, Wavelength, Michael Slone, Gaius Cornelius, Alex Bakharev, Morphh, SEWilcoBot, Jaxl, Ino5hiro, Xdenizen, Daniel Mietchen, Shepazu, Rev3nant, Lt-wiki-bot, Jwissick, Arthur Rubin, LeonardoRob0t, Agro1986, Eric.weigle, Allens, Sardanaphalus, Melchoir, Brick Thrower, Ohnoitsjamie, Oli Filth, OrangeDog, Taxipom, DHN-bot~enwiki, Tsca.bot, Onorem, GraphTheoryPwns, Lpgeffen, Jon Awbrey, Henning Makholm, Mlpkr, SashatoBot, Whyfish, Disavian, MynameisJayden, Idiosyncratic-bumblebee, Dicklyon, Quaeler, Lanem, Tawkerbot2, Ylloh, Mahlerite, CRGreathouse, Dycedarg, Requestion, Bumbulski, Myasuda, RUVARD, The Isiah,
Ntsimp, Abeg92, Corpx, DumbBOT, Anthonynow12, Thijs!bot, Jheuristic, King Bee, Pstanton, Hazmat2, Headbomb, Marek69, Eleuther, AntiVandalBot, Whiteknox, Hannes Eder, Spacefarer, Myanw, JAnDbot, MER-C, Igodard, Restname, Sangak, Tmusgrove, Feeeshboy, Usien6, Ldecola, David Eppstein, Kope, DerHexer, Oroso, MartinBot, R'n'B, Uncle Dick, Joespiff, Ignatzmice, Shikhar1986, Tarotcards, Policron, XxjwuxX, Yecril, JohnBlackburne, Dggreen, Anonymous Dissident, Alcidesfonseca, Anna Lincoln, Ocolon, Magmi, PaulTanenbaum, Geometry guy, Fivelittlemonkeys, Sacredmint, Spitfire8520, Radagast3, SieBot, Dawn Bard, Toddst1, Jon har, Bananastalktome, Titanic4000, Beda42, Maxime.Debosschere, Damien Karras, ClueBot, DFRussia, PipepBot, Justin W Smith, Vacio, Wraithful, Garyzx, Mild Bill Hiccup, DragonBot, Fchristo, Hans Adler, Dafyddg, Razorflame, Rmiesen, Kruusamägi, Pugget, Darkicebot, BodhisattvaBot, Tangi-tamma, Addbot, Dr.S.Ramachandran, Cerber, DOI bot, Ronhjones, Low-frequency internal, CanadianLinuxUser, Protonk, LaaknorBot, Smoke73, Delaszk, Favonian, Maurobio, Lightbot, Jarble, Ettrig, Luckas-bot, Yobot, Kilom691, Trinitrix, Jean.julius, AnomieBOT, Womiller99, Sonia, Jim1138, Piano non troppo, Gragragra, RandomAct, Citation bot, Ayda D, Xqbot, Jerome zhu, Capricorn42, Nasnema, Miym, GiveAFishABone, RibotBOT, Jalpar75, Aaditya 7, Ankitbhatt, FrescoBot, Mark Renier, SlumdogAramis, Citation bot 1, Sibian, Pinethicket, RobinK, Wsu-dm-jb, D75304, Wsu-f, Xnn, Obankston, Andrea105, RjwilmsiBot, TjBot, Powerthirst123, Aaronzat, EmausBot, Domesticenginerd, EleferenBot, Jmencisom, Slawekb, Akutagawa10, D.Lazard, Netha Hussain, Tolly4bolly, ChuispastonBot, EdoBot, ClueBot NG, Watersmeetfreak, Matthiaspaul, MelbourneStar, Outraged duck, OMurgo, Bazuz, Aks1521, Masssly, Joel B.
Lewis, Johnsopc, HMSSolent, 4368a, BG19bot, Ajweinstein, Канеюку, MusikAnimal, AvocatoBot, Bereziny, Brad7777, Sofia karampataki, ChrisGualtieri, GoShow, Dexbot, Cerabot~enwiki, Omgigotanaccount, Wikiisgreat123, Faizan, Maxwell bernard, Bg9989, Zsoftua, SakeUPenn, Yloreander, StaticElectricity, Gold4444, Cyborgbadger, Zachwaltman, Gr pbi, KasparBot and Anonymous: 378
• Homotopy Source: https://en.wikipedia.org/wiki/Homotopy?oldid=668061606 Contributors: AxelBoldt, Zundark, Ed Poor, Michael Hardy, TakuyaMurata, Delirium, Ciphergoth, Dmoews, Charles Matthews, Joshuabowman, Dysprosia, Jitse Niesen, Wik, Hyacinth, Phys, MathMartin, Giftlite, Mintleaf~enwiki, BenFrantzDale, Lupin, Dan Gardner, Gorlim, TheObtuseAngleOfDoom, Paul August, Gauge, Wood Thrush, Blotwell, Obradovic Goran, Msh210, Cmapm, Drbreznjev, Linas, GregorB, Timinou~enwiki, Salix alba, Juan Marquez, Mathbot, Masnevets, Kummi, YurikBot, Wavelength, KSmrq, Gwaihir, Yahya Abdal-Aziz, The Raven, Sardanaphalus, SmackBot, Mhss, Chris the speller, Bluebot, Silly rabbit, Melburnian, Nbarth, Raman Arora, DMacks, Jim.belk, Physis, JoeBot, CBM, Ranicki, Mct mht, John254, E. Ripley, RobHar, Lfstevens, Sullivan.t.j, David Eppstein, JoergenB, Yonidebot, Maproom, Policron, PMajer, Ambrose H. Field, Hqb, Dmcq, Haiviet~enwiki, Subh83, Tkeu, Niceguyedc, PMDrive1061, DragonBot, His Wikiness, Addbot, LaaknorBot, Millerhaynes, Luckas-bot, Yobot, Ht686rg90, AnomieBOT, Alabama Moon, 9258fahsflkh917fas, Zerxp, Squa-la-la!
We're off!, HenrikRueping, SassoBot, D'ohBot, MorphismOfDoom, ElNuevoEinstein, Aepound, Redfrettchen, Ripchip Bot, EmausBot, Tonyxty, ZéroBot, ClueBot NG, Ryan Vesey, BG19bot, Beaumont877, Plopman004, Makecat-bot, SummerWillow, Isarra (HG), Mark viking, Hamoudafg, Sharkeisha233232323, Monkbot, KasparBot and Anonymous: 55
• Hypergraph Source: https://en.wikipedia.org/wiki/Hypergraph?oldid=665984187 Contributors: Eloquence, Tomo, Michael Hardy, Pcb21, Charles Matthews, Dcoetzee, Dysprosia, Gandalf61, MathMartin, Bkell, Giftlite, Dbenbenn, DavidCary, Jkseppan, Herdrick, Gubbubu, Pgan002, Andreas Kaufmann, Rich Farmbrough, Paul August, Zaslav, FernandoAires, Pkledgrape, Obradovic Goran, Oleg Alexandrov, Linas, Hypercube~enwiki, Rjwilmsi, Tizio, Jaymz Height-Field, Brighterorange, Mathbot, YurikBot, Ott2, Wasseralm, RDBury, Zom-B, Tsca.bot, Tamfang, Georg-Johann, Mhym, LouScheffer, JustAnotherJoe, Ylloh, Eric Lengyel, AgentPeppermint, Escarbot, Dricherby, A3nm, David Eppstein, Gwern, Brolny, Rocchini, Lantonov, Policron, Kadirakbudak, Austinmohr, Rei-bot, Lou.weird, Mjaredd, MiNombreDeGuerra, Freeman77, Justin W Smith, Mild Bill Hiccup, Oliver Kullmann, Farisori, Dwiddows, Jax 0677, Tangi-tamma, Addbot, JRN08, Balabiot, JakobVoss, Luckas-bot, Yobot, TaBOT-zerem, White gecko, Rubinbot, Twri, Gilo1969, JonDePlume, Micapps.euler, Theorist2, Citation bot 1, 777sms, John of Reading, Catalyurek, ZéroBot, Ort43v, Pgdx, Tijfo098, Wilson SK Jin, Wcherowi, Bruce ricard, SnehaNar, PhDStud, Abdd0e77, Jodosma, Nmkhanh1990, Monkbot, TopherDobson, JMP EAX, Brettoa and Anonymous: 40
• If and only if Source: https://en.wikipedia.org/wiki/If_and_only_if?oldid=667197434 Contributors: Damian Yerrick, AxelBoldt, Matthew Woodcraft, Vicki Rosenzweig, Zundark, Tarquin, Larry_Sanger, Toby Bartels, Ark~enwiki, Camembert, Stevertigo, Patrick, Chas zzz brown, Michael Hardy, Wshun, DopefishJustin, Dante Alighieri, Dominus, SGBailey, Wwwwolf, Delirium, Geoffrey~enwiki, Stevenj, Kingturtle,
UserGoogol, Andres, Evercat, Jacquerie27, Adam Conover, Revolver, Wikiborg, Dysprosia, Itai, Ledge, McKay, Robbot, Psychonaut, Henrygb, Ruakh, Diberri, Tobias Bergemann, Adam78, Enochlau, Giftlite, DavidCary, Ævar Arnfjörð Bjarmason, Mellum, Chinasaur, Jason Quinn, Taak, Superfrank~enwiki, Wmahan, LiDaobing, Mvc, Neutrality, Urhixidur, Ropers, Jewbacca, Karl Dickman, PhotoBox, Brianjd, Paul August, Sunborn, Elwikipedista~enwiki, Chalst, Edward Z. Yang, Pearle, Ekevu, IgorekSF, Msh210, Interiot, ABCD, Stillnotelf, Suruena, Voltagedrop, Rhialto, Forderud, Oleg Alexandrov, Joriki, Velho, Woohookitty, Mindmatrix, Ruud Koot, Ryan Reich, Adjam, Pope on a Rope, Eyu100, R.e.b., FlaBot, Mathbot, Rbonvall, Glenn L, BMF81, Bgwhite, RussBot, Postglock, Voidxor, El Pollo Diablo, Gadget850, Jkelly, Danielpi, Lt-wiki-bot, TheMadBaron, Nzzl, Arthur Rubin, Netrapt, SmackBot, Ttzz, Gloin~enwiki, Incnis Mrsi, InverseHypercube, Melchoir, GoOdCoNtEnT, Thumperward, Jonatan Swift, Peterwhy, Acdx, Shirifan, Evildictaitor, Abolen, Rainwarrior, Dicklyon, Mets501, Yuide, Shoeofdeath, CRGreathouse, CBM, Joshwa, Picaroon, Gregbard, Eu.stefan, Letranova, Thijs!bot, Egriffin, Schneau, Jojan, Davkal, AgentPeppermint, Urdutext, Holyknight33, Escarbot, WinBot, Serpent’s Choice, Timlevin, Singularity, David Eppstein, Msknathan, MartinBot, AstroHurricane001, Vanished user 47736712, Kenneth M Burke, DorganBot, Bsroiaadn, TXiKiBoT, Anonymous Dissident, Abyaly, Ichtaca, Mouse is back, Rjgodoy, TrippingTroubadour, KjellG, AlleborgoBot, Lillingen, SieBot, Iamthedeus, This, that and the other, Smaug123, Skippydo, Warman06~enwiki, Tiny plastic Grey Knight, Francvs,
Minehava, ClueBot, Surfeited, BANZ111, Master11218, WestwoodMatt, Excirial, He7d3r, Pfhorrest, Kmddmk, Addbot, Ronhjones, Wikimichael22, Lightbot, Jarble, Yobot, Bryan.burgers, AnomieBOT, Nejatarn, Ciphers, Quintus314, Лев Дубовой, WissensDürster, Ex13, Rapsar, Mikespedia, Lotje, Igor Yalovecky, EmausBot, Ruxkor, Chricho, Sugarfoot1001, Tijfo098, ClueBot NG, Wcherowi, Widr, MerlIwBot, Helpful Pixie Bot, Chmarkine, CarrieVS, Ekips39, Epicgenius, Dr Lindsay B Yeates, Matthew Kastor and Anonymous: 144
• Isomorphism Source: https://en.wikipedia.org/wiki/Isomorphism?oldid=655612091 Contributors: AxelBoldt, Zundark, Andre Engels, Youssefsan, Ghakko, Edemaine, Ryguasu, Youandme, Stevertigo, Patrick, Michael Hardy, Isomorphic, TakuyaMurata, Glenn, Netsnipe, Mxn, Revolver, Charles Matthews, Reddi, Dysprosia, Andrewman327, Zero0000, Phys, Robbot, Bkell, Marc Venot, Tosha, Giftlite, MathKnight, Peruvianllama, MarkSweep, PhotoBox, Wrp103, Paul August, Bender235, Elwikipedista~enwiki, Nabla, EmilJ, Army1987, Msh210, Philip Cross, Oleg Alexandrov, LOL, BD2412, Yurik, Zbxgscqf, Mattmacf, Mathbot, Chobot, YurikBot, Wavelength, Spacepotato, Jlittlet, Michael Slone, KSmrq, Mathwiz777, Grubber, Vanished user 1029384756, Stuhacking, Banus, SmackBot, Rljacobson, The Rhymesmith, Kmarinas86, Lubos, Nbarth, Chlewbot, Maksim-bot, Bryanmcdonald, Spinality, Nick Green, Fantomdrives, Cronholm144, Jim.belk, 16@r, Mets501, Rschwieb, Yuide, CRGreathouse, Krauss, Sam Staton, Rlupsa, QuiteUnusual, Hannes Eder, TK-925, JAnDbot, Avaya1, Bahar, Coolhandscot, Magioladitis, Koberozendaal, JamesBWatson, Albmont, Uncle Dick, Smite-Meister, Cpiral, Trumpet marietta 45750, Policron, Bigdumbdinosaur, STBotD, Mcole13, Michael Angelkovich, Borhan0, LokiClock, Thaddeus Slamp, Anonymous Dissident, JhsBot, PaulTanenbaum, Spinningspark, Mike4ty4, SieBot, Ivan Štambuk, Ssavelan, Iamthedeus, Taemyr, Thehotelambush, Richard Molnár-Szipai, Anchor Link Bot, DixonD, Superbatfish, Martarius, Alexbot, Bender2k14,
Ioannis.Demetriou, Brews ohare, Subversive.sound, Addbot, Omnipedian, Debresser, Ozob, PV=nRT, Jarble, Luckas-bot, Ezequiels.90, Nallimbot, AnomieBOT, Xqbot, XZeroBot, Omnipaedista, AllCluesKey, Sławomir Biały, Ebony Jackson, RedBot, Lars Washington, Pokus9999, Gryllida, کاشف عقیل, EmausBot, Fly by Night, Racerx11, GoingBatty, Slawekb, Zell08v, Quondum, D.Lazard, SporkBot, YnnusOiramo, ClueBot NG, Frietjes, Mesoderm, HMSSolent, BG19bot, Канеюку, M hariprasad, Brad7777, Pratyya Ghosh, Pariefracture, Mark viking, Eyesnore and Anonymous: 86
• Isomorphism class Source: https://en.wikipedia.org/wiki/Isomorphism_class?oldid=640078030 Contributors: LC~enwiki, Zundark, Michael Hardy, Seth Ilys, Marc Venot, Rgdboer, Oleg Alexandrov, Ruud Koot, Ryan Reich, Salix alba, YurikBot, SmackBot, Krauss, Addbot, JRB-Europe, Omnipaedista, Point-set topologist, Erik9bot, Fly by Night and Anonymous: 3
• Loop (graph theory) Source: https://en.wikipedia.org/wiki/Loop_(graph_theory)?oldid=640385005 Contributors: Booyabazooka, McKay, MathMartin, Paul August, Cburnett, Oliphaunt, Dmharvey, Dtrebbien, Gadget850, SmackBot, BiT, Tsca.bot, Lambiam, 16@r, CmdrObot, Letranova, Kylemahan, David Eppstein, Rovnet, ClueBot, JP.Martin-Flatin, Addbot, Twri, Asfarer, FrescoBot, Ricardo Ferreira de Oliveira, Sinuhe20, MerlIwBot, Ibraheemmoosa and Anonymous: 6
• Matrix (mathematics) Source: https://en.wikipedia.org/wiki/Matrix_(mathematics)?oldid=667651227 Contributors: AxelBoldt, Tarquin, Tbackstr, Hajhouse, XJaM, Ramin Nakisa, Stevertigo, Patrick, Michael Hardy, Wshun, Cole Kitchen, SGBailey, Chinju, Zeno Gantner, Dcljr, Ejrh, Looxix~enwiki, Muriel Gottrop~enwiki, Angela, Александър, Poor Yorick, Rmilson, Andres, Schneelocke, Charles Matthews, Dysprosia, Jitse Niesen, Lou Sander, Dtgm, Bevo, Francs2000, Robbot, Mazin07, Sander123, Chrism, Fredrik, R3m0t, Gandalf61, MathMartin, Sverdrup, Rasmus Faber, Bkell, Paul Murray, Neckro, Tobias Bergemann, Tosha, Giftlite, Jao, Arved, BenFrantzDale, Netoholic, Dissident, Dratman, Michael Devore, Waltpohl, Duncharris, Macrakis, Utcursch, Alexf, MarkSweep, Profvk, Wiml, Urhixidur, Sam nead, Azuredu, Barnaby dawson, Porges, PhotoBox, Shahab, Rich Farmbrough, FiP, ArnoldReinhold, Pavel Vozenilek, Paul August, ZeroOne, El C, Rgdboer, JRM, NetBot, The strategy freak, La goutte de pluie, Obradovic Goran, Mdd, Tsirel, LutzL, Landroni, Jumbuck, Jigen III, Alansohn, ABCD, Fritzpoll, Wanderingstan, Mlm42, Jheald, Simone, RJFJR, Dirac1933, AN(Ger), Adrian.benko, Oleg Alexandrov, Nessalc, Woohookitty, Igny, LOL, Webdinger, David Haslam, UbiquitousUK, Username314, Tabletop, Waldir, Prashanthns, Mandarax, SixWingedSeraph, Grammarbot, Porcher, Sjakkalle, Koavf, Joti~enwiki, Watcharakorn, SchuminWeb, Old Moonraker, RexNL, Jrtayloriv, Krun, Fresheneesz, Srleffler, Vonkje, Masnevets, NevilleDNZ, Chobot, Krishnavedala, Karch, DVdm, Bgwhite, YurikBot, Wavelength, Borgx, RussBot, Michael Slone, Bhny, NawlinWiki, Rick Norwood, Jfheche, 48v, Bayle Shanks, Jimmyre, Misza13, Samuel Huang, Merosonox, DeadEyeArrow, Bota47, Glich, Szhaider, Jezzabr, Leptictidium, Mythobeast, Spondoolicks, Alasdair, Lunch, Sardanaphalus, SmackBot, RDBury, CyclePat, KocjoBot~enwiki, Jagged 85, GoonerW, Minglai, Scott Paeth, Gilliam, Skizzik, Saros136, Chris the speller, Optikos, Bduke, Silly rabbit, DHN-bot~enwiki, Darth Panda, Foxjwill, Can't sleep, clown will eat me, Smallbones, KaiserbBot, Rrburke, Mhym, SundarBot, Jon Awbrey, Tesseran, Aghitza, The undertow, Lambiam, Wvbailey, Attys, Nat2, Cronholm144, Terry Bollinger, Nijdam, Aleenf1, Jacobdyer, WhiteHatLurker, Beetstra, Kaarebrandt, Mets501, Neddyseagoon, Dr.K., P199, MTSbot~enwiki, Quaeler, Rschwieb, Levineps, JMK, Tawkerbot2, Dlohcierekim, DKqwerty, AbsolutDan, Propower, CRGreathouse, JohnCD, INVERTED, SelfStudyBuddy, HalJor, MC10, Pascal.Tesson, Bkgoodman, Alucard (Dr.), Juansempere, Codetiger, Bellayet, הסרפד, Epbr123, Paragon12321, Markus Pössel, Aeriform, Gamer007, Headbomb, Marek69, RobHar, Urdutext, AntiVandalBot, Lself, Jj137, Hermel, Oatmealcookiemon, JAnDbot, Fullverse, MER-C, Yanngeffrotin~enwiki, Bennybp, VoABot II, Fusionmix, T@nn, JNW, Jakob.scholbach, Rivertorch, EagleFan, JJ Harrison, Sullivan.t.j, David Eppstein, User A1, ANONYMOUS COWARD0xC0DE, JoergenB, Philg88, Nevit, Hbent, Gjd001, Doccolinni, Yodalee327, R'n'B, Alfred Legrand, J.delanoy, Rlsheehan, Maurice Carbonaro, Richard777, Wayp123, Toghrul Talibzadeh, Aqwis, It Is Me Here, Cole the ninja, TomyDuby, Peskydan, AntiSpamBot, JonMcLoone, Policron, Doug4, Fylwind, Kevinecahill, Ben R. Thomas, CardinalDan, OktayD, Egghead06, X!, Malik Shabazz, UnicornTapestry, Shiggity, VolkovBot, Dark123, JohnBlackburne, LokiClock, VasilievVV, DoorsAjar, TXiKiBoT, Hlevkin, Rei-bot, Anonymous Dissident, D23042304, PaulTanenbaum, LeaveSleaves, BigDunc, Wolfrock, Wdrev, Brianga, Dmcq, KjellG, AlleborgoBot, Symane, Anoko moonlight, W4chris, Typofier, Neparis, T-9000, D. Recorder, ChrisMiddleton, GirasoleDE, Dogah, SieBot, Ivan Štambuk, Bachcell, Gerakibot, Cwkmail, Yintan, Radon210, Elcobbola, Paolo.dL, Oxymoron83, Ddxc, Oculi, Manway, AlanUS, Anchor Link Bot, Rinconsoleao, Denisarona, Canglesea, Myrvin, DEMcAdams, ClueBot, Sural, Wpoely86, Remag Kee, SuperHamster, LizardJr8, Masterpiece2000, Excirial, Da rulz07, Bender2k14, Ftbhrygvn, Muhandes, Brews ohare, Tyler, Livius3, Jotterbot, Hans Adler, Manco Capac, MiraiWarren, Qwfp, Johnuniq, TimothyRias, Lakeworks, XLinkBot, Marc van Leeuwen, Rror, AndreNatas, Jaan Vajakas, Porphyro, Stephen Poppitt, Addbot, Proofreader77, Deepmath, RPHv, Steve.jaramillov~enwiki, WardenWalk, Jccwiki, CactusWriter, Mohamed Magdy, MrOllie, Tide rolls, Gail, Jarble, CountryBot, LuK3, Luckas-bot, Yobot, Senator Palpatine, QueenCake, TestEditBot, AnomieBOT, Autarkaw, Gazzawi, IDangerMouse, MattTait, Kingpin13, Materialscientist, Citation bot, Wrelwser43, LilHelpa, FactSpewer, Xqbot, Capricorn42, Drilnoth, HHahn, El Caro, BrainFRZ, J04n, Nickmn, RibotBOT,
Cerniagigante, Smallman12q, WaysToEscape, Much noise, LucienBOT, Tobby72, VS6507, Recognizance, Sławomir Biały, Izzedine, IT2000, HJ Mitchell, Sae1962, Jamesooders, Cafreen, Citation bot 1, Swordsmankirby, I dream of horses, Kiefer.Wolfowitz, MarcelB612, NoFlyingCars, RedBot, RobinK, Kallikanzarid, Jordgette, ItsZippy, Vairoj, SeoMac, MathInclined, The last username left was taken, Birat lamichhane, Katovatzschyn, Soupjvc, Sfbaldbear, Salvio giuliano, Mandolinface, EmausBot, Lkh2099, Nurath224, DesmondSteppe, RIS cody, Slawekb, Quondum, Chocochipmuffin, U+003F, Rcorcs, තඹරු විජේසේකර, Maschen, Babababoshka, Adjointh, Donner60, Puffin, JFB80, Anita5192, Petrb, ClueBot NG, Wcherowi, Michael P. Barnett, Rtucker913, Satellizer, Rank Penguin, Tyrantbrian, Dsperlich, Helpful Pixie Bot, Rxnt, Christian Matt, MarcoPotok, BG19bot, Wiki13, Muscularmussel, MusikAnimal, Brad7777, René Vápeník, Sofia karampataki, BattyBot, Freesodas, IkamusumeFan, Lucaspentzlp, OwenGage, APerson, Dexbot, Mark L MacDonald, Numbermaniac, Frosty, JustAMuggle, Reatlas, Acetotyce, Debouch, Wamiq, Ugog Nizdast, Zenibus, SwimmerOfAwesome, Jianhui67, OrthogonalFrog, Airwoz, Derpghvdyj, Mezafo, CarnivorousBunny, Xxhihi, Sordin, Username89911998, Gronk Oz, Hidrolandense, Kellywacko, JArnold99, Kavya l and Anonymous: 624
• Natural number Source: https://en.wikipedia.org/wiki/Natural_number?oldid=668052730 Contributors: AxelBoldt, Brion VIBBER, Bryan Derksen, Zundark, The Anome, Koyaanis Qatsi, AstroNomer~enwiki, Ed Poor, XJaM, Toby Bartels, Patrick, Infrogmation, TeunSpaans, Michael Hardy, Wshun, Wapcaplet, TakuyaMurata, Ellywa, Stevenj, Angela, Den fjättrade ankan~enwiki, Александър, Nikai, Andres, Panoramix, YishayMor, Revolver, Charles Matthews, Berteun, Crissov, Dcoetzee, Dysprosia, Jitse Niesen, Daniel Quinlan, Markhurd, VeryVerily, Paul-L~enwiki, Mosesklein, Bevo, Shizhao, Elwoz, Qianfeng, Pakaran, Daran, RadicalBender, Sewing, Jni, Skeetch, Robbot, Murray Langton, Fredrik, Altenmann, Peak, Henrygb, Bkell, Moink, Wikibot, Aetheling, Fuelbottle, Tobias Bergemann, Tosha, Giftlite, Dbenbenn, Vfp15, Ævar Arnfjörð Bjarmason, Dissident, Wwoods, Elias, Joe Kress, Dmmaus, Siroxo, Chameleon, DemonThing, Knutux, MarkSweep, Bob.v.R, Gauss, Bumm13, Pmanderson, Arcturus, Gscshoyru, Petershank, Joyous!, Fanghong~enwiki, Mormegil, Perey, Discospinster, Liso, Mani1, Paul August, Khalid, ZeroOne, EmilJ, Grick, Randall Holmes, Reinyday, .:Ajvol:., Brim, Jojit fb, Obradovic Goran, Juanpabl, Jumbuck, JohnyDog, Aisaac, Kuratowski’s Ghost, Msh210, Jeltz, AzaToth, Mystyc1, Gbeeker, Woodstone, Talkie tim, Alem Dain, Oleg Alexandrov, Linas, LOL, StradivariusTV, JonH, MFH, GregorB, Xiong, Graham87, SixWingedSeraph, Roger McCoy, Mendaliv, Jshadias, Drbogdan, Rjwilmsi, MarSch, Salix alba, FlaBot, Nihiltres, Lmatt, Haonhien, Chobot, DVdm, Bgwhite, Algebraist, YurikBot, RobotE, Michael Slone, Sasuke Sarutobi,
KSmrq, Stephenb, Tenebrae, Gaius Cornelius, Ihope127, Rsrikanth05, Wimt, B-Con, Rick Norwood, Grafen, Trovatore, Zwobot, Martinwilke1980, Lt-wiki-bot, Arthur Rubin, QmunkE, Katieh5584, That Guy, From That Show!, Yvwv, SmackBot, Travuun, Tom Lougheed, Unyoyega, Bomac, Eskimbot, BiT, Xaosflux, Gilliam, Hmains, JAn Dudík, PJTraill, Raja Hussain, MalafayaBot, SchfiftyThree, Akanemoto, Alink, DHN-bot~enwiki, Ladislav Mecir, Can't sleep, clown will eat me, Jamnik~enwiki, SundarBot, Grover cleveland, Khoikhoi, JackSlash, Jiddisch~enwiki, Dreadstar, RandomP, BullRangifer, Jon Awbrey, SashatoBot, Lambiam, Nishkid64, IronGargoyle, John H, Morgan, 16@r, Loadmaster, Mets501, Asyndeton, Stephen B Streater, Quaeler, Hrumph, Tawkerbot2, JRSpriggs, Heikobot, JForget, CRGreathouse, Ale jrb, Cxw, Simian1k, CBM, SuperMidget, Doctormatt, Cream147, Chasingsol, SimenH, Odie5533, Thijs!bot, Epbr123, Kahriman~enwiki, CompositeFan, Escarbot, AntiVandalBot, Majorly, Opelio, JAnDbot, Husond, Martinkunev, Hut 8.5, Dricherby, .anacondabot, Hurmata, Io Katai, Magioladitis, Bongwarrior, VoABot II, JNW, JamesBWatson, Kajasudhakarababu, Animum, David Eppstein, SlamDiego, JoergenB, Wdflake, Patstuart, Darksniperdragon, Catmoongirl, Ttwo, Maproom, Being blunt, Mahewa, Gill110951, Rommels, Policron, Fylwind, Vinsfan368, Xiahou, Idioma-bot, Vlma111, Ramanujam first, TXiKiBoT, Thomas1617, Abdullais4u, Meters, Synthebot, AlleborgoBot, Demmy100, SieBot, BotMultichill, ToePeu.bot, Vanished User 8a9b4725f8376, Keilana, Prestonmag, OKBot, Angielaj, Anchor Link Bot, TheCatalyst31, Sfan00 IMG, ClueBot, PipepBot, Snigbrook, The Thing That Should Not Be, Cliff, Bballgrl42351, Drmies, Cp111, Pallida Mors, Boing!
said Zebedee, Joshwashere10, CounterVandalismBot, Excirial, Da rulz07, He7d3r, Sun Creator, Cenarium, Jotterbot, Leohenrique0908~enwiki, Hans Adler, Principianewton, InternetMeme, Aaron north, Marc van Leeuwen, Aoeuidhtns, Pichpich, SilvonenBot, Paulginz, Airplaneman, Addbot, Proofreader77, WardenWalk, Fluffernutter, Download, Debresser, LinkFA-Bot, Numbo3-bot, Tide rolls, Muiranec, Gail, MuZemike, Ben Ben, Legobot, Luckas-bot, Yobot, Denispir, Caracho, KamikazeBot, IW.HG, AnomieBOT, DemocraticLuntz, Fullmetalactor, Neptune5000, ArthurBot, FactSpewer, MauritsBot, Xqbot, Bdmy, Txebixev, TechBot, InsérerNombreHere, RibotBOT, DosDIS 778, Aaron Kauppi, A. di M., Carl cuthbert, Phoebe alison, Auclairde, FrescoBot, HJ Mitchell, Machine Elf 1735, Maggyero, Tkuvho, Pinethicket, Ebony Jackson, Robertas.Vilkas, Achim1999, December21st2012Freak, FoxBot, TobeBot, SepIHw, Mrs.Barbera, Tsunhimtse, Lotje, Ajb1947, PleaseStand, Jesse V., DARTH SIDIOUS 2, Shafaet, BCtl, EmausBot, RenamedUser01302013, Paul Martyn-Smith, Wham Bam Rock II, Tommy2010, Ornithikos, Velpaedia Jenkuklordanus, Tubalubalu, ZéroBot, Alpha Quadrant (alt), Quondum, D.Lazard, SporkBot, Wayne Slam, Staszek Lem, Stephanos21, Matsievsky, ChuispastonBot, Davey2010, Anita5192, ResearchRave, ClueBot NG, Raiden10, Jack Greenmaven, Wcherowi, Widr, Helpful Pixie Bot, BG19bot, Max Longint, Rasheeq1, Calagahan, Drift chambers, Writ Keeper, Ezhu94, ThatOtherJacob, Mdann52, Aqw400, Comatmebro, Dexbot, TheKing44, Brirush, Salon Alure, FrigidNinja, Eyesnore, Peter162534, Galactic Citizen 299495038858569, Aureomarginata, MjolnirPants, Taowa, Erin.Annette.Brown, Peiffers, Mark D.
Marquez, EvilLair, Purgy Purgatorio, Girlyhorsegirl, Spmishrag, Degenerate prodigy, Gov vj, Citogenitor, Kameronchia1234 and Anonymous: 473
• Network science Source: https://en.wikipedia.org/wiki/Network_science?oldid=666195437 Contributors: Edward, Michael Hardy, Kku, IceKarma, RayBirks, D6, Giraffedata, Mdd, Jérôme, Oleg Alexandrov, Imersion, Rjwilmsi, Salix alba, Madcoverboy, SmackBot, BullRangifer, Kvng, Mdanziger, Quaeler, Twas Now, Connection, CmdrObot, Sprhodes, Labongo, Douglas R. White, R'n'B, AgarwalSumeet, Skullers, Yecril, JohnDoe0007, Dggreen, Anna Lincoln, Iamthedeus, Jojalozzo, Antonio Lopez, ClueBot, Antipopxx, Sun Creator, Arjayay, Addbot, JBsupreme, Skynet1, Download, Chicagoian, Smoke73, Netzwerkerin, NYNetwork, Luckas-bot, Yobot, 2themoon, Jean.julius, AnomieBOT, Andrewrp, Materialscientist, GaborPete, Citation bot, Aquacool91, Quebec99, Xqbot, TheAMmollusc, Anna Frodesiak, Crzer07, FrescoBot, Mentatseb, Sidna, Plan92bsure, Asaalt, Beteltreuse, Worldcontrol, Brentd25, John of Reading, Karayang, Mambonofive, Djdjdjdjb, Traxs7, Sharkface1, Anselrill, Akseli.palen, Zdorovo, Pribs, ClueBot NG, Satellizer, Gavin.perch, Helpful Pixie Bot, Bibcode Bot, Whatsamattau, Jmcatania, BG19bot, Bereziny, 507WVS, Alchames, Ettypaldos, Admatkin2, Sharkbite3, 1hipster, Blah314, Nrifel, Carloseu, Mhbeals, LieutenantLatvia, Yeda123, Snoodlebug, Monkbot, Sefer12, Srijankedia, Sidharth10, Pcurley, Thebucketmanfromhades, GlennLawyer, What4uknow, BenjaminDHorne, Shikang Liu, Edric Dayne, Puccio.b, Bhatia.u, Nicholasbarry, Datzr, BrunoCoutinho, BrunoGCoutinho, Aimlesslyknows, Kent Krupa, Calvinius, Mil686 and Anonymous: 56
• Ordinal number Source: https://en.wikipedia.org/wiki/Ordinal_number?oldid=668709653 Contributors: AxelBoldt, Mav, Bryan Derksen, Zundark, The Anome, Iwnbap, LA2, Christian List, B4hand, Olivier, Stevertigo, Patrick, Michael Hardy, Llywrch, Jketola, Chinju, TakuyaMurata, Karada, Docu, Vargenau, Revolver, Charles Matthews, Dysprosia, Malcohol,
Owen, Rogper~enwiki, Hmackiernan, Baldhur, Adhemar, Fuelbottle, Tobias Bergemann, Giftlite, Markus Krötzsch, Ævar Arnfjörð Bjarmason, Lethe, Fropuff, Gro-Tsen, Thattommyhall, Jorend, Siroxo, Wmahan, Beland, Joeblakesley, Elroch, 4pq1injbok, Luqui, Silence, Paul August, EmilJ, Babomb, Randall Holmes, Wood Thrush, Robotje, Blotwell, Crust, Jumbuck, Sligocki, SidP, DV8 2XL, Jim Slim, Oleg Alexandrov, Warbola, Linas, Miaow Miaow, Graham87, Grammarbot, Jorunn, Rjwilmsi, Bremen, Salix alba, Mike Segal, R.e.b., FlaBot, Jak123, Chobot, YurikBot, Wavelength, RobotE, Hairy Dude, CanadianCaesar, Archelon, Gaius Cornelius, Trovatore, Crasshopper, DeadEyeArrow, Pooryorick~enwiki, Hirak 99, Closedmouth, Arthur Rubin, PhS, GrinBot~enwiki, Brentt, Nicholas Jackson, SmackBot, Pokipsy76, KocjoBot~enwiki, Bluebot, AlephNull~enwiki, Jiddisch~enwiki, Dreadstar, Mmehdi.g, Lambiam, Khazar, Minna Sora no Shita, Bjankuloski06en~enwiki, 16@r, Loadmaster, Limaner, Quaeler, Jason.grossman, Joseph Solis in Australia, Easwaran, Zero sharp, Tawkerbot2, JRSpriggs, Vaughan Pratt, CRGreathouse, CBM, Gregbard, FilipeS, HdZ, Pcu123456789, Lyondif02, Odoncaoa, Jj137, Hannes Eder, Shlomi Hillel, JAnDbot, Agol, BrentG, Smartcat, Bongwarrior, Swpb, David Eppstein, Jondaman21, R'n'B, IPonomarev, RockMFR, Ttwo, It Is Me Here, Policron, VolkovBot, Dommedagsprofet, Hotfeba, Jeff G., LokiClock, PMajer, Alphaios~enwiki, Cremepuff222, Wikithesource, Arcfrk, SieBot, Mrw7, J-puppy, TheCatalyst31, ClueBot, DFRussia, DanielDeibler, DragonBot, Hans Adler, StevenDH, Lacce, Against the current,
Dthomsen8, Addbot, Dyaa, Mathemens, Unzerlegbarkeit, Luckas-bot, Yobot, Utvik old, THEN WHO WAS PHONE?, KamikazeBot, AnomieBOT, Angry bee, Citation bot, Nexx892, Twri, Xqbot, Freebirth Toad, Capricorn42, RJGray, GrouchoBot, VladimirReshetnikov, SassoBot, Citation bot 1, RedBot, Burritoburritoburrito, TheStrayCat, Raiden09, EmausBot, Fly by Night, Jens Blanck, SporkBot, ClueBot NG, Frietjes, Rezabot, Helpful Pixie Bot, BG19bot, Anthony.de.almeida.lopes, Jochen Burghardt, Mark viking, Pop-up casket, Jose Brox, The Horn Blower, Dustin V. S., George8211, Dconman2, Garfield Garfield, Lalaloopsy1234, SoSivr, Eth450, Neposner, Mircea BRT, Divad42, Smwrd, Wilsonator5000 and Anonymous: 139 • Power set Source: https://en.wikipedia.org/wiki/Power_set?oldid=655854839 Contributors: AxelBoldt, Zundark, Tarquin, Awaterl, Boleslav Bobcik, Michael Hardy, Wshun, TakuyaMurata, GTBacchus, Pcb21, Mxn, Charles Matthews, Berteun, Dysprosia, Jay, Hyacinth, Ed g2s, .mau., Aleph4, Robbot, Tobias Bergemann, Adam78, Giftlite, Dratman, DefLog~enwiki, Tomruen, Mormegil, Rich Farmbrough, TedPavlic, Paul August, Zaslav, Elwikipedista~enwiki, Spayrard, SgtThroat, Obradovic Goran, Jumbuck, Kocio, Tony Sidaway, Ultramarine, Kenyon, Oleg Alexandrov, Linas, Flamingspinach, GregorB, Yurik, Salix alba, FlaBot, VKokielov, SchuminWeb, Small potato, CiaPan, NevilleDNZ, Chobot, YurikBot, Stephenb, Trovatore, Bota47, Deville, Closedmouth, MathsIsFun, Realkyhick, GrinBot~enwiki, SmackBot, InverseHypercube, Persian Poet Gal, SMP, Alink, Octahedron80, Kostmo, Armend, Shdwfeather, 16@r, Mike Fikes, Malter, Freelance Intellectual, JRSpriggs, Vaughan Pratt, CBM, Gregbard, Sam Staton, Goldencako, DumbBOT, Cj67, Abu-Fool Danyal ibn Amir al-Makhiri, Felix C.
Stegerman, David Eppstein, R'n'B, RJASE1, UnicornTapestry, VolkovBot, Camrn86, Anonymous Dissident, PaulTanenbaum, Dmcq, AlleborgoBot, Pcruce, Faradayplank, MiNombreDeGuerra, Megaloxantha, KrustallosIce28, S2000magician, Classicalecon, Dmitry Dzhus, PipepBot, DragonBot, He7d3r, Marc van Leeuwen, Addbot, Freakmighty, Download, Luckas-bot, Yobot, Ht686rg90, ArthurBot, La Mejor Ratonera, FrescoBot, Showgun45, ComputScientist, Throw it in the Fire, Tkuvho, HRoestBot, ElLutzo, John of Reading, WikitanvirBot, Lunaibis, Set theorist, Josve05a, AMenteLibera, Wcherowi, Helpful Pixie Bot, Sebastien.noir, Deltahedron, QuantumNico, Mark viking, Jadiker, Rajiv1965 and Anonymous: 87 • Quantum graph Source: https://en.wikipedia.org/wiki/Quantum_graph?oldid=666503627 Contributors: Michael Hardy, Altenmann, Cthuljew, Sadads, Mets501, Alaibot, David Eppstein, Pomte, Lamro, Jon har, DFRussia, Yobot, Baxxterr, AnomieBOT, Twri, Specfunfan, BG19bot, Deltahedron and Anonymous: 13 • Quiver (mathematics) Source: https://en.wikipedia.org/wiki/Quiver_(mathematics)?oldid=654809228 Contributors: AxelBoldt, Michael Hardy, Charles Matthews, Giftlite, Waltpohl, Sigfpe, Discospinster, Linas, Arneth, R.e.b., Aholtman, John Baez, Masnevets, Kinser, Maksim-e~enwiki, Incnis Mrsi, Gilliam, Nbarth, Bruno321, Al Lemos, Headbomb, OhanaUnited, Enlil2, Magioladitis, Etale, David Eppstein, Saibod, Geometry guy, Thehotelambush, JackSchmidt, Muro Bot, Addbot, Download, Wohingenau, Citation bot, Twri, Semistablesystem, FrescoBot, Graphalgebra, Citation bot 1, Darij, Tinfoilcat, Helpful Pixie Bot, Monkbot, Ennesse86, Rabid Pumkin and Anonymous: 20 • Random graph Source: https://en.wikipedia.org/wiki/Random_graph?oldid=666999798 Contributors: Michael Hardy, Vinod, Charles Matthews, Populus, McKay, Jredmond, MathMartin, Rebrane, Hadal, Aknxy, Giftlite, Dbenbenn, Sundar, Zeimusu, Andreas Kaufmann, Paul August, El C, Jérôme, PAR, AlexKarpman~enwiki, Malo, SteinbDJ, Oleg Alexandrov, Shreevatsa, Dysepsion,
Rjwilmsi, Brighterorange, FlaBot, Mathbot, Zijie~enwiki, Algebraist, Wavelength, Michael Slone, Wiki alf, Madcoverboy, Trovatore, Brandon, Schmock, 21655, Scorpiona, Roger Hui, Eskimbot, Fetofs, Ajaxkroon, Mhym, Radagast83, Dreadstar, Mdrine, Iridescent, Pgr94, Xaman, Thijs!bot, T.Friedrich, Alphachimpbot, JAnDbot, .anacondabot, David Eppstein, Owlgorithm, Katalaveno, Bilbobee, Nm420, VolkovBot, Dggreen, Philip Trueman, TXiKiBoT, Lradrama, PaulTanenbaum, Radagast3, Phe-bot, Faradayplank, Melcombe, ClueBot, Thingg, YouRang?, Webonfim, Little Mountain 5, Drsadeq, Addbot, Eyeofyou, CanadianLinuxUser, Colepeck23, LaaknorBot, Netzwerkerin, Satch01, Zorrobot, Luckas-bot, Yobot, AnomieBOT, Rubinbot, Xqbot, Shadowjams, Xnn, Domesticenginerd, Elwwod, BG19bot, Bereziny, NotWith, Faizan, Yllse.sw, Stefano Nasini and Anonymous: 88 • Ring theory Source: https://en.wikipedia.org/wiki/Ring_theory?oldid=666198229 Contributors: AxelBoldt, Zundark, Tarquin, Michael Hardy, Wshun, TakuyaMurata, Karada, Revolver, Charles Matthews, Dysprosia, Humbly~enwiki, MathMartin, Saforrest, Tobias Bergemann, Giftlite, Fropuff, Waltpohl, Mschlindwein, D6, Tompw, Rgdboer, Shenme, Giraffedata, EmmetCaulfield, Jef-Infojef, Tbsmith, FlaBot, Margosbot~enwiki, Chobot, Digitalme, YurikBot, Michael Slone, Grubber, Wiki alf, SmackBot, GraemeMcRae, Hmains, Kmarinas86, Dicklyon, Mets501, Rschwieb, CmdrObot, Myasuda, Namwob0, JoergenB, Pomte, Laurusnobilis, JohnBlackburne, Anonymous Dissident, Magmi, Arcfrk, Henry Delforn (old), JackSchmidt, Admjuptjilbdnl, The Thing That Should Not Be, Mild Bill Hiccup, Edwinconnell, Arjayay, Marc van Leeuwen, Addbot, Tassedethe, דוד שי, Ciphers, Materialscientist, Xenomancer, Citation bot, Isheden, Charvest, FrescoBot, Citation bot 1, Throwaway85, Suslindisambiguator, Quondum, D.Lazard, Wikfr, Otaria, Brad7777, Brirush, Laxfan1977, Grabigail, Lanzdsey and Anonymous: 35 • Robertson–Seymour theorem Source:
https://en.wikipedia.org/wiki/Robertson%E2%80%93Seymour_theorem?oldid=625420971 Contributors: AxelBoldt, FvdP, Michael Hardy, Dominus, Cyan, Charles Matthews, Dcoetzee, Populus, Altenmann, Psychonaut, Giftlite, Dbenbenn, MSGJ, Zaslav, EmilJ, Dfeldmann, Mandarax, Tizio, R.e.b., Quuxplusone, R.e.s., PhS, RDBury, BeteNoir, Melchoir, Wzhao553, Glasser, Jon Awbrey, Headbomb, A3nm, David Eppstein, Udufruduhu, Sfztang, Svick, Justin W Smith, MystBot, Addbot, DOI bot, Unzerlegbarkeit, Yobot, Citation bot, Twri, Alex Dainiak, At-par, RobinK, Dexbot and Anonymous: 15 • Split-quaternion Source: https://en.wikipedia.org/wiki/Split-quaternion?oldid=659996250 Contributors: Michael Hardy, Charles Matthews, Giftlite, Phe, Sam Hocevar, Rgdboer, Jheald, Oleg Alexandrov, Linas, Commander Keane, Grammarbot, Ketiltrout, Staecker, Salix alba, R.e.b., Mathbot, Algebraist, SmackBot, Incnis Mrsi, Jim.belk, Rschwieb, Vanished user, MOBle, Koeplinger, Phy1729, David Eppstein, Maurice Carbonaro, It Is Me Here, Geometry guy, MystBot, Addbot, Ozob, Tassedethe, עוזי ו., Luckas-bot, Yobot, Xqbot, FrescoBot, Foobarnix, John of Reading, Quondum, T.seppelt, ChrisGualtieri, Weux082690 and Anonymous: 11 • Statistical model Source: https://en.wikipedia.org/wiki/Statistical_model?oldid=665300695 Contributors: Tobias Hoevekamp, Ap, Den fjättrade ankan~enwiki, Henrygb, Wmahan, Ary29, Rich Farmbrough, Bender235, Janna Isabot, Mdd, Avenue, Btyner, Graham87, Salix alba, Kri, Msbmsb, SmackBot, DanielPenfield, Jab843, Ignacioerrico, Reza1615, Joshuav, Jackzhp, Skittleys, Gurol, MikeLynch, Mbhiii, TheSeven, Cogit080, Seraphim, Melcombe, DumZiBoT, BarretB, FlagrantUsername, Tayste, Zorrobot, Legobot, Luckas-bot, Yobot, AnakngAraw, Flavio Guitian, Erik9bot, LucienBOT, Diminuendo-se, MastiBot, Grégoire Détrez, Xuxingzhong, Chewings72, Tablelooks, Rxnt, Fowlerism, Illia Connell, BillWhiten, Loraof, SolidPhase and Anonymous: 55 • Topological graph theory Source:
https://en.wikipedia.org/wiki/Topological_graph_theory?oldid=640710294 Contributors: XJaM, Tomo, Michael Hardy, Silverfish, Dcoetzee, MathMartin, Giftlite, Andreas Kaufmann, 4pq1injbok, Rich Farmbrough, NatusRoma, Vonkje, Headbomb, Scanlan, David Eppstein, JohnBlackburne, OKBot, Yasmar, Addbot, DOI bot, OlEnglish, Luckas-bot, Ciphers, Citation bot, Miym, Charvest, Genusfour, FrescoBot, Brad7777, Pintoch and Anonymous: 4 • Triangle graph Source: https://en.wikipedia.org/wiki/Triangle_graph?oldid=668635397 Contributors: David Eppstein, Koko90, EmausBot, Nathann.cohen and Anonymous: 1

• Triangle-free graph Source: https://en.wikipedia.org/wiki/Triangle-free_graph?oldid=652758625 Contributors: Booyabazooka, Rich Farmbrough, Igorpak, Rjwilmsi, RobertBorgersen, Citrus538, Headbomb, David Eppstein, Koko90, DOI bot, Lightbot, Fraggle81, AnomieBOT, Citation bot, Miym, Citation bot 1, RobinK, RjwilmsiBot, ClueBot NG, Gruberan, Mathmasterx, Hnridder and Anonymous: 8 • Vertex (graph theory) Source: https://en.wikipedia.org/wiki/Vertex_(graph_theory)?oldid=628902495 Contributors: XJaM, Altenmann, MathMartin, Giftlite, Dbenbenn, Purestorm, Rich Farmbrough, Cburnett, RussBot, Anomalocaris, InverseHypercube, BiT, Chetvorno, Ylloh, Univer, Escarbot, David Eppstein, JaGa, Mange01, Hans Dunkelberg, Geekdiva, VolkovBot, TXiKiBoT, Synthebot, Anoko moonlight, SieBot, Hxhbot, Kl4m, JP.Martin-Flatin, Niemeyerstein en, Albambot, DOI bot, Zorrobot, Luckas-bot, TaBOT-zerem, KamikazeBot, Ciphers, Citation bot, Twri, Xqbot, Miym, Amaury, Phil1881, Trappist the monk, WillNess, DARTH SIDIOUS 2, TjBot, Orphan Wiki, ZéroBot, ClueBot NG, Gchrupala, Maxwell bernard, SamX and Anonymous: 16 • Wagner’s theorem Source: https://en.wikipedia.org/wiki/Wagner%27s_theorem?oldid=614819212 Contributors: Taxipom, David Eppstein, Yobot and El Roih

53.5.2 Images

• File:10-simplex_graph.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/86/10-simplex_graph.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Koko90 • File:11-simplex_graph.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/9b/11-simplex_graph.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Koko90 • File:3-simplex_graph.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/be/3-simplex_graph.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Koko90 • File:4-critical_graph.png Source: https://upload.wikimedia.org/wikipedia/commons/7/73/4-critical_graph.png License: CC0 Contributors: Own work Original artist: Jmerm • File:4-simplex_graph.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/2d/4-simplex_graph.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Koko90 • File:5-simplex_graph.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/e9/5-simplex_graph.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Koko90 • File:6-simplex_graph.svg Source: https://upload.wikimedia.org/wikipedia/commons/c/c8/6-simplex_graph.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Koko90 • File:6n-graf.svg Source: https://upload.wikimedia.org/wikipedia/commons/5/5b/6n-graf.svg License: Public domain Contributors: Image:6n-graf.png similar input data Original artist: User:AzaToth • File:6n-graph2.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/28/6n-graph2.svg License: Public domain Contributors: ? Original artist: ?
• File:7-simplex_graph.svg Source: https://upload.wikimedia.org/wikipedia/commons/c/cb/7-simplex_graph.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Koko90 • File:8-simplex_graph.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/2c/8-simplex_graph.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Koko90 • File:9-simplex_graph.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/bb/9-simplex_graph.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Koko90 • File:Acap.svg Source: https://upload.wikimedia.org/wikipedia/commons/5/52/Acap.svg License: Public domain Contributors: Own work Original artist: F l a n k e r • File:Ambox_important.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b4/Ambox_important.svg License: Public domain Contributors: Own work, based off of Image:Ambox scales.svg Original artist: Dsmurat (talk · contribs) • File:Area_parallellogram_as_determinant.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/ad/Area_parallellogram_as_determinant.svg License: Public domain Contributors: Own work, created with Inkscape Original artist: Jitse Niesen • File:Arrow_east.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/71/Arrow_east.svg License: Public domain Contributors: ? Original artist: ? • File:Arrow_north.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/4a/Arrow_north.svg License: Public domain Contributors: ? Original artist: ? • File:Arrow_south.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/ff/Arrow_south.svg License: Public domain Contributors: ? Original artist: ? • File:Arrow_west.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/9f/Arrow_west.svg License: Public domain Contributors: ? Original artist: ?
• File:Barabasi-albert_model_degree_distribution.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a8/Barabasi-albert_model_degree_distribution.svg License: CC BY-SA 3.0 Contributors: Created by the NetworkX module of Python Original artist: Arpad Horvath • File:Biclique_K_3_3.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/f3/Biclique_K_3_3.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Koko90 • File:Biclique_K_3_5.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/d6/Biclique_K_3_5.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Koko90 • File:Bijection.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a5/Bijection.svg License: Public domain Contributors: enwiki Original artist: en:User:Schapel

• File:Bijective_composition.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a2/Bijective_composition.svg License: Public domain Contributors: ? Original artist: ? • File:Brain.png Source: https://upload.wikimedia.org/wikipedia/commons/7/73/Nicolas_P._Rougier%27s_rendering_of_the_human_brain.png License: GPL Contributors: http://www.loria.fr/~rougier Original artist: Nicolas Rougier • File:Brent_method_example.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/2d/Brent_method_example.svg License: CC0 Contributors: Own work Original artist: Krishnavedala • File:Category_SVG.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/ff/Category_SVG.svg License: CC BY-SA 4.0 Contributors: Own work Original artist: IkamusumeFan • File:CircuitoDosMallas.png Source: https://upload.wikimedia.org/wikipedia/commons/2/2c/CircuitoDosMallas.png License: Public domain Contributors: Author's own drawing (own work) Original artist: José Luis Gálvez • File:Clique-sum.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/47/Clique-sum.svg License: Public domain Contributors: Own work Original artist: David Eppstein • File:Commons-logo.svg Source: https://upload.wikimedia.org/wikipedia/en/4/4a/Commons-logo.svg License: ? Contributors: ? Original artist: ? • File:Complete-edge-coloring.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/bb/Complete-edge-coloring.svg License: CC0 Contributors: Own work Original artist: David Eppstein • File:Complete-quads.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/7b/Complete-quads.svg License: Public domain Contributors: Transferred from en.wikipedia to Commons. Original artist: David Eppstein at English Wikipedia • File:Complete_graph_K1.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/ad/Complete_graph_K1.svg License: Public domain Contributors: Own work Original artist: David Benbennick wrote this file.
• File:Complete_graph_K2.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/96/Complete_graph_K2.svg License: Public domain Contributors: Own work Original artist: David Benbennick wrote this file. • File:Complete_graph_K3.svg Source: https://upload.wikimedia.org/wikipedia/commons/5/5a/Complete_graph_K3.svg License: Public domain Contributors: Own work Original artist: David Benbennick wrote this file. • File:Complete_graph_K5.svg Source: https://upload.wikimedia.org/wikipedia/commons/c/cf/Complete_graph_K5.svg License: Public domain Contributors: Own work Original artist: David Benbennick wrote this file. • File:Complete_graph_K7.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/9e/Complete_graph_K7.svg License: Public domain Contributors: Own work Original artist: David Benbennick • File:Conjugate-dessins.svg Source: https://upload.wikimedia.org/wikipedia/commons/5/5d/Conjugate-dessins.svg License: Public domain Contributors: Self-made; redrawn from figures 2.9 and 2.10 of Lando, Sergei K. & Zvonkin, Alexander K. (2004), Graphs on Surfaces and Their Applications, vol. 141, Encyclopaedia of Mathematical Sciences: Lower-Dimensional Topology II, Springer-Verlag. Original artist: David Eppstein • File:Connectedness-of-set-difference.png Source: https://upload.wikimedia.org/wikipedia/commons/d/d2/Connectedness-of-set-difference.png License: CC BY-SA 4.0 Contributors: Own work Original artist: Erel Segal • File:Continuity_topology.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a7/Continuity_topology.svg License: Public domain Contributors: Own work Original artist: Derrick Coetzee • File:Depth-first-tree.png Source: https://upload.wikimedia.org/wikipedia/commons/5/5d/Depth-first-tree.png License: CC-BY-SA-3.0 Contributors: • Wolfram Esser Original artist: Wolfram Esser • File:Desargues_graph_3color_edge.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/03/Desargues_graph_3color_edge.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Koko90 • File:Determinant_example.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a7/Determinant_example.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Krishnavedala • File:Directed.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a2/Directed.svg License: Public domain Contributors: ? Original artist: ? • File:Directed_cycle.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/dc/Directed_cycle.svg License: Public domain Contributors: en:Image:Directed cycle.png Original artist: en:User:Dcoetzee, User:Stannered • File:Disconnected-union-of-connected-sets.png Source: https://upload.wikimedia.org/wikipedia/commons/8/83/Disconnected-union-of-connected-sets.png License: CC BY-SA 4.0 Contributors: Own work Original artist: Erel Segal • File:Dodecahedral_graph.neato.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/05/Dodecahedral_graph.neato.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Koko90 • File:Dual_Cube-Octahedron.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/e7/Dual_Cube-Octahedron.svg License: CC-BY-SA-3.0 Contributors: Own work Original artist: 4C • File:Duals_graphs.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/ba/Duals_graphs.svg License: CC0 Contributors: \documentclass{article} \thispagestyle{empty} \usepackage{tikz} \usetikzlibrary{calc} \begin{document} \begin{tikzpicture}[node distance=2.5cm,semithick] \tikzstyle{origVertex} = [draw, blue, fill, shape=circle] \tikzstyle{dualVertex} = [draw, red, fill, shape=circle] \tikzstyle{invisibleVertex} = [shape=circle] \tikzstyle{origEdge} = [blue] \tikzstyle{dualEdge} = [red, densely dashed] \node[origVertex] (0) {}; \node[invisibleVertex] (i1) [right of=0] {}; \node[origVertex] (1) [right of=i1] {}; \node[origVertex] (2) [below right of=1] {}; \node[origVertex] (3) [below left of=2] {}; \node[invisibleVertex] (i2) [left of=3] {}; \node[origVertex] (4) [left of=i2] {}; \path (0) edge[origEdge] (1) edge[origEdge] (4) (1) edge[origEdge] (2) edge[origEdge] (3) edge[origEdge] (4) (2) edge[origEdge] (3) (3) edge[origEdge] (4); \path let \p1 = (0), \p2 = (1), \p3 = (4) in node[dualVertex] (d0) at (\x1/3+\x2/3+\x3/3,\y1/3+\y2/3+\y3/3) {}; \path let \p1 = (1), \p2 = (3), \p3 = (4) in node[dualVertex] (d1) at (\x1/3+\x2/3+\x3/3,\y1/3+\y2/3+\y3/3) {}; \path let \p1 = (1), \p2 = (2), \p3 = (3) in node[dualVertex] (d2) at (\x1/3+\x2/3+\x3/3,\y1/3+\y2/3+\y3/3) {}; \node[dualVertex] (d3) [below left of=3] {}; \path (d0) edge[dualEdge] (d1) edge[dualEdge, out=120, in=-170, looseness=4] (d3) edge[dualEdge, out=-150, in=170, looseness=2.5] (d3) (d1) edge[dualEdge] (d2) edge[dualEdge] (d3) (d2) edge[dualEdge, out=35, in=0, looseness=2] (d3) edge[dualEdge, out=-90, in=20, looseness=1.5] (d3); \end{tikzpicture} \end{document}

Original artist: Self • File:ER_model.png Source: https://upload.wikimedia.org/wikipedia/commons/a/a2/ER_model.png License: CC BY-SA 3.0 Contributors: Own work Original artist: Jmcatania • File:Ellipse_in_coordinate_system_with_semi-axes_labelled.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/8e/Ellipse_in_coordinate_system_with_semi-axes_labelled.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Jakob.scholbach • File:Example_of_continuous_function.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/7f/Example_of_continuous_function.svg License: Public domain Contributors: This file was derived from: Example of continuous function.png Original artist: Example_of_continuous_function.png: User:Pasixxxx • File:F26A_graph.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/ad/F26A_graph.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Koko90 • File:Flip_map.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3f/Flip_map.svg License: CC BY-SA 3.0 Contributors: derived from File:Rotation_by_pi_over_6.svg Original artist: Jakob.scholbach • File:Folder_Hexagonal_Icon.svg Source: https://upload.wikimedia.org/wikipedia/en/4/48/Folder_Hexagonal_Icon.svg License: Cc-by-sa-3.0 Contributors: ? Original artist: ? • File:Folkman_Lombardi.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/6d/Folkman_Lombardi.svg License: Public domain Contributors: Own work Original artist: David Eppstein • File:Frucht_graph.neato.svg Source: https://upload.wikimedia.org/wikipedia/commons/5/5d/Frucht_graph.neato.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Koko90 • File:Graceful_labeling.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/97/Graceful_labeling.svg License: Public domain Contributors: Transferred from en.wikipedia; transfer was stated to be made by User:Raulshc.
Original artist: Original uploader was Arichnad at en.wikipedia

• File:GraphMinorExampleA.png Source: https://upload.wikimedia.org/wikipedia/commons/e/e2/GraphMinorExampleA.png License: Public domain Contributors: ? Original artist: ? • File:GraphMinorExampleB.svg Source: https://upload.wikimedia.org/wikipedia/en/c/cf/GraphMinorExampleB.svg License: PD Contributors: ? Original artist: ? • File:GraphMinorExampleC.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/91/GraphMinorExampleC.svg License: CC-BY-SA-3.0 Contributors: ? Original artist: ? • File:Graph_isomorphism_a.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/9a/Graph_isomorphism_a.svg License: CC-BY-SA-3.0 Contributors: ? Original artist: ? • File:Graph_isomorphism_b.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/84/Graph_isomorphism_b.svg License: CC-BY-SA-3.0 Contributors: ? Original artist: ? • File:Hasse_diagram_of_powerset_of_3.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/ea/Hasse_diagram_of_powerset_of_3.svg License: CC-BY-SA-3.0 Contributors: self-made using graphviz's dot.
Original artist: KSmrq • File:Holt_graph.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/ba/Holt_graph.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Koko90 • File:Homografia.svg Source: https://upload.wikimedia.org/wikipedia/commons/c/c4/Homografia.svg License: Public domain Contributors: own work (vector version of raster graphics uploaded on plwiki by user Wojteks) Original artist: Wojciech Muła • File:HomotopySmall.gif Source: https://upload.wikimedia.org/wikipedia/commons/7/7e/HomotopySmall.gif License: CC0 Contributors: Own work Original artist: Jim.belk • File:Hyperbola2_SVG.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/d9/Hyperbola2_SVG.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: IkamusumeFan • File:HyperboloidOfOneSheet.PNG Source: https://upload.wikimedia.org/wikipedia/commons/5/5f/HyperboloidOfOneSheet.PNG License: Public domain Contributors: Transferred from it.wikipedia; transferred to Commons by User:Yuma using CommonsHelper. Original artist: Original uploader was Cassioli at it.wikipedia • File:HyperboloidOfTwoSheets.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b5/HyperboloidOfTwoSheets.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Krishnavedala • File:Hypergraph-wikipedia.svg Source: https://upload.wikimedia.org/wikipedia/commons/5/57/Hypergraph-wikipedia.svg License: CC BY-SA 3.0 Contributors: • Hypergraph.svg Original artist: Hypergraph.svg: Kilom691 • File:Internet_map_1024.jpg Source: https://upload.wikimedia.org/wikipedia/commons/d/d2/Internet_map_1024.jpg License: CC BY 2.5 Contributors: Originally from the English Wikipedia; description page is/was here.
Original artist: The Opte Project • File:Jordan_blocks.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/4f/Jordan_blocks.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Jakob.scholbach • File:Kleinsche_Flasche.png Source: https://upload.wikimedia.org/wikipedia/commons/b/b9/Kleinsche_Flasche.png License: CC BY-SA 3.0 Contributors: Own work Original artist: Rutebir • File:Konigsberg_bridges.png Source: https://upload.wikimedia.org/wikipedia/commons/5/5d/Konigsberg_bridges.png License: CC-BY-SA-3.0 Contributors: Public domain (PD), based on the image • Image:Koenigsberg, Map by Merian-Erben 1652.jpg

Original artist: Bogdan Giuşcă • File:Labelled_undirected_graph.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a5/Labelled_undirected_graph.svg License: CC BY-SA 3.0 Contributors: derived from http://en.wikipedia.org/wiki/File:6n-graph2.svg Original artist: Jakob.scholbach • File:Lipschitz_continuity.png Source: https://upload.wikimedia.org/wikipedia/commons/8/8d/Lipschitz_continuity.png License: CC BY-SA 3.0 Contributors: Originally uploaded to en.wikipedia (file log). Uploaded to zh.wikipedia by Mhss. Transferred to Commons by User:Shizhao using CommonsHelper. Original artist: Own work by Army1987 • File:Markov_chain_SVG.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/29/Markov_chain_SVG.svg License: CC BY-SA 3.0 Contributors: This graphic was created with matplotlib. Original artist: IkamusumeFan • File:Matrix.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/bb/Matrix.svg License: GFDL Contributors: Own work Original artist: Lakeworks • File:Matrix_multiplication_diagram_2.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/eb/Matrix_multiplication_diagram_ 2.svg License: CC-BY-SA-3.0 Contributors: This file was derived from: Matrix multiplication diagram.svg Original artist: File:Matrix multiplication diagram.svg:User:Bilou • File:Mergefrom.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/0f/Mergefrom.svg License: Public domain Contribu- tors: ? Original artist: ? • File:Moreno_Sociogram_1st_Grade.png Source: https://upload.wikimedia.org/wikipedia/commons/4/4b/Moreno_Sociogram_1st_Grade. png License: CC BY-SA 4.0 Contributors: Own work Original artist: MartinGrandjean • File:Mug_and_Torus_morph.gif Source: https://upload.wikimedia.org/wikipedia/commons/2/26/Mug_and_Torus_morph.gif License: Public domain Contributors: ? Original artist: ? 334 CHAPTER 53. WAGNER’S THEOREM

• File:Multi-pseudograph.svg Source: https://upload.wikimedia.org/wikipedia/commons/c/c9/Multi-pseudograph.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: 0x24a537r9 • File:Naphthalene2.png Source: https://upload.wikimedia.org/wikipedia/commons/9/9a/Naphthalene2.png License: Public domain Contributors: Transferred from en.wikipedia to Commons by Ronhjones using CommonsHelper. Original artist: Jon har at English Wikipedia • File:Nauru_graph.svg Source: https://upload.wikimedia.org/wikipedia/commons/c/c5/Nauru_graph.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Koko90 • File:Network_Community_Structure.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/f4/Network_Community_Structure.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: j_ham3 • File:Network_Flow_SVG.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/df/Network_Flow_SVG.svg License: CC BY-SA 4.0 Contributors: Own work Original artist: Limaner • File:Network_flow_residual_SVG.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/ae/Network_flow_residual_SVG.svg License: CC-BY-SA-3.0 Contributors: • Network_flow_residual.png Original artist: Network_flow_residual.png: Maksim • File:Nuvola_apps_edu_mathematics_blue-p.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3e/Nuvola_apps_edu_mathematics_blue-p.svg License: GPL Contributors: Derivative work from Image:Nuvola apps edu mathematics.png and Image:Nuvola apps edu mathematics-p.svg Original artist: David Vignoni (original icon); Flamurai (SVG conversion); bayo (color) • File:Nuvola_apps_kaboodle.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/1b/Nuvola_apps_kaboodle.svg License: LGPL Contributors: http://ftp.gnome.org/pub/GNOME/sources/gnome-themes-extras/0.9/gnome-themes-extras-0.9.0.tar.gz Original artist: David Vignoni / ICON KING • File:OEISicon_light.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/d8/OEISicon_light.svg License: Public domain Contributors: Own work Original artist: Watchduck (a.k.a. Tilman Piesk) • File:Omega-exp-omega-labeled.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/e6/Omega-exp-omega-labeled.svg License: CC0 Contributors: Own work Original artist: Pop-up casket (talk); original by User:Fool • File:Omega_squared.png Source: https://upload.wikimedia.org/wikipedia/commons/8/83/Omega_squared.png License: Public domain Contributors: Own work Original artist: Gro-Tsen • File:One5Root.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/40/One5Root.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Loadmaster (David R. Tribble) • File:Os_d'Ishango_IRSNB.JPG Source: https://upload.wikimedia.org/wikipedia/commons/4/42/Os_d%27Ishango_IRSNB.JPG License: CC-BY-SA-3.0 Contributors: Own work Original artist: Ben2 • File:Paley13_no_label.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/72/Paley13_no_label.svg License: Public domain Contributors: • Paley13.svg Original artist: Paley13.svg: Original uploader was David Eppstein at en.wikipedia • File:Path-connected_space.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b8/Path-connected_space.svg License: Public domain Contributors: ? Original artist: ? • File:People_icon.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/37/People_icon.svg License: CC0 Contributors: OpenClipart Original artist: OpenClipart • File:Petersen-graph-factors.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a9/Petersen-graph-factors.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Miym • File:Petersen1_tiny.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/91/Petersen1_tiny.svg License: CC BY-SA 3.0 Contributors: Own work by uploader based on http://en.wikipedia.org/wiki/File:Heawood_Graph.svg Original artist: Leshabirukov • File:Petersen_Wagner_minors.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/87/Petersen_Wagner_minors.svg License: CC0 Contributors: Own work Original artist: David Eppstein • File:Petersen_family.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/04/Petersen_family.svg License: Public domain Contributors: Own work Original artist: David Eppstein • File:Portal-puzzle.svg Source: https://upload.wikimedia.org/wikipedia/en/f/fd/Portal-puzzle.svg License: Public domain Contributors: ? Original artist: ? • File:Quantumgraph.png Source: https://upload.wikimedia.org/wikipedia/en/c/c5/Quantumgraph.png License: PD Contributors: ? Original artist: ?
• File:Question_book-new.svg Source: https://upload.wikimedia.org/wikipedia/en/9/99/Question_book-new.svg License: CC-BY-SA-3.0 Contributors: Created from scratch in Adobe Illustrator. Based on Image:Question book.png created by User:Equazcion Original artist: Tkgd2007
• File:Rapid_Oscillation.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/7d/Rapid_Oscillation.svg License: CC BY 3.0 Contributors: Transferred from en.wikipedia; transferred to Commons by User:Pbroks13 using CommonsHelper. Original artist: --pbroks13talk? Original uploader was Pbroks13 at en.wikipedia
• File:Regular_polygon_5_annotated.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/01/Regular_polygon_5_annotated.svg License: CC0 Contributors: Own work Original artist: László Németh
• File:Rotation_by_pi_over_6.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/8e/Rotation_by_pi_over_6.svg License: Public domain Contributors: Own work using Inkscape Original artist: RobHar

• File:Saddle_Point_SVG.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/0d/Saddle_Point_SVG.svg License: CC BY-SA 3.0 Contributors: This graphic was created with matplotlib. Original artist: IkamusumeFan
• File:Sample-graph.jpg Source: https://upload.wikimedia.org/wikipedia/commons/c/cb/Sample-graph.jpg License: CC BY 3.0 Contributors: Own work Original artist: Lovegoodscience
• File:Scaling_by_1.5.svg Source: https://upload.wikimedia.org/wikipedia/commons/c/c7/Scaling_by_1.5.svg License: Public domain Contributors: Own work using Inkscape Original artist: RobHar
• File:Set_partitions_5;_matrices.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b4/Set_partitions_5%3B_matrices.svg License: CC BY 3.0 Contributors: Own work Original artist: Watchduck (a.k.a. Tilman Piesk)
• File:Shrikhande_graph_square.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/19/Shrikhande_graph_square.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Koko90
• File:Signum_function.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/4f/Signum_function.svg License: CC-BY-SA-3.0 Contributors: ? Original artist: ?
• File:Simply_connected,_connected,_and_non-connected_spaces.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/16/Simply_connected%2C_connected%2C_and_non-connected_spaces.svg License: CC0 Contributors: Connected and disconnected spaces cd.svg, Connected and disconnected spaces.svg Original artist: Gazilion
• File:Squeeze_r=1.5.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/67/Squeeze_r%3D1.5.svg License: Public domain Contributors: Own work Original artist: RobHar
• File:Star_graphs.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/7d/Star_graphs.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Koko90
• File:Text_document_with_red_question_mark.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a4/Text_document_with_red_question_mark.svg License: Public domain Contributors: Created by bdesham with Inkscape; based upon Text-x-generic.svg from the Tango project. Original artist: Benjamin D. Esham (bdesham)
• File:Thomae_function_(0,1).svg Source: https://upload.wikimedia.org/wikipedia/commons/1/15/Thomae_function_%280%2C1%29.svg License: Public domain Contributors: Own work Original artist: Smithers888
• File:Three_apples(1).svg Source: https://upload.wikimedia.org/wikipedia/commons/f/f2/Three_apples%281%29.svg License: CC0 Contributors: based on File:Three apples.svg in the public domain Original artist: MjolnirPants
• File:Tree_graph.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/24/Tree_graph.svg License: Public domain Contributors: ? Original artist: ?
• File:Truncated_tetrahedral_graph.circo.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/73/Truncated_tetrahedral_graph.circo.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Koko90
• File:U+2115.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/4f/U%2B2115.svg License: Public domain Contributors: Transferred from en.wikipedia; transferred to Commons by User:Common Good using CommonsHelper. Original artist: Original uploader was Joey-das-WBF at en.wikipedia
• File:Undirected.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/bf/Undirected.svg License: Public domain Contributors: ? Original artist: ?
• File:UndirectedDegrees.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/97/UndirectedDegrees.svg License: GFDL Contributors: Own work Original artist: Melchoir
• File:UndirectedDegrees_(Loop).svg Source: https://upload.wikimedia.org/wikipedia/commons/d/d6/UndirectedDegrees_%28Loop%29.svg License: GFDL Contributors: UndirectedDegrees.svg Original artist: Melchoir (source); pan BMP
• File:Uniform_continuity_animation.gif Source: https://upload.wikimedia.org/wikipedia/commons/3/39/Uniform_continuity_animation.gif License: CC BY-SA 3.0 Contributors: Own work Original artist: Jakob.scholbach
• File:Union_et_intersection_d'ensembles.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/bc/Union_et_intersection_d%27ensembles.svg License: CC-BY-SA-3.0 Contributors: Own work Original artist: Benoît Stella alias BenduKiwi
• File:Venn's_four_ellipse_construction.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/ac/Venn%27s_four_ellipse_construction.svg License: CC BY-SA 3.0 Contributors: Own work by uploader - I made this in Inkscape from my own recollection of Venn's construction Original artist: RupertMillard
• File:Venn_A_intersect_B.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/6d/Venn_A_intersect_B.svg License: Public domain Contributors: Own work Original artist: Cepheus
• File:VerticalShear_m=1.25.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/92/VerticalShear_m%3D1.25.svg License: Public domain Contributors: Own work using Inkscape Original artist: RobHar
• File:Watts-Strogatz-rewire.png Source: https://upload.wikimedia.org/wikipedia/commons/e/e1/Watts-Strogatz-rewire.png License: CC BY-SA 3.0 Contributors: Own work Original artist: Jmcatania
• File:Whitneys_theorem_exception.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/ec/Whitneys_theorem_exception.svg License: Public domain Contributors: • Complete_graph_K3.svg Original artist: User:Dcoetzee *derivative work: Dcoetzee (talk)
• File:Wikibooks-logo-en-noslogan.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/df/Wikibooks-logo-en-noslogan.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: User:Bastique, User:Ramac et al.
• File:Wikipedia_multilingual_network_graph_July_2013.svg Source: https://upload.wikimedia.org/wikipedia/commons/5/5b/Wikipedia_multilingual_network_graph_July_2013.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Computermacgyver

• File:Wikiversity-logo.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/91/Wikiversity-logo.svg License: CC BY-SA 3.0 Contributors: Snorky (optimized and cleaned up by verdy_p) Original artist: Snorky (optimized and cleaned up by verdy_p)
• File:Wiktionary-logo-en.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/f8/Wiktionary-logo-en.svg License: Public domain Contributors: Vector version of Image:Wiktionary-logo-en.png. Original artist: Vectorized by Fvasconcellos (talk · contribs), based on original logo tossed together by Brion Vibber
• File:Z_2xZ_3.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/4f/Z_2xZ_3.svg License: Public domain Contributors: Own work Original artist: Tosha

53.5.3 Content license

• Creative Commons Attribution-Share Alike 3.0