Differential Equations and Linear Algebra

Total Pages: 16

File Type: pdf, Size: 1020 KB

Differential Equations and Linear Algebra — Index

A
absolute stability, 188; absolute value, 83, 86; acceleration, 73, 478; accuracy, 183, 185, 190, 191; Adams method, 191, 192; addition formula, 87; add exponents, 9; adjacency matrix, 316, 318, 425; Airy's equation, 130; albedo, 49; amplitude, 75, 82, 111; amplitude response, 34, 77; antisymmetric, 244, 321, 349, 406; applied mathematics, 314, 421, 487; arrows, 155, 316; associative law, 219; attractor, 169, 180; augmented matrix, 230, 257, 271, 278; autocorrelation, 480; autonomous, 57, 71, 156, 157, 159; average, 434, 438

B
back substitution, 212, 262; backslash, 220; backward difference, 6, 12, 245, 413; backward Euler, 187, 188; bad news, 326; balance equation, 48, 118, 314, 424; balance of forces, 118; bank, 12, 40, 485; bar, 403, 405, 409, 455, 457; basis, 283, 287, 289, 291, 295, 335, 444, 445; beam, 469; beat, 128; bell-shaped curve, 16, 189, 458; Bernoulli equation, 61; Bessel function, 364, 460, 478; better notation, 113, 124, 125; big picture, 298, 301, 304, 397; Black-Scholes, 457; block matrix, 230, 236, 418; block multiplication, 225, 226; boundary conditions, 403, 409, 417, 429, 457; boundary value problem, 403, 457, 470; box, 175; box function, 404, 437, 443, 469, 478, 488; Brauer, 179

C
capacitance, 119; carbon, 46; carrying capacity, 53, 55, 61; Castillo-Chavez, 179; catalyst, 179; Cayley-Hamilton theorem, 345; cell phone, 44, 175; center, 160, 162, 173; centered difference, 6, 189; chain rule, 3, 4, 365, 368; change of variables, 362; chaos, 154, 180; characteristic equation, 90, 103, 108, 163; chebfun, 402; chemical engineering, 457; chess matrix, 309; Cholesky factorization, 400; circulant matrix, 204, 448, 486, 488; circular motion, 76, 348; closed-loop, 64; closest line, 384, 390; coefficient matrix, 198; cofactor, 328; column picture, 197, 205; column rank, 273, 320; column space, 252, 257, 276; column-times-row, 221, 225, 427; combination of columns, 198, 201; combination of eigenvectors, 326, 346, 353, 368, 371; commute, 219, 223; companion matrix, 163, 164, 166, 332, 351–353, 357, 366; competition, 53, 173; complete graph, 425, 426; complete solution, 1, 17, 18, 105, 106, 202, 210, 263, 272, 274; complex conjugate, 32, 87, 94, 376; complex eigenvalues, 165; complex exponential, 13, 430; complex Fourier series, 438; complex gain, 111; complex impedance, 120; complex matrix, 373; complex numbers, 32, 82; complex roots, 90, 162; complex solution, 36, 38, 39, 89; complex vector, 431; compound interest, 12, 184; computational mechanics, 369; computational science, 417, 445; concentration, 47, 179; condition number, 398; conductance matrix, 124, 382, 423, 424; conjugate transpose, 374; constant coefficients, 1, 98, 117, 430, 470, 487; constant diagonals, 482, 486, 487; constant source, 20; continuous, 153, 314; continuous interest, 44; convergence, 10, 195; convex, 73; convolution, 117, 136, 479–489; Convolution Rule, 476, 480, 484, 485; Cooley-Tukey, 451; cooling (Newton's Law), 46; cosine series, 434; Counting Theorem, 265, 302, 312; Cramer's Rule, 328; critical damping, 96, 100, 115; critical point, 169, 170, 181; cubic spline, 139; Current Law, 123, 315, 316; cyclic convolution, 485–487

D
d'Alembert, 464, 467; damped frequency, 99, 105, 113; damped gain, 113; damping, 96, 112, 118, 122; damping ratio, 99, 113, 114; dashpot, 118; data, 398, 429; DCT, 454; decay rate, 46, 435, 442, 456, 467; deconvolution, 485, 487; degree matrix, 316, 421, 425; delta function, 23, 28, 78, 97, 98, 404, 436, 437, 440, 458, 471; delta vector, 413, 445, 482; dependent, 286; dependent columns, 208; derivative rule, 141, 439, 476; determinant, 174, 227, 231, 323, 327, 329, 333, 344, 350, 399, 492; DFT, 430, 444, 448, 454, 485; diagonal matrix, 228, 395; diagonalizable, 354, 379; diagonalization, 334, 397; difference equation, 45, 52, 183, 187, 335; difference matrix, 239, 312, 402, 421; differential equation, 1, 40; diffusion, 355, 456, 457; dimension, 44, 52, 265, 283, 289–291, 302, 320; dimensionless, 34, 99, 113, 124; direction field, 156; Discrete Fourier Transform (see DFT); discrete sines, 402, 430, 454; displacements, 124; distributive law, 219; divergence, 415; dot product, 200, 213, 247, 374; double angle, 84; double pole, 145, 472; double root, 90, 92, 101; doublet, 151; doubling time, 46, 47; driving function, 77, 112, 476; dropoff curve, 57, 62, 156

E
echelon matrix, 261, 264, 265; edge, 311, 421; eigenfunction, 405, 419, 455, 459, 467; eigenvalue, 163, 322, 323, 379; eigenvalue matrix, 334; eigenvector, 166, 322, 323, 379; eigenvector matrix, 334, 360; Einstein, 464; elapsed time, 98; elimination, 209, 211, 331; elimination matrix, 223, 228, 301; empty set, 291; energy, 393, 394, 406, 408, 422, 441; energy balance, 48; energy identity, 438, 442; enzyme, 179; epidemic, 178, 179; equal roots, 90, 92, 101; equilibrium, 415; error, 184, 185, 190, 193; error function, 458; error vector, 383, 391; Euler, 315; Euler equations, 175, 182; Euler's Formula, 13, 82, 83, 449; Euler's method, 184, 185, 188, 381; even permutation, 245; exact equations, 65; existence, 153, 195; exponential, 2, 7, 10, 25, 131, 359, 366; exponential response, 104, 108, 117

F
factorization, 379, 490; farad, 122; Fast Fourier Transform, 88, 449; feedback, 64; FFT, 430, 444, 445, 449, 451; fftw, 452; Fibonacci, 337, 342, 402; filter, 480; finite elements, 124, 370, 417, 428; finite speed, 463; first order, 163; flow graph, 452; football, 175, 177; force balance, 424; forced oscillation, 80, 105, 110; forward differences, 239; Four Fundamental Subspaces, 298, 301; Fourier coefficients, 433–435, 438; Fourier cosine series, 457; Fourier Integral Transform, 448; Fourier matrix, 85, 242, 444–447, 449; Fourier series, 419, 434, 437, 441, 455; Fourier sine series, 407, 432, 467; fourth order, 80, 93, 469; foxes, 171, 173; free column, 260; free variable, 262, 264, 267, 268, 272; free-free boundary conditions, 409; frequency, 31, 76, 79, 370, 466; frequency domain, 120, 145, 448, 480; frequency response, 36, 77, 430; frisbee, 175; full rank, 273–275, 279, 285, 382; function space, 291, 296, 431, 438, 480; fundamental matrix, 363, 368, 381; fundamental solution, 78, 81, 97, 117, 458; Fundamental Theorem, 5, 8, 42, 243, 302, 305, 397

G
gain, 30, 33, 84, 104, 111; Gauss-Jordan, 229–231, 235, 281, 328; gene, 429; general solution, 278; generalized eigenvalues, 369; geometric series, 7; Gibbs phenomenon, 433, 434; gold, 152; Gompertz equation, 63; Google, 325; GPS, 464; gradient, 415, 419; graph, 311, 315, 316, 318, 414, 421; graph Laplacian, 316, 318, 422; Green's function, 136, 482, 483; greenhouse effect, 49; grid, 414, 417, 427; ground a node, 422, 424; growth factor, 24, 40–42, 51, 97, 135, 482; growth rate, 2, 40, 362

H
Hadamard matrix, 242, 341; half-life, 46; harmonic motion, 75, 76, 79; harvesting, 59, 60, 62; hat function, 467; heat equation, 407, 455, 456; heat kernel, 457, 458, 460; Heaviside, 21, 477; Hénon map, 180; Henry, 122; Hermitian matrix, 374; Hertz, 76; higher order, 93, 102, 105, 107, 117, 352; Hilbert space, 431; homogeneous, 17, 103; Hooke's Law, 74, 371, 422; hyperplane, 206

I
identity matrix, 200, 218; image, 484; imaginary eigenvalues, 329, 348; impedance, 39, 120, 121, 127; implicit, 67, 187; impulse, 23, 78; impulse response, 23, 24, 78, 97, 102, 117, 121, 136, 140, 150, 482; incidence matrix, 124, 311, 315, 318, 421; independence, 203; independent columns, 271, 274, 288, 320, 382, 388; independent eigenvectors, 359; independent rows, 271; inductance, 119; infection rate, 178; infinite series, 10, 13, 326, 366, 432, 455; inflection point, 54, 55; initial conditions, 2, 40, 73, 346, 457; initial values, 470, 483; inner product, 225, 321, 374, 406, 431; instability, 192; integrating factor, 19, 26, 41, 482; integration by parts, 247, 321, 406, 411, 429; interest rate, 12, 43, 485; intersection, 200, 256, 297; inverse matrix, 31, 227, 230, 482; inverse transform, 140, 444, 473, 477; invertible, 204, 212, 227, 288; isocline, 155, 158, 159

J
Jacobian matrix, 170, 176; Jordan form, 354, 379, 380; Julia, 327; jump, 21, 474, 475

K
key formula, 8, 19, 78, 112, 117, 135, 482; kinetic energy, 79; Kirchhoff's Current Law, 314, 422; Kirchhoff's Laws, 123, 270; Kirchhoff's Voltage Law, 313; KKT matrix, 426; kron(A, B), 418

L
l'Hôpital's Rule, 43, 109; LAPACK, 241, 329; Laplace convolution, 481, 483; Laplace equation, 414, 415; Laplace transform, 121, 140–150, 470–478; Laplace's equation, 416, 440, 441; Laplacian matrix, 316, 318, 422; law of mass action, 179; least squares, 382–384; left eigenvectors, 345; left nullspace, 298, 300; left-inverse, 227, 231, 241; length, 241; Liénard, 181; linear combination, 198, 200, 252, 286; linear equation, 4, 17, 105, 130, 176, 346; linear shift-invariant, 459; linear time-invariant (LTI), 71, 346; linear transformation, 208; linearity, 220, 471; linearization, 171–178; linearly independent, 275, 285, 287; lobster trap, 158; logistic equation, 47, 53, 62, 156, 189; loop, 313–315; loop equation, 119, 120, 123; Lorenz equation, ix, 154, 180; Lotka-Volterra, 172

M
magic matrix, 208; magnitude, 112; magnitude response, 34, 77; Markov matrix, 324, 326, 330, 379; mass action, 179; mass matrix, 369, 378; Mathematica, 193, 467; mathematical finance, 457; MATLAB, 190, 327, 369, 444, 451, 486

matrix: companion, 163, 352, 357; complex, 373; difference, 239, 312, 402, 421; echelon, 264; eigenvalue, 334; eigenvector, 334, 360; elimination, 223, 228, 301; exponential, 14, 359, 367; factorizations, 379, 490; Fourier, 85, 242, 444, 447, 449; fundamental, 363; Hadamard, 242, 341; Hermitian, 374; identity, 200, 218; incidence, 124, 311, 312, 315, 421; inverse, 227, 230; invertible, 204, 212, 227, 288; Jacobian, 170, 176; KKT, 426, 428; Laplacian, 316, 318, 422; Markov, 324, 330; orthogonal, 237, 246, 373; permutation, 240, 245, 297, 449; positive definite, 369, 382, 393; projection, 237, 241, 246, 331, 373, 378, 379, 386, 391; rank one, 303, 379, 398; rectangular, 382; reflection, 246; rotation, 328; saddle-point, 426, 428; second difference, 412; semidefinite, 395, 409, 411; similar, 362, 367, 380; singular, 201, 323, 325, 500; skew-symmetric, 379. The single heading "matrix" indexes the active life of linear algebra.
Recommended publications
  • Quiz 7, Problem 1
    Quiz 7, Problem 1. Independence.
    The Problem. In the parts below, cite which tests apply to decide on independence or dependence. Choose one test and show complete details.
    (a) Vectors v1 = (1, −1), v2 = (1, 1).
    (b) Vectors v1 = (1, −1, 0), v2 = (1, 1, −1), v3 = (1, 0, −1).
    (c) Vectors v1, v2, v3 are data packages constructed from the equations y = x^2, y = x^5, y = x^10 on (−∞, ∞).
    (d) Vectors v1, v2, v3 are data packages constructed from the equations y = 1 + x, y = 1 − x, y = x^3 on (−∞, ∞).
    Basic Test. To show three vectors are independent, form the system of equations c1 v1 + c2 v2 + c3 v3 = 0, then solve for c1, c2, c3. If the only solution possible is c1 = c2 = c3 = 0, then the vectors are independent.
    Linear Combination Test. A list of vectors is independent if and only if each vector in the list is not a linear combination of the remaining vectors, and each vector in the list is not zero.
    Subset Test. Any nonvoid subset of an independent set is independent.
    The basic test leads to three quick independence tests for column vectors. All tests use the augmented matrix A of the vectors, which for 3 vectors is A = <v1 | v2 | v3>.
    Rank Test. The vectors are independent if and only if rank(A) equals the number of vectors.
    Determinant Test. Assume A is square. The vectors are independent if and only if the determinant of A is nonzero.
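The rank and determinant tests are easy to run numerically. A minimal sketch with NumPy, using the vectors from parts (a) and (b); the helper name `is_independent` is introduced here for illustration, not from the quiz:

```python
import numpy as np

def is_independent(*vectors):
    # Rank test: independent iff rank(A) equals the number of vectors,
    # where A has the given vectors as columns.
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

# Part (a): v1 = (1, -1), v2 = (1, 1)
v1, v2 = np.array([1, -1]), np.array([1, 1])
print(is_independent(v1, v2))  # True (determinant test also applies: det is nonzero)

# Part (b): three vectors in R^3
w1, w2, w3 = np.array([1, -1, 0]), np.array([1, 1, -1]), np.array([1, 0, -1])
print(is_independent(w1, w2, w3))  # True
```

Both tests agree here; the rank test has the advantage of also working when A is not square.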
  • Boundary Value Problems on Weighted Paths
    Boundary value problems on a weighted path
    Àngeles Carmona, Andrés M. Encinas and Silvia Gago
    Depart. Matemàtica Aplicada 3, UPC, Barcelona, SPAIN
    Midsummer Combinatorial Workshop XIX, Prague, July 29th – August 3rd, 2013

    Outline of the talk. Notations and definitions: weighted graphs and matrices; Schrödinger equations; boundary value problems on weighted graphs; Green matrix of the BVP. Boundary value problems on paths: paths with constant potential; orthogonal polynomials; Schrödinger matrix of the weighted path associated to orthogonal polynomials; two-side boundary value problems in weighted paths.

    Weighted graphs. A weighted graph Γ = (V, E, c) is composed by: V, a set of elements called vertices; E, a set of elements called edges; and c : V × V → [0, ∞), an application named conductance associated to the edges. Vertices u, v are adjacent, u ∼ v, iff c(u, v) = c_uv ≠ 0. The degree of a vertex u is d_u = Σ_{v∈V} c_uv.

    Matrices associated with graphs. Definition. The weighted Laplacian matrix of a weighted graph Γ is defined as (L)_ij = d_i if i = j, and (L)_ij = −c_ij if i ≠ j. For the 7-vertex example with conductances c12, c23, c34, c45, c35, c27, c56, c67:

        L = [  d1   −c12    0     0     0     0     0
              −c12   d2   −c23    0     0     0   −c27
dhe               0   −c23   d3   −c34  −c35    0     0
               0     0   −c34   d4   −c45    0     0
               0     0   −c35  −c45   d5   −c56    0
               0     0     0     0   −c56   d6   −c67
               0   −c27    0     0     0   −c67   d7  ]

    MCW 2013, A. Carmona, A.M. Encinas and S. Gago, Boundary value problems on a weighted path.
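The Laplacian definition above translates directly into code. A sketch that assembles the weighted Laplacian from a list of conductances; the numeric conductance values below are illustrative, not from the talk:

```python
import numpy as np

# Edge conductances c_uv for a 7-vertex weighted graph (values illustrative).
edges = {(1, 2): 1.0, (2, 3): 2.0, (3, 4): 1.5, (4, 5): 1.0,
         (3, 5): 0.5, (5, 6): 2.0, (6, 7): 1.0, (2, 7): 0.5}

n = 7
L = np.zeros((n, n))
for (u, v), c in edges.items():
    L[u - 1, v - 1] -= c          # off-diagonal entry -c_uv
    L[v - 1, u - 1] -= c
    L[u - 1, u - 1] += c          # degree d_u accumulates incident conductances
    L[v - 1, v - 1] += c

# Every row of a Laplacian sums to zero (d_u minus the incident conductances).
print(np.allclose(L.sum(axis=1), 0))  # True
```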
  • Adjacency and Incidence Matrices
    Adjacency and Incidence Matrices
    The Incidence Matrix of a Graph. Definition. Let G = (V, E) be a graph where V = {1, 2, …, n} and E = {e1, e2, …, em}. The incidence matrix of G is an n × m matrix B = (b_ik), where each row corresponds to a vertex and each column corresponds to an edge, such that if e_k is an edge between i and j, then all elements of column k are 0 except b_ik = b_jk = 1. For example, the star graph on vertices 1, 2, 3, 4 with edges e = {1, 2}, f = {1, 3}, g = {1, 4} has

        B = [ 1 1 1
              1 0 0
              0 1 0
              0 0 1 ]

    The First Theorem of Graph Theory. Theorem. If G is a multigraph with no loops and m edges, the sum of the degrees of all the vertices of G is 2m. Corollary. The number of odd vertices in a loopless multigraph is even.
    Linear Algebra and Incidence Matrices of Graphs. Recall that the rank of a matrix is the dimension of its row space. Proposition. Let G be a connected graph with n vertices and let B be the incidence matrix of G. Then the rank of B is n − 1 if G is bipartite and n otherwise.
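The proposition is easy to check numerically. A sketch using the star-graph incidence matrix from the example (a star is bipartite, so the rank should be n − 1 = 3) and, for contrast, a triangle, which is not bipartite:

```python
import numpy as np

# Incidence matrix of the star graph on {1,2,3,4} with edges e={1,2}, f={1,3}, g={1,4}.
B = np.array([[1, 1, 1],
              [1, 0, 0],
              [0, 1, 0],
              [0, 0, 1]])
print(np.linalg.matrix_rank(B))  # 3  (bipartite: rank = n - 1)

# Incidence matrix of a triangle (edges {1,2}, {2,3}, {3,1}); odd cycle, not bipartite.
T = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1]])
print(np.linalg.matrix_rank(T))  # 3  (not bipartite: rank = n)
```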
  • A Brief Introduction to Spectral Graph Theory
    A BRIEF INTRODUCTION TO SPECTRAL GRAPH THEORY
    CATHERINE BABECKI, KEVIN LIU, AND OMID SADEGHI
    MATH 563, SPRING 2020
    Abstract. There are several matrices that can be associated to a graph. Spectral graph theory is the study of the spectrum, or set of eigenvalues, of these matrices and its relation to properties of the graph. We introduce the primary matrices associated with graphs, and discuss some interesting questions that spectral graph theory can answer. We also discuss a few applications.
    1. Introduction and Definitions
    This work is primarily based on [1]. We expect the reader is familiar with some basic graph theory and linear algebra. We begin with some preliminary definitions.
    Definition 1. Let Γ be a graph without multiple edges. The adjacency matrix of Γ is the matrix A indexed by V(Γ), where A_xy = 1 when there is an edge from x to y, and A_xy = 0 otherwise. This can be generalized to multigraphs, where A_xy becomes the number of edges from x to y.
    Definition 2. Let Γ be an undirected graph without loops. The incidence matrix of Γ is the matrix M, with rows indexed by vertices and columns indexed by edges, where M_xe = 1 whenever vertex x is an endpoint of edge e. For a directed graph without loops, the directed incidence matrix N is defined by N_xe = −1, 1, or 0 according to whether x is the head of e, the tail of e, or not on e.
    Definition 3. Let Γ be an undirected graph without loops. The Laplace matrix of Γ is the matrix L indexed by V(G) with zero row sums, where L_xy = −A_xy for x ≠ y.
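Definitions 2 and 3 are linked by the identity N Nᵀ = L, for any choice of edge orientations. A small check on the path graph 1–2–3 (the example graph is ours, not from the notes):

```python
import numpy as np

# Path graph 1 - 2 - 3: adjacency A, degree D, Laplace matrix L = D - A.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
D = np.diag(A.sum(axis=1))
L = D - A

# Directed incidence N with arbitrary orientations: edge 1->2 and edge 2->3
# (tail +1, head -1, as in Definition 2 up to a global sign).
N = np.array([[ 1,  0],
              [-1,  1],
              [ 0, -1]])

# For any orientation, N @ N.T recovers the Laplace matrix.
print(np.array_equal(N @ N.T, L))  # True
```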
  • Graph Equivalence Classes for Spectral Projector-Based Graph Fourier Transforms Joya A
    Graph Equivalence Classes for Spectral Projector-Based Graph Fourier Transforms
    Joya A. Deri, Member, IEEE, and José M. F. Moura, Fellow, IEEE
    Abstract—We define and discuss the utility of two equivalence graph classes over which a spectral projector-based graph Fourier transform is equivalent: isomorphic equivalence classes and Jordan equivalence classes. Isomorphic equivalence classes show that the transform is equivalent up to a permutation on the node labels. Jordan equivalence classes permit identical transforms over graphs of nonidentical topologies and allow a basis-invariant characterization of total variation orderings of the spectral components. Methods to exploit these classes to reduce computation time of the transform as well as limitations are discussed.
    Index Terms—Jordan decomposition, generalized eigenspaces, directed graphs, graph equivalence classes, graph isomorphism, signal processing on graphs, networks
    I. INTRODUCTION
    Consider a graph G = G(A) with adjacency matrix A ∈ C^{N×N} with k ≤ N distinct eigenvalues and Jordan decomposition A = V J V^{−1}. The associated Jordan subspaces of A are J_ij, i = 1, …, k, j = 1, …, g_i, where g_i is the geometric multiplicity of eigenvalue λ_i, or the dimension of the kernel of A − λ_i I. The signal space S can be uniquely decomposed by the Jordan subspaces (see [13], [14] and Section II). For a graph signal s ∈ S, the graph Fourier transform (GFT) of [12] is defined as

        F : S → ⊕_{i=1}^{k} ⊕_{j=1}^{g_i} J_ij,
        s ↦ (ŝ_11, …, ŝ_{1g_1}, …, ŝ_{k1}, …, ŝ_{kg_k}),   (1)

    where ŝ_ij is the (oblique) projection of s onto the Jordan subspace J_ij parallel to S \ J_ij.
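When A is diagonalizable, every Jordan subspace is one-dimensional and the GFT of equation (1) reduces to expansion in an eigenvector basis. A simplified sketch of that special case only (the general Jordan-subspace projections of the paper are not implemented here; the example graph is ours):

```python
import numpy as np

# Cycle graph on 4 nodes: symmetric, hence diagonalizable, so each Jordan
# subspace is spanned by a single eigenvector and the GFT is s_hat = V^{-1} s.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
eigvals, V = np.linalg.eig(A)

s = np.array([1.0, 2.0, 3.0, 4.0])
s_hat = np.linalg.solve(V, s)   # GFT coefficients (projections onto eigenspaces)
s_back = V @ s_hat              # inverse GFT reassembles the signal

print(np.allclose(s_back, s))   # True
```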
  • EUCLIDEAN DISTANCE MATRIX COMPLETION PROBLEMS
    EUCLIDEAN DISTANCE MATRIX COMPLETION PROBLEMS HAW-REN FANG∗ AND DIANNE P. O'LEARY† June 6, 2010 Abstract. A Euclidean distance matrix is one in which the (i, j) entry specifies the squared distance between particle i and particle j. Given a partially-specified symmetric matrix A with zero diagonal, the Euclidean distance matrix completion problem (EDMCP) is to determine the unspecified entries to make A a Euclidean distance matrix. We survey three different approaches to solving the EDMCP. We advocate expressing the EDMCP as a nonconvex optimization problem using the particle positions as variables and solving using a modified Newton or quasi-Newton method. To avoid local minima, we develop a randomized initialization technique that involves a nonlinear version of the classical multidimensional scaling, and a dimensionality relaxation scheme with optional weighting. Our experiments show that the method easily solves the artificial problems introduced by Moré and Wu. It also solves the 12 much more difficult protein fragment problems introduced by Hendrickson, and the 6 larger protein problems introduced by Grooms, Lewis, and Trosset. Key words. distance geometry, Euclidean distance matrices, global optimization, dimensionality relaxation, modified Cholesky factorizations, molecular conformation AMS subject classifications. 49M15, 65K05, 90C26, 92E10 1. Introduction. Given the distances between each pair of n particles in R^r, n ≥ r, it is easy to determine the relative positions of the particles. In many applications, though, we are given only some of the distances and we would like to determine the missing distances and thus the particle positions. We focus in this paper on algorithms to solve this distance completion problem.
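The paper's nonconvex solver is out of scope for a snippet, but the classical multidimensional scaling it builds its initialization on can be sketched: given a complete squared-distance matrix, double-centering yields a Gram matrix whose leading eigenvectors recover the particle positions up to rigid motion. The data below is synthetic:

```python
import numpy as np

# Synthetic configuration: 5 particles in R^2, and its squared-distance matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 2))
G = X @ X.T
sq = np.diag(G)
Dsq = sq[:, None] + sq[None, :] - 2 * G     # squared distances, zero diagonal

# Classical MDS: B = -1/2 * J Dsq J is the Gram matrix of centered positions.
n = Dsq.shape[0]
J = np.eye(n) - np.ones((n, n)) / n         # centering matrix
B = -0.5 * J @ Dsq @ J
w, U = np.linalg.eigh(B)
w, U = w[::-1], U[:, ::-1]                  # largest eigenvalues first
Y = U[:, :2] * np.sqrt(np.maximum(w[:2], 0))

# The recovered configuration reproduces every pairwise distance.
G2 = Y @ Y.T
Dsq2 = np.diag(G2)[:, None] + np.diag(G2)[None, :] - 2 * G2
print(np.allclose(Dsq, Dsq2))  # True
```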
  • Variants of the Graph Laplacian with Applications in Machine Learning
    Variants of the Graph Laplacian with Applications in Machine Learning
    Sven Kurras
    Dissertation submitted for the degree of Doctor of Natural Sciences (Dr. rer. nat.) in the Department of Informatics, Faculty of Mathematics, Informatics and Natural Sciences, Universität Hamburg. Hamburg, October 2016. This doctoral research was funded by the Deutsche Forschungsgemeinschaft, Research Unit 1735 "Structural Inference in Statistics: Adaptation and Efficiency". Advisor: Prof. Dr. Ulrike von Luxburg. Date of the defense: 22 March 2017. Chair of the examination committee: Prof. Dr. Matthias Rarey. First reviewer: Prof. Dr. Ulrike von Luxburg. Second reviewer: Prof. Dr. Wolfgang Menzel.
    Abstract (translated from the German). Graphs appear in every area of life. For example, people spend a lot of time traversing the edges of the Internet graph. Further examples of graphs are social networks, public transit, molecules, financial transactions, fishing nets, family trees, and the graph in which all pairs of natural numbers with the same digit sum are joined by an edge. Graphs can be represented by their adjacency matrix W; beyond that, a multitude of alternative graph matrices exist. Many structural properties of graphs, for instance their acyclicity, number of spanning trees, or random-walk hitting times, are reflected in one way or another in algebraic properties of their graph matrices. This fundamental interplay allows graphs to be studied using all the results of linear algebra, applied to graph matrices. Spectral graph theory studies graphs in particular through the eigenvalues and eigenvectors of their graph matrices. Above all the Laplacian matrix L = D − W is important, but it has many variants, for example the normalized Laplacian, the signless Laplacian, and the Diplacian. Most variants are based on a "syntactically small" change of L, such as D + W in place of D − W.
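The variants named in the abstract differ only by small changes to L = D − W. A sketch computing three of them for a small weighted graph (the example matrix W is ours, chosen for illustration):

```python
import numpy as np

# Weighted adjacency matrix W of a small graph (no loops), degrees d, D = diag(d).
W = np.array([[0, 1, 2],
              [1, 0, 0],
              [2, 0, 0]], dtype=float)
d = W.sum(axis=1)
D = np.diag(d)

L = D - W                        # standard Laplacian
Q = D + W                        # signless Laplacian ("syntactically small" change)
Dm12 = np.diag(1 / np.sqrt(d))
L_sym = Dm12 @ L @ Dm12          # symmetric normalized Laplacian

# L annihilates the constant vector; both L and L_sym are positive semidefinite.
print(np.allclose(L @ np.ones(3), 0))           # True
print(np.linalg.eigvalsh(L_sym).min() > -1e-12) # True
```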
  • Problems in Abstract Algebra
    STUDENT MATHEMATICAL LIBRARY Volume 82. Problems in Abstract Algebra. A. R. Wadsworth. 10.1090/stml/082. American Mathematical Society, Providence, Rhode Island. Editorial Board: Satyan L. Devadoss, John Stillwell (Chair), Erica Flapan, Serge Tabachnikov. 2010 Mathematics Subject Classification. Primary 00A07, 12-01, 13-01, 15-01, 20-01. For additional information and updates on this book, visit www.ams.org/bookpages/stml-82. Library of Congress Cataloging-in-Publication Data. Names: Wadsworth, Adrian R., 1947– Title: Problems in abstract algebra / A. R. Wadsworth. Description: Providence, Rhode Island: American Mathematical Society, [2017] | Series: Student mathematical library; volume 82 | Includes bibliographical references and index. Identifiers: LCCN 2016057500 | ISBN 9781470435837 (alk. paper) Subjects: LCSH: Algebra, Abstract – Textbooks. | AMS: General – General and miscellaneous specific topics – Problem books. | Field theory and polynomials – Instructional exposition (textbooks, tutorial papers, etc.). | Commutative algebra – Instructional exposition (textbooks, tutorial papers, etc.). | Linear and multilinear algebra; matrix theory – Instructional exposition (textbooks, tutorial papers, etc.). | Group theory and generalizations – Instructional exposition (textbooks, tutorial papers, etc.). Classification: LCC QA162 .W33 2017 | DDC 512/.02–dc23. LC record available at https://lccn.loc.gov/2016057500
  • Chapter 7 Block Designs
    Chapter 7 Block Designs
    One simile that solitary shines / In the dry desert of a thousand lines. — Epilogue to the Satires, ALEXANDER POPE
    The collection of all subsets of cardinality k of a set with v elements (k < v) has the property that any subset of t elements, with 0 ≤ t ≤ k, is contained in precisely C(v−t, k−t) subsets of size k. The subsets of size k provide therefore a nice covering for the subsets of a lesser cardinality. Observe that the number of subsets of size k that contain a subset of size t depends only on v, k, and t and not on the specific subset of size t in question. This is the essential defining feature of the structures that we wish to study. The example we just described inspires general interest in producing similar coverings without using all the C(v, k) subsets of size k but rather as small a number of them as possible. The coverings that result are often elegant geometrical configurations, of which the projective and affine planes are examples. These latter configurations form nice coverings only for the subsets of cardinality 2, that is, any two elements are in the same number of these special subsets of size k, which we call blocks (or, in certain instances, lines). A collection of subsets of cardinality k, called blocks, with the property that every subset of size t (t ≤ k) is contained in the same number (say λ) of blocks is called a t-design. We supply the reader with constructions for t-designs with t as high as 5.
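The opening claim — every t-subset lies in exactly C(v−t, k−t) of the k-subsets — can be verified by brute force for small parameters:

```python
from itertools import combinations
from math import comb

# All k-subsets of a v-set form a t-design: every t-subset is contained in
# exactly lambda = C(v - t, k - t) blocks.
v, k, t = 7, 3, 2
blocks = list(combinations(range(v), k))

lam = comb(v - t, k - t)   # predicted count: C(5, 1) = 5
for T in combinations(range(v), t):
    count = sum(1 for b in blocks if set(T) <= set(b))
    assert count == lam

print(lam)  # 5
```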
  • arXiv:1711.06300v1
    EXPLICIT BLOCK-STRUCTURES FOR BLOCK-SYMMETRIC FIEDLER-LIKE PENCILS. M. I. Bueno, M. Martin, J. Pérez, A. Song, and I. Viviano. Abstract. In the last decade, there has been a continued effort to produce families of strong linearizations of a matrix polynomial P(λ), regular and singular, with good properties, such as, being companion forms, allowing the recovery of eigenvectors of a regular P(λ) in an easy way, allowing the computation of the minimal indices of a singular P(λ) in an easy way, etc. As a consequence of this research, families such as the family of Fiedler pencils, the family of generalized Fiedler pencils (GFP), the family of Fiedler pencils with repetition, and the family of generalized Fiedler pencils with repetition (GFPR) were constructed. In particular, one of the goals was to find in these families structured linearizations of structured matrix polynomials. For example, if a matrix polynomial P(λ) is symmetric (Hermitian), it is convenient to use linearizations of P(λ) that are also symmetric (Hermitian). Both the family of GFP and the family of GFPR contain block-symmetric linearizations of P(λ), which are symmetric (Hermitian) when P(λ) is. Now the objective is to determine which of those structured linearizations have the best numerical properties. The main obstacle for this study is the fact that these pencils are defined implicitly as products of so-called elementary matrices. Recent papers in the literature had as a goal to provide an explicit block-structure for the pencils belonging to the family of Fiedler pencils and any of its further generalizations to solve this problem.
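For context, the simplest explicit linearization is the first companion form: for a monic quadratic P(λ) = Iλ² + A1λ + A0, the linearization reduces to an ordinary eigenvalue problem for the block matrix C below, whose eigenvalues are exactly the roots of det P(λ). A sketch with random test matrices (ours, not from the paper):

```python
import numpy as np

# First companion linearization of P(lam) = I*lam^2 + A1*lam + A0:
#   C = [[-A1, -A0], [I, 0]];  C [lam*x; x] = lam [lam*x; x]  iff  P(lam) x = 0.
rng = np.random.default_rng(1)
n = 3
A1 = rng.standard_normal((n, n))
A0 = rng.standard_normal((n, n))

C = np.block([[-A1, -A0],
              [np.eye(n), np.zeros((n, n))]])

# Each eigenvalue of C makes P(lam) singular.
for lam in np.linalg.eigvals(C):
    P = lam**2 * np.eye(n) + lam * A1 + A0
    assert abs(np.linalg.det(P)) < 1e-6
```

The Fiedler-like pencils of the paper generalize exactly this construction; their block structure is what the abstract says is made explicit.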
  • Polynomial Sequences Generated by Linear Recurrences
    Polynomial Sequences Generated by Linear Recurrences: Location and Reality of Zeros
    Innocent Ndikubwayo
    ISBN 978-91-7911-462-6. Department of Mathematics. Doctoral Thesis in Mathematics at Stockholm University, Sweden, 2021. Academic dissertation for the Degree of Doctor of Philosophy in Mathematics at Stockholm University, to be publicly defended on Friday 14 May 2021 at 15.00 in sal 14 (Gradängsalen), hus 5, Kräftriket, Roslagsvägen 101, and online via Zoom; the public link is available at the department website.
    Abstract
    In this thesis, we study the problem of location of the zeros of individual polynomials in sequences of polynomials generated by linear recurrence relations. In Paper I, we establish the necessary and sufficient conditions that guarantee hyperbolicity of all the polynomials generated by a three-term recurrence of length 2, whose coefficients are arbitrary real polynomials. These zeros are dense on the real intervals of an explicitly defined real semialgebraic curve. Paper II extends Paper I to three-term recurrences of length greater than 2. We prove that there always exist non-hyperbolic polynomial(s) in the generated sequence. We further show that with at most finitely many known exceptions, all the zeros of all the polynomials generated by the recurrence lie and are dense on an explicitly defined real semialgebraic curve which consists of real intervals and non-real segments. The boundary points of this curve form a subset of the zero locus of the discriminant of the characteristic polynomial of the recurrence.
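A three-term recurrence of length 2 has the shape P_n = Q·P_{n−1} + R·P_{n−2} with polynomial coefficients Q and R. A sketch generating such a sequence and inspecting the reality of its zeros; this particular choice of Q and R (a hyperbolic, Chebyshev-like case) is ours for illustration, not from the thesis:

```python
import numpy as np

# Three-term recurrence P_n = Q * P_{n-1} + R * P_{n-2}, P_0 = 1, P_1 = Q,
# with Q(x) = x and R(x) = -1: a Chebyshev-like family with all-real zeros.
Q = np.polynomial.Polynomial([0, 1])   # Q(x) = x
R = np.polynomial.Polynomial([-1])     # R(x) = -1

P = [np.polynomial.Polynomial([1]), Q]
for _ in range(2, 8):
    P.append(Q * P[-1] + R * P[-2])

# P_2 = x^2 - 1; the zeros of P_7 are all real (hyperbolic case).
roots = P[-1].roots()
print(np.allclose(roots.imag, 0))  # True
```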
  • Explicit Inverse of a Tridiagonal (P, R)–Toeplitz Matrix
    Explicit inverse of a tridiagonal (p, r)-Toeplitz matrix
    A.M. Encinas, M.J. Jiménez
    Departament de Matemàtiques, Universitat Politècnica de Catalunya
    Abstract
    Tridiagonal matrices appear in many contexts in pure and applied mathematics, so the study of the inverse of these matrices becomes of specific interest. In recent years the invertibility of nonsingular tridiagonal matrices has been quite investigated in different fields, not only from the theoretical point of view (either in the framework of linear algebra or in the ambit of numerical analysis), but also due to applications, for instance in the study of sound propagation problems or certain quantum oscillators. However, explicit inverses are known only in a few cases, in particular when the tridiagonal matrix has constant diagonals or the coefficients of these diagonals are subjected to some restrictions, like the tridiagonal p-Toeplitz matrices [7], whose three diagonals are formed by p-periodic sequences. The recent formulae for the inversion of tridiagonal p-Toeplitz matrices are based, more or less directly, on the solution of second-order linear difference equations, although most of them use a cumbersome formulation that in fact does not take into account the periodicity of the coefficients. This contribution presents the explicit inverse of a tridiagonal (p, r)-Toeplitz matrix, whose diagonal coefficients are in a more general class of sequences than periodic ones, that we have called quasi-periodic sequences. A tridiagonal matrix A = (a_ij) of order n + 2 is called (p, r)-Toeplitz if there exists m ∈ N such that n + 2 = mp and a_{i+p, j+p} = r · a_{ij} for i, j = 0, …, (m − 1)p. Equivalently, A is a (p, r)-Toeplitz matrix iff a_{i+kp, j+kp} = r^k · a_{ij} for i, j = 0, …, p and k = 0, …, m − 1. We have developed a technique that reduces any linear second-order difference equation with periodic or quasi-periodic coefficients to a difference equation of the same kind but with constant coefficients [3].
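The defining relation can be checked directly on a constructed example. A sketch that seeds the first p entries of each diagonal and scales every successive p-block by r, so that a_{i+p, j+p} = r·a_{ij} holds on the whole band (the seed values are illustrative):

```python
import numpy as np

# Tridiagonal (p, r)-Toeplitz matrix of order n + 2 = m * p.
p, r, m = 3, 2.0, 4
size = m * p
main_seed = [1.0, 2.0, 3.0]   # first p main-diagonal entries (illustrative)
off_seed = [0.5, 1.5, 2.5]    # first p off-diagonal entries (illustrative)

A = np.zeros((size, size))
for i in range(size):
    A[i, i] = main_seed[i % p] * r ** (i // p)
    if i + 1 < size:
        A[i, i + 1] = off_seed[i % p] * r ** (i // p)
        A[i + 1, i] = off_seed[i % p] * r ** (i // p)

# Verify the defining relation a_{i+p, j+p} = r * a_{ij} on the tridiagonal band.
ok = all(np.isclose(A[i + p, j + p], r * A[i, j])
         for i in range(size - p) for j in range(size - p) if abs(i - j) <= 1)
print(ok)  # True
```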