Preconditioning Techniques Based on the Birkhoff–von Neumann Decomposition


Michele Benzi (Department of Mathematics and Computer Science, Emory University, Atlanta, USA; [email protected]), Alex Pothen (Department of Computer Science, Purdue University, West Lafayette, USA; [email protected]), Bora Uçar (CNRS and LIP, ENS Lyon, France; [email protected])

Abstract. We introduce a class of preconditioners for general sparse matrices based on the Birkhoff–von Neumann decomposition of doubly stochastic matrices. These preconditioners are aimed primarily at solving challenging linear systems with highly unstructured and indefinite coefficient matrices. We present some theoretical results and numerical experiments on linear systems from a variety of applications.

1 Introduction

We consider the solution of linear systems $Ax = b$, where $A = [a_{ij}] \in \mathbb{R}^{n \times n}$, $b$ is a given vector, and $x$ is the unknown vector. Our aim is to develop and investigate preconditioners for Krylov subspace methods for solving such linear systems, where $A$ is highly unstructured and indefinite.

For a given matrix $A$, we first preprocess it to obtain a doubly stochastic matrix (whose row and column sums are one). Using this doubly stochastic matrix, we then select some fraction of some of the nonzeros of $A$ to be included in the preconditioner. Our main tools are the well-known Birkhoff–von Neumann (BvN) decomposition (discussed in Section 2 for completeness) and a splitting of the input matrix in the form $A = M - N$ based on its BvN decomposition. When such a splitting is defined, $M^{-1}$ or solvers for $My = z$ are required. We give sufficient conditions under which such a splitting is convergent and discuss specialized solvers for $My = z$ when these conditions are met. When the conditions become restrictive in practice, we build preconditioners based on this splitting by using the LU decomposition of $M$. Our motivation is that the preconditioner $M^{-1}$ can be applied to vectors by a number of highly concurrent steps, where the number of steps is controlled by the user. Therefore, the preconditioners (or the splittings) can be advantageous for use in many-core computing systems. In the context of splittings, the application of $N$ to vectors can also enjoy the same property. These motivations are shared by recent work on ILU preconditioners, whose fine-grained computation [8] and approximate application [1] have been investigated for GPU-like systems.

The paper is organized as follows. We first give the necessary background (Section 2) on doubly stochastic matrices and the BvN decomposition. We then develop splittings for doubly stochastic matrices (Section 3), where we analyze convergence properties and discuss algorithms to construct the preconditioners. Later in the same section, we discuss how the preconditioners can be used for arbitrary matrices after some preprocessing. Here our approach yields a generalization of the Birkhoff–von Neumann decomposition to matrices with positive and negative entries in which the sum of the absolute values of the entries in any given row or column is one; this generalization could be of interest in other areas. We then give experimental results (Section 4) with nonnegative as well as arbitrary matrices, and conclude the paper.

2 Background and definitions

Here we define several properties of matrices: irreducible, fully indecomposable, and doubly stochastic matrices.

An $n \times n$ matrix $A$ is reducible if there exists a permutation matrix $P$ such that

$$P A P^T = \begin{bmatrix} A_{1,1} & A_{1,2} \\ O & A_{2,2} \end{bmatrix},$$

where $A_{1,1}$ is an $r \times r$ submatrix, $A_{2,2}$ is an $(n-r) \times (n-r)$ submatrix, and $1 \le r < n$. If no such permutation matrix exists, then $A$ is irreducible [21, Ch. 1]. When $A$ is reducible, either $A_{1,1}$ or $A_{2,2}$ can be reducible as well, and we can recursively identify their diagonal blocks until all diagonal blocks are irreducible. That is, we can obtain

$$(2.1) \qquad P A P^T = \begin{bmatrix} A_{1,1} & A_{1,2} & \cdots & A_{1,s} \\ 0 & A_{2,2} & \cdots & A_{2,s} \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & A_{s,s} \end{bmatrix},$$

where each $A_{i,i}$ is square and irreducible. This block upper triangular form, with square irreducible diagonal blocks, is called the Frobenius normal form [15, p. 532].
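Irreducibility has a convenient graph characterization: $A$ is irreducible exactly when the directed graph with an edge $i \to j$ for every nonzero $a_{ij}$ is strongly connected, and the diagonal blocks of the Frobenius normal form (2.1) correspond to the strongly connected components of this graph. The following minimal sketch (our illustration, not part of the paper) tests irreducibility this way with SciPy:

```python
# A matrix A is irreducible exactly when the directed graph with an edge
# i -> j for every nonzero a_ij is strongly connected, so irreducibility
# reduces to a single strongly-connected-components computation.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def is_irreducible(A: np.ndarray) -> bool:
    """True iff no permutation P makes P A P^T block upper triangular."""
    graph = csr_matrix((A != 0).astype(int))
    n_components, _ = connected_components(graph, directed=True,
                                           connection="strong")
    return n_components == 1

# A cyclic permutation matrix is irreducible; the identity is not.
P = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])
print(is_irreducible(P))          # True: 0 -> 1 -> 2 -> 0 is one cycle
print(is_irreducible(np.eye(3)))  # False: three 1x1 diagonal blocks
```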
An $n \times n$ matrix $A$ is fully indecomposable if there exists a permutation matrix $Q$ such that $AQ$ has a zero-free diagonal and is irreducible [6, Chs. 3 and 4]. If $A$ is not fully indecomposable but nonsingular, it can be permuted into the block upper triangular form

$$P A Q^T = \begin{bmatrix} A_{1,1} & A_{1,2} \\ O & A_{2,2} \end{bmatrix},$$

where each $A_{i,i}$ is fully indecomposable or can be further permuted into block upper triangular form. If the coefficient matrix of a linear system is not fully indecomposable, the block upper triangular form should be obtained, and only the smaller systems with the diagonal blocks should be factored, for simplicity and efficiency [9, Ch. 6]. We therefore assume without loss of generality that the matrix $A$ is fully indecomposable.

An $n \times n$ matrix $A$ is doubly stochastic if $a_{ij} \ge 0$ for all $i, j$ and $Ae = A^T e = e$, where $e$ is the vector of all ones; that is, all row sums and column sums are equal to one. If these sums are less than one, the matrix $A$ is doubly substochastic. A doubly stochastic matrix is either fully indecomposable or block diagonal with each block fully indecomposable. By Birkhoff's theorem [3], there exist coefficients $\alpha_1, \alpha_2, \ldots, \alpha_k \in (0,1)$ with $\sum_{i=1}^{k} \alpha_i = 1$ and permutation matrices $P_1, P_2, \ldots, P_k$ such that

$$(2.2) \qquad A = \alpha_1 P_1 + \alpha_2 P_2 + \cdots + \alpha_k P_k.$$

Such a representation of $A$ as a convex combination of permutation matrices is known as a Birkhoff–von Neumann (BvN) decomposition; in general, it is not unique. The Marcus–Ree theorem states that there are BvN decompositions with $k \le n^2 - 2n + 2$ for dense matrices; Brualdi and Gibson [5] and Brualdi [4] show that for a fully indecomposable sparse matrix with $\tau$ nonzeros, there are BvN decompositions with $k \le \tau - 2n + 2$.

An $n \times n$ nonnegative, fully indecomposable matrix $A$ can be uniquely scaled with two positive diagonal matrices $R$ and $C$ such that $RAC$ is doubly stochastic [20].
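One classical way to compute such a scaling is the Sinkhorn–Knopp iteration, which alternately normalizes column and row sums until $RAC$ is numerically doubly stochastic. The sketch below is a plain-vanilla version of that idea under our own tolerance and stopping choices; it is illustrative and not necessarily the scaling routine used in the paper:

```python
# Sinkhorn-Knopp scaling sketch: find positive diagonal scalings r, c so
# that diag(r) @ A @ diag(c) is (numerically) doubly stochastic.
import numpy as np

def sinkhorn_knopp(A, tol=1e-10, max_iter=10_000):
    """A must be nonnegative and fully indecomposable (e.g., strictly
    positive); otherwise the iteration may stall or divide by zero."""
    n = A.shape[0]
    r = np.ones(n)
    for _ in range(max_iter):
        c = 1.0 / (A.T @ r)        # make column sums of diag(r) A diag(c) one
        r = 1.0 / (A @ c)          # then make row sums one (exactly)
        col_sums = c * (A.T @ r)   # column sums of the rescaled matrix
        if np.max(np.abs(col_sums - 1.0)) < tol:
            return r, c
    raise RuntimeError("no convergence; is A fully indecomposable?")

rng = np.random.default_rng(0)
A = rng.random((5, 5)) + 0.1       # strictly positive => fully indecomposable
r, c = sinkhorn_knopp(A)
S = np.diag(r) @ A @ np.diag(c)
print(np.allclose(S.sum(axis=0), 1.0), np.allclose(S.sum(axis=1), 1.0))
```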
3 Splittings of doubly stochastic matrices

3.1 Definition and properties. Let $b \in \mathbb{R}^n$ be given and consider solving the linear system $Ax = b$, where $A$ is doubly stochastic; hereafter we assume that $A$ is invertible. After finding a representation of $A$ in the form (2.2), pick an integer $r$ between $1$ and $k-1$ and split $A$ as

$$(3.3) \qquad A = M - N,$$

where

$$(3.4) \qquad M = \alpha_1 P_1 + \cdots + \alpha_r P_r, \qquad N = -\alpha_{r+1} P_{r+1} - \cdots - \alpha_k P_k.$$

Note that $M$ and $-N$ are doubly substochastic matrices.

Definition 1. A splitting of the form (3.3) with $M$ and $N$ given by (3.4) is said to be a doubly substochastic splitting.

Definition 2. A doubly substochastic splitting $A = M - N$ of a doubly stochastic matrix $A$ is said to be standard if $M$ is invertible. We will call such a splitting an SDS splitting.

In general, it is not easy to guarantee that a given doubly substochastic splitting is standard, except for some trivial situations such as the case $r = 1$, in which $M$ is always invertible. We also have a characterization of invertible $M$ when $r = 2$.

Theorem 3.1. Let $M = \alpha_1 P_1 + \alpha_2 P_2$. Then $M$ is invertible if (i) $\alpha_1 \ne \alpha_2$, or (ii) $\alpha_1 = \alpha_2$ and all the fully indecomposable blocks of $M$ have an odd number of rows (and columns). If any such block is of even order, $M$ is singular.

Proof. We investigate the two cases separately.

Case (i): Without loss of generality assume that $\alpha_1 > \alpha_2$. We have

$$M = \alpha_1 P_1 + \alpha_2 P_2 = P_1(\alpha_1 I + \alpha_2 P_1^T P_2).$$

The matrix $\alpha_1 I + \alpha_2 P_1^T P_2$ is nonsingular. Indeed, its eigenvalues are of the form $\alpha_1 + \alpha_2 \lambda_j$, where $\lambda_j$ is a generic eigenvalue of the (orthogonal, doubly stochastic) matrix $P_1^T P_2$; since $|\lambda_j| = 1$ for all $j$ and $\alpha_1 > \alpha_2$, it follows that $\alpha_1 + \alpha_2 \lambda_j \ne 0$ for all $j$. Thus, $M$ is invertible.

Case (ii): This is a consequence of the Perron–Frobenius theorem. To see this, observe that we need to show that under the stated conditions the sum $P_1 + P_2 = P_1(I + P_1^T P_2)$ is invertible, i.e., that $\lambda = -1$ cannot be an eigenvalue of $P_1^T P_2$. Since both $P_1$ and $P_2$ are permutation matrices, $P_1^T P_2$ is also a permutation matrix. Consider the Frobenius normal form $T = \Pi (P_1^T P_2) \Pi^T$ of $P_1^T P_2$, i.e., the block triangular matrix

$$(3.5) \qquad T = \begin{bmatrix} T_{1,1} & T_{1,2} & \cdots & T_{1,s} \\ 0 & T_{2,2} & \cdots & T_{2,s} \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & T_{s,s} \end{bmatrix},$$

which in fact has $T_{i,j} = 0$ for $i \ne j$. The eigenvalues of $P_1^T P_2$ are just the eigenvalues of the diagonal blocks $T_{i,i}$ of $T$ (there may be only one such block, corresponding to the case where $P_1^T P_2$ is irreducible). Each diagonal block $T_{i,i}$ is also a permutation matrix; thus each $T_{i,i}$ is doubly stochastic, orthogonal, and irreducible. Any matrix of this kind corresponds to a cyclic permutation and has its eigenvalues on the unit circle. The eigenvalues of a cyclic permutation of order $m$ are the $m$-th roots of unity, so $\lambda = -1$ occurs precisely when some block $T_{i,i}$ has even order; since the blocks $T_{i,i}$ correspond to the fully indecomposable blocks of $M$, the matrix $M$ is invertible exactly when all such blocks have odd order.

An SDS splitting gives rise to the stationary iteration $x_{j+1} = M^{-1}N x_j + M^{-1}b$, which converges to the solution for any initial guess $x_0$ if and only if $\rho(H) < 1$. Hence, we are interested in conditions that guarantee that the spectral radius of the iteration matrix

$$H = M^{-1}N = -(\alpha_1 P_1 + \cdots + \alpha_r P_r)^{-1}(\alpha_{r+1} P_{r+1} + \cdots + \alpha_k P_k)$$

is strictly less than one.
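To make the construction concrete, the following self-contained sketch (our illustration; the paper discusses its own construction algorithms later in this section) builds a BvN decomposition of a small doubly stochastic matrix by greedy peeling, forms $M$ from the $r$ largest coefficients as in (3.4), and measures $\rho(H)$ numerically. The greedy matching step and the choice $r = 2$ are assumptions for the demo, not prescriptions from the paper:

```python
# Greedy Birkhoff-style peeling plus the splitting (3.3)-(3.4): repeatedly
# find a perfect matching inside the support of the residual (one exists by
# Birkhoff's theorem while the residual is nonzero), peel off the smallest
# matched entry, then keep the r largest terms in M and set N = M - A.
import numpy as np
from scipy.optimize import linear_sum_assignment

def greedy_bvn(A, tol=1e-12):
    """Return (alphas, perms) with A ~= sum_i alphas[i] * perms[i]."""
    S = A.copy()
    alphas, perms = [], []
    while S.max() > tol:
        # cost 1 on (numerically) zero entries, 0 on the support; a
        # zero-cost assignment is a perfect matching inside the support
        cost = (S <= tol).astype(float)
        rows, cols = linear_sum_assignment(cost)
        assert cost[rows, cols].sum() == 0, "support has no perfect matching"
        alpha = S[rows, cols].min()
        P = np.zeros_like(S)
        P[rows, cols] = 1.0
        alphas.append(alpha)
        perms.append(P)
        S = S - alpha * P          # zeroes at least one entry per pass
    return alphas, perms

# Example: random doubly stochastic matrix as an average of permutations.
rng = np.random.default_rng(1)
n, k = 6, 8
A = sum(np.eye(n)[rng.permutation(n)] for _ in range(k)) / k

alphas, perms = greedy_bvn(A)
order = np.argsort(alphas)[::-1]   # put the largest coefficients into M
r = 2                              # demo choice; M invertible for distinct
M = sum(alphas[i] * perms[i] for i in order[:r])  # alphas by Theorem 3.1(i)
N = M - A                          # so that A = M - N, matching (3.3)-(3.4)
H = np.linalg.solve(M, N)
print("k =", len(alphas), " rho(H) =", max(abs(np.linalg.eigvals(H))))
```

Increasing $r$ makes $M$ a closer approximation to $A$ and typically shrinks $\rho(H)$, at the price of a more expensive $M$ to apply or factor; this is the trade-off, controlled by the user, that the introduction alludes to.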