Part I Linear Algebra


Chapter 1  Vectors

1.1 Vectors Basics

1.1.1 Definition of Vectors

• A vector is a collection of n real numbers $x_1, x_2, \ldots, x_n$ arranged in a column or a row. It can be thought of as a point in n-dimensional space, or as providing a direction.
• Each number $x_i$ is called a component (or element) of the vector.
• The inner product defines the length of a vector, and generalizes the notion of angle between two vectors.
• Via the inner product, we can view a vector as a linear function. We can also compute the projection of a vector onto a line defined by another vector.
• We usually write vectors in column format:
$$x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$$
• Geometry. A vector represents both a direction from the origin and a point in the multi-dimensional space $\mathbb{R}^n$, where each component corresponds to a coordinate of the point.
• Transpose. If x is a column vector, $x^T$ (transpose) denotes the corresponding row vector, and vice-versa: $x^T = [x_1, \ldots, x_n]$.

1.1.2 Independence

• A set of m vectors $x_1, \ldots, x_m$ in $\mathbb{R}^n$ is said to be independent if no vector in the set can be expressed as a linear combination of the others. Equivalently, the condition
$$\lambda \in \mathbb{R}^m: \quad \sum_{i=1}^{m} \lambda_i x_i = 0$$
implies $\lambda = 0$.
• If two vectors are linearly independent, then neither can be a scaled version of the other.
• Example. The vectors $x_1 = (1, 2, 3)$ and $x_2 = (3, 6, 9)$ are not independent: since $3x_1 - x_2 = 0$, $x_2$ is a scaled version of $x_1$.

1.1.3 Subspace, Span, Affine Sets

Subspace and Span

• A nonempty subspace V of $\mathbb{R}^n$ is a subset that is closed under addition and scalar multiplication. That is, for any scalars $\alpha, \beta$:
$$x, y \in V \implies \alpha x + \beta y \in V$$
• A subspace always contains the zero element.
• Geometrically, subspaces are "flat" (like a line or plane in 3D) and pass through the origin.
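The independence test above reduces to a rank computation: stacking the vectors as columns of a matrix, they are independent exactly when the matrix has full column rank. A minimal NumPy sketch, using the example vectors from the text:

```python
import numpy as np

# Stack the example vectors from the text as columns of a matrix.
# The set is independent iff the matrix has full column rank.
x1 = np.array([1.0, 2.0, 3.0])
x2 = np.array([3.0, 6.0, 9.0])

A = np.column_stack([x1, x2])
rank = np.linalg.matrix_rank(A)

# x2 = 3*x1, so the rank is 1 < 2: the vectors are dependent.
print(rank)                          # 1
print(np.allclose(3 * x1 - x2, 0))  # True
```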
• A subspace S can always be represented as the span of a set of vectors $x_1, \ldots, x_m$ in $\mathbb{R}^n$, that is, a set of the form
$$S = \operatorname{span}(x_1, \ldots, x_m) := \left\{ \sum_{i=1}^{m} \lambda_i x_i : \lambda \in \mathbb{R}^m \right\}$$
• The set of all possible linear combinations of the vectors in $S = \{x^{(1)}, \ldots, x^{(m)}\}$ forms a subspace, which is called the subspace generated by S, or the span of S, denoted $\operatorname{span}(S)$.

Direct Sum

• Given two subspaces $\mathcal{X}, \mathcal{Y} \subseteq \mathbb{R}^n$, the direct sum of $\mathcal{X}$ and $\mathcal{Y}$, denoted $\mathcal{X} \oplus \mathcal{Y}$, is the set of vectors of the form $x + y$, with $x \in \mathcal{X}$, $y \in \mathcal{Y}$.
• $\mathcal{X} \oplus \mathcal{Y}$ is itself a subspace.

Affine Sets

• An affine set is a translation of a subspace. It is "flat" but does not necessarily pass through 0, as a subspace would (like a line or a plane that does not go through the origin).
• An affine set A can always be represented as the translation by a constant term of the subspace spanned by some vectors:
$$A = \left\{ x_0 + \sum_{i=1}^{m} \lambda_i x_i : \lambda \in \mathbb{R}^m \right\} = x_0 + S$$
where $x_0$ is a given point and S is a given subspace. Affine is linear plus a constant term.
• Subspaces (sometimes called linear subspaces) are just affine sets containing the origin.
• Line. When S is the span of a single non-zero vector u (one dimension), the set A is called a line passing through the point $x_0$; u is the direction of the line, t is the magnitude, and $x_0$ is a point through which it passes:
$$A = \{ x_0 + t u : t \in \mathbb{R} \}$$

1.1.4 Basis and Dimension

Basis

• A basis of $\mathbb{R}^n$ is a set of n independent vectors.
• If the vectors $u_1, \ldots, u_n$ form a basis, we can express any vector as a linear combination of the $u_i$: $x = \sum_{i=1}^{n} \lambda_i u_i$ for appropriate numbers $\lambda_1, \ldots, \lambda_n$.
• Standard basis. The standard (natural) basis in $\mathbb{R}^n$ consists of the vectors $e_i$, where the i-th element is 1 and the rest are 0. In $\mathbb{R}^3$:
$$e_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad e_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \quad e_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$$

Basis of a Subspace

• A basis of a given subspace $S \subseteq \mathbb{R}^n$ is any independent set of vectors whose span is S.
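Span membership can be checked numerically: v lies in span(u₁, u₂) exactly when the least-squares solution of $[u_1\; u_2]\lambda = v$ reproduces v with zero residual. A sketch with hypothetical vectors u1, u2 chosen for illustration:

```python
import numpy as np

# v is in span(u1, u2) iff least squares recovers v exactly.
u1 = np.array([1.0, 0.0, 1.0])
u2 = np.array([0.0, 1.0, 1.0])
U = np.column_stack([u1, u2])

v_in = 2 * u1 - 3 * u2             # a linear combination, so inside the span
v_out = np.array([0.0, 0.0, 1.0])  # (a, b, a+b) can never equal (0, 0, 1)

results = []
for v in (v_in, v_out):
    lam, *_ = np.linalg.lstsq(U, v, rcond=None)
    results.append(bool(np.allclose(U @ lam, v)))
print(results)  # [True, False]
```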
• If vectors $u_1, \ldots, u_r$ form a basis of S, we can express any vector in the subspace S as a linear combination of $u_1, \ldots, u_r$: $x = \sum_{i=1}^{r} \lambda_i u_i$.
• Dimension. The number of vectors in a basis is independent of the choice of basis: we will always find the same fixed minimum number of independent vectors for the subspace S. This minimum number is called the dimension of S.
• Example. In $\mathbb{R}^3$, you need 2 independent vectors to describe a plane containing the origin (dimension 2). The dimension of a line is 1, since a line is $x_0 + \operatorname{span}(x_1)$ for a non-zero $x_1$.

Dimension of an Affine Subspace

• The set L in $\mathbb{R}^3$ defined by
$$x_1 - 13x_2 + 4x_3 = 2$$
$$3x_2 - x_3 = 9$$
is an affine subspace of dimension 1. The associated linear subspace is obtained by setting the constant terms to 0:
$$x_1 - 13x_2 + 4x_3 = 0$$
$$3x_2 - x_3 = 0$$
• Solving, we get $x_3 = 3x_2$ and $x_1 = 13x_2 - 4x_3 = x_2$. The linear subspace is therefore the set of $x \in \mathbb{R}^3$ of the form
$$x = t \begin{bmatrix} 1 \\ 1 \\ 3 \end{bmatrix}, \quad \text{for scalar } t$$
• The linear subspace is the span of $u = (1, 1, 3)$, of dimension 1. We can find a particular solution $x_0 = (38, 0, -9)$, and the affine subspace L is thus the line $x_0 + \operatorname{span}(u)$.

1.2 Orthogonality and Orthogonal Complements

1.2.1 Orthogonal Vectors

• Orthogonal. Two vectors x, y in an inner product space $\mathcal{X}$ are orthogonal, denoted $x \perp y$, if $\langle x, y \rangle = 0$.
• Mutually orthogonal. Nonzero vectors $x^{(1)}, x^{(2)}, \ldots, x^{(d)}$ are said to be mutually orthogonal if $\langle x^{(i)}, x^{(j)} \rangle = 0$ whenever $i \neq j$. In other words, each vector is orthogonal to all other vectors in the collection.
• Mutually orthogonal vectors are linearly independent, but linearly independent vectors are not necessarily mutually orthogonal.

1.2.2 Orthogonal Complement

• Orthogonal complement. A vector $x \in \mathcal{X}$ is orthogonal to a subset S of an inner product space $\mathcal{X}$ if $x \perp s$ for all $s \in S$. The set of vectors in $\mathcal{X}$ that are orthogonal to S is called the orthogonal complement of S, denoted $S^\perp$.
• Direct sum and orthogonal decomposition.
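The worked example above can be checked numerically: every point of the claimed line $x_0 + t\,u$ should satisfy both defining equations of L, for any t. A minimal sketch:

```python
import numpy as np

# Verify: L = {x : x1 - 13*x2 + 4*x3 = 2, 3*x2 - x3 = 9}
# is claimed to equal x0 + span(u) with x0 = (38, 0, -9), u = (1, 1, 3).
x0 = np.array([38.0, 0.0, -9.0])
u = np.array([1.0, 1.0, 3.0])

def residual(x):
    # Both constraint residuals; each is 0 exactly when x lies on L.
    return np.array([x[0] - 13 * x[1] + 4 * x[2] - 2,
                     3 * x[1] - x[2] - 9])

# Points x0 + t*u for several t should all satisfy both equations.
checks = [bool(np.allclose(residual(x0 + t * u), 0)) for t in (-2.0, 0.0, 5.0)]
print(checks)  # [True, True, True]
```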
If S is a subspace of an inner product space $\mathcal{X}$, then any vector $x \in \mathcal{X}$ can be written in a unique way as the sum of one element of S and one element of the orthogonal complement $S^\perp$:
$$\mathcal{X} = S \oplus S^\perp, \quad \text{for any subspace } S \subseteq \mathcal{X}$$
$$x = y + z, \quad x \in \mathcal{X},\; y \in S,\; z \in S^\perp$$
• Fundamental properties of inner product spaces. Let x, z be any two elements of an inner product space $\mathcal{X}$, let $\|x\| = \sqrt{\langle x, x \rangle}$, and let $\alpha$ be a scalar. Then:
– $|\langle x, z \rangle| \leq \|x\| \|z\|$, and equality holds iff $x = \alpha z$, or $z = 0$ (Cauchy-Schwarz).
– $\|x + z\|^2 + \|x - z\|^2 = 2\|x\|^2 + 2\|z\|^2$ (parallelogram law).
– If $x \perp z$, then $\|x + z\|^2 = \|x\|^2 + \|z\|^2$ (Pythagorean theorem).
– For any subspace $S \subseteq \mathcal{X}$, it holds that $\mathcal{X} = S \oplus S^\perp$.
– For any subspace $S \subseteq \mathcal{X}$, it holds that $\dim \mathcal{X} = \dim S + \dim S^\perp$.

Figure 1.1: Left: a two-dimensional subspace S in $\mathbb{R}^3$ and its orthogonal complement $S^\perp$. Right: any vector can be written as the sum of an element x in a subspace S and an element y in its orthogonal complement $S^\perp$.

1.3 Inner Product, Norms and Angles

1.3.1 Inner Product

• The inner product. The inner product (scalar product, dot product) on a (real) vector space $\mathcal{X}$ is a real-valued function which maps any pair of elements $x, y \in \mathcal{X}$ into a scalar denoted $\langle x, y \rangle$.
• Axioms. The inner product satisfies the following axioms: for any $x, y, z \in \mathcal{X}$ and scalar $\alpha$,
– $\langle x, x \rangle \geq 0$
– $\langle x, x \rangle = 0$ if and only if $x = 0$
– $\langle x + y, z \rangle = \langle x, z \rangle + \langle y, z \rangle$
– $\langle \alpha x, y \rangle = \alpha \langle x, y \rangle$
– $\langle x, y \rangle = \langle y, x \rangle$
• The standard inner product defined in $\mathbb{R}^n$ is the "row-column" product of two vectors:
$$\langle x, y \rangle = x^T y = \sum_{i=1}^{n} x_i y_i$$
• Orthogonality. Two vectors $x, y \in \mathbb{R}^n$ are orthogonal if $x^T y = 0$.

1.3.2 Norms

• When we try to define the notion of size, or length, of a vector in high dimensions (not just a scalar), we are faced with many choices. These choices are called norms.
• The norm of a vector x, denoted $\|x\|$, is a real-valued function that maps any element $x \in \mathcal{X}$ into a real number satisfying a set of rules that the notion of size should obey.
• Definition of norm.
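The orthogonal decomposition can be computed explicitly when S is the span of a single vector u: project x onto u to get the component y in S, and the remainder z = x − y lies in $S^\perp$. A sketch with example vectors chosen for illustration:

```python
import numpy as np

# Orthogonal decomposition x = y + z with y in S = span(u), z in S^perp,
# using the projection y = (<x, u> / <u, u>) * u.
x = np.array([3.0, 4.0, 0.0])
u = np.array([1.0, 1.0, 1.0])

y = (x @ u) / (u @ u) * u   # component of x in S
z = x - y                   # component of x in S^perp

print(np.isclose(z @ u, 0))              # True: z is orthogonal to u
# Pythagoras: ||x||^2 = ||y||^2 + ||z||^2, since y ⟂ z.
print(np.isclose(x @ x, y @ y + z @ z))  # True
```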
A function $\|\cdot\|$ from $\mathcal{X}$ to $\mathbb{R}$ is a norm if:
1. $\|x\| \geq 0$ for all $x \in \mathcal{X}$, and $\|x\| = 0$ if and only if $x = 0$;
2. $\|x + y\| \leq \|x\| + \|y\|$ for any $x, y \in \mathcal{X}$ (triangle inequality);
3. $\|\alpha x\| = |\alpha| \|x\|$ for any scalar $\alpha$ and any $x \in \mathcal{X}$.
• The Euclidean norm ($\ell_2$-norm). The Euclidean norm corresponds to the usual notion of distance in two or three dimensions. The set of points with equal $\ell_2$-norm is a circle (in 2D), a sphere (in 3D), or a hypersphere in higher dimensions.
$$\|x\|_2 = \sqrt{\sum_{i=1}^{n} x_i^2} = \sqrt{x^T x} = \sqrt{\langle x, x \rangle}$$
• The $\ell_1$-norm: $\|x\|_1 = \sum_{i=1}^{n} |x_i|$.
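The common norms can be computed with NumPy, and the axioms above spot-checked on concrete vectors. A minimal sketch (the vectors are illustrative):

```python
import numpy as np

# Compare common norms of the same vector and spot-check the triangle
# inequality and absolute homogeneity from the definition above.
x = np.array([3.0, -4.0])
y = np.array([1.0, 2.0])

l2 = np.linalg.norm(x)            # Euclidean norm: sqrt(9 + 16) = 5.0
l1 = np.linalg.norm(x, 1)         # l1-norm: |3| + |-4| = 7.0
linf = np.linalg.norm(x, np.inf)  # max absolute component: 4.0
print(l2, l1, linf)               # 5.0 7.0 4.0

print(np.linalg.norm(x + y) <= np.linalg.norm(x) + np.linalg.norm(y))  # True
print(np.isclose(np.linalg.norm(-2.5 * x), 2.5 * np.linalg.norm(x)))   # True
```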