Reflections in a Euclidean Space

PHILIP BROCOUM

Date: May 10, 2004.

Let $V$ be a finite dimensional real linear space.

Definition 1. A function $\langle \cdot, \cdot \rangle : V \times V \to \mathbb{R}$ is a bilinear form on $V$ if for all $x_1, x_2, x, y_1, y_2, y \in V$ and all $k \in \mathbb{R}$,
$$\langle x_1 + kx_2, y \rangle = \langle x_1, y \rangle + k\langle x_2, y \rangle, \quad \text{and} \quad \langle x, y_1 + ky_2 \rangle = \langle x, y_1 \rangle + k\langle x, y_2 \rangle.$$

Definition 2. A bilinear form $\langle \cdot, \cdot \rangle$ on $V$ is symmetric if $\langle x, y \rangle = \langle y, x \rangle$ for all $x, y \in V$. A symmetric bilinear form is nondegenerate if $\langle a, x \rangle = 0$ for all $x \in V$ implies $a = 0$. It is positive definite if $\langle x, x \rangle > 0$ for any nonzero $x \in V$. A symmetric positive definite bilinear form is an inner product on $V$.

Theorem 3. Define a bilinear form on $V = \mathbb{R}^n$ by $\langle e_i, e_j \rangle = \delta_{ij}$, where $\{e_i\}_{i=1}^n$ is a basis of $V$, and $\delta_{ij} = 1$ if $i = j$ and $\delta_{ij} = 0$ if $i \neq j$. Then $\langle \cdot, \cdot \rangle$ is an inner product on $V$.

Proof. We must check that it is symmetric and positive definite:
• $\langle e_i, e_j \rangle = \langle e_j, e_i \rangle = 1$ if $i = j$, and $\langle e_i, e_j \rangle = \langle e_j, e_i \rangle = 0$ if $i \neq j$;
• $\langle e_i, e_i \rangle = 1 > 0$. □

Definition 4. A Euclidean space is a finite dimensional real linear space with an inner product.

Theorem 5. Any Euclidean $n$-dimensional space $V$ has a basis $\{e_i\}_{i=1}^n$ such that $\langle e_i, e_j \rangle = \delta_{ij}$.

Proof. First we show that $V$ has an orthogonal basis. Let $\{u_1, \ldots, u_n\}$ be a basis of $V$, and let $v_1 = u_1$; note that $v_1 \neq 0$, and so $\langle v_1, v_1 \rangle \neq 0$. Also, let $v_2$ be given by
$$v_2 = u_2 - \frac{\langle u_2, v_1 \rangle}{\langle v_1, v_1 \rangle} v_1.$$
The vectors $v_1$ and $v_2$ are orthogonal, for
$$\langle v_2, v_1 \rangle = \left\langle u_2 - \frac{\langle u_2, v_1 \rangle}{\langle v_1, v_1 \rangle} v_1, \; v_1 \right\rangle = \langle u_2, v_1 \rangle - \frac{\langle u_2, v_1 \rangle}{\langle v_1, v_1 \rangle} \langle v_1, v_1 \rangle = 0.$$
Note that $v_1$ and $v_2$ are linearly independent, because $u_1$ and $u_2$ are basis elements and therefore linearly independent. We proceed by letting
$$v_j = u_j - \frac{\langle u_j, v_{j-1} \rangle}{\langle v_{j-1}, v_{j-1} \rangle} v_{j-1} - \cdots - \frac{\langle u_j, v_1 \rangle}{\langle v_1, v_1 \rangle} v_1.$$
Now, for $j - 1 \geq k \geq 1$,
$$\langle v_j, v_k \rangle = \langle u_j, v_k \rangle - \frac{\langle u_j, v_{j-1} \rangle}{\langle v_{j-1}, v_{j-1} \rangle} \langle v_{j-1}, v_k \rangle - \cdots - \frac{\langle u_j, v_k \rangle}{\langle v_k, v_k \rangle} \langle v_k, v_k \rangle - \cdots - \frac{\langle u_j, v_1 \rangle}{\langle v_1, v_1 \rangle} \langle v_1, v_k \rangle = 0.$$
This equality holds by induction, because $\langle v_i, v_k \rangle = 0$ for $i = j-1, \ldots, 1$ with $i \neq k$, and the one remaining negative term on the right-hand side, $\frac{\langle u_j, v_k \rangle}{\langle v_k, v_k \rangle} \langle v_k, v_k \rangle = \langle u_j, v_k \rangle$, cancels with the first term. Moreover, the equation for $v_j$ cannot give a nontrivial linear relation among $\{u_1, \ldots, u_j\}$, because these vectors form a subset of a basis; hence $v_j \neq 0$ and $\{v_1, \ldots, v_j\}$ are linearly independent. Continuing by induction until $j = n$ produces an orthogonal basis of $V$. Note that all denominators in the above calculations are positive real numbers, because $v_i \neq 0$ for $i = 1, \ldots, n$. Lastly, when we define
$$e_i = \frac{v_i}{\|v_i\|}, \qquad i = 1, \ldots, n,$$
then $\|e_i\| = 1$ for all $i$, and $\{e_1, \ldots, e_n\}$ is an orthonormal basis of $V$ such that $\langle e_i, e_j \rangle = \delta_{ij}$. □
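The orthogonalization in the proof of Theorem 5 is the Gram–Schmidt process. As a quick illustration (our own sketch, not part of the original paper, assuming NumPy; the function name gram_schmidt and the sample basis are our choices), the following code builds the $v_j$ and normalizes them:

```python
import numpy as np

def gram_schmidt(basis):
    """Orthonormalize a list of linearly independent vectors (proof of Theorem 5)."""
    vs = []
    for u in basis:
        v = np.array(u, dtype=float)
        # Subtract the projections onto the already-built v_1, ..., v_{j-1};
        # in exact arithmetic this equals v_j = u_j - sum_k <u_j, v_k>/<v_k, v_k> v_k
        # (this running-update form is the numerically stabler "modified" variant).
        for w in vs:
            v -= (np.dot(v, w) / np.dot(w, w)) * w
        assert not np.allclose(v, 0), "input vectors must be linearly independent"
        vs.append(v)
    # Normalize: e_i = v_i / ||v_i||, so that <e_i, e_j> = delta_ij.
    return [v / np.linalg.norm(v) for v in vs]

# Check <e_i, e_j> = delta_ij on a sample basis of R^3.
es = gram_schmidt([[1, 1, 0], [1, 0, 1], [0, 1, 1]])
gram = np.array([[np.dot(a, b) for b in es] for a in es])
print(np.allclose(gram, np.eye(3)))  # True
```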
Below, $V = \mathbb{R}^n$ is a Euclidean space with the inner product $\langle \cdot, \cdot \rangle$.

Definition 6. Two vectors $x, y \in V$ are orthogonal if $\langle x, y \rangle = 0$. Two subspaces $U, W \subset V$ are orthogonal if $\langle x, y \rangle = 0$ for all $x \in U$, $y \in W$.

Theorem 7. If $U, W$ are orthogonal subspaces of $V$, then $\dim(U) + \dim(W) = \dim(U + W)$.

Proof. Choose an orthonormal basis $\{e_i\}_{i=1}^n$ of $V$ such that a subset of it forms a basis of $U$ and another subset forms a basis of $W$. (Such a basis exists: take orthonormal bases of $U$ and of $W$; their union is orthonormal because the two subspaces are orthogonal, and it extends to an orthonormal basis of $V$ as in the proof of Theorem 5.) No basis vector of $V$ can be a basis vector of both $U$ and $W$: if $x$ is a basis vector of $U$ and $y$ is a basis vector of $W$, then $\langle x, y \rangle = 0$ because the subspaces are orthogonal, whereas $\langle x, x \rangle = 1$, so $x$ and $y$ cannot be the same vector. Since this applies to all such $x$ and $y$, the two subspaces share none of the same basis vectors. We can then form a basis of $U + W$ by taking the basis of $U$ together with the basis of $W$. Thus $\dim(U) + \dim(W) = \dim(U + W)$. □

Definition 8. The orthogonal complement of a subspace $U \subset V$ is the subspace $U^\perp = \{y \in V : \langle x, y \rangle = 0 \text{ for all } x \in U\}$.

Definition 9. A hyperplane $H_x \subset V$ is the orthogonal complement of the one-dimensional subspace of $V$ spanned by $x \in V$.

Theorem 10 (Cauchy–Schwarz). For any $x, y \in V$,
$$\langle x, y \rangle^2 \leq \langle x, x \rangle \cdot \langle y, y \rangle,$$
and equality holds if and only if the vectors $x, y$ are linearly dependent.

Proof. For all $a$ and $b$ in $\mathbb{R}$ we have, by positive definiteness,
$$\langle ax - by, ax - by \rangle \geq 0.$$
By bilinearity and symmetry this gives
$$a^2 \langle x, x \rangle - 2ab \langle x, y \rangle + b^2 \langle y, y \rangle \geq 0.$$
We can now set $a = \langle x, y \rangle$ and $b = \langle x, x \rangle$ and factor out the common term:
$$\langle x, x \rangle \left( -\langle x, y \rangle^2 + \langle x, x \rangle \langle y, y \rangle \right) \geq 0.$$
If $\langle x, x \rangle = 0$, then by positive definiteness $x = 0$, and therefore $\langle x, y \rangle = 0$ as well; thus equality holds in this case. If $\langle x, x \rangle \neq 0$, then it is positive and the theorem follows from the inequality above. □

We will be interested in the linear mappings that respect inner products.

Definition 11. An orthogonal operator on $V$ is a linear automorphism $f : V \to V$ such that $\langle f(x), f(y) \rangle = \langle x, y \rangle$ for all $x, y \in V$.

Theorem 12. If $f_1, f_2$ are orthogonal operators on $V$, then so are the inverses $f_1^{-1}, f_2^{-1}$ and the composition $f_1 \circ f_2$. The identity mapping is orthogonal.

Proof. Because an orthogonal operator $f$ is one-to-one and onto, for all $a, b \in V$ there exist $x, y \in V$ such that $x = f^{-1}(a)$ and $y = f^{-1}(b)$. Thus $f(x) = a$ and $f(y) = b$. By orthogonality, $\langle f(x), f(y) \rangle = \langle x, y \rangle$. This means that $\langle a, b \rangle = \langle x, y \rangle$, and we conclude that $\langle f^{-1}(a), f^{-1}(b) \rangle = \langle a, b \rangle$. For the composition of $f_1$ and $f_2$, consider $\langle (f_1 \circ f_2)(x), (f_1 \circ f_2)(y) \rangle$. Because $f_1$ is orthogonal, this equals $\langle f_2(x), f_2(y) \rangle$; because $f_2$ is also orthogonal, $\langle f_2(x), f_2(y) \rangle = \langle x, y \rangle$. The identity map is orthogonal directly from the definition, and this completes the proof. □

Remark 13. The above theorem says that the orthogonal operators on a Euclidean space form a group, i.e. a set closed with respect to composition that contains an inverse of each element and contains an identity operator.

Example 14. All orthogonal operators on $\mathbb{R}^2$ can be expressed as $2 \times 2$ matrices, and they form a group with respect to matrix multiplication. The orthogonal operators of $\mathbb{R}^2$ are exactly the reflections and rotations. The standard rotation matrix is
$$\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$$
and the standard reflection matrix is
$$\begin{pmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{pmatrix}.$$
The matrices of these two types form the set of all orthogonal operators on $\mathbb{R}^2$. We know this because $\langle Ax, Ay \rangle = \langle x, y \rangle$ forces $A^T A = I$, so $\det(A)^2 = 1$, which implies $\det(A) = \pm 1$. An orthogonal matrix with determinant $+1$ has the first form (a rotation), and one with determinant $-1$ has the second form (a reflection).
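To make Example 14 concrete, here is a small numerical check (again our own illustration, assuming NumPy; the helper names rotation and reflection are ours): both matrix families satisfy $A^T A = I$, with determinant $+1$ for rotations and $-1$ for reflections.

```python
import numpy as np

def rotation(theta):
    """The standard rotation matrix of Example 14."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def reflection(theta):
    """The standard reflection matrix of Example 14."""
    return np.array([[np.cos(theta),  np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

theta = 0.73  # arbitrary angle
for A, expected_det in [(rotation(theta), 1.0), (reflection(theta), -1.0)]:
    # A^T A = I is equivalent to <Ax, Ay> = <x, y> for all x, y.
    assert np.allclose(A.T @ A, np.eye(2))
    assert np.isclose(np.linalg.det(A), expected_det)
print("both families are orthogonal, with det +1 and -1 respectively")
```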
Now we are ready to introduce the notion of a reflection in a Euclidean space. A reflection in $V$ is a linear mapping $s : V \to V$ which sends some nonzero vector $\alpha \in V$ to its negative and fixes pointwise the hyperplane $H_\alpha$ orthogonal to $\alpha$. To indicate this vector, we will write $s = s_\alpha$. The use of Greek letters for vectors is traditional in this context.

Definition 15. A reflection in $V$ with respect to a vector $\alpha \in V$ is defined by the formula
$$s_\alpha(x) = x - \frac{2\langle x, \alpha \rangle}{\langle \alpha, \alpha \rangle} \alpha.$$

Theorem 16. With the above definition, we have:
(1) $s_\alpha(\alpha) = -\alpha$ and $s_\alpha(x) = x$ for any $x \in H_\alpha$;
(2) $s_\alpha$ is an orthogonal operator;
(3) $s_\alpha^2 = \mathrm{Id}$.

Proof. (1) Using the formula, we know that
$$s_\alpha(\alpha) = \alpha - \frac{2\langle \alpha, \alpha \rangle}{\langle \alpha, \alpha \rangle} \alpha.$$
Since $\langle \alpha, \alpha \rangle / \langle \alpha, \alpha \rangle = 1$, the formula simplifies to $s_\alpha(\alpha) = \alpha - 2\alpha$, and thus $s_\alpha(\alpha) = -\alpha$. For $x \in H_\alpha$, we know that $\langle x, \alpha \rangle = 0$. We can now write
$$s_\alpha(x) = x - \frac{2(0)}{\langle \alpha, \alpha \rangle} \alpha,$$
and so $s_\alpha(x) = x$.

(2) By Theorem 5, we can write any $x \in V$ as a sum of basis vectors $\{e_i\}_{i=1}^n$ such that $\langle e_i, e_j \rangle = \delta_{ij}$. We can also choose this basis so that $e_1$ is in the direction of $\alpha$ and the others are orthogonal to $\alpha$. Because of linearity, it suffices to check that $\langle s_\alpha(e_i), s_\alpha(e_j) \rangle = \langle e_i, e_j \rangle$. For each basis vector,
$$s_\alpha(e_i) = e_i - \frac{2\langle e_i, \alpha \rangle}{\langle \alpha, \alpha \rangle} \alpha.$$
If $e_i$ is not in the direction of $\alpha$, then it is orthogonal to $\alpha$, so
$$s_\alpha(e_i) = e_i - \frac{2(0)}{\langle \alpha, \alpha \rangle} \alpha = e_i.$$
If $e_i = e_1$, in the direction of $\alpha$, then by part (1), $s_\alpha(e_1) = -e_1$. We can now conclude that $\langle s_\alpha(e_i), s_\alpha(e_j) \rangle = \langle \pm e_i, \pm e_j \rangle$, and it is easy to check that $\langle \pm e_i, \pm e_j \rangle = \langle e_i, e_j \rangle = \delta_{ij}$.

(3) By the same reasoning as part (2), it suffices to check that $s_\alpha^2(e_i) = e_i$ for any basis vector $e_i$. Also from part (2), we know that $s_\alpha(e_i)$ is either equal to $e_i$ or $-e_i$. If $s_\alpha(e_i) = e_i$, then clearly $s_\alpha^2(e_i) = s_\alpha(e_i) = e_i$; if $s_\alpha(e_i) = -e_i$, then by linearity $s_\alpha^2(e_i) = -s_\alpha(e_i) = e_i$. □
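As an illustration of Definition 15 and Theorem 16 (once more our own sketch, assuming NumPy; the vector $\alpha$ and the test points are arbitrary choices), the following code implements $s_\alpha$ and verifies properties (1)–(3) numerically:

```python
import numpy as np

def reflect(x, alpha):
    """s_alpha(x) = x - 2<x, alpha>/<alpha, alpha> * alpha  (Definition 15)."""
    x = np.asarray(x, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    return x - 2 * np.dot(x, alpha) / np.dot(alpha, alpha) * alpha

alpha = np.array([1.0, 2.0, 2.0])   # arbitrary nonzero vector
h = np.array([2.0, -1.0, 0.0])      # <h, alpha> = 0, so h lies in H_alpha
x = np.array([0.3, -1.2, 0.5])
y = np.array([2.0, 0.1, -0.7])

assert np.allclose(reflect(alpha, alpha), -alpha)   # (1) s_alpha(alpha) = -alpha
assert np.allclose(reflect(h, alpha), h)            # (1) s_alpha fixes H_alpha pointwise
assert np.isclose(np.dot(reflect(x, alpha), reflect(y, alpha)),
                  np.dot(x, y))                     # (2) s_alpha is orthogonal
assert np.allclose(reflect(reflect(x, alpha), alpha), x)  # (3) s_alpha^2 = Id
print("Theorem 16 checks pass")
```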