Pantomorphic Perspective for Immersive Imagery


Jakub Maksymilian Fober
[email protected]

arXiv:2102.12682v4 [cs.GR] 19 Jul 2021

Abstract

A wide choice of cinematic lenses enables motion-picture creators to adapt the image's visual appearance to their creative vision. No such choice exists in the realm of real-time computer graphics, where only one type of perspective projection is widely used. This work provides a perspective imaging model that resembles, in an artistically convincing manner, the variety of anamorphic photography lenses, and more. It presents an asymmetrical-anamorphic azimuthal projection map with natural vignetting and realistic chromatic aberration. The mathematical model for this projection has been chosen such that its parameters reflect psycho-physiological aspects of visual perception. That enables its use in artistic and professional environments, where specific aspects of the photographed space are to be presented.

CCS Concepts: • Computing methodologies → Perception; Ray tracing; • Human-centered computing; • Applied computing → Media arts.

Keywords: Curvilinear Perspective, Panini, Pantomorphic, Cylindrical, Fisheye, Perspective Map

© 2021 Jakub Maksymilian Fober. This work is licensed under the Creative Commons BY-NC-ND 3.0 license, https://creativecommons.org/licenses/by-nc-nd/3.0/. For all other uses, including commercial, contact the owner/author(s).

1 Introduction

The perspective in real-time computer graphics hasn't changed since the dawn of CGI. It is based on a concept as old as the fifteenth-century Renaissance: linear projection [Alberti 1435; Argan and Robb 1946; McArdle 2013]. This situation is similar to the beginnings of photography, when only one type of lens was widely used, the A.D. 1866 Rapid Rectilinear lens [Kingslake 1989]. Even at the time of its advent, 500 years ago, linear perspective was criticized for distorting proportions [Da Vinci 1632], in a phenomenon known today as Leonardo's paradox [Dixon 1987], in which figures further away but at the periphery appear larger than those located near the optical center. Computer graphics essentially skipped the artistic achievements of the last five centuries in regard to perspective. This includes the cylindrical perspective of Pannini [Sharpless et al. 2010] and Barker [Wikipedia, contributors 2019], and the anamorphic lenses used in cinematography [Giardina 2016; Neil 2004; Sasaki 2017a,b]. The situation is even more critical, as there is no mathematical model for generating anamorphic projection in an artistically convincing manner [Yuan and Sasian 2009].

Some attempts were made at alternative projections for computer graphics, with fixed cylindrical or spherical geometry [Baldwin et al. 2014; Sharpless et al. 2010]. A parametrized perspective model was also proposed as a new standard [Correia and Romão 2007], but wasn't adopted. It included interpolation states between rectilinear/equidistant and spherical/cylindrical projection. The cylindrical parametrization of this solution was merely an interpolation factor, where intermediate values did not correspond to any common projection type; therefore it was not suited for artistic or professional use. The notion of the sphere as a projective surface, which incorporates cartographic mapping to produce a perspective picture, became popularized [German et al. 2007; Peñaranda et al. 2015; Williams 2015]. A perspective parametrization that transitions according to the content (by the view-tilt angle) has also been developed, as a modification to the computer game Minecraft [Williams 2017]. But the results of these solutions were more of a gimmick and have not found practical use. Linear perspective projection is still the way to go for most digital content. One of the reasons is the GPU's fixed-function architecture with regard to rasterization. With the advent of real-time ray tracing, more exotic projections could become widely adopted.

This paper aims to provide a perspective model with a mathematical parametrization that gives artistic-style interaction with image geometry. It is a parametrization that works similarly to the way film directors choose lenses for each scene by their aesthetics [Giardina 2016; Neil 2004; Sasaki 2017a], but with a greater degree of control. It also provides a psycho-physiological correlation between perspective-model parameters and the perception of depicted-space attributes, like distance, size, proportions, shape, or speed. That mapping enables the use of the presented model in a professional environment, where image geometry is specifically suited for a task [Whittaker 1984].

The presented perspective model is named pantomorphic (Greek panto- + morphē, all + shape; assuming all shapes).

1.1 Document naming convention

This document uses the following naming convention:

• Left-handed coordinate system.
• Vectors presented natively in column.
• Matrix vectors arranged in rows (row-major order).
• Matrix–matrix multiplication by $[\text{column}]_0 \cdot [\text{row}]_1$.
• A single-bar enclosure "$|\vec u|$" represents vector length or scalar absolute value.
• Vectors with an arithmetic sign, or without, are calculated component-wise and form another vector.
• A centered dot "$\cdot$" represents the vector dot product.
• Square brackets with a comma "$[\,,\,]$" denote an interval.
• Square brackets with blanks "$[x\ y]$" denote a vector or matrix.
• The power of "$-1$" signifies the reciprocal of the value.
• The QED symbol "$\square$" marks the final result or output.

This naming convention simplifies the transformation of formulas into code.

2 Anamorphic primary-ray map

If we assume that projective visual space is spherical [Fleck 1994; McArdle 2013], one can define a perspective picture as an array of rays pointing at the visual-sphere surface. This is how the algorithm described below will output a projection map (aka perspective map). A perspective map is an array of three-dimensional rays, which are assigned to each screen pixel. The visual sphere as the image model enables a wider angle of view, beyond the 180° limit of planar projection. A ray map can easily be converted to a cube-UV map, an ST map, and other screen-distortion formats.

Here, the procedural algorithm for the primary-ray map (aka perspective map) uses two input values from the user: the distortion parameter for two power axes, and the focal length or angle-of-view (aka FOV). Two distinct power axes define the anamorphic projection type. Each axis is expressed by the azimuthal projection factor $k$ [Bettonvil 2005; Fleck 1994; Krause 2019]. Both power axes share the same focal length $f$. Evaluation of each power axis produces spherical angles $\vec\theta_x$ and $\vec\theta_y$, which are then combined into the anamorphic azimuthal-projection angle $\theta'$. Interpolation of $\vec\theta$ is done through the anamorphic weights $\vec\varphi_x$ and $\vec\varphi_y$. The weights are derived from the spherical angle $\varphi$ of the azimuthal projection. Note that explicit calculation of $\varphi$ is omitted here, as an optimization, by working directly with the view coordinates $\vec v$.

$$ r = |\vec v| = \sqrt{\vec v_x^2 + \vec v_y^2} \tag{1a} $$

$$ \vec\theta_x = \begin{cases} \dfrac{\arctan\!\left(\frac{r}{f}\,\vec k_x\right)}{\vec k_x}, & \text{if } \vec k_x > 0 \\[2ex] \dfrac{r}{f}, & \text{if } \vec k_x = 0 \\[2ex] \dfrac{\arcsin\!\left(\frac{r}{f}\,\vec k_x\right)}{\vec k_x}, & \text{if } \vec k_x < 0 \end{cases} \tag{1b} $$

$$ \vec\theta_y = \begin{cases} \dfrac{\arctan\!\left(\frac{r}{f}\,\vec k_y\right)}{\vec k_y}, & \text{if } \vec k_y > 0 \\[2ex] \dfrac{r}{f}, & \text{if } \vec k_y = 0 \\[2ex] \dfrac{\arcsin\!\left(\frac{r}{f}\,\vec k_y\right)}{\vec k_y}, & \text{if } \vec k_y < 0 \end{cases} \tag{1c} $$

$$ \vec\varphi = \begin{bmatrix} \vec\varphi_x \\[0.5ex] \vec\varphi_y \end{bmatrix} = \begin{bmatrix} \dfrac{\vec v_x^2}{\vec v_x^2 + \vec v_y^2} \\[2ex] \dfrac{\vec v_y^2}{\vec v_x^2 + \vec v_y^2} \end{bmatrix} = \begin{bmatrix} \dfrac{1 + \cos 2\varphi}{2} \\[2ex] \dfrac{1 - \cos 2\varphi}{2} \end{bmatrix} = \begin{bmatrix} \cos^2\varphi \\[0.5ex] \sin^2\varphi \end{bmatrix} \tag{1d} $$

where $r \in \mathbb{R}_{>0}$ is the view-coordinate radius (magnitude). Vector $\vec\theta \in [0, \pi]^2$ contains the incident angles (measured from the optical axis) of the two azimuthal projections. Vector $\vec\varphi \in [0, 1]^2$ is the anamorphic interpolation weight, which is linear ($\vec\varphi_x + \vec\varphi_y = 1$) but has a spherical distribution (see Figure 1). Vector $\vec k \in [-1, 1]^2$ (also $[-1, 1]^3$ in the subsection 2.2 variant) describes the two power axes of the anamorphic projection. The algorithm is evaluated per pixel of position $\vec v \in \mathbb{R}^2$ in view-space coordinates, centered at the optical axis and normalized at the chosen angle-of-view (horizontal or vertical). The final anamorphic incident angle $\theta'$ is obtained by interpolation, using the $\vec\varphi$ weights:

$$ \theta' = \begin{bmatrix} \vec\theta_x \\[0.5ex] \vec\theta_y \end{bmatrix} \cdot \begin{bmatrix} \vec\varphi_x \\[0.5ex] \vec\varphi_y \end{bmatrix} \tag{2a} $$

$$ \hat G = \begin{bmatrix} \hat G_x \\[0.5ex] \hat G_y \\[0.5ex] \hat G_z \end{bmatrix} = \begin{bmatrix} \sin\theta' \, \dfrac{\vec v_x}{r} \\[2ex] \sin\theta' \, \dfrac{\vec v_y}{r} \\[2ex] \cos\theta' \end{bmatrix} = \begin{bmatrix} \sin\theta' \cos\varphi \\[0.5ex] \sin\theta' \sin\varphi \\[0.5ex] \cos\theta' \end{bmatrix} \quad\square \tag{2b} $$

here $\theta' \in (0, \pi]$ is the anamorphic incident angle, measured from the optical axis. This measurement resembles an azimuthal projection of the globe (here, a visual sphere) [McArdle 2013]. The final incident vector $\hat G \in [-1, 1]^3$ (aka primary ray) is obtained from the anamorphic angle $\theta'$. Parameters $r$, $\vec v$, $\vec\varphi$ are in view-space, while $\vec\theta$, $\theta'$, $\varphi$, $\hat G$ are in visual-sphere space. Essentially, the anamorphic primary-ray map preserves the azimuthal angle $\varphi$, while modulating only the picture's radius.

2.1 Focal length and angle-of-view

To give more control over the picture, a mapping between the angle-of-view $\Omega$ and the focal length $f$ can be established. Here, the focal length is derived in reciprocal form, to optimize its usage in Equation (1).

$$ \frac{1}{f} = \begin{cases} \dfrac{\tan\!\left(\frac{\Omega_h}{2}\,\vec k_x\right)}{\vec k_x}, & \text{if } \vec k_x > 0 \\[2ex] \dfrac{\Omega_h}{2}, & \text{if } \vec k_x = 0 \\[2ex] \dfrac{\sin\!\left(\frac{\Omega_h}{2}\,\vec k_x\right)}{\vec k_x}, & \text{if } \vec k_x < 0 \end{cases} \tag{3} $$

where $\Omega_h \in (0, \tau]$ denotes the horizontal angle of view. Similarly, the vertical $\Omega_v$ can be obtained using the $\vec k_y$ parameter instead. The resultant value $1/f \in \mathbb{R}_{>0}$ is the reciprocal focal length.

Remark. The focal-length value $f$ must be the same for $\vec\theta_x$ and $\vec\theta_y$. Therefore, only one reference angle $\Omega$ can be chosen, either horizontal or vertical.

[Figure 1: (a) graph mapping the angle $\varphi$ to the anamorphic interpolation weights $\vec\varphi_x$ and $\vec\varphi_y$; (b) radial graph mapping the anamorphic interpolation weights; (c) radial graph illustrating the anamorphic projection (caption truncated in source).]
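As a numeric check of Equation (3), the sketch below (Python; the function name is mine, not from the paper) computes the reciprocal focal length $1/f$ for a given horizontal angle of view and one power-axis value $k$, covering all three branches. Known special cases fall out directly: $k = 1$ is rectilinear, $k = 0$ equidistant, $k = -1$ orthographic.

```python
import math

def reciprocal_focal_length(omega_h: float, k: float) -> float:
    """Reciprocal focal length 1/f from the horizontal angle of view (Eq. 3).

    omega_h -- horizontal angle of view, in radians
    k       -- azimuthal projection power for the x axis, -1 <= k <= 1
    """
    half = omega_h / 2.0
    if k > 0.0:
        return math.tan(half * k) / k
    elif k == 0.0:
        return half
    else:
        return math.sin(half * k) / k

# Sanity checks against classic projections:
# k = 1 (rectilinear), 90-degree AOV: 1/f = tan(45 deg), i.e. about 1.0
print(reciprocal_focal_length(math.pi / 2, 1.0))
# k = 0 (equidistant), 180-degree AOV: 1/f = pi/2
print(reciprocal_focal_length(math.pi, 0.0))
```

Note that a rectilinear axis ($k = 1$) diverges as $\Omega_h$ approaches 180°, which is the expected behavior of planar projection, while $k \le 0$ axes remain finite.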
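Equation (1b) (and its $y$-axis twin, Eq. (1c)) inverts the universal azimuthal-projection formula to recover an incident angle from the picture radius. A minimal sketch, assuming the reciprocal focal length $1/f$ is supplied directly as a number (the helper name is mine):

```python
import math

def incident_angle(r: float, k: float, f_inv: float) -> float:
    """Azimuthal incident angle theta for one power axis (Eq. 1b/1c).

    r     -- view-coordinate radius (pixel distance from the optical axis)
    k     -- azimuthal projection power for this axis, -1 <= k <= 1
    f_inv -- reciprocal focal length 1/f
    """
    if k > 0.0:
        return math.atan(r * f_inv * k) / k
    elif k == 0.0:
        return r * f_inv
    else:
        return math.asin(r * f_inv * k) / k

# When 1/f is normalized per Eq. (3), radius r = 1 lands on exactly half
# the angle of view, regardless of k -- e.g. k = -1 (orthographic), 180 deg:
f_inv = math.sin(math.pi / 2 * -1.0) / -1.0   # Eq. (3), k < 0 branch
print(math.degrees(incident_angle(1.0, -1.0, f_inv)))  # ~90 degrees
```

This is the per-axis building block: the model evaluates it once with $\vec k_x$ and once with $\vec k_y$, then blends the two angles with the weights of Equation (1d).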
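Putting Equations (1a)–(2b) together, the whole per-pixel primary-ray map can be sketched as one function. This is an illustrative Python translation under my own naming, not the paper's shader code; it repeats the per-axis branch of Eq. (1b) inline so the block is self-contained.

```python
import math

def primary_ray(vx, vy, kx, ky, f_inv):
    """Anamorphic primary ray G for one pixel (Eqs. 1a-2b).

    (vx, vy) -- view-space pixel position, optical axis at the origin,
                normalized so r = 1 at the reference angle of view
    (kx, ky) -- anamorphic power axes, each in [-1, 1]
    f_inv    -- shared reciprocal focal length 1/f (from Eq. 3)
    """
    r = math.hypot(vx, vy)                      # Eq. (1a)
    if r == 0.0:
        return (0.0, 0.0, 1.0)                  # optical axis looks forward

    def axis_angle(k):                          # Eq. (1b)/(1c)
        if k > 0.0:
            return math.atan(r * f_inv * k) / k
        elif k == 0.0:
            return r * f_inv
        return math.asin(r * f_inv * k) / k

    # Anamorphic weights, Eq. (1d): cos^2 and sin^2 of the azimuthal
    # angle phi, computed from view coordinates without evaluating phi.
    wx = vx * vx / (r * r)
    wy = vy * vy / (r * r)

    theta = axis_angle(kx) * wx + axis_angle(ky) * wy   # Eq. (2a)

    # Eq. (2b): the azimuth is preserved; only the radius is modulated.
    return (math.sin(theta) * vx / r,
            math.sin(theta) * vy / r,
            math.cos(theta))

# Equidistant fisheye (kx = ky = 0) with a 180-degree angle of view:
f_inv = math.pi / 2
gx, gy, gz = primary_ray(1.0, 0.0, 0.0, 0.0, f_inv)
# The edge pixel looks 90 degrees sideways: G is close to (1, 0, 0),
# and, like every output of Eq. (2b), has unit length.
```

Setting $\vec k_x \ne \vec k_y$ yields the asymmetric anamorphic behavior: for instance, $\vec k = (1, 0)$ behaves rectilinearly along the horizontal axis and equidistantly along the vertical one, with Eq. (1d) blending smoothly in between.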