Real-Time Shadows
Eric Haines, erich@acm.org
Tomas Möller, tompa@acm.org


[Excerpted and updated from the book "Real-Time Rendering", by Tomas Möller and Eric Haines, A.K. Peters, ISBN 1568811012. See http://www.realtimerendering.com/]

Shadows are important elements in creating a realistic image and in providing the user with visual cues about object placement. A review of many different shadow algorithms is found in the survey published by Woo et al. [23] and in Watt and Watt's book [21]. Here we will present the most important real-time algorithms for dynamic shadows. The first section handles the special case of shadows cast on planar surfaces, and the second section covers more general shadow algorithms, i.e., casting shadows onto arbitrary surfaces.

1 Planar Shadows

A simple case of shadowing occurs when objects cast shadows on planar surfaces. Two kinds of algorithms for planar shadows are presented in this section. The terminology used here is illustrated in Figure 1, where occluders are objects that cast shadows onto receivers. Point light sources generate only fully shadowed regions, sometimes called hard shadows. If area light sources are used, then soft shadows are produced. Each shadow can have a fully shadowed region, called the umbra, and a partially shadowed region, called the penumbra.

Figure 1: Shadow terminology: light source, occluder, receiver, shadow, umbra, and penumbra.

That is, at least, how it works in the real world. As we will see, soft shadows (i.e., with penumbrae) can be simulated by the shadow map algorithm. We will also present other methods that generate more physically accurate soft shadows. Soft shadows are generally preferable, if they are possible, because the soft edges let the viewer know that the shadow is indeed a shadow. Hard-edged shadows can sometimes be misinterpreted as actual geometric features, such as a crease in a surface.

More important than having a penumbra is having any shadow at all. Without some shadow as a visual cue, scenes are often unconvincing and more difficult to perceive. As Wanger shows [20], it is usually better to have an inaccurate shadow than none at all, as the eye is fairly forgiving about the shape of the shadow. For example, a blurred black circle applied as a texture on the floor can anchor a person to the ground. A simple black rectangular shape fading off around the edges, perhaps a total of 10 triangles, is often all that is needed for a car's soft shadow. In the following sections we will go beyond these simple modeled shadows and present methods that compute shadows automatically in real time from the occluders in a scene.

1.1 Projection Shadows

In this scheme, the three-dimensional object is rendered a second time in order to create a shadow. A matrix can be derived that projects the vertices of an object onto a plane [2, 17]. Consider the situation in Figure 2, where the light source is located at $\mathbf{l}$, the vertex to be projected is at $\mathbf{v}$, and the projected vertex is at $\mathbf{p}$. We will derive the projection matrix for the special case where the shadowed plane is $y = 0$, then this result will be generalized to work with any plane.

Figure 2: Left: A light source, located at $\mathbf{l}$, casts a shadow onto the plane $y = 0$. The vertex $\mathbf{v}$ is projected onto the plane. The projected point is called $\mathbf{p}$.
The similar triangles are used for the derivation of the projection matrix. Right: The notation of the left part of this figure is used here. The shadow is being cast onto a plane, $\pi : \mathbf{n} \cdot \mathbf{x} + d = 0$.

We start by deriving the projection for the $x$-coordinate. From the similar triangles in the left part of Figure 2, the following equation is obtained:

$$\frac{p_x - l_x}{v_x - l_x} = \frac{l_y}{l_y - v_y} \quad \Longleftrightarrow \quad p_x = \frac{l_y v_x - l_x v_y}{l_y - v_y} \qquad (1)$$

The $z$-coordinate is obtained in the same way: $p_z = (l_y v_z - l_z v_y)/(l_y - v_y)$, while the $y$-coordinate is zero. Now these equations can be converted into the projection matrix $\mathbf{M}$ below:

$$\mathbf{M} = \begin{pmatrix} l_y & -l_x & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & -l_z & l_y & 0 \\ 0 & -1 & 0 & l_y \end{pmatrix} \qquad (2)$$

It is easy to verify that $\mathbf{M}\mathbf{v} = \mathbf{p}$, which means that $\mathbf{M}$ is indeed the projection matrix.

In the general case, the plane onto which the shadows should be cast is not the plane $y = 0$, but instead $\pi : \mathbf{n} \cdot \mathbf{x} + d = 0$. This case is depicted in the right part of Figure 2. The goal is again to find a matrix that projects $\mathbf{v}$ down to $\mathbf{p}$. To this end, the ray emanating at $\mathbf{l}$, which goes through $\mathbf{v}$, is intersected by the plane $\pi$. This yields the projected point $\mathbf{p}$:

$$\mathbf{p} = \mathbf{l} - \frac{d + \mathbf{n} \cdot \mathbf{l}}{\mathbf{n} \cdot (\mathbf{v} - \mathbf{l})}\,(\mathbf{v} - \mathbf{l}) \qquad (3)$$

This equation can also be converted into a projection matrix, shown in Equation 4, which satisfies $\mathbf{M}\mathbf{v} = \mathbf{p}$:

$$\mathbf{M} = \begin{pmatrix} \mathbf{n} \cdot \mathbf{l} + d - l_x n_x & -l_x n_y & -l_x n_z & -l_x d \\ -l_y n_x & \mathbf{n} \cdot \mathbf{l} + d - l_y n_y & -l_y n_z & -l_y d \\ -l_z n_x & -l_z n_y & \mathbf{n} \cdot \mathbf{l} + d - l_z n_z & -l_z d \\ -n_x & -n_y & -n_z & \mathbf{n} \cdot \mathbf{l} \end{pmatrix} \qquad (4)$$

As expected, this matrix turns into the matrix in Equation 2 if the plane is $y = 0$ (that is, $\mathbf{n} = (0, 1, 0)$ and $d = 0$).

To render the shadow, simply apply this matrix to the objects that should cast shadows on the plane $\pi$, and render this projected object with a dark color and no illumination. In practice, you have to take measures to avoid allowing the projected polygons to be rendered beneath the surface receiving them. One method is to add some bias to the plane we project upon so that the shadow polygons are always rendered in front of the surface. Getting this bias just right is often tricky: too much and the shadows start to cover the objects and so break the illusion; too little and the ground plane pokes through the shadows due to precision error. As the angle of the surface normal away from the viewer increases, the bias must also increase. A safer method is to draw the ground plane first, then draw the projected polygons with the Z-buffer off, then render the rest of the geometry as usual. The projected polygons are then always drawn on top of the ground plane, as no depth comparisons are made.
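To make this concrete, the matrix of Equation 4 might be built in C for the fixed-function OpenGL pipeline as sketched below. This is only an illustration: the Vec3 type and the shadowMatrix name are ours, not part of OpenGL, and the matrix is stored column-major because that is the order glMultMatrixf expects. The plane normal is assumed to point toward the light, so that the homogeneous w stays positive.

    #include <GL/gl.h>

    typedef struct { float x, y, z; } Vec3;   /* minimal vector type (ours) */

    /* Fill m (column-major, as glMultMatrixf expects) with the matrix of
       Equation 4: the projection of a vertex onto the plane n.x + d = 0
       along the ray from the point light at l through that vertex. */
    void shadowMatrix(GLfloat m[16], Vec3 n, GLfloat d, Vec3 l)
    {
        GLfloat nl = n.x * l.x + n.y * l.y + n.z * l.z;   /* n . l     */
        GLfloat c  = nl + d;                              /* n . l + d */

        m[0] = c - l.x * n.x;   m[4] = -l.x * n.y;      m[8]  = -l.x * n.z;      m[12] = -l.x * d;
        m[1] =   - l.y * n.x;   m[5] = c - l.y * n.y;   m[9]  = -l.y * n.z;      m[13] = -l.y * d;
        m[2] =   - l.z * n.x;   m[6] = -l.z * n.y;      m[10] = c - l.z * n.z;   m[14] = -l.z * d;
        m[3] =   - n.x;         m[7] = -n.y;            m[11] = -n.z;            m[15] = nl;
    }

The occluders are then drawn a second time through this matrix, for example with glPushMatrix(); glMultMatrixf(m); drawOccluders(); glPopMatrix(); (drawOccluders being a hypothetical application call), using a dark color and no lighting. As a quick sanity check, setting n = (0, 1, 0) and d = 0 reproduces the matrix of Equation 2.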
A flaw with projection shadows is the one we ran into with reflections: the projected shadows can fall outside of our plane. To solve this problem, we can use a stencil buffer. First draw the receiver to the screen and to the stencil buffer. Then, with the Z-buffer off, draw the projected polygons only where the receiver was drawn, then render the rest of the scene normally.

Projecting the polygons this way works if the shadows are opaque. For semi-transparent shadows, where the underlying surface color or texture can be seen, more care is usually needed. A convex object's shadow is guaranteed to have exactly two (or, by culling backfaces, exactly one) projected polygons covering each shadowed pixel on the plane. Objects with concavities do not have this property, so simply rendering each projected polygon as semi-transparent will give poor results. The stencil buffer can be used to ensure that each pixel is covered at most once. Do this by incrementing the stencil buffer's count for each polygon drawn, allowing only the first projected polygon covering each pixel to be rendered (a sketch of these stencil passes in OpenGL is given at the end of this section).

A disadvantage of the projection method, in addition to its limitation to planar surfaces, is that the shadow has to be rendered for each frame, even though the shadow may not change. Since shadows are view-independent (their shapes do not change with different viewpoints), an idea that works well in practice is to render the shadow into a texture which is then rendered as a textured rectangle. The shadow texture would only be recomputed when the shadow changes, that is, when the light source or any shadow-casting or -receiving object moves. Another method to improve performance is to use simplified versions of the models for generating the shadow projections. This technique can often be used for other shadowing algorithms.

The matrices in Equations 2 and 4 do not always generate the desired results. For example, if the light source is below the topmost point on the object, then an anti-shadow [2] is generated, since each vertex is projected through the point of the light source. Correct shadows and anti-shadows are shown in Figure 3.

A similar rendering error to that found with planar reflections can occur for this kind of shadow generation. For reflections, errors occur when objects located on the opposite side of the reflector plane are not dealt with properly. In the case of shadow generation, errors occur when we use a shadow-casting object that is on the far side of the receiving plane. This is because an object beyond the shadow receiver does not cast a shadow. Shadows generated in this manner are called false shadows. This problem can be solved in the same manner as for planar reflections, i.e., with a clipping plane located at the shadow receiver that culls away all geometry beyond the receiver.
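As promised above, here is one possible ordering of the stencil passes (receiver masking plus draw-at-most-once counting) in fixed-function OpenGL. It is a sketch under our own assumptions: drawReceiver, drawOccluders, drawRestOfScene, and shadowMat are hypothetical application calls and data, and real code would also restore any remaining state it disturbs.

    glClearStencil(0);
    glClear(GL_STENCIL_BUFFER_BIT);
    glEnable(GL_STENCIL_TEST);

    /* Pass 1: draw the receiver, tagging its pixels with stencil value 1. */
    glStencilFunc(GL_ALWAYS, 1, ~0u);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    drawReceiver();

    /* Pass 2: draw the projected shadow polygons with depth testing off.
       The test passes only where stencil == 1 (on the receiver), and a
       passing fragment increments the count to 2, so any later shadow
       polygon covering the same pixel fails the test: each pixel is
       darkened at most once, as required for semi-transparent shadows. */
    glDisable(GL_DEPTH_TEST);
    glDisable(GL_LIGHTING);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glStencilFunc(GL_EQUAL, 1, ~0u);
    glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
    glColor4f(0.0f, 0.0f, 0.0f, 0.5f);   /* dark, half-transparent shadow */
    glPushMatrix();
    glMultMatrixf(shadowMat);            /* matrix from Equation 4 */
    drawOccluders();
    glPopMatrix();
    glDisable(GL_BLEND);
    glEnable(GL_LIGHTING);
    glEnable(GL_DEPTH_TEST);
    glDisable(GL_STENCIL_TEST);

    /* Pass 3: render the rest of the scene normally. */
    drawRestOfScene();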