
JUST-IN-TIME PIXELS

Mark Mine and Gary Bishop
Department of Computer Science
University of North Carolina
Chapel Hill, NC 27599-3175

Abstract

This paper describes Just-In-Time Pixels, a technique for generating computer graphic images which are consistent with the sequential nature of common display devices. Motivation for Just-In-Time Pixels is given in terms of an analysis of the sampling errors which will occur if the temporal characteristics of the raster scanned display are ignored. Included in the paper is a discussion of Just-In-Time Pixels implementation issues. Also included is a discussion of approximations which allow a range of tradeoffs between sampling accuracy and computational cost. The application of Just-In-Time Pixels to real-time computer graphics in a see-through head-mounted display is also discussed.

Keywords: Animation, temporal sampling, head-mounted displays

1. Introduction

The sequential nature of the raster scanned display has been largely ignored in the production of computer animated sequences. Individual frames (or at best fields) are computed as if all the lines in an image were presented to the observer simultaneously. Though this is true of animation displayed from movie film, it is inconsistent with conventional CRT-based displays, which present the individual pixels of an image sequentially over a period of almost 17 milliseconds. In a typical CRT device, the time of display of each pixel is different and is determined by its X and Y coordinates within the image. Though the period between successive scans is short enough to produce the perception of a continuously illuminated display without flicker, the electron beam in fact illuminates only a tiny region of the display at any one time, and that region stays illuminated for only a fraction of a millisecond.

The presentation of an image that was computed as a point sample in time on a display device that sequentially presents the image pixel by pixel represents an incorrect sampling of the computer generated world. Since the position and orientation of the camera and the positions of the objects in the world do not necessarily remain fixed during the period of time required for image scanout, the generated pixel values are inconsistent with the state of the world at the time of their display. This incorrect sampling results in perceived distortion of objects in a scene moving relative to the camera. The distortion can easily be seen in a sequence depicting a vertical line moving across the screen: the line will appear to tilt in the direction of motion because the pixels on scanlines further down the screen do not properly represent the position of the line at the time at which they are displayed.

The goal of Just-In-Time Pixels (JITP) is the generation of pixels that are consistent with the state of the world at the time of their display. This state of the world includes both the position and orientation of the virtual camera and the positions of all the objects in the scene. By using the computed trajectory and velocity of the camera and objects, each pixel is rendered to depict the relative positions of the camera and objects at the time at which that individual pixel will be displayed.

The intended application for Just-In-Time Pixels at the University of North Carolina at Chapel Hill (UNC) is in maintaining registration between the virtual and the real world in a see-through head-mounted display system. It is our goal to produce a display system that allows the real world to be augmented with computer generated objects that remain registered in 3-space as the user moves through the workspace. An example of such a system is the ultrasound visualization system described by Bajura, Fuchs, and Ohbuchi [Bajura 92]. Failure to account for the distortion produced by the image scanout delay may result in significant misregistration during user head motions.
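The per-pixel display time that drives all of the analysis below is a simple function of a pixel's position in the raster. The following C sketch makes it concrete; the timing constants, the 600-pixel horizontal resolution, and the function name pixel_display_time are illustrative NTSC-like assumptions of this sketch, not values taken from the paper.

    #include <stdio.h>

    /* Illustrative NTSC-like timing (assumed values for this sketch). */
    #define LINE_TIME    63.5e-6  /* seconds from the start of one scanline to the next */
    #define ACTIVE_TIME  52.0e-6  /* visible portion of each scanline, in seconds       */
    #define H_RES        600      /* assumed horizontal resolution, pixels per line     */
    #define FIELD_LINES  262      /* whole scanlines in one NTSC field                  */

    /* Display time of pixel (x, y), measured from the start of field
     * scanout. Interlace is ignored: y counts scanlines within one field. */
    double pixel_display_time(int x, int y)
    {
        return y * LINE_TIME + ((double)x / H_RES) * ACTIVE_TIME;
    }

    int main(void)
    {
        /* The last pixel of a field is displayed almost 17 ms after the first. */
        printf("last-pixel delay: %.2f ms\n",
               pixel_display_time(H_RES - 1, FIELD_LINES - 1) * 1e3);
        return 0;
    }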
2. Just-In-Time Pixels

As stated in the introduction, when using a raster display device the pixels that make up an image are not displayed all at once but are spread out over time. In a conventional graphics system generating NTSC video, for example, the pixels at the bottom of the screen are displayed almost 17 ms after those at the top. Matters are further aggravated when using NTSC video by the fact that not all 525 lines of an NTSC image are displayed in one raster scan; they are instead interlaced across two fields. In the first field only the odd lines of the image are displayed, and in the second field only the even. Thus, unless animation is performed on fields (i.e. generating a separate image for each field), the last pixel in an image is displayed more than 33 ms after the first. The problem with this sequential readout of image data is that it is not reflected in the manner in which the image is computed.

Typically, in conventional computer graphics animation, only a single viewing transform is used in generating the image data for an entire frame. Each frame, therefore, represents a point sample in time which is inconsistent with the way in which it is displayed. As a result, as shown in figure 1, the resulting image does not truly reflect the position of objects (relative to the view point of the camera) at the time of display of each pixel.

[figure 1: Image generation in conventional computer graphics animation. The figure shows the viewing frustum at time Tx and at time Ty during camera rotation, the image scanout at each of those times, and the resulting perceived object. Key: Tx - time of display of scanline x; Ty - time of display of scanline y.]

A quick "back of the envelope" calculation can demonstrate the magnitude of the errors that result if this display system delay is ignored. Assuming, for example, a camera rotation of 200 degrees/second (a reasonable value when compared with peak velocities of 370 degrees/second during typical head motion; see [List 83]), we find:

Assume:
1) 200 degrees/sec camera rotation
2) camera generating a 60 degree field of view (FOV) image
3) NTSC video
   - 60 fields/sec NTSC video
   - ~600 pixels/FOV horizontal resolution

We obtain:

    200 degrees/sec × (1 sec / 60 fields) = 3.3 degrees/field of camera rotation

Thus, in a 60 degree FOV image when using NTSC video:

    3.3 degrees × (1 FOV / 60 degrees) × (600 pixels/FOV) = 33 pixels error

Thus, with camera rotation of approximately 200 degrees/second, registration errors of more than 30 pixels (for NTSC video) can occur in one field time. The term registration is used here to describe the correspondence between the displayed image and the placement of objects in the computer generated world.

Note that even though the above discussion concentrates on camera rotation, the argument is valid for any relative motion between the camera and virtual objects. Thus, even if the camera's view point is unchanging, objects moving relative to the camera will exhibit the same registration errors as above. The amount of error depends upon the velocity of the object relative to the camera's view direction. If object motion is combined with camera rotation, the resulting errors are correspondingly worse.

The ideal way to generate an image, therefore, would be to recalculate for each pixel the position and orientation of the camera and the positions and orientations of the scene's objects, based upon the time of display of that pixel. The resulting color and intensity generated for the pixel will then be consistent with the pixel's time of display.
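One way such a per-pixel renderer might be organized is sketched below in C. The constant-velocity camera model, the component-wise pose representation, and the trace_ray routine are assumptions of this sketch rather than details from the paper; trace_ray is stubbed out so the code is self-contained, where a real renderer would cast a ray through pixel (x, y) against the world advanced to time t.

    typedef struct { double x, y, z, yaw, pitch, roll; } Pose;
    typedef struct { unsigned char r, g, b; } Color;

    /* Constant-velocity camera model (an assumption of this sketch):
     * pose at time zero plus per-second rates for each component. */
    typedef struct { Pose p0, vel; } CameraModel;

    static Pose eval_pose(const CameraModel *m, double t)
    {
        Pose p;
        p.x     = m->p0.x     + m->vel.x     * t;
        p.y     = m->p0.y     + m->vel.y     * t;
        p.z     = m->p0.z     + m->vel.z     * t;
        p.yaw   = m->p0.yaw   + m->vel.yaw   * t;
        p.pitch = m->p0.pitch + m->vel.pitch * t;
        p.roll  = m->p0.roll  + m->vel.roll  * t;
        return p;
    }

    /* Stand-in for the real shading computation. */
    static Color trace_ray(Pose cam, double t, int x, int y)
    {
        Color black = {0, 0, 0};
        (void)cam; (void)t; (void)x; (void)y;
        return black;
    }

    /* Ideal JITP loop: every pixel is rendered with the camera pose
     * (and world time) that will hold at the moment it is displayed. */
    void render_field_jitp(Color *image, int width, int height,
                           const CameraModel *cam,
                           double field_start, double line_time,
                           double pixel_time)
    {
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                double t = field_start + y * line_time + x * pixel_time;
                image[y * width + x] = trace_ray(eval_pose(cam, t), t, x, y);
            }
        }
    }

Object motion would be handled analogously inside trace_ray, and the scanline approximation discussed below amounts to hoisting the pose evaluation out of the inner loop.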
Though objects moving relative to the camera would appear distorted if a single JITP frame were shown statically, the distorted JITP objects will actually appear undistorted when viewed on the raster display. As shown in figure 2, each pixel in an ideal Just-In-Time Pixels renderer represents a sample of the virtual world that is consistent with the time of the pixel's display.

[figure 2: Image generation with Just-In-Time Pixels. As in figure 1, the figure shows the viewing frustum at times Tx and Ty during camera rotation, the image scanout at each of those times, and the perceived object.]

In an ideal JITP renderer, therefore, both the viewing matrix and the object positions are recomputed for each pixel. Acceptable approximations to Just-In-Time Pixels can be obtained, however, with considerably less computation. Examples include the recomputation of world state only once per scanline and the maintenance of dual display lists. These methods, and an analysis of the errors they introduce, are discussed below.

3.1 Scanline Approximation

Recomputing the camera and object positions once per scanline rather than once per pixel greatly reduces the required computation while introducing little additional error. With 3.3 degrees of camera rotation per field and approximately 262.5 scanlines per field, the camera rotates only about 0.013 degrees in one scanline time. Thus:

    0.013 degrees × (1 FOV / 60 degrees) × (600 pixels/FOV) = 0.13 pixels error

This is less than a pixel of error at NTSC video resolutions.

3.2 Dual Display List Approximation

Another potential simplification reduces the number of transformations required per image while still taking into account the raster scan used in the display of the image. Instead of using a separate viewing transform for each scanline, two display lists are maintained: one representing the positions of the objects at the start of display of each field, and the second representing the positions of the objects at the end of display of that field, 16.67 ms later (less the vertical retrace interval). For each field, the data in the two display lists is transformed based upon the predicted view point of the camera at the two times (start of field and end of field). The data for the intermediate scanlines is then generated by interpolating the positions of object vertices between the two extremes, as sketched below. Thus, instead of transforming all the objects for every scanline, each object is transformed only twice.
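The interpolation step might look as follows in C, assuming both display lists hold the same vertices in the same order, already transformed for the predicted start-of-field and end-of-field poses; the function name and the blend-factor convention are this sketch's own.

    #include <stddef.h>

    typedef struct { double x, y, z; } Vec3;

    /* For the given scanline, blend each vertex between its transformed
     * start-of-field position and its transformed end-of-field position. */
    void blend_display_lists(const Vec3 *start_list, const Vec3 *end_list,
                             size_t vertex_count,
                             int scanline, int scanlines_per_field,
                             Vec3 *out)
    {
        /* 0.0 at the first scanline of the field, 1.0 at the last. */
        double f = (double)scanline / (scanlines_per_field - 1);

        for (size_t i = 0; i < vertex_count; i++) {
            out[i].x = start_list[i].x + (end_list[i].x - start_list[i].x) * f;
            out[i].y = start_list[i].y + (end_list[i].y - start_list[i].y) * f;
            out[i].z = start_list[i].z + (end_list[i].z - start_list[i].z) * f;
        }
    }

In practice only the vertices of primitives that cross a given scanline need to be blended for that scanline; the sketch blends the entire list for clarity. The per-field cost is thus two full transformations per vertex plus one linear interpolation per scanline on which the vertex appears.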