Spectral Color Management in Virtual Reality Scenes
Article (Sensors 2020, 20, 5658; doi:10.3390/s20195658)

Francisco Díaz-Barrancas 1,*, Halina Cwierz 1, Pedro J. Pardo 1, Ángel Luis Pérez 2 and María Isabel Suero 2

1 Department of Computer and Network Systems Engineering, University of Extremadura, E06800 Mérida, Spain; [email protected] (H.C.); [email protected] (P.J.P.)
2 Department of Physics, University of Extremadura, E06071 Badajoz, Spain; [email protected] (Á.L.P.); [email protected] (M.I.S.)
* Correspondence: [email protected]

Received: 21 July 2020; Accepted: 28 September 2020; Published: 3 October 2020

Abstract: Virtual reality has reached great maturity in recent years. However, the quality of its visual appearance still leaves room for improvement. One of the most difficult features to reproduce in real-time 3D-rendered virtual scenes is color fidelity, since many factors influence the faithful reproduction of color. In this paper we introduce a method for improving color fidelity in virtual reality systems based on real-time 3D rendering. We developed a color management system for 3D-rendered scenes divided into two levels. At the first level, color management is applied only to the light sources defined inside the virtual scene. At the second level, we apply spectral techniques to the hyperspectral textures of 3D objects to obtain a higher degree of color fidelity. To illustrate the application of this color management method, we simulated a virtual version of the Ishihara test for the detection of color vision deficiencies.

Keywords: virtual reality; hyperspectral textures; Ishihara test; color fidelity

1. Introduction

Technology is constantly evolving, offering possibilities that were previously unthinkable.
This is the case for image sensors that allow us to capture the three spatial dimensions and the temporal dimension of the real world and to represent that world virtually through digital images, digital videos and, in recent years, virtual and augmented reality devices. In all cases, color is an essential part of the virtual representation of the world using digital media, because the recipient of that representation is a human being. For humans, color is a fundamental part of the external information they receive from their environment through their senses; the sense of sight alone provides more than 80% of the information received by the brain [1].

The color information captured by imaging devices often requires processing to ensure correct color reproduction on different digital devices. First, it is necessary to carry out a correct chromatic characterization of both the capture device and the reproduction device. In this way, it is possible to establish a one-to-one (biunivocal) correspondence between the device-dependent color coordinates (typically RGB or CMYK) and the device-independent color coordinates (CIE XYZ or CIE Lab). Over the last few years, a multitude of studies have been carried out on the color characterization of devices, from CRT and TFT technology to near-eye displays [2–5].

Calibration and colorimetric characterization of a color display device are often confused [6]. Calibration consists of setting the device to a known state. This can be done, for a cathode ray tube, for example, by fixing the white point of the medium and the gain and offset of each channel.
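The device-dependent to device-independent correspondence described above can be illustrated with a short sketch. It assumes an idealized sRGB display rather than a characterization of any particular device: each channel is linearized with the sRGB transfer function and a 3x3 matrix then maps linear RGB to CIE 1931 XYZ.

```python
# Illustrative sketch only: an idealized sRGB display, not a measured device.

def linearize(c):
    """Undo the sRGB transfer function (per IEC 61966-2-1), c in [0, 1]."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

# Standard sRGB (D65) -> CIE 1931 XYZ matrix; rows give X, Y, Z weights.
M = [
    (0.4124, 0.3576, 0.1805),
    (0.2126, 0.7152, 0.0722),
    (0.0193, 0.1192, 0.9505),
]

def rgb_to_xyz(r, g, b):
    """Map device RGB in [0, 1] to device-independent XYZ tristimulus values."""
    rl, gl, bl = linearize(r), linearize(g), linearize(b)
    return tuple(m[0] * rl + m[1] * gl + m[2] * bl for m in M)

x, y, z = rgb_to_xyz(1.0, 1.0, 1.0)  # display white maps to the D65 white point
```

For a real display the transfer function and matrix are replaced by measured data, which is exactly what the characterization procedures cited here provide.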
This process ensures that the device produces consistent results, and the calibration can be completed without any knowledge of the relationship between the device’s input coordinates and the colorimetric coordinates of the output. The colorimetric characterization of the device, however, requires this relationship to be known. From this characterization, the relationship between the device’s input coordinates (typically RGB values in displays) and device-independent coordinates (i.e., the CIE 1931 XYZ tristimulus values) is obtained. Presently, all display devices are digital, and the relation between digital and analog values is accomplished through a Digital to Analog Converter (DAC). Due to the large number of chromatic stimuli that can be shown by any digital device, the direct measurement of this relation is impossible; therefore, a mathematical model is applied, enabling one to reduce the number of measurement runs.
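One common instance of the kind of model meant here is the gain-offset-gamma (GOG) model, which describes each channel with three fitted parameters, so a handful of measured patches replaces exhaustive measurement of every DAC value. The sketch below is illustrative; the parameter values are assumptions, not measurements of any device.

```python
# Gain-offset-gamma (GOG) sketch: three parameters per channel summarize the
# DAC-value -> relative-luminance relation. Parameter values are illustrative.

def gog_luminance(dac, gain=0.95, offset=0.05, gamma=2.2, dac_max=255):
    """Relative luminance of one channel as a function of its DAC value."""
    v = gain * (dac / dac_max) + offset
    return max(v, 0.0) ** gamma  # clamp guards against a negative offset
```

Fitting gain, offset, and gamma to a few measured luminance patches per channel then predicts the response for all 256 DAC values of that channel.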
In addition, it is necessary to implement color management systems that allow the exchange of chromatic information between different types of devices, such as displays and printers, thereby adapting the chromatic information to different reproduction media, different white point values, and different gamut values. Based on this need, a wide field of study of gamut mapping has been developed and has had great relevance in recent years [7]. Recently, research has also been done on color appearance in optical see-through augmented reality devices [8].

All these developments were designed primarily for digital images of real scenes captured by photographic techniques. The generation of synthetic digital images from 3D models, commonly known as 3D rendering, has been left out of this type of color management technique, since color, from beginning to end, has always been defined by native digital values such as RGB values.
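As a minimal illustration of gamut mapping (one of the simplest possible strategies, not one advocated by the studies cited above), an out-of-gamut linear RGB color can be desaturated toward its own luminance until every channel fits the display range:

```python
# Minimal gamut-clipping sketch: move an out-of-gamut linear RGB color toward
# its own luminance (the gray axis) just far enough that all channels fit
# [0, 1]. Because the Rec. 709 luminance weights sum to 1, this mapping
# preserves the color's luminance while reducing its saturation.

def clip_toward_gray(r, g, b, luma=(0.2126, 0.7152, 0.0722)):
    y = luma[0] * r + luma[1] * g + luma[2] * b  # Rec. 709 luminance
    # Smallest t in [0, 1] such that c + t*(y - c) lies in [0, 1] per channel.
    t = 0.0
    for c in (r, g, b):
        if c > 1.0 and c != y:
            t = max(t, (c - 1.0) / (c - y))
        if c < 0.0 and c != y:
            t = max(t, (0.0 - c) / (y - c))
    return tuple(c + t * (y - c) for c in (r, g, b))
```

In-gamut colors pass through unchanged (t stays 0); an out-of-gamut color is pulled toward gray only as far as necessary, which is the defining trade-off of clipping-style gamut mapping.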
The only color correction currently carried out is the calibration of color player devices to a standard configuration that allows a similar appearance on all displays. However, the 3D development environments used in virtual reality have great versatility and computing power thanks to GPUs, which are used to render at least 90 images per second for each eye [9,10].

There is a large difference between digital images derived from photographic techniques and digital images from 3D-rendered scenes. In the first case, the image is captured by a sensor, typically a CCD or CMOS, located in the image plane of the camera’s optical system, where photons arrive from different parts of the scene.
In the case of a digital image obtained by rendering a 3D scene (Figure 1), a ray tracing process is carried out in the opposite direction to that of traditional optical systems, i.e., from the eye or the camera to the objects constituting the scene, passing through a matrix of points that correspond to the future pixels [11].

Figure 1. Simplified graphic representation of a lighting and shadowing method for rendering a 3D scene and how the final 2D image is generated [12].

From a visual point of view, Virtual Reality (VR) technology is based on the generation of two different images corresponding to two different views of the same three-dimensional scene. One of these images is shown to each eye,