LIGHT TRANSPORT SIMULATION IN REFLECTIVE DISPLAYS

by

Zhanpeng Feng, B.S.E.E., M.S.E.E.

A Dissertation

In

ELECTRICAL ENGINEERING

DOCTOR OF PHILOSOPHY

Dr. Brian Nutter
Chair of the Committee

Dr. Sunanda Mitra

Dr. Tanja Karp

Dr. Richard Gale

Dr. Peter Westfall

Peggy Gordon Miller
Dean of the Graduate School

May, 2012

Copyright 2012, Zhanpeng Feng


ACKNOWLEDGEMENTS

The pursuit of my Ph.D. has been a rather long journey. The journey started when I came to Texas Tech ten years ago, then took a change in direction when I went to California to work for Qualcomm in 2006. Over the course of this journey, I have been privileged to meet the most amazing professionals at Texas Tech and Qualcomm. Without them I would never have been able to finish this dissertation. I begin by thanking my advisor, Dr. Brian Nutter, for introducing me to the brilliant world of research and teaching me the hands-on skills to actually get something done. Dr. Nutter sets an example of excellence that inspired me to work as an engineer and researcher for my career. I would also like to extend my thanks to Dr. Mitra for her breadth and depth of knowledge; to Dr. Karp for her scientific rigor; to Dr. Gale for his expertise in color science and his sense of humor; and to Dr. Westfall for his great mind in statistics. They have provided me invaluable advice throughout the research. My colleagues and supervisors at Qualcomm also gave tremendous support and guidance along the path. Tom Fiske helped me establish my knowledge from the ground up and taught me how to use all the tools for optical measurements. On a number of occasions, I was amazed that he could discover software bugs with just a quick look at the data. My special thanks go to Rick Brinkley, not only for giving me time and room in a busy work schedule, but also for making me believe that obtaining a doctoral degree is a significant achievement, especially when I was in doubt. I also would like to express my appreciation to Bill Cummings, Chris Hogh, Jennifer Gille, Alok Govil, Behnam Bastani, Russel Martin, and Ibrahim Sezan for their support and inspiring discussions. My gratitude towards my family is beyond words. I would like to thank my parents, Xiushan Feng and Bixia Feng, my sister, Yingtao Feng, and my brother-in-law, Xiaodong Pan, for always standing behind me. When I was feeling lost and discouraged, your love and support were always there to give me strength. During the final write-up stage, the stress of long hours of writing was put at ease by delicious home-cooked meals and a warm soup at my desk. I know you have always been proud of me, and I want you to know that I am proud to be your son and brother.


I am filled with thankfulness for the most amazing thing that has ever happened in my life, the birth of my little girl, Yongqing Feng. No matter how challenging it is to be swamped with work and research, a simple thought of your smile helps me regain my confidence and energy. Finally, I want to thank my wife, Yihua Yuan, for coming into my life. Thank you for all the long flights between Texas and California every month for over two years. Thank you for making the sacrifice to stay with me in the United States. Thank you for bringing our wonderful daughter into the world. With you, I rediscovered the goal of life. I hope we will always hold hands as tightly as we do now and build a bright future together.


TABLE OF CONTENTS

ACKNOWLEDGEMENTS ...... ii
ABSTRACT ...... vi
LIST OF FIGURES ...... viii
LIST OF TABLES ...... xi
CHAPTER
1. INTRODUCTION ...... 1
2. BACKGROUND ...... 4
Assumptions ...... 4
Interference and Interferometric Displays ...... 5
Monte Carlo Ray Tracing ...... 8
Radiometry ...... 8
Ray Tracing ...... 10
Rendering Equation ...... 11
Monte Carlo Integration ...... 13
Reflectance Modeling ...... 15
Microscopic Geometry of Surfaces ...... 16
BRDF and Surface Reflection ...... 19
BSSRDF and Subsurface Scattering ...... 22
Summary ...... 27
3. MODELING REFLECTIVE DISPLAYS ...... 28
Introduction ...... 28
Related Work ...... 29
Front of Screen ...... 31
Modeling and Measuring a Diffuser ...... 33
Display Pixel Array ...... 43
Summary ...... 44
4. MONTE CARLO RAY TRACING IN REFLECTIVE DISPLAYS ...... 45
Introduction ...... 45
...... 46
Stratified Sampling ...... 47


Importance Sampling ...... 49
Simulation with Uniform Sampling, Cosine Sampling, and Ward Sampling ...... 53
Summary ...... 57
5. SIMULATING DISPLAY PERFORMANCE ...... 58
Introduction ...... 58
Software Architecture ...... 59
Reflectance of Display Pixels ...... 61
Color Gamut and Contrast Ratio ...... 62
Simulation Results and Measured Data ...... 65
Typical Lighting Conditions for Display Simulation ...... 72
Impact of Front Surface Reflection ...... 76
Diffuser with Different Haze Values ...... 76
Auxiliary Lighting (Front Light) ...... 78
Daylight Readability ...... 80
Summary ...... 83
6. CONCLUSIONS AND FUTURE WORK ...... 84
Summary of Contributions ...... 84
Future Work ...... 85
REFERENCES ...... 88


ABSTRACT

In the last several years, reflective displays have gained substantial popularity in mobile devices such as e-readers, because of their significant advantages in power consumption and sunlight readability. A typical reflective display consists of a stack of optical layers. Accurate and efficient simulation of light transport in these layers provides valuable information for optical design and analysis. Physically based ray tracing algorithms are able to produce simulation results that mirror real-world display performance in a wide range of illumination conditions, viewing angles, and distances. These simulation outcomes help system architects make far-reaching decisions as early as possible in the design process. In this dissertation, a reflective display is modeled as a layered material, with a FOS (front of screen) layer on the top, a diffusive layer (diffuser) underneath the FOS, a transparent layer (glass) in the middle, and a wavelength-dependent reflective layer (pixel array) at the bottom. A set of simple and efficient spectral functions is developed to model the reflectance and absorption of the FOS. A novel hybrid approach combining both spectro-radiometer based and imaging based measurement methods is developed to acquire high resolution reflectance data in both the angular and spectral domains. A BTDF (bidirectional transmittance distribution function) is generated from the measured data to model the diffuser. A wavelength-dependent BRDF (bidirectional reflectance distribution function) is used to model the pixels. Realistic light transport simulation requires the interplay of three factors: surface geometry, lighting, and material reflectance. Monte Carlo ray tracing methods are used to link these factors together. Path tracing is employed to provide unbiased results. Stratified sampling and importance sampling are used for effective variance reduction. Stratified sampling produces well-distributed random samples, and importance sampling helps the Monte Carlo simulation converge more quickly. Different importance sampling methods are compared and analyzed. Simulation results of display performance, including reflectance, color gamut, contrast ratio, and daylight readability, are presented. The impact of different lighting conditions, diffusers, and FOS designs is studied. Measurement data and physically based analyses are used to confirm the validity of the simulation tool. The simulation

tool provides the desired accuracy and predictability for display design in a wide range of lighting conditions, which makes it a valuable mechanism for display designers to find the optimal solution for real world applications.


LIST OF FIGURES

Figure 1.1: Light Transport Simulation for Reflective Displays...... 2
Figure 2.1: Series of Increasingly Complete Optical Models...... 4
Figure 2.2: Constructive and Destructive Interference...... 6
Figure 2.3: Qualcomm’s Mirasol® Display...... 6
Figure 2.4: Natural and Synthesized Interference...... 7
Figure 2.5: Irradiance and Intensity...... 8
Figure 2.6: Radiance...... 9
Figure 2.7: Image Formation with Ray Tracing...... 10
Figure 2.8: Ray Tracing Along a Light Path...... 11
Figure 2.9: Sampling on a Sphere...... 14
Figure 2.10: Reflection and Refraction...... 16
Figure 2.11: Simulated Diffuser Microscopic Geometry...... 18
Figure 2.12: Diffuser Rendering Results with Microscopic Geometries...... 18
Figure 2.13: SEM Image of an Actual Diffuser Surface...... 19
Figure 2.14: BRDF...... 20
Figure 2.15: Diffuser Rendering with BRDF...... 21
Figure 2.16: BRDF vs. BSSRDF...... 22
Figure 2.17: Typical Shapes of Phase Functions...... 24
Figure 2.18: Dipole Approximation and Rendering Result...... 25
Figure 3.1: Reflective Display Layers...... 28
Figure 3.2: Front of Screen (FOS) Model...... 32
Figure 3.3: Path Integral for Light Transport Simulation...... 34
Figure 3.4: Typical Beam Geometry and Angular Characteristics of Different Types of Reflections...... 35
Figure 3.5: Ward’s Image Based Goniometer...... 36
Figure 3.6: Spectro-radiometer Based System...... 37
Figure 3.7: IS-SA™, an Image Based BRDF/BTDF Measurement System...... 38
Figure 3.8: Measured BTDF of a Haze 78 Diffuser...... 39


Figure 3.9: Definition of Angle of Incidence, ScatterRadial, and ScatterAzimuth in IS-SA ...... 40
Figure 3.10: Transmission of Various Diffusers at Normal Viewing Angle...... 41
Figure 3.11: An Example of Measurement Output Data...... 41
Figure 3.12: Impact of Incident Angle Dependent Scale Factor...... 43
Figure 3.13: SPD of Pixel Reflectance with Angle Scanning...... 44
Figure 4.1: Path Tracing for Display Simulation...... 45
Figure 4.2: Random Spatial Samples Generated from Different Sampling Methods...... 49
Figure 4.3: Different Sampling Functions for Importance Sampling...... 53
Figure 4.4: Standard Error of Different Sampling Methods...... 54
Figure 4.5: Fitted Ward BRDF vs. Measured Data...... 56
Figure 5.1: Color Image Pipeline...... 59
Figure 5.2: System Diagram of Simulation Software...... 60
Figure 5.3: Standard Luminosity Function...... 62
Figure 5.4: Color Matching Functions...... 63
Figure 5.5: Color Gamuts of a Bare Panel and a Diffused Panel...... 64
Figure 5.6: Simulated and Measured Reflectance...... 66
Figure 5.7: Simulated and Measured Color Gamuts...... 67
Figure 5.8: Color Gamut vs. Viewing Angle...... 68
Figure 5.9: Thin Film Interference Geometry...... 68
Figure 5.10: Thin Film Spectral Reflectance at Different Angles...... 69
Figure 5.11: Distribution of Rays at Different Viewing Angles...... 70
Figure 5.12: SPD Shift from a Bare Panel to a Diffused Panel at 20°...... 71
Figure 5.13: Average Peak Shift vs. Angle...... 72
Figure 5.14: Mean Directional Luminance for 42 Outdoor Locations and 70 Indoor Locations...... 75
Figure 5.15: Simulate FSR Impact in Different Lighting Conditions...... 76
Figure 5.16: Scattering Profile of Haze 78 and Haze 65 Diffusers at Normal Angle...... 77
Figure 5.17: Simulate FSR Diffuser with Different Haze Values in Indoor Lighting...... 78
Figure 5.18: Front Light of Reflective Display...... 78
Figure 5.19: Simulate Auxiliary Lighting of Different SPDs...... 80


Figure 5.20: CIE RVP for 20-, 50-, and 75-year-old Adults of Contrast and Average Luminance...... 82


LIST OF TABLES

Table 4.1: Comparison between Different Sampling Functions...... 52
Table 5.1: Colorimetric Output...... 63
Table 5.2: SPD Peak Shift of Bare Panel and Diffused Panel at 20° and 8°...... 72
Table 5.3: Readability Results of Various Displays ...... 83


CHAPTER I

INTRODUCTION

The display industry has seen rapid growth in the last several years. While revolutionary technologies in liquid crystal displays have brought large screen HDTVs into millions of homes, reflective displays have found a rapidly expanding market in mobile devices such as e-readers. From big screen TVs to hand-held portable devices, displays bring a fascinating world to our eyes. All these devices are enabled by physics that we are all familiar with: light transport.

Displays can be categorized into emissive and reflective types. Although most TV, computer, and cell phone displays are emissive displays, reflective displays have also seen significant growth in recent years for their very low power consumption, sunlight readability, comfort for the eyes, and enhanced image quality. A conventional emissive display uses a backlight to illuminate its pixels. To be seen, an emissive display must be brighter than the light it reflects. This brightness competition with ambient lighting creates a power drain for emissive displays. On the other hand, a reflective display uses ambient light as its light source. Light from the sun, lamps, or other illuminants is reflected, refracted, or scattered by the display, and finally arrives at the observer’s eyes. Because of this reflective nature, ambient illumination plays a much more important role in reflective display simulation than in emissive display simulation.

For the design of display products, we desire a predictive modeling tool to simulate visual performance before the actual device is built. Light transport algorithms simulate the physics of the scene and generate accurate numerical results from various desired viewpoints. These results allow designers to develop a clear picture of the final performance of the design and make far-reaching decisions as early as possible in the design process.

In reflective display design, luminance and chromaticity are of major interest. Simulation of display performance under a wide range of viewing angles and ambient lighting conditions is desirable. To simulate luminance and chromaticity, a scene must be constructed to host the three major components: lighting, display, and detector. The signal that the detector receives from a display results from the interplay of the lighting,

geometry, and display material appearance. In this dissertation, a simulation system is built which allows users to:

1) Design a reflective display with different materials, layers, and structures.
2) Place the display in arbitrary lighting conditions, where the luminance and spectral power distribution (SPD) of light sources can be specified. As illustrated in Figure 1.1, the SPD of lighting can be white LED or D65. D65 is a commonly-used standard illuminant defined by the International Commission on Illumination (CIE), which is intended to represent average daylight and has a correlated color temperature of approximately 6500 K.
3) Place the display and the detector at arbitrary locations and angles.

This system is based on Monte Carlo ray tracing. Starting from the detector, well-distributed random rays are cast into the scene and iteratively traced along ray paths. Finally, radiance across the visible spectrum is collected at the detector’s plane, and hence colorimetric (e.g. x, y, z) and photometric values are calculated for display pixels of different colors.

Figure 1.1: Light Transport Simulation for Reflective Displays.


The work of this dissertation attempts to answer the following questions:

1) What metrics are important for display performance?
2) How is the display performance affected by different pixel designs, diffuser designs, and FOS designs?
3) How is the display performance affected if the viewing angle changes?
4) How is the display performance affected if the lighting geometry and spectrum change?
5) How can the tool be made easy to use and fast?
6) How are design parameters input to the model?
7) How can a diffuser be measured and modeled?
8) How can the model be validated?
9) The system can simulate any scenario, but which scenarios are important for display characterization?

The dissertation will proceed as follows. Chapter 2 gives an overview of light transport simulation. The chapter starts with a discussion of the assumptions of optical models. Then, the interference effect and interferometric displays are introduced. After that, the principles of Monte Carlo ray tracing are described. Finally, various reflectance models, including microscopic geometry, BRDF, and BSSRDF (bidirectional surface scattering reflectance distribution function) models, are discussed. Chapter 3 describes our approach to model a reflective display as a layered surface. Different models are applied to different layers of a display. Both measurement and modeling methods are developed and evaluated. Chapter 4 investigates the ray tracing algorithms and some variance reduction techniques such as stratified sampling and importance sampling. The results of different sampling methods are compared and analyzed. Chapter 5 describes the simulation software and presents the simulation results. Measured data and physically based analyses are used to validate the simulation system. Impacts of various design decisions on important display performance metrics are discussed. Chapter 6 concludes with a summary of contributions in the dissertation and identifies some directions for future research.


CHAPTER II

BACKGROUND

This chapter provides the general background of light transport simulation. The assumptions of optical models are discussed. The interference effect is explained, and interferometric displays are introduced. The principles of Monte Carlo ray tracing are described. Various reflectance models, including microscopic geometry, BRDF, and BSSRDF models, are presented.

Assumptions

Not every detail of light transport will be simulated. Based on the requirements of each application, the important optical effects are selected and the appropriate algorithms are developed to simulate them.

Figure 2.1: Series of Increasingly Complete Optical Models [1].

Current understanding of the behavior of light relies on a series of increasingly complete models of light: ray optics, wave optics, electromagnetic optics, and quantum optics [1]. Each successive model can be applied to account for more complex optical phenomena. The more complex models are not usually necessary to explain our real-life observations. For example, the focusing effect of a convex lens is


a complex behavior in quantum optics that can be conveniently described by ray optics. Most of the work in this dissertation assumes the simplest of these models, ray optics (also called geometric optics). The geometric optics model makes several simplifying assumptions about the behavior of light, which limit the types of phenomena that can be simulated. Light is assumed to be emitted, reflected, and transmitted only at surfaces. Additionally, light is assumed to travel in straight lines and at infinite speed. With these limitations, effects explained by the higher-level models cannot be easily incorporated. In ray optics, effects such as diffraction and interference (wave optics), dispersion (electromagnetic optics), and fluorescence and phosphorescence (quantum optics) are usually ignored. In spite of these limitations, in normal environments, geometric optics is adequate to model almost everything we see in the natural world with high accuracy.

Interference and Interferometric Displays

One wave optics effect that cannot be neglected is interference, because a large body of this research is based on interferometric reflective displays. The interferometric reflective display is an emerging technology in the reflective category. This technology uses the interference of light beams to create beautiful colors that mimic those found in nature, such as in butterfly wings and peacock feathers. Interference, which is the fundamental mechanism of interferometric displays, is closely related to coherence. Coherence is a relationship between two beams of light that measures the average correlation between their phases [2]. When two light waves have no phase correlation, their intensities add linearly (where intensity is defined as the mean squared amplitude) when they are superimposed. For example, two 40 watt light bulbs produce twice the brightness of one 40 watt bulb, which is consistent with our intuition. However, when two beams are partially or fully coherent, their superposition results in interference. If their phase correlation is positive, we observe constructive interference; otherwise we see destructive interference. When two coherent beams of the same amplitude and wavelength are combined, the resulting intensity can be anywhere from zero to four times as great as that of a single beam.


Figure 2.2: Constructive and Destructive Interference.
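As a quick numerical illustration of the range quoted above, the sketch below (my own example, not code from the dissertation's simulator) superposes two coherent beams of equal amplitude and evaluates the resulting intensity for several phase differences.

```python
import numpy as np

def two_beam_intensity(a1, a2, delta_phi):
    """Intensity of two superposed coherent beams with phase difference delta_phi.

    Uses I = |a1 + a2*exp(j*delta_phi)|^2, i.e. the standard two-beam
    interference formula I = a1^2 + a2^2 + 2*a1*a2*cos(delta_phi).
    """
    field = a1 + a2 * np.exp(1j * delta_phi)
    return np.abs(field) ** 2

# Equal-amplitude beams: intensity ranges from 0 (destructive, delta_phi = pi)
# to 4*a^2 (constructive, delta_phi = 0), versus 2*a^2 for incoherent addition.
a = 1.0
for dphi in (0.0, np.pi / 2, np.pi):
    print(f"delta_phi = {dphi:.2f} rad -> I = {two_beam_intensity(a, a, dphi):.2f}")
```

Averaged over all phase differences, the cosine term vanishes and the incoherent factor of two is recovered.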

Qualcomm’s mirasol® display uses interferometric modulators (IMODs) as pixels to produce colors from ambient lighting. At the most basic level, a mirasol® display is an optically resonant cavity. The device consists of a self-supporting deformable reflective membrane and a thin-film stack (each of which acts as one mirror of an optically resonant cavity), both residing on a transparent substrate. When ambient light hits the structure, it is reflected both off the top of the thin-film stack and off the reflective membrane. Depending on the height of the optical cavity, light of certain wavelengths reflecting off the membrane will be out of phase with the light reflecting off the thin-film structure. Based on the phase difference, some wavelengths will constructively interfere, while others will destructively interfere. The human eye will perceive a color because certain wavelengths possess higher amplitude with respect to others [3].

Figure 2.3: Qualcomm’s Mirasol® Display (blue, green, and red subpixels).


This design imitates the iridescent colors in nature as shown in butterfly wings, peacock feathers, and snake skins. Computer graphics researchers have made various attempts to render these visual effects.

2. Matthew Wang, CS348B Final Project, http://www-graphics.stanford.edu/courses/cs348b-competition/cs348b-05/snake/index.html
4. John Pau, Modeling and Rendering Peacock Tail Feathers, http://www-graphics.stanford.edu/courses/cs348b-competition/cs348b-05/peacock/index.html
6. Steve Bennett and Arthur Amezcua, Simulating Interference Effects in LRT: Iridescence in Biological Structures, http://www-graphics.stanford.edu/courses/cs348b-competition/cs348b-01/iridescent_butterflies/morpho.html

Figure 2.4: Natural and Synthesized Interference.


Monte Carlo Ray Tracing

As shown in Figure 2.4, the images synthesized to imitate real world visual effects are generated using physically based rendering. Ray tracing is the most commonly used technique for physically based rendering. Ray tracing follows the path of light. In the scene, light is emitted from the illuminants, then is reflected, refracted, or scattered by objects, and finally arrives at the image plane of a virtual camera. A 2D image is thus generated to reflect the 3D world. In this section, the radiometry used in ray tracing is introduced. The rendering equation that governs the ray tracing algorithm is explained. Finally, the Monte Carlo method used to solve the rendering equation is described.

Radiometry

Flux Φ: The total amount of energy passing through a surface per unit time. Flux is also known as power and is measured in watts (J/s). A light source emits power in the visible spectrum. For example, if the point light source in Figure 2.5 is a 40 W light bulb, then 40 J/s of energy is emitted through any sphere that surrounds it.

Figure 2.5: Irradiance and Intensity.

Irradiance E: The flux arriving at a surface per unit area. Its unit is W/m².

E = \frac{d\Phi}{dA} \quad (2-1)

In Figure 2.5, area A and area B receive the same amount of flux, because all of the flux that passes through A also passes through B. However, because area A is smaller than area B,


the irradiance on A is larger than that on B. The amount of energy received by a unit area is reduced with the squared distance from the light.

Solid angle ω: A 3D angle subtended by some object with respect to a center point. Its unit is the steradian (sr). A solid angle is computed by projecting the object onto the unit sphere and measuring the area of its projection. In Figure 2.5, because objects A and B project the same area on the unit sphere, they subtend the same solid angle. The solid angle subtended by the entire sphere is 4π.

Intensity I: Flux per unit solid angle. Its unit is W/sr. It describes the directional flux density. In Figure 2.5, areas A and B have the same intensity.

I = \frac{d\Phi}{d\omega} \quad (2-2)

Radiance L: The flux per unit projected area per unit solid angle. Its unit is W/(m²·sr).

L = \frac{d\Phi}{dA^{\perp}\, d\omega} \quad (2-3)

Figure 2.6: Radiance.

dA⊥ is the projection of the differential area dA of the light source onto a plane perpendicular to ray direction ω. In ray tracing, radiance is the most important radiometric quantity. During ray tracing, it is the quantity that is associated with each ray. Due to conservation of energy, radiance remains constant along a ray in free space. It is updated when light transport events occur. The final rendered image is constructed by integrating the radiance of all rays in the visible spectrum.


Ray Tracing

Ray tracing simulates light transport, starting from light sources, interacting with objects in the scene, and finally arriving at the camera. Because the vast majority of light rays from the light sources do not reach the viewer, we choose not to trace rays in the forward direction (source to camera). Only light paths that reach the image plane contribute to the final image. For this reason, we instead follow the rays of light backwards (camera to source).

Figure 2.7: Image Formation with Ray Tracing. http://en.wikipedia.org/wiki/Ray_tracing_(graphics)

Rays are cast from the camera’s image plane into the scene and recursively traced when they interact with the objects in the scene. Ray tracing is terminated when a ray exits the scene, reaches the maximum number of reflections, or hits an object that allows no further ray propagation. At the beginning of ray tracing, each ray is given a weight, which represents the throughput of light carried by this path. The weight is updated during the course of light transport. The weight value is changed when the ray is reflected, refracted, or travels through a participating medium, which tracks the radiance change of this ray. For example, if a ray travels through a medium that attenuates it (like the sun ray that

passes through a cloud in Figure 2.8), its weight is reduced. If a ray hits a diffusive surface, the weight is also reduced because only a fraction of the light is reflected along a particular direction. When a ray travels through the scene, its weight is potentially updated many times, due to reflection, refraction, scattering, and other optical events, before it terminates. If this ray ends on a light source, then multiplying this weight with the emitted radiance from the light source gives this ray’s contribution to the final image. The amount of light that reaches the camera from a point on an object is the sum of light emitted by the object (if it emits light) plus all light that it reflects. By integrating the contribution of a large number of rays, we are able to render an accurate image that represents the real world with a high level of realism.

Figure 2.8: Ray Tracing Along a Light Path.

The throughput of light carried by the ray path is updated during the light propagation. The final amount of light that arrives at the observer is given by multiplying the throughput with radiance from the light source.
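The weight bookkeeping described above can be summarized in pseudocode. The sketch below is a hypothetical backward path tracing loop written in Python; the objects `scene`, `hit`, and `hit.bsdf` are illustrative placeholders rather than the dissertation's actual software interfaces.

```python
def trace_path(scene, camera_ray, max_bounces=8):
    """Backward path tracing sketch: camera -> scene -> light.

    The running `weight` is the throughput of light carried by the path;
    it is multiplied by each surface interaction, and the emitted radiance
    of a light source is scaled by it when the path terminates on a light.
    """
    ray = camera_ray
    weight = 1.0          # path throughput carried by this ray
    radiance = 0.0        # accumulated contribution to the image

    for _ in range(max_bounces):
        hit = scene.intersect(ray)
        if hit is None:               # ray left the scene
            break
        if hit.is_light:              # path reached an emitter
            radiance += weight * hit.emitted_radiance
            break
        # Sample a new direction from the surface's BRDF/BTDF and update the
        # throughput by the reflectance, cosine, and sampling-pdf factors.
        new_dir, bsdf_value, pdf, cos_theta = hit.bsdf.sample(ray.direction)
        if pdf == 0.0:
            break
        weight *= bsdf_value * cos_theta / pdf
        ray = hit.spawn_ray(new_dir)

    return radiance
```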

Rendering Equation

Physically based rendering uses the rendering equation [4] to produce realistic images. The rendering equation is based on physics. It is an integral equation in which the equilibrium radiance leaving a given point is given as the sum of emitted radiance plus reflected radiance. The physical basis for the rendering equation is the law of


conservation of energy. Assuming that L denotes radiance, at each particular position x and direction ω_o, the amount of light L(x, ω_o) that reaches the observer is the sum of the emitted light L_e(x, ω_o) and the reflected light. The reflected light itself is the sum of the incoming light L(x', ω_i) from all directions, multiplied by the surface reflectance and the cosine of the incident angle.

L(x, \omega_o) = L_e(x, \omega_o) + \int_S f_r(x, \omega_o, \omega_i)\, L(x', \omega_i)\, G(x, x')\, V(x, x')\, d\omega_i , \quad (2-4)

where

L(x, ω_o) = the radiance observed from position x in direction ω_o,

L_e(x, ω_o) = the light emitted from x by this object itself,

f_r(x, ω_o, ω_i) = the BRDF of the surface at point x, transforming incoming light from ω_i into reflected light along ω_o,

L(x', ω_i) = light from x' on another object arriving along ω_i,

G(x, x') = the geometric relationship between x and x',

V(x, x') = a visibility test, which returns 1 if x can see x', and 0 otherwise.

Solving the integral term in the rendering equation analytically is not possible except for the simplest scenes, so simplifying assumptions and numerical integration techniques are almost always used. Standard numerical integration methods such as Newton-Cotes rules, Gaussian quadrature, and Clenshaw–Curtis quadrature are only effective at solving low-dimensional smooth integrals. The computational cost of these methods becomes prohibitively expensive as the dimensionality of the integrand increases. However, in rendering problems, high-dimensional and discontinuous integrals are common. When a ray is recursively traced in a scene, the dimensionality of the integrand increases with every bounce of the ray. If a ray bounces many times, the dimension can increase to hundreds, where standard numerical methods fail. Monte Carlo integration, whose convergence rate is independent of dimension, is the key to solving this puzzle.


Monte Carlo Integration

The rendering equation is an integral function. This function includes a term

L(x', ω_i) describing incoming light intensity, which can be of any form, from perfectly uniform (with omnidirectional light sources over the entire hemisphere) to highly irregular (with a few point light sources placed at random locations in the scene). The

BRDF term f_r(x, ω_o, ω_i) is also an arbitrary, high-dimensional function, for which a closed-form expression is rarely available. It is almost always impractical to perform the integral analytically. Monte Carlo integration provides a solution that can estimate the reflected radiance by sampling a set of directions, computing the incident radiance along them, multiplying by the BRDF value for those directions, and applying the visibility test term V(x, x') and the

geometric term G(x, x'). Incident lighting, arbitrary BRDFs, and scene geometry are all included. Evaluating these functions at every point of interest is all that we need to render the scene. Following is a short derivation of the basic Monte Carlo estimator. The Monte Carlo estimator converts the integral term in the rendering equation into a sum over individual samples. Using proper sampling methods that meet certain requirements, the sum converges to a close approximation of the true integral when sufficient samples are used.

The expected value of a function f(x) of a random variable x, if it exists, can be computed with equation (2-5):

E[f(x)] = \int f(x)\, p(x)\, dx , \quad (2-5)

where p(x) is the PDF (probability density function) of the random variable x. A numerical method of calculating the expected value of a function is to take the mean of a large number of random samples of the function, which can be shown to converge towards the correct expected answer as the number of samples approaches infinity, according to the Law of Large Numbers:

E[f(x)] \approx \frac{1}{N} \sum_{i=1}^{N} f(x_i) . \quad (2-6)

Combining these two equations, we have an estimator of the integral of f (x)


\int f(x)\, dx = \int \frac{f(x)}{p(x)}\, p(x)\, dx \approx \frac{1}{N} \sum_{i=1}^{N} \frac{f(x_i)}{p(x_i)} . \quad (2-7)

Equation (2-7) is the Monte Carlo estimator, which can be used to evaluate the integral in the rendering equation. We simply take a large number of point samples of the function f(x), divide each one by the PDF value at that sample, sum the results, and divide by the number of samples. Crude Monte Carlo integration uses uniformly distributed samples over the sampling space. The rendering equation shows that the integration is performed over the surface of a sphere, which means that sampling points must be generated over the surface of a sphere. A square-shaped two-dimensional uniform distribution of a pair of independent

random numbers ξ_x and ξ_y can be mapped into spherical coordinates using the transform [5]:

\left( 2\arccos\sqrt{1-\xi_x},\; 2\pi\xi_y \right) \rightarrow (\theta, \varphi) . \quad (2-8)

Figure 2.9 shows 10,000 samples uniformly distributed on a sphere, presented in (θ, φ) angle space and in 3D projection.

Figure 2.9: Sampling on a Sphere.
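A minimal sketch of the mapping in equation (2-8), assuming NumPy is available (my own illustration, not the dissertation's code): it draws pairs of uniform random numbers and converts them to (θ, φ) angles that are uniformly distributed over the sphere, reproducing the kind of sample set shown in Figure 2.9.

```python
import numpy as np

def uniform_sphere_samples(n, rng=None):
    """Map pairs of uniform random numbers (xi_x, xi_y) in [0,1)^2 to
    (theta, phi) angles uniformly distributed over the unit sphere,
    following equation (2-8):
        theta = 2 * arccos(sqrt(1 - xi_x)),  phi = 2 * pi * xi_y.
    """
    rng = np.random.default_rng() if rng is None else rng
    xi_x = rng.random(n)
    xi_y = rng.random(n)
    theta = 2.0 * np.arccos(np.sqrt(1.0 - xi_x))   # inclination in [0, pi]
    phi = 2.0 * np.pi * xi_y                       # azimuth in [0, 2*pi)
    return theta, phi

# 10,000 samples as in Figure 2.9; their z = cos(theta) values should be
# roughly uniform on [-1, 1], the signature of uniform sphere sampling.
theta, phi = uniform_sphere_samples(10_000)
print(np.histogram(np.cos(theta), bins=5)[0])
```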

The error of Monte Carlo simulation converges at a rate proportional to the inverse square root of the number of samples. To improve the efficiency of the estimator, we can employ various sampling techniques to reduce variance with fewer samples. We are free to set the


sampling probabilities in any fashion we like, as long as the light calculation result of every sample is weighted appropriately, and there is a nonzero probability of sampling every light path that contributes to the final image. This requirement applies to sampling light sources and sampling BRDF/BTDFs. Research shows that the Monte Carlo estimator converges more quickly if the samples are taken from a distribution p(x) that

is similar in shape to the function f (x) in the integrand. The basic idea is that by concentrating work where the value of the integrand is relatively high, an accurate estimate is computed more efficiently [6]. This method is called importance sampling. Carefully choosing the PDF from which samples are drawn is an important technique for reducing variance in Monte Carlo integration. The better the probabilities are set, the more efficient the Monte Carlo estimator will be, and fewer rays will be required to lower variance to an acceptable level.
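To make the variance argument concrete, the following small experiment (an illustration of the principle, not the dissertation's simulation code) estimates the integral of cos θ over [0, π/2] with the estimator of equation (2-7), once with uniform sampling and once with a PDF proportional to the integrand. Because the importance-sampling PDF matches the integrand exactly in this toy case, its estimator has zero variance.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000
# Integrand: f(theta) = cos(theta) on [0, pi/2]; exact integral = 1.

# Uniform sampling: p(theta) = 2/pi, estimator samples are f/p.
theta_u = rng.uniform(0.0, np.pi / 2, N)
est_uniform = np.cos(theta_u) / (2.0 / np.pi)

# Importance sampling: p(theta) = cos(theta), already normalized on [0, pi/2].
# Inverse-CDF sampling: CDF = sin(theta), so theta = arcsin(xi).
theta_i = np.arcsin(rng.random(N))
est_importance = np.cos(theta_i) / np.cos(theta_i)   # f/p == 1 for every sample

for name, est in [("uniform", est_uniform), ("importance", est_importance)]:
    std_err = est.std(ddof=1) / np.sqrt(N)
    print(f"{name:10s} mean = {est.mean():.4f}, standard error = {std_err:.5f}")
```

Both estimators are unbiased, but the importance-sampled one needs far fewer samples to reach a given accuracy, which is exactly the behavior exploited when sampling directions proportionally to the BRDF or to the incident lighting.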

Reflectance Modeling

We are surrounded by real-world materials with complex appearances. We see shiny metal appliances, transparent glass containers, dazzling diamond jewelry, translucent jade statues, and hard wood floors. The visual appearance of these materials is the interplay of surface geometry, lighting, and material reflectance. Accurate light transport simulation requires a precise description of these three factors. The first two factors concern the global setting of light sources and objects in the scene. The third factor deals with the interaction between light and objects. A material’s appearance depends on the mechanisms through which it absorbs, transmits, and reflects incident light energy. Light and objects interact at the surface and in the subsurface. Surface interaction takes place at the boundary between two different media. Its appearance is primarily decided by surface geometries and reflectance properties. On the other hand, subsurface interaction occurs after light penetrates into the material, is possibly reflected multiple times, and then exits the object. Its appearance is mainly decided by the optical properties of the material’s substance. If surface geometry can be described precisely at the microscopic level, then Snell’s law and the Fresnel equations are sufficient to model a material’s reflectance.


However, most materials are too complex and require more sophisticated models. The reflectance properties of a surface are generally modeled with the bidirectional reflectance distribution function (BRDF). The BRDF ρ(ω_i, ω_o) describes the reflectance relationship between a pair of incident and outgoing rays when the reflection happens exactly at the incident point. However, for some translucent materials, such as skin, marble, milk, or a display diffuser, a BRDF is not sufficient. When light arrives at these materials, it enters the surface, and then is absorbed, scattered, and finally exits the material at a different point in a different direction. This type of reflection is described by the bidirectional surface scattering reflectance distribution function (BSSRDF). The BSSRDF

ρ(x_i, ω_i; x_o, ω_o) represents the ratio of the outgoing radiance at x_o in the viewing direction ω_o to the incident irradiance at x_i from direction ω_i.

Microscopic Geometry of Surfaces

In a mirror reflection, the relationship between incident radiance and outgoing radiance can be calculated by Snell’s law [7]

\sin(\theta_i) / \sin(\theta_t) = n_2 / n_1 , \quad (2-9)

and the Fresnel equations [7]

F_r(\theta; \eta) = \frac{I_r}{I_i} = \frac{1}{2}\left( r_{\perp}^2 + r_{\parallel}^2 \right) \cong \frac{1}{2}\left[ \left( \frac{\sin(\theta_i - \theta_t)}{\sin(\theta_i + \theta_t)} \right)^{2} + \left( \frac{\tan(\theta_i - \theta_t)}{\tan(\theta_i + \theta_t)} \right)^{2} \right] . \quad (2-10)

Figure 2.10: Reflection and Refraction.
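A small helper illustrating equations (2-9) and (2-10): it applies Snell's law to find the refraction angle and averages the s- and p-polarized Fresnel terms. This is a generic textbook calculation offered for illustration (the refractive indices are assumed example values), not the dissertation's implementation.

```python
import numpy as np

def fresnel_reflectance(theta_i, n1=1.0, n2=1.5):
    """Unpolarized Fresnel reflectance of a dielectric interface,
    per equations (2-9) and (2-10): Snell's law gives theta_t, and the
    reflectance is the average of the s- and p-polarized terms.
    Assumes theta_i is below the critical angle (always true for n1 < n2).
    """
    sin_t = n1 / n2 * np.sin(theta_i)
    theta_t = np.arcsin(sin_t)                      # Snell's law, equation (2-9)
    if np.isclose(theta_i, 0.0):                    # normal-incidence limit
        return ((n1 - n2) / (n1 + n2)) ** 2
    rs = np.sin(theta_i - theta_t) / np.sin(theta_i + theta_t)
    rp = np.tan(theta_i - theta_t) / np.tan(theta_i + theta_t)
    return 0.5 * (rs ** 2 + rp ** 2)                # equation (2-10)

# Reflectance of an air/glass interface rises from about 4% at normal
# incidence toward 100% at grazing angles.
for deg in (0, 30, 60, 80):
    print(deg, round(float(fresnel_reflectance(np.radians(deg))), 4))
```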


In reality, object surfaces are rarely perfect mirrors. They almost always exhibit some degree of roughness. At the microscopic level, certain materials can be viewed as a collection of perfect mirrors [8], where light is only reflected and refracted. The roughness of a surface can be viewed as perturbations of the mirror normals, which direct incident light rays in all directions. If surface geometries can be described at the microscopic level, then the Fresnel equation is sufficient to completely compute surface reflection. Figure 2.12 shows some rendering results where a diffuser consists of a large number of microscopic glass bumps. The glass bumps are placed side by side to form a large array. As illustrated in Figure 2.11, the surface geometry is precisely described by a dense grid of millions of hemispherical glass bumps. Because of the surface curvature and index of refraction, light is reflected and refracted between the glass bumps multiple times, which creates a diffusive appearance. It is shown in Figure 2.12 that the rendering results match expectations. In the scene where all four images are produced, the strongest light source is placed at a distant point above the display. When viewing from the normal angle (Figure 2.12 (1), inclination angle θ = 0°), a portion of the incident light is scattered away from the normal direction by the diffuser, which makes the diffuser area darker than the rest of the display. On the other hand, when viewing from a side angle (Figure 2.12 (2), inclination angle θ = 45°), because some luminance is scattered into this angle by the diffuser, the diffuser area is brighter than the other regions. The diffuser also blurs the display content. Blurriness is more severe when the diffuser is distant from the display (Figure 2.12 (3)) compared with close to the display (Figure 2.12 (4)). A well designed display diffuser should distribute most of the light to the primary viewing cone, with minimal blurriness.


Figure 2.11: Simulated Diffuser Microscopic Geometry.

A 5 mm by 4 mm diffuser consisting of uniformly placed glass bumps.
(1) View from θ = 0°.
(2) View from θ = 45°; the shadow on the display is the reflection of the environmental lighting.
(3) With the diffuser placed 100 mm from the display surface.
(4) With the diffuser placed 100 µm from the display surface.

Figure 2.12: Diffuser Rendering Results with Microscopic Geometries.


It is generally impractical to build geometric models in such fine detail. Real diffuser surfaces do not have perfectly periodic patterns. Diffuser products have irregular surfaces as shown in Figure 2.13. Even if we are able to describe such complex geometry precisely, the computation cost will be extremely high. Therefore, simulation of complex microscopic geometries relies on reflectance models. As shown in the next section, BRDF is a macroscopic method of describing surfaces such as a diffuser without knowing the geometric details.

Figure 2.13: SEM Image of an Actual Diffuser Surface [9].

BRDF and Surface Reflection

The most widely used reflectance model is the bidirectional reflectance distribution function (BRDF), which is a four-dimensional function that describes light reflection at a surface. The BRDF is a function of incident and reflected angles:

\rho(\omega_i, \omega_o) = \frac{dL_o(\omega_o)}{dE(\omega_i)} , \quad (2-11)

where E is the irradiance, that is, the incident flux per unit area, and L_o is the reflected radiance, or the reflected flux per unit area per unit solid angle. The units of the BRDF are thus inverse steradians (sr⁻¹). Intuitively, the BRDF represents the amount of light that is scattered into each outgoing angle for each incident angle. In Figure 2.14, incident radiance L_i along direction ω_i hits the surface on the x-y plane and is then reflected towards direction ω_o, with outgoing radiance L_o. The differential irradiance at the reflection point is dE(ω_i) = L_i(ω_i) cos θ_i dω_i.


A portion of the differential radiance due to this irradiance will be reflected towards ω_o, which is dL_o(ω_o). The BRDF describes the proportional relationship between dL_o(ω_o) and dE(ω_i) for any pair of angles ω_i and ω_o.

Figure 2.14: BRDF.

Given the BRDF of a surface, the observation point, and the incident illumination at a point on the surface from each direction, the reflected radiance from that point to the viewer's direction can be calculated by integrating the contributions of incident light from all directions. This integration is represented by the reflectance term

\int_S f_r(x, \omega_o, \omega_i)\, L(x', \omega_i)\, G(x, x')\, V(x, x')\, d\omega_i \quad (2-12)

in the rendering equation.

Solid angles ω_i and ω_o can be defined by inclination and azimuth angles (θ_o, φ_o; θ_i, φ_i) with respect to the surface normal, which is the positive z axis in Figure 2.14. The term f_r(x, ω_o, ω_i) can be rewritten as f_r(x; θ_o, φ_o; θ_i, φ_i). Following simple geometry, we have dω_i = sin θ_i dθ_i dφ_i. The geometric relationship between x and x' is G(x, x') = cos θ_i. Assuming x and x' are visible to each other, we have V(x, x') = 1. The reflectance term can be rewritten as

B(\theta_o, \phi_o) = \int_{\phi_i=0}^{2\pi} \int_{\theta_i=0}^{\pi/2} L(\theta_i, \phi_i)\, f_r(x; \theta_o, \phi_o; \theta_i, \phi_i)\, \cos\theta_i \sin\theta_i\, d\theta_i\, d\phi_i \quad (2-13)


with respect to angles (θo ,φo ;θi ,φi ) , which is more convenient when implementing the integration in software. With a physically based BRDF, complex appearance can be rendered without describing object surfaces in microscopic detail. One simply requires the object’s macroscopic geometry and its BRDF to calculate the reflectance of every point. For example, in Figure 2.15, the diffuser’s geometry is extremely simple; it is just a perfectly flat plane. Its diffusive appearance is produced by a BRDF that describes the scattering relationship between incident rays and reflected rays. Accurate simulation relies on correct modeling of the reflectance properties described by the BRDF.

Figure 2.15: Diffuser Rendering with BRDF.
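For reference, the hemisphere integral of equation (2-13) can be evaluated numerically with a simple midpoint rule. The sketch below is a hypothetical illustration I added (not the simulator's implementation): it assumes a constant Lambertian BRDF ρ/π under uniform incident radiance so that the result can be checked against the closed-form answer, in which the reflected radiance equals ρ.

```python
import numpy as np

def reflected_radiance(L_i, f_r, n_theta=90, n_phi=180):
    """Numerically integrate equation (2-13) with a midpoint rule:
    B = integral over the upper hemisphere of
        L_i(theta_i, phi_i) * f_r(theta_i, phi_i) * cos(theta_i) * sin(theta_i)
    with respect to theta_i and phi_i. L_i and f_r are callables of
    (theta_i, phi_i); the outgoing direction is fixed and folded into f_r.
    """
    theta = (np.arange(n_theta) + 0.5) * (np.pi / 2) / n_theta
    phi = (np.arange(n_phi) + 0.5) * (2 * np.pi) / n_phi
    T, P = np.meshgrid(theta, phi, indexing="ij")
    integrand = L_i(T, P) * f_r(T, P) * np.cos(T) * np.sin(T)
    return integrand.sum() * (np.pi / 2 / n_theta) * (2 * np.pi / n_phi)

# Sanity check: a Lambertian BRDF rho/pi under uniform incident radiance
# L_i = 1 should give a reflected radiance equal to rho (here 0.8).
rho = 0.8
B = reflected_radiance(lambda t, p: np.ones_like(t),
                       lambda t, p: np.full_like(t, rho / np.pi))
print(round(float(B), 4))
```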

The BRDF is a fundamental radiometric concept, and accordingly it is widely used in computer graphics for photorealistic rendering of synthetic scenes. BRDFs may either be obtained from theoretical models of reflection at a surface or may be measured directly. Physically based BRDFs have two important qualities: reciprocity and conservation of energy. Reciprocity means that for any pair of ω_i and ω_o, ρ(ω_i, ω_o) = ρ(ω_o, ω_i). Conservation of energy means that the total energy of reflected light is less than or equal to the energy of incident light, which requires, for all directions ω_o,


\int_{\phi_i=0}^{2\pi} \int_{\theta_i=0}^{\pi/2} f_r(x; \theta_o, \phi_o; \theta_i, \phi_i)\, \cos\theta_i \sin\theta_i\, d\theta_i\, d\phi_i \leq 1 . \quad (2-14)

The BRDF model assumes that light enters and exits the surface at the same point. A more complete reflectance model comes from the BSSRDF (bidirectional surface scattering reflectance distribution function), which also accounts for subsurface scattering.

BSSRDF and Subsurface Scattering

The BSSRDF [10] is more general than the BRDF. The four-dimensional BRDF was formalized as a specific case of the eight-dimensional BSSRDF, restricted to opaque materials. The BSSRDF relates the outgoing radiance at a surface point to the incident light at all points on the surface:

\rho(x_i, \omega_i; x_o, \omega_o) = \frac{dL_o(x_o, \omega_o)}{dE(x_i, \omega_i)} . \quad (2-15)

Figure 2.16: BRDF vs. BSSRDF.

To account for subsurface light transport in calculating the outgoing radiance at point x_o along direction ω_o, the integration must be done over all incident directions plus the surface area, which adds extra integral dimensions compared with the integration for the BRDF. As a result, the computational cost increases exponentially.

L_o(x_o, \omega_o) = \int_A \int_{S^2} L(x_i, \omega_i)\, \rho(x_i, \omega_i; x_o, \omega_o)\, \cos\theta_i\, d\omega_i\, dA_i . \quad (2-16)


Light propagation in a participating medium can be described by the radiative transport equation [11]:

(\vec{\omega} \cdot \nabla) L(x, \vec{\omega}) = -\sigma_t L(x, \vec{\omega}) + \sigma_s \int_{4\pi} p(\vec{\omega}, \vec{\omega}')\, L(x, \vec{\omega}')\, d\vec{\omega}' + Q(x, \vec{\omega}) . \quad (2-17)

In this equation, the properties of the medium are described by the absorption coefficient σ_a, the scattering coefficient σ_s, and the phase function p(ω, ω'). The extinction coefficient σ_t is defined as σ_t = σ_a + σ_s. Q(x, ω) is the light distribution inside the object due to incident light, which is reduced exponentially depending on the material properties. L(x, ω) is the light distribution inside the object due to scattering. The phase function p(ω, ω') describes the amount of light scattered from one direction ω' into another direction ω. A commonly used phase function in computer graphics is the Henyey-Greenstein phase function [12]:

P_{HG}(\vec{\omega}, \vec{\omega}') = \frac{1}{4\pi} \frac{1 - g^2}{\left(1 + g^2 - 2g\cos\theta\right)^{3/2}} . \quad (2-18)

It has a single parameter g, the mean cosine of the scattering angle, given as

g = \int_{4\pi} p(\vec{\omega}, \vec{\omega}')\, (\vec{\omega} \cdot \vec{\omega}')\, d\vec{\omega}' = 2\pi \int_0^{\pi} p(\cos\theta)\, \cos\theta\, \sin\theta\, d\theta . \quad (2-19)

If g is positive, the phase function describes predominantly forward scattering; if g is negative, backward scattering dominates. A constant phase function results in isotropic scattering (g = 0). The phase function is normalized, such that \int_{4\pi} p(\vec{\omega}, \vec{\omega}')\, d\vec{\omega}' = 1. The normalization indicates that the phase function defines the probability density function of light scattering into different directions. Figure 2.17 shows some typical shapes of a phase function.


Figure 2.17: Typical Shapes of Phase Functions.
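A short numerical check of the Henyey-Greenstein phase function in equation (2-18) (an illustration I added, not the dissertation's code): it evaluates the function for several values of g and verifies the normalization property stated above by integrating over the sphere of directions.

```python
import numpy as np

def hg_phase(cos_theta, g):
    """Henyey-Greenstein phase function, equation (2-18):
    p = (1/(4*pi)) * (1 - g^2) / (1 + g^2 - 2*g*cos_theta)^(3/2).
    g > 0 gives predominantly forward scattering, g < 0 backward, g = 0 isotropic.
    """
    return (1.0 - g * g) / (4.0 * np.pi * (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5)

# Normalization check: integrating over the full sphere of directions should
# give 1 for any g, confirming that p is a probability density over directions.
edges = np.linspace(-1.0, 1.0, 100001)        # grid in mu = cos(theta)
mu = 0.5 * (edges[:-1] + edges[1:])           # midpoints of each cell
d_mu = edges[1] - edges[0]
for g in (-0.5, 0.0, 0.7):
    integral = 2.0 * np.pi * np.sum(hg_phase(mu, g)) * d_mu
    print(f"g = {g:+.1f}: integral over sphere = {integral:.4f}")
```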

The radiative transfer equation describes light transport inside a medium. Based on the number of scattering events occurring along a light ray, subsurface scattering can be separated into single scattering and multiple scattering. In single scattering, a ray is only reflected one time before it exits the object surface. In this case, the outgoing radiance can be computed with an integration over its path. In multiple scattering, a ray is scattered many times inside the object. Its path is similar to a random walk of particles, which is difficult to track. These behaviors make it challenging to solve the radiative transfer equation analytically. We can use Monte Carlo methods similar to those for surface reflection to solve this equation [13]. However, the high computational expense prevents such methods from being widely used in practice. Researchers discovered that in optically thick materials [14], photons encounter thousands of scattering events before they exit the medium. In these cases, the light distribution tends to be isotropic even if the phase function is strongly forward or backward. However, even given the isotropic assumption, the radiative transfer equation is still a five-dimensional integro-differential equation, which is difficult to solve in most cases. One approach is to expand the radiance L(x, ω) and Q(x, ω) into a truncated series of spherical harmonics and apply the diffusion approximation [14, 15]:


\nabla \cdot \left( \frac{\nabla L^0(x)}{\sigma_s (1 - g/2) + \sigma_a} \right) - \sigma_a L^0(x) + S(x) = 0 , \quad (2-20)

where S(x) is the spherical harmonics expansion of Q(x, ω), and

S(x) = \sigma_t Q^0(x) - \frac{\sigma_s}{\sigma_s (1 - g/2) + \sigma_a} \nabla Q^1(x) . \quad (2-21)

A homogeneous diffuse BSSRDF model was proposed by Jensen et al. [16]. The model formulates the BSSRDF as the sum of single scattering and a dipole-source diffusion approximation for multiple scattering. In this model, the contribution of an incident ray entering at point x_i is approximated by a pair of point sources placed above and beneath the medium boundary. The fluence leaving the medium at point x_o is given by summing the contributions of these two point sources.

Figure 2.18: Dipole Approximation and Rendering Result [16].

An analytic form of the BSSRDF is given with respect to a single variable ∆x, which is the distance between the incident point x_i and the outgoing point x_o. Details of the derivation can be found in [16].

\rho(x_i, \omega_i; x_o, \omega_o) = \rho(\Delta x) = \rho(\lVert x_i - x_o \rVert) = \frac{\sigma_s'}{4\pi \sigma_t'} \left[ z_r \left( \sigma_{tr} + \frac{1}{d_r} \right) \frac{e^{-\sigma_{tr} d_r}}{d_r^2} + z_v \left( \sigma_{tr} + \frac{1}{d_v} \right) \frac{e^{-\sigma_{tr} d_v}}{d_v^2} \right] , \quad (2-22)

where

\sigma_{tr} = \sqrt{3 \sigma_a \sigma_t'} , \quad (2-23)

\sigma_s' = (1 - g)\, \sigma_s , \quad (2-24)


\sigma_t' = (1 - g)\, \sigma_s + \sigma_a , \quad (2-25)

d_r = \sqrt{z_r^2 + r^2}, \qquad d_v = \sqrt{z_v^2 + r^2} , \quad (2-26)

z_r = 1/\sigma_t', \qquad z_v = (3 + 4A)/(3\sigma_t') , \quad (2-27)

A = \frac{1 + F_{dr}}{1 - F_{dr}} , \quad (2-28)

and

F_{dr} = \frac{-1.440}{\eta^2} + \frac{0.710}{\eta} + 0.668 + 0.0636\, \eta . \quad (2-29)
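The dipole expression in equations (2-22) through (2-29) is straightforward to evaluate once the material coefficients are known. The sketch below is a hypothetical evaluation with made-up absorption and scattering coefficients (the real values are material-dependent measurements, as noted next), intended only to show how the pieces fit together.

```python
import numpy as np

def dipole_rho(r, sigma_a, sigma_s, g=0.0, eta=1.3):
    """Dipole diffusion approximation, equations (2-22) through (2-29).
    `r` is the distance between the entry and exit points; the coefficients
    must be in consistent units (e.g., 1/mm). All numeric inputs used below
    are illustrative placeholders, not measured values.
    """
    sigma_s_p = (1.0 - g) * sigma_s                                  # (2-24)
    sigma_t_p = (1.0 - g) * sigma_s + sigma_a                        # (2-25)
    sigma_tr = np.sqrt(3.0 * sigma_a * sigma_t_p)                    # (2-23)
    F_dr = -1.440 / eta**2 + 0.710 / eta + 0.668 + 0.0636 * eta      # (2-29)
    A = (1.0 + F_dr) / (1.0 - F_dr)                                  # (2-28)
    z_r = 1.0 / sigma_t_p                                            # (2-27)
    z_v = (3.0 + 4.0 * A) / (3.0 * sigma_t_p)
    d_r = np.sqrt(z_r**2 + r**2)                                     # (2-26)
    d_v = np.sqrt(z_v**2 + r**2)
    term_r = z_r * (sigma_tr + 1.0 / d_r) * np.exp(-sigma_tr * d_r) / d_r**2
    term_v = z_v * (sigma_tr + 1.0 / d_v) * np.exp(-sigma_tr * d_v) / d_v**2
    return sigma_s_p / (4.0 * np.pi * sigma_t_p) * (term_r + term_v)  # (2-22)

# The diffuse response falls off rapidly with distance from the entry point.
for r in (0.1, 0.5, 1.0, 2.0):
    print(r, float(dipole_rho(r, sigma_a=0.03, sigma_s=2.0)))
```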

The parameters in this BSSRDF are material-dependent. Their values are measured for several materials and given in [16]. Using this model, an incident ray’s contribution to its vicinity can be computed based only on the distance ∆x, which significantly reduces the rendering cost. Jensen and Buhler [17] implemented a hierarchical integration approach to improve rendering speed with two passes, where the first pass samples the irradiance and the second pass evaluates the diffusion approximation. Arbree et al. [18] proposed a one-pass method based on light cuts, which generates irradiance samples on demand to further improve efficiency. The dipole model is limited to homogeneous semi-infinite slabs. Donner and Jensen [19] generalized the approach to multipoles to accurately model finite scattering slabs. The assumption of homogeneity and high albedo in the multipole model restricts its usage. Li et al. [20] proposed a method to handle the first few bounces with Monte Carlo path tracing and to use a dipole approximation after that. Haber [21] proposed a method to solve the diffusion approximation using finite element and finite difference analysis. More materials are supported by Tong et al. [22] and Wang et al. [23], where the diffusion equation is solved on a grid of samples. Although the dipole model is very efficient in rendering translucent materials and has been expanded to support a wider variety of media, its effectiveness has only been shown in optically thick materials. A diffuser is not optically as thick as skin or marble. A


diffuser’s translucency is more on the transparent side than the opaque side. Research done on thin scattering media includes the dirty glass model [24] and the dual microfacet model [25], where diffusive appearance is described by layers of scattering materials using BRDFs.

Summary

The general background of light transport simulation is introduced in this chapter. The geometric optics model is chosen for reflective display simulation. The assumptions made with the geometric optics model are discussed. Although geometric optics is certainly not the most complete model, it is adequate to simulate the optical phenomena of interest in this research. The interference effect usually is not included in geometric optics. However, because an important category of reflective displays is based on interference, interference and interferometric displays are introduced. The goal of light transport simulation is to solve the rendering equation. The rendering equation is a high-dimensional equation with irregular components. While other numerical methods fail, Monte Carlo integration is capable of solving the rendering equation. The radiometry and principles of Monte Carlo ray tracing are described. The reflectance model plays a critical role in the rendering equation. The appearance of a reflective display depends on its reflectance and the ambient lighting. Various reflectance models, including microscopic geometry, BRDF, and BSSRDF models, are presented.


CHAPTER III

MODELING REFLECTIVE DISPLAYS

Introduction

A reflective display can be modeled as a layered material, with a front of screen (FOS) layer on the top, a diffusive layer (diffuser) underneath the FOS, a transparent layer (glass) in the middle, and a wavelength-dependent reflective layer (pixel array) at the bottom.

Figure 3.1: Reflective Display Layers.

The FOS layer consists of performance-enhancing coatings (e.g., an anti-glare coating) and a touch sensor array. This layer introduces a small amount of absorption and front surface reflection (FSR). The FSR has little angular dependence within the designed viewing cone. Therefore, a pair of SPD functions is suitable to describe the absorption and reflectance properties of the FOS. Scattering is also negligible in the FOS, so no bidirectional reflectance distribution function (BRDF) model is required. For a light ray that passes through the FOS layer, a wavelength-dependent scaling operation is sufficient to describe the light transport occurring in this layer. This method avoids the computationally expensive scattering calculation and preserves accuracy.

The diffuser is the major scattering medium for light transport in a reflective display. Because the diffuser is generally very thin (<5 µm), subsurface scattering can be ignored. The most widely used appearance model, the BRDF, can effectively describe the characteristics of a diffuser. If there were substantial subsurface scattering, a bidirectional

scattering-surface reflectance distribution function (BSSRDF) would have been required. As a BSSRDF increases the path integral dimension by two, avoiding a BSSRDF eliminates the significantly higher computational cost. Although analytical BRDF models exist, a measured BRDF is always preferred for accurate simulation results. In this project, an imaging-based integration sphere is used to obtain measured BRDFs for diffusers, and an efficient method is developed to expand the measured BRDF to the full visible spectrum to address the wavelength dependency of a diffuser.

The pixel array is a sophisticated component in a reflective display. One display pixel has a number of sub-pixels, each producing a different color. Pixel design includes, but is not limited to, material selection, horizontal structure placement, and vertical layer arrangement. Existing pixel design software is used to calculate the angular-dependent SPD of pixel reflectance over a wide range of angles. The outputs from the pixel design are seamlessly integrated with the ray tracing engine, which allows users to simulate the results of a new pixel design in real-world lighting environments without switching between software tools.
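The wavelength-dependent scaling described above for the FOS layer amounts to a per-wavelength multiply with no change of ray direction. The following sketch is a hypothetical illustration of that bookkeeping; the transmittance values are invented placeholders, not measured FOS data.

```python
import numpy as np

# Hypothetical FOS spectral transmittance sampled at a few wavelengths (nm).
wavelengths = np.array([450.0, 550.0, 650.0])
fos_transmittance = np.array([0.94, 0.95, 0.93])   # made-up values for illustration

def apply_fos(spectral_weight, passes=2):
    """Scale a ray's spectral throughput by the FOS transmittance.

    Because scattering in the FOS is negligible, the layer is modeled as a
    wavelength-dependent scale factor applied once per traversal; a ray
    reflected by the pixel array crosses the FOS twice.
    """
    return spectral_weight * fos_transmittance ** passes

ray_weight = np.ones_like(wavelengths)     # unit throughput at each wavelength
print(apply_fos(ray_weight))               # throughput after entering and exiting
```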

Related Work

Based on observations of material properties and theoretical assumptions, various appearance models have been developed. Early BRDF models include the Blinn-Phong model [26], the Cook-Torrance model [27], the Ward model [28], and the Lafortune analytic BRDF model [29]. These models and simulations provide analytic methods to reproduce the appearance of natural phenomena in computer graphics. Earlier models have simple representations and good generality, which enable their wide usage on various materials. However, they have limited applicability to complex materials when a high degree of realism is required, because the broadly defined parameters in such models may not be appropriate to describe the subtleties of a particular material. For layered materials, simple BRDF models are not sufficient. Hanrahan and Krueger [30] first systematically introduced light transport theory into computer graphics for modeling the reflection of layered surfaces based on radiative transfer theories [11, 15]. Their model is dominated by light scattering inside the object and requires expensive Monte Carlo simulation to evaluate. Simplified models have since been proposed based on a parameterized representation [31] that are limited to multiple scattering in thin-layer materials [16, 19].


Koenderink and Pont [32] extended the model to describe the velvety appearance of materials such as hairy skin and peaches. Ershov et al. [33] developed a recursive model of multiple layers to render car paints interactively. A more recent model was developed by Rump et al. [34] to represent metallic car paints, in which the metallic appearance of car paint is separated into two parts: a homogeneous BRDF part and a spatially varying Bidirectional Texture Function (BTF) part. Gu et al. developed a model for a contamination layer on smooth transparent surfaces in [24]. Dai et al. [25] proposed a spatially varying BTDF model for materials formed by a thin, transparent slab lying between two surfaces of spatially varying roughness. One common approach to computing the parameters of an analytic model is to fit the measured BRDF samples to the model using optimization techniques. However, although analytical BRDF models generally have a theoretical basis in physics, it is not necessarily convenient to fit physically based parameters to measured BRDF data. An alternative is to use empirical models. Ozturk et al. [35] used polynomials to construct an analytical model that includes both diffuse and glossy BRDFs. Their model uses PCA (principal component analysis) transformed variables in the polynomial for a compact representation of BRDFs. An empirical BSSRDF was developed in [36] to handle both single scattering and multiple scattering using six intuitive parameters. The other alternative, data-driven models, has received considerable attention for its appealing realism. These models use the measured data directly in the rendering process. Measured material properties include reflectance [37, 38], color textures [39], and scattering parameters [40]. The main limitation of data-driven models is the large amount of storage required for a material. The commonly used tabulated representation for such models typically requires tens of gigabytes for one scene. Moreover, given their lack of physical basis, data-driven models are not suitable for predictive modeling. A practical model for calculating the light reflected from a reflective display is developed in [41], as shown in Equation (3-1). This equation models the ambient light on a reflective display as a combination of auxiliary lighting, diffuse illumination, and directional illumination.


\[
L^{Q}_{amb}(\lambda,\theta_d,\theta_s) = L^{Q}(\lambda,\theta_d)
+ \frac{R^{Q}_{hemi/si}(\lambda,\theta_d)\,E_{hemi}(\lambda)}{\pi}
+ \frac{R^{Q}_{dir}(\lambda,\theta_d,\theta_s)\,E_{dir}(\lambda)\cos\theta_s}{\pi},
\tag{3-1}
\]
where

$L^{Q}_{amb}(\lambda,\theta_d,\theta_s)$ is the total spectral radiance perceived at the display surface in the ambient lighting environment,

$L^{Q}(\lambda,\theta_d)$ is the darkroom spectral radiance at viewing angle $\theta_d$ (this term is zero for a reflective display with no auxiliary lighting),

$R^{Q}_{hemi/si}(\lambda,\theta_d)$ is the diffuse spectral reflectance factor at $\theta_d$ with the specular component included,

$E_{hemi}(\lambda)/\pi$ is the diffuse illumination,

$R^{Q}_{dir}(\lambda,\theta_d,\theta_s)$ is the directional spectral reflectance factor with light source angle $\theta_s$ and viewing angle $\theta_d$, and

$E_{dir}(\lambda)\cos\theta_s/\pi$ is the directional illumination.

In this equation, isotropy is assumed, which means that there is no azimuthal dependence in any of these quantities. Q represents the state of the display (white, black, red, green, blue, etc.). Note that additional directional terms could be added to account for additional directed light sources if desired. The spectral radiance must be integrated over wavelength with the standard color matching functions to recover luminance and chromaticity. A simulation system is built to solve this equation in this dissertation.

Front of Screen

In Equation (3-1), R describes the overall reflectance property of a reflective display. The overall reflectance property is a function of the three layers illustrated in Figure 3.2. For simplicity, let us consider the pure diffuse lighting case, where only the $R^{Q}_{hemi/si}(\lambda,\theta_d)E_{hemi}(\lambda)/\pi$ term is evaluated. Because the FOS layer on the top has minimal angular dependence in the viewing cone, it can be described as a function of wavelength only.


Figure 3.2: Front of Screen (FOS) Model.

Figure 3.2 shows the optical model of the FOS layer. An FOS layer can consist of multiple components; each can have its own reflectance and absorption properties. For a three-layer FOS, the overall transfer functions are shown below.

\[
T_{Total,\lambda} \cong (1 - R_{1,\lambda} - A_{1,\lambda})(1 - R_{2,\lambda} - A_{2,\lambda})(1 - R_{3,\lambda} - A_{3,\lambda}),
\tag{3-2}
\]

\[
R_{Total,\lambda} \cong R_{1,\lambda} + (1 - R_{1,\lambda} - A_{1,\lambda})^{2} R_{2,\lambda}
+ (1 - R_{1,\lambda} - A_{1,\lambda})^{2} (1 - R_{2,\lambda} - A_{2,\lambda})^{2} R_{3,\lambda},
\tag{3-3}
\]

\[
1 = R_{Total,\lambda} + T_{Total,\lambda} + A_{Total,\lambda}.
\tag{3-4}
\]

With these equations, the overall $R_{Total,\lambda}$ and $A_{Total,\lambda}$ values can be calculated.
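As a concrete illustration of Equations (3-2) through (3-4), the minimal sketch below evaluates the overall transmission, reflectance, and absorption of a three-layer FOS at a single wavelength. The per-layer values are placeholder numbers, not measured data.

#include <array>
#include <cstdio>

// Sketch of Equations (3-2)-(3-4) for a three-layer FOS at one wavelength.
struct FosTotals { double R, T, A; };

FosTotals fosTotals(const std::array<double, 3>& R, const std::array<double, 3>& A) {
    // Equation (3-2): transmission is the product of the per-layer pass factors.
    double T = (1 - R[0] - A[0]) * (1 - R[1] - A[1]) * (1 - R[2] - A[2]);
    // Equation (3-3): each deeper reflection is attenuated twice by the layers above it.
    double Rtot = R[0]
                + (1 - R[0] - A[0]) * (1 - R[0] - A[0]) * R[1]
                + (1 - R[0] - A[0]) * (1 - R[0] - A[0])
                * (1 - R[1] - A[1]) * (1 - R[1] - A[1]) * R[2];
    // Equation (3-4): whatever is neither reflected nor transmitted is absorbed.
    double Atot = 1 - Rtot - T;
    return {Rtot, T, Atot};
}

int main() {
    FosTotals t = fosTotals({0.01, 0.005, 0.02}, {0.02, 0.01, 0.03});   // placeholder layer values
    std::printf("R = %.4f  T = %.4f  A = %.4f\n", t.R, t.T, t.A);
    return 0;
}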

$R^{Q}_{hemi/si}(\lambda,\theta_d)$ is expanded as
\[
R^{Q}_{hemi/si}(\lambda,\theta_d) = R_{total,\lambda} + (1 - A_{total,\lambda} - R_{total,\lambda})^{2}\, R_{diffuser+pixel},
\]
where the combined reflectance property of the diffuser and pixel is represented by the $R_{diffuser+pixel}$ term. Because a light ray passes through the FOS layer when it enters the display and when it reflects back to the viewer, the attenuation terms must be applied twice, which results in the squared term $(1 - A_{total,\lambda} - R_{total,\lambda})^{2}$. The reflectance of the FOS will also reflect the incoming light directly to the viewer, represented by $R_{total,\lambda}$. With this expansion,
\[
\frac{R^{Q}_{hemi/si}(\lambda,\theta_d)\,E_{hemi}(\lambda)}{\pi}
= \left( R_{total,\lambda} + (1 - A_{total,\lambda} - R_{total,\lambda})^{2}\, R_{diffuser+pixel} \right)\frac{E_{hemi}(\lambda)}{\pi}
= R_{total,\lambda}\,\frac{E_{hemi}(\lambda)}{\pi}
+ (1 - A_{total,\lambda} - R_{total,\lambda})^{2}\, R_{diffuser+pixel}\,\frac{E_{hemi}(\lambda)}{\pi}.
\tag{3-5}
\]


In this summation, the first term $R_{total,\lambda}\,E_{hemi}(\lambda)/\pi$ has no angular dependence, so no path integral is needed for its computation.

Modeling and Measuring a Diffuser

In the second term of Equation (3-5), the factor $R_{diffuser+pixel}\,E_{hemi}(\lambda)/\pi$ accounts for the angle mixing caused by scattering events in a diffuser. Monte Carlo ray tracing is a powerful tool for solving this type of problem. The light transport equation connects the reflectance function (BRDF/BTDF) with the lighting to compute the collected radiance at a particular outgoing angle. Let L denote radiance. At each position and direction, the amount of light that reaches the observer is the sum of the emitted light and the reflected light. The reflected light is the sum of the incoming light from all directions, multiplied by the surface reflection and the cosine of the incident angle:

\[
L(x,\omega_o) = L_e(x,\omega_o) + \int_{S} f_r(x,\omega_o,\omega_i)\, L(x',\omega_i)\, G(x,x')\, V(x,x')\, d\omega_i.
\tag{3-6}
\]

The term $L_e(x,\omega_o)$ is zero for a reflective display with no auxiliary lighting. Assuming no obstruction between the display and the viewer, $V(x,x')$ is always one. The geometric term $G(x,x')$ can be substituted with $\cos\theta_i$. Therefore, Equation (3-6) can be rewritten as
\[
L(x,\omega_o) = \int_{S} \mathrm{BRDF}(x,\omega_o,\omega_i)\, L(x,\omega_i)\cos\theta_i\, d\omega_i.
\tag{3-7}
\]

Notice that the incoming light $L(x,\omega_i)$ in this integral is also radiance, which allows iterative algorithms to be developed for solving this equation. In a reflective display, a light ray first passes through the diffuser, where the diffuser BTDF should be applied; then hits the pixel and is reflected, where the display BRDF should be applied; and finally passes through the diffuser again, where the diffuser BTDF should again be applied. In this process, three scattering events occur, and a path integral is computed for each scattering event. Figure 3.3 shows the backward ray tracing process, in which rays originate from the observer and are incrementally propagated into the scene. Because of the reversibility of light rays, backward ray tracing calculates exactly the same radiance as if the rays had originated from the light sources. As shown in Figure 3.3, the radiance arriving at the viewer, denoted as L1, is computed by a path integral over L2, L3, and L4. L2, L3, and L4 are just three possible sample rays in the hemisphere; one could integrate more samples for better accuracy. L2 is computed by another path integral over the hemisphere from the paths colored in purple. Note that an interferometric modulator (IMOD) pixel reflects light like a perfect mirror; therefore the path integral over the reflection from a pixel reduces to specular reflection only. In cases where the pixel itself is also a scattering surface, the same Monte Carlo algorithm can still handle it. Similarly, L5 is computed by a path integral over L6, L7, and L8. It is worth mentioning that L9, the front surface reflection from the FOS, must be included to calculate L1.

Figure 3.3: Path Integral for Light Transport Simulation.

Because of the scattering effect of a diffuser, its reflectance cannot be described simply as a multiplier. To include the angular dependency of a diffuser, a BTDF is used. According to the amount and type of scattering introduced during reflection, three basic types of reflection can be identified, as shown in Figure 3.4. If no scattering occurs, as with mirror surfaces, the reflection is called specular (see Figure 3.4(a)). If the incident light beam is uniformly scattered over all angles, as illustrated in Figure 3.4(b), the diffuse reflection is called Lambertian. This kind of ideal scattering is typified by photocopier paper, where fibers of the pulp scatter incident light with the same luminance in all directions. The third type of reflection concentrates the maximum reflected intensity in the specular direction, with intensity decreasing as the outgoing direction moves away from the specular beam. This type of reflection, shown in Figure 3.4(c), is known in appearance-characterization terminology as haze. A diffuser used in reflective displays is a textbook example of a haze surface.

Figure 3.4: Typical Beam Geometry and Angular Characteristics of Different Types of Reflections [42].

The PBRT ray tracer [6] allows both measured BRDFs and analytical BRDF functions to be used as the appearance model of a surface. Measured BRDFs often yield more accurate simulation results. In reflective display design, it is desirable to evaluate a number of readily available diffusers and to select the one that provides the best performance. For example, making a selection among different haze values is important. A good appearance acquisition method is required to obtain the BRDFs of diffusers with different haze values. BRDF measurement is a challenging task. Factors to consider for BRDF measurement include: 1) broad angular coverage; 2) high-resolution coverage of the visible spectrum; 3) rapid operation; and 4) cost.


Generally, there are two types of appearance acquisition systems: spectro-radiometer based and image based systems. The spectro-radiometer offers the advantage of high-resolution spectral coverage, at the cost of a much longer measurement time, because the detector must be moved to measure different outgoing angles. On the other hand, image based systems can capture the entire output hemisphere in a single image without moving the detector, which significantly reduces measurement time at the cost of spectral resolution. The Cornell three-axis gonioreflectometer [43] uses a spectro-radiometer as the detector. This system can acquire 31 spectral samples at approximately 10 nm increments over the visible spectrum, per camera/source position. Capturing 1000 angular samples, which is considered a very sparse covering of the hemispherical angular domain, takes approximately 10 hours. An early example of an image based system is Ward's measurement system [28], in which the radiance emitted by a planar sample is reflected from a half-silvered hemisphere and captured by a camera with a fish-eye lens. Thus, almost the entire output hemisphere is captured in a single image. The two degrees of freedom in the incident direction are controlled by a rotation of the source arm and a rotation of the planar sample. This system claims measurement of a 4D anisotropic BRDF in ten minutes.

Figure 3.5: Ward’s Image Based Goniometer [28].


Experiments using both types of systems were performed in this research. A spectro-radiometer based system was built, which uses a pair of turning mirrors and a rotating arm to control the incident and outgoing angles for the test target.


Figure 3.6: Spectro-radiometer Based System.

The main issues found with this system include:
1) Alignment takes a long time, and expert skills are required to obtain a reasonable number of angular samples.
2) Measurement also takes a long time: a few minutes per angular sample, and the time increases as the signal weakens away from the center.
3) The front optics quality is poor.
4) The power source is unregulated and has no collimation.
5) Operator measurement errors are difficult to manage.
As a result, this set-up is not optimal for rapid component characterization. The aforementioned issues pushed us to adopt an image based solution provided by Radiant Imaging [44]. The IS-SA™ (Imaging Sphere for Scatter and Appearance measurement) system is shown in Figure 3.7. It uses a probe beam to illuminate the material at various incident angles. A convex mirror is placed to one side of the device under test (DUT) and acts as a fisheye lens to reflect light from the entire hemisphere to the detector. The imaging photometer captures an image of the entire inner surface of the sphere (2π steradians) in an instant. This tool can be used to analyze scattered light in both transmission and reflection from a surface, and it offers high speed and angular resolution for BRDF/BTDF measurement, capturing 26,000 angular samples in 4 minutes. The IS-SA™ also eliminates the complex calibration procedure of Ward's system. In Ward's system, the map from pixels in the camera to output directions in the coordinate system of the sample must be known. The outgoing solid angle effectively subtended by each pixel must also be accurately measured for precise results. In addition to this geometric calibration information, radiometric information, including the optical fall-off in the lens system and the radiometric camera response, is required [45]. In the IS-SA™, non-uniformities in any of the functional elements, including diffuser coating variations and spatially dependent response in the optics, filters, and CCD detector, can be calibrated out of the system to produce accurate BRDF measurements using a semi-automated procedure. This type of calibration is accomplished by measuring a sample with a known BRDF, such as Spectralon, which has a nearly perfect Lambertian scattering profile. A series of images is captured to generate a BRDF correction map for BRDF calibration. Removing the complex calibration process significantly improves the accuracy and repeatability of the measurements.

Figure 3.7: IS-SA™, an Image Based BRDF/BTDF Measurement System[44].


The IS-SA™ measures the BTDF of a diffuser. For a non-Lambertian diffuser, the scattering profile has a peak at the specular angle, falls off gradually within the viewing cone, and becomes nearly zero at higher angles. Figure 3.8 shows the measured BTDF of an H78 diffuser. The incident angle of the incoming light ranges from 0° to 75° in 5° increments. For each incident angle, the scattered light is measured at sample points covering the entire hemisphere.

(Plot: BTDF (1/sr) vs. scattering angle (°), with one curve per external (internal) incident angle from 0° (0°) to 75° (40.1°).)

Figure 3.8: Measured BTDF of a Haze 78 Diffuser.

To reduce the amount of reported data, Radiant Imaging chooses ScatterAzimuth and ScatterRadial angles [46]. The ScatterAzimuth and ScatterRadial angles are defined relative to the angle of specular reflection. ScatterAzimuth ranges from 0° to 180° at 10 degree increments, and ScatterRadial ranges from 0° to 180° at 2 degree increments. ScatterAzimuth and ScatterRadial angles are transformed to spherical coordinates used in PBRT.
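The exact transform depends on the instrument's angle conventions; the sketch below is a hypothetical illustration that assumes ScatterRadial is the polar angle measured away from the specular direction and ScatterAzimuth is the rotation about it, with the surface normal along +z and the incident plane in the x-z plane.

#include <cmath>

struct Vec3 { double x, y, z; };

// Hypothetical conversion from the reported scatter angles to a world-space
// direction; all angles are in radians.
Vec3 scatterToDirection(double incident, double scatterRadial, double scatterAzimuth) {
    Vec3 spec = {std::sin(incident), 0.0, std::cos(incident)};   // specular direction
    Vec3 t = {spec.z, 0.0, -spec.x};                             // tangent in the incident plane
    Vec3 b = {0.0, 1.0, 0.0};                                    // tangent out of the incident plane
    double sr = std::sin(scatterRadial), cr = std::cos(scatterRadial);
    double ca = std::cos(scatterAzimuth), sa = std::sin(scatterAzimuth);
    // Spherical-to-Cartesian in the (t, b, spec) frame, expressed in world space.
    return {cr * spec.x + sr * (ca * t.x + sa * b.x),
            cr * spec.y + sr * (ca * t.y + sa * b.y),
            cr * spec.z + sr * (ca * t.z + sa * b.z)};
}

// The PBRT-style spherical angles of the resulting direction are then
// theta = acos(dir.z) and phi = atan2(dir.y, dir.x).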


Figure 3.9: Definition of Angle of Incidence, ScatterRadial, and ScatterAzimuth in IS-SA [46].

Using the IS-SA™, the diffuser BTDF can be measured much faster than with a spectro-radiometer based system. One parameter traded for this gain in efficiency is spectral resolution. With the IS-SA™, only three spectral (XYZ) measurements are obtained for each angular configuration. These measurements are weighted averages over large, overlapping intervals of the visible spectrum. This limitation reduces the ability to predict the appearance of a reflective display under a source with a different spectral distribution. Dense spectral information is required to obtain physically accurate spectral reproduction. As shown in Figure 3.10, the diffusers of interest have a significant drop in transmission between 380 nm and 420 nm. Failure to account for this spectral dependence will produce inaccurate luminance and chromaticity results.


(Plot: relative transmission (%) vs. wavelength (nm), 350 nm to 750 nm, for Haze 65, Haze 78, and Haze 81 diffusers.)

Figure 3.10: Transmission of Various Diffusers at Normal Viewing Angle.

To construct a BTDF with high spectral resolution in the visible range, the total integrated scattering (TIS [47]) measured with Radiant Imaging is used. For a given incident angle, the Imaging Sphere integrates the total transmitted radiance over the hemisphere, and it outputs TIS as the ratio between outgoing and incoming radiance.

(Excerpt of the measurement output file: a DataBegin block listing, for each incident angle, rows of ScatterAzimuth values (Az 0 through Az 180) by ScatterRadial columns (Rad 1, Rad 2, ...), together with a TIS value for that incident angle.)

Figure 3.11: An Example of Measurement Output Data [46].

The definition of the hemispherical directional reflectance at different incident angles is

\[
\rho_{hd}(\omega_o) = \int_{H^2(n)} \mathrm{BTDF}(\omega_o,\omega_i)\cos\theta_i\, d\omega_i.
\tag{3-8}
\]
The Monte Carlo estimator of Equation (3-8) is


\[
\rho_{hd}(\omega_o) = \frac{1}{N}\sum_{j=1}^{N} \frac{\mathrm{BTDF}(\omega_o,\omega_{ij})\cos\theta_j}{p(\omega_j)}.
\tag{3-9}
\]
For a diffuser, we generally assume that the shape of the transmission versus wavelength does not change with incident angle, and therefore the SPD of the BTDF at any pair of directions can be obtained by multiplying the SPD at 0° by a scale factor k(ω). The scale factor must satisfy the requirement that the integrated reflectance over the visible spectrum estimated by Monte Carlo simulation equal the TIS measured by Radiant Imaging, as described by Equation (3-10):
\[
\mathrm{TIS}(\omega) = k(\omega)\,\rho_{hd}(\omega)\,\frac{1}{N}\sum_{j}^{N} \mathrm{SPD}_{normal}(\lambda_j).
\tag{3-10}
\]

Using Equation (3-10), k(ω) can be computed trivially. The BTDF between any pair of directions is then evaluated as
\[
\mathrm{BTDF}(\omega_o,\omega_i,\lambda_j) = k(\omega_i)\,\mathrm{BTDF}(\omega_o,\omega_i)\,\mathrm{SPD}_{normal}(\lambda_j).
\tag{3-11}
\]
The incident-angle-dependent scale factor k(ω) has a subtle impact on the SPD calculation, especially on the magnitudes of the SPD curves. In Figure 3.12, the dashed series ("xx_diff_meas") are measured data, the solid series ("xx_diff_mod1") are modeled SPDs without the incident-angle-dependent scale factor, and the compound series ("xx_diff_mod2") are modeled SPDs with the incident-angle-dependent scale factor. The plots show that modeled series 1 generally overestimates reflectance across the spectrum for all colors, while modeled series 2 better matches the magnitude of the measured data. If the SPD shapes were completely identical across all incident angles, the scale factor k would be a constant over all angles. However, the diffuser SPD shows slight differences in shape with respect to angle, and these differences are difficult to measure at larger incident angles. Therefore, the assumption of a constant diffuser SPD shape over angles is maintained, and the shape variation with incident angle is compensated by the angle-dependent scale factor k(ω). As a result, the simulated SPD better matches the measurement.
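A minimal sketch of how Equations (3-10) and (3-11) could be applied is shown below. The names are illustrative: rhoHd is the Monte Carlo estimate from Equation (3-9) for one incident angle, tis is the measured value for that angle, and spdNormal is the relative diffuser SPD measured at normal incidence.

#include <numeric>
#include <vector>

// Equation (3-10): k(w) makes the spectrally averaged, scaled reflectance match the measured TIS.
double scaleFactor(double tis, double rhoHd, const std::vector<double>& spdNormal) {
    double meanSpd = std::accumulate(spdNormal.begin(), spdNormal.end(), 0.0)
                   / static_cast<double>(spdNormal.size());
    return tis / (rhoHd * meanSpd);
}

// Equation (3-11): spectral BTDF as the measured angular BTDF times the
// normal-incidence SPD at this wavelength, rescaled by k for this incident direction.
double spectralBtdf(double btdfAngular, double k, double spdNormalAtLambda) {
    return k * btdfAngular * spdNormalAtLambda;
}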



Figure 3.12: Impact of Incident Angle Dependent Scale Factor.

Display Pixel Array

Display pixel design involves the optical properties of multiple layers of materials and their geometric structure. A separate software tool is responsible for the simulation of display pixels. That tool can find the optical response of an arbitrary number of optical stacks with known dispersions. It uses a transfer matrix approach to calculate the spectral response with respect to incident angle. The light transport simulation tool built in this dissertation uses the pixel design software as a black-box component. A complete scan from 0° to 90° is performed to generate a look-up table of pixel reflectance over angle, such that when a ray hits a display pixel from a particular angle, the spectral response can be found and the reflected ray's SPD can be updated. Figure 3.13 shows the SPDs of red, green, and blue pixels at four different viewing angles (8°, 20°, 30°, and 40°); the SPDs are colored in red, green, and blue, respectively. As the incident angle increases, the reflectance SPD of a pixel gradually shifts toward the blue end.


Figure 3.13: SPD of Pixel Reflectance with Angle Scanning.
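A minimal sketch of the angular look-up table described above is shown below; the data layout, number of spectral bins, and nearest-angle lookup are illustrative choices rather than the actual implementation.

#include <array>

// One reflectance SPD per integer incident angle (0-90 degrees), as exported
// by the pixel design tool in this sketch.
constexpr int kSpectralBins = 32;                 // e.g. 400-710 nm in 10 nm steps (assumed)
using Spd = std::array<double, kSpectralBins>;

struct PixelReflectanceLut {
    std::array<Spd, 91> table;                    // reflectance SPD for 0, 1, ..., 90 degrees

    // When a ray hits the pixel, scale its SPD by the reflectance at the
    // nearest tabulated incident angle.
    void apply(double incidentDeg, Spd& raySpd) const {
        int idx = static_cast<int>(incidentDeg + 0.5);
        if (idx < 0) idx = 0;
        if (idx > 90) idx = 90;
        for (int b = 0; b < kSpectralBins; ++b)
            raySpd[b] *= table[idx][b];
    }
};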

Summary

In this chapter, a reflective display is modeled as a layered surface. The layers of a reflective display, namely the FOS, the diffuser, and the pixel array, are explained in detail, and a different light transport model is applied to each layer. A set of simple and efficient spectral functions is developed to model the reflectance and absorption of the FOS. Both spectro-radiometer based and imaging based methods are investigated for measuring the BTDF of a diffuser. A novel hybrid approach combining both measurement methods is developed to provide an efficient solution with high resolution in both the angular and spectral domains. A BTDF generated from the measured data models the diffuser. A wavelength-dependent BRDF, generated by scanning the incoming angles of a display pixel with the pixel design software, models the pixels. With models of these layers, the combined light transport effect described in Equation (3-1) can be solved. The actual computation is carried out by Monte Carlo ray tracing algorithms, which are introduced in the next chapter.


CHAPTER IV

MONTE CARLO RAY TRACING IN REFLECTIVE DISPLAYS

Introduction

Monte Carlo ray tracing links the lighting, the geometry, and the reflectance properties of the objects in a scene. Because a display pixel is very small, typically with a width on the order of 100 to 150 um, rays are cast into a very small viewing cone covering a small area of the display to simulate pixel performance. Theoretically, viewed from 500 mm above the display, a 100 um pixel subtends an inclination angle of 0.011°. However, using such a small viewing angle in ray tracing may increase the possibility of quantization error. In practice, a field of view (FOV) angle of 0.1° is used, which covers around 70 pixels when looking at the display from the normal direction, assuming a pixel size of 100 um. This FOV angle is small enough that the spatial variation within the viewing area is minimal, which means that the pixels in the viewing area can be considered to contribute identically to the final radiance. At the same time, the angle is not so small as to compromise numerical stability. A path tracing algorithm is used for an unbiased calculation of the complete light transport. Stratified sampling is used to cast rays within the FOV for good coverage of the area of interest, and importance sampling is used to incrementally extend rays when scattering occurs at the diffuser.

Figure 4.1: Path Tracing for Display Simulation.


Path Tracing

When James Kajiya first described the rendering equation in [4], he also introduced the first general unbiased algorithm to compute a complete light transport solution, called path tracing. In path tracing, random ray trees are built with their roots at the detector, and each valid transport path is treated as a sample. A path is generated by starting a ray from the detector, recursively tracing the ray through the scene, and terminating at a light source. At each bounce, a direction is sampled according to a distribution, for example a BRDF or a cosine function. The contribution of the path to the image plane is the radiance the path carries, weighted by the probability of the path being generated. The light transport equation states that the radiance leaving a point x can be expressed as an integral over all hemispherical directions incident on the point x. The integral over the hemisphere can also be transformed into an integral over all surfaces in the scene. In the form of an integration over the hemisphere,
\[
L(x,\omega_o) = \int_{H^2(n)} \mathrm{BRDF}(x,\omega_o,\omega_i)\, L(x,\omega_i)\cos\theta_i\, d\omega_i.
\tag{4-1}
\]
In the form of an integration over surfaces,
\[
L(x,\omega_o) = \int_{A} \mathrm{BRDF}(x,\omega_o,\omega_i)\, L(y \to x)\, G(x,y)\, dA_y,
\tag{4-2}
\]

where y is a point on a surface in the scene and

\[
G(x,y) = \frac{\cos(N_x,\omega_i)\cos(N_y,\omega_i)}{r_{xy}^{2}}.
\tag{4-3}
\]

Although the two equations above are equivalent, they require two different sampling strategies when solved with Monte Carlo integration. For the hemispherical form, a number of directions on the hemisphere are sampled, and rays are cast along these directions to evaluate the integrand. For the surface form, a number of points on the scene surfaces are sampled, and the coupling between those points is computed to evaluate the integrand. In a scene consisting of a large number of surfaces, it makes sense to sample surface points to include the important surface areas and exclude directions in which no surface lies on the ray path. However, for reflective display simulation, only a few flat surfaces exist in the scene. The hemispherical integral enables the formation of a path space and the computation of light transport over each individual path. The path space encompasses all possible paths of any length. An appropriate Monte Carlo sampling procedure can be employed to generate the correct paths in the path space, and the throughput of radiance over each path can be evaluated. For the rays that actually hit a light source, the path throughput is the product of the BSDF values and cosine terms at the vertices the ray has passed through, divided by their respective sampling PDFs,

\[
\prod_{j=1}^{i-1} \frac{f(p_{j+1} \to p_j \to p_{j-1})\,\left|\cos\theta_j\right|}{p_{\omega}(p_{j+1} - p_j)}.
\tag{4-4}
\]
The contribution of a path is calculated as the product of the path throughput and the scattered light from direct lighting at the final vertex of the path. During path tracing, a variable L holds the running total of radiance, and the path is extended to the next vertex incrementally. Finally, the contributions of all paths are summed to give the result of the path integral. A key to solving the path integral efficiently is to make wise choices of sampling strategy. In Monte Carlo ray tracing, stratified sampling and importance sampling have been shown to produce good results [48].
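The toy program below illustrates, under simplifying assumptions, how a running throughput is accumulated bounce by bounce in the spirit of Equation (4-4): each factor is the sampled BSDF value times the cosine, divided by the sampling pdf. It uses a Lambertian lobe with an assumed albedo and cosine-weighted sampling, so each factor reduces exactly to the albedo; it is not the actual simulator code.

#include <cmath>
#include <cstdio>
#include <random>

int main() {
    const double kPi = 3.14159265358979323846;
    const double rho = 0.6;      // assumed Lambertian albedo
    const int bounces = 3;       // e.g. diffuser -> pixel -> diffuser
    const int paths = 100000;
    std::mt19937 rng(7);
    std::uniform_real_distribution<double> u(0.0, 1.0);

    double sum = 0.0;
    for (int p = 0; p < paths; ++p) {
        double throughput = 1.0;
        for (int b = 0; b < bounces; ++b) {
            double cosTheta = std::sqrt(1.0 - u(rng));   // cosine-weighted sample, strictly > 0
            double pdf = cosTheta / kPi;                 // p_omega of that sample
            double f = rho / kPi;                        // Lambertian BRDF value
            throughput *= f * cosTheta / pdf;            // one factor of Equation (4-4)
        }
        sum += throughput;
    }
    std::printf("mean throughput = %.4f (expected %.4f)\n",
                sum / paths, std::pow(rho, bounces));
    return 0;
}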

Stratified Sampling

The basic idea of stratified sampling is to partition the domain of the integral into mutually exclusive subdomains (strata) and then perform Monte Carlo integration in each stratum. Suppose we are interested in estimating $I = \int_{S} f(x)\,dx$, and S can be divided into m subdomains $S_1, S_2, S_3, \dots, S_m$. If we generate $n_i$ samples $X_{i,1}, \dots, X_{i,n_i}$ from subdomain $S_i$, where $i = 1, \dots, m$, then the estimator from stratified sampling

\[
\hat{I} = \frac{1}{m}\sum_{i=1}^{m}\frac{1}{n_i}\sum_{j=1}^{n_i} f(X_{ij})
\tag{4-5}
\]
is an unbiased estimator for I. It can be shown that stratified sampling will never have higher variance than plain unstratified sampling [49]. Equation (4-6) computes the variance of stratified sampling:


\[
\begin{aligned}
\mathrm{var}(\hat{I}) &= \mathrm{var}\left( \frac{1}{m}\sum_{i=1}^{m}\frac{1}{n_i}\sum_{j=1}^{n_i} f(X_{ij}) \right) \\
&= \frac{1}{m^{2}}\sum_{i=1}^{m}\mathrm{var}\left( \frac{1}{n_i}\sum_{j=1}^{n_i} f(X_{ij}) \right) \\
&= \frac{1}{m^{2}}\sum_{i=1}^{m}\frac{1}{n_i^{2}}\,\mathrm{var}\left( \sum_{j=1}^{n_i} f(X_{ij}) \right) \\
&= \frac{1}{m^{2}}\sum_{i=1}^{m}\frac{1}{n_i^{2}}\, n_i\,\mathrm{var}(f(X_{ij})) \\
&= \frac{1}{m^{2}}\sum_{i=1}^{m}\frac{\sigma_i^{2}}{n_i},
\end{aligned}
\tag{4-6}
\]

where $\sigma_i^{2}$ is the variance of f(x) in subdomain $S_i$.

On the other hand, the unstratified sampling variance can be written as

\[
\mathrm{var}(I) = \sum_{i=1}^{m}\left( \mathrm{var}(I_i) + (I_i - I)^{2} \right)
= \frac{1}{m^{2}}\sum_{i=1}^{m}\frac{\sigma_i^{2}}{n_i} + \sum_{i=1}^{m}(I_i - I)^{2},
\tag{4-7}
\]

where $I_i$ is the integral of f(x) over domain $S_i$, and I is the average integral of f(x) over the entire domain S. Only when all subdomains have the same integral as the average integral will $(I_i - I)^{2}$ be equal to 0 for every i in Equation (4-7), in which case stratified sampling does not reduce variance. Therefore, $\mathrm{var}(\hat{I})$ can never be larger than $\mathrm{var}(I)$. The most common example of stratified sampling in graphics is jittering for pixel sampling [6].
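A minimal sketch of jittered stratified sampling over the unit square is shown below; the grid resolution and the use of std::pair are arbitrary illustrative choices.

#include <cstdio>
#include <random>
#include <utility>
#include <vector>

// Split the unit square into an n x n grid and place one random sample in each
// cell, which keeps samples from clustering.
std::vector<std::pair<double, double>> jitteredSamples(int n, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    std::vector<std::pair<double, double>> samples;
    samples.reserve(static_cast<size_t>(n) * n);
    for (int y = 0; y < n; ++y)
        for (int x = 0; x < n; ++x)
            samples.emplace_back((x + u(rng)) / n, (y + u(rng)) / n);
    return samples;
}

int main() {
    std::mt19937 rng(1);
    for (const auto& s : jitteredSamples(4, rng))
        std::printf("%.3f %.3f\n", s.first, s.second);
    return 0;
}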

Figure 4.2 shows comparisons between unstratified sampling and two variants of stratified sampling. In Figure 4.2(a), the samples are generated randomly throughout the area: some regions have samples clustered together, and some regions have no samples. Stratified sampling distributes samples more uniformly into non-overlapping regions, which guarantees that the samples are not all close together. A jittering pattern is applied within the subdomains to avoid a purely uniform pattern that may result in aliasing. The two commonly used stratified sampling methods are low discrepancy sampling [50] (LDS, shown in Figure 4.2(b)) and best candidate sampling [50] (BCS, shown in Figure 4.2(c)).


LDS is based on a concept called discrepancy that numerically expresses the quality of a pattern of sample positions. The quality of the samples is evaluated by counting the number of samples in each region and comparing the volume of each region to the number of samples: a given fraction of the volume should contain roughly the same fraction of the sample points. A shortcoming of LDS is that it generates good samples within a single subspace, but it does not ensure that the samples in adjacent subspaces are well distributed with respect to each other; two samples can be chosen very close to their shared edge. BCS addresses this issue by generating a Poisson disk-like pattern. In BCS, each time a new sample is to be computed, a large number of random candidates are generated. All the candidates are compared to the previous samples, and the one that is farthest away from all the previous ones is added to the pattern. The price paid is expensive computation. In reflective display simulation, rays are cast from only a very small number of pixels, and the samples must be generated just once, so it is worth paying the computational price for high-quality samples. Figure 4.2 shows that BCS has a much better spatial distribution than the somewhat periodic pattern seen in LDS.

(a) Random Sampler (b) Low Discrepancy Sampler (c) Best Candidate Sampler Figure 4.2: Random Spatial Samples Generated from Different Sampling Methods.

Importance Sampling

The path integral is a sum of the energy carried by an infinite number of rays. To evaluate the infinite sum, one would like to include as many samples as possible. However, an expensive computational cost is associated with each ray. Assume that 1000 rays are cast from the viewer. When a ray hits the diffuser, 1000 rays are generated from the point of intersection to cover the hemisphere. These rays then hit the display pixels and bounce back to the diffuser, where 1000 rays are generated again for each one. In this simple three-bounce scenario, 1000^3 = 1 billion rays are traced, even though 1000 sample points on a hemisphere are considered sparse. The computational cost is clearly enormous, so a more careful sampling strategy must be considered to improve efficiency. When rays are cast from the detector, stratified sampling is preferred because it covers the display surface with well-distributed samples. After a ray hits the scattering surface (the diffuser), the sampling strategy should be changed to accommodate the haze property of the diffuser and the incident light distribution. As shown in the measured profile of the Haze 78 diffuser (see Figure 3.8), it has a strong peak at the specular angle and gradually attenuates at wider angles. This shape of scattering profile results in non-uniform scattering of light. For ray tracing with this type of scattering, a strategy called importance sampling can efficiently reduce the variance of a Monte Carlo estimator.

The Monte Carlo estimator of the integral $I = \int_{S} f(x)\,dx$ is
\[
\hat{I} = \frac{1}{N}\sum_{i=1}^{N}\frac{f(X_i)}{p(X_i)}.
\tag{4-8}
\]
The variance of a Monte Carlo estimator can be derived as

\[
\begin{aligned}
\mathrm{var}(\hat{I}) &= \mathrm{var}\left( \frac{1}{N}\sum_{i=1}^{N}\frac{f(X_i)}{p(X_i)} \right) \\
&= \frac{1}{N^{2}}\,\mathrm{var}\left( \sum_{i=1}^{N}\frac{f(X_i)}{p(X_i)} \right) \\
&= \frac{1}{N^{2}}\, N\,\mathrm{var}\left( \frac{f(X_i)}{p(X_i)} \right) \\
&= \frac{1}{N}\left( E\left[\left(\frac{f(X_i)}{p(X_i)}\right)^{2}\right] - E^{2}\left[\frac{f(X_i)}{p(X_i)}\right] \right) \\
&= \frac{1}{N}\left( E\left[\left(\frac{f(X_i)}{p(X_i)}\right)^{2}\right] - I^{2} \right) \\
&= \frac{1}{N}\left( \int_{S}\frac{f^{2}(x)}{p(x)}\,dx - I^{2} \right).
\end{aligned}
\tag{4-9}
\]


This equation reveals that if $p(X_i)$ is a uniform pdf over the entire domain, the variance of the Monte Carlo estimator decreases at the rate of $1/N$ and the standard error decreases at the rate of $1/\sqrt{N}$, which means that reducing the standard error by half requires four times the number of samples.

Clearly, the variance depends on the choice of the density function $p(X_i)$ from which the samples are drawn. Intelligently choosing samples from a well-designed pdf helps to reduce the variance more quickly. In the ideal case, if $p(X_i)$ satisfies the requirement that $f(X_i)/p(X_i)$ always equals I, then the variance of the estimator is
\[
\mathrm{var}(\hat{I}) = \frac{1}{N}\left( E\left[\left(\frac{f(X_i)}{p(X_i)}\right)^{2}\right] - I^{2} \right) = \frac{1}{N}\left( I^{2} - I^{2} \right) = 0.
\]
In this case the variance is zero no matter how many samples are drawn. Obviously this is an absurd case, because it requires knowing I before running the Monte Carlo estimator; had the integral value I been known, one would not bother to evaluate $\hat{I}$ in the first place. However, if a $p(X_i)$ can be found that is similar in shape to f(x), the variance decreases more quickly. Table 4.1 shows the efficiency of different choices of $p(X_i)$ on the same integral

\[
I = \int_{0}^{4} x\,dx = 8.
\tag{4-10}
\]

For example, if the ideal sampling function is chosen as $p(x) = x/8$, then
\[
\mathrm{var}(\hat{I}) = \frac{1}{N}\left( E\left[\left(\frac{f(X_i)}{p(X_i)}\right)^{2}\right] - I^{2} \right) = \frac{1}{N}\left( 8^{2} - 8^{2} \right) = 0.
\tag{4-11}
\]
For comparison, if the sampling function is chosen with the opposite slope, $p(x) = (6 - x)/16$, then
\[
\mathrm{var}(\hat{I}) = \frac{1}{N}\left( \int_{S}\frac{f^{2}(x)}{p(x)}\,dx - I^{2} \right)
= \frac{1}{N}\left( \int_{0}^{4}\frac{16x^{2}}{6 - x}\,dx - I^{2} \right)
= \frac{1}{N}\left( 120.8 - 64 \right) = \frac{56.8}{N}.
\tag{4-12}
\]


This variance is even higher than that of simple uniform sampling over the range [0, 4], which results in $\mathrm{var}(\hat{I}) = 21.3/N$. This clearly shows that a poorly chosen importance function performs worse than uniform sampling. Now consider a pdf with a shape very similar to the perfect choice $p(x) = x/8$; for example, the sampling function $p(x) = (0.95x + 0.1)/8$. Then

\[
\mathrm{var}(\hat{I}) = \frac{1}{N}\left( \int_{0}^{4}\frac{8x^{2}}{0.95x + 0.1}\,dx - I^{2} \right)
= \frac{1}{N}\left( 64.1645 - 64 \right) = \frac{0.1645}{N},
\tag{4-13}
\]
which reduces the variance more than 100 times faster than the uniform pdf.

Table 4.1: Comparison Between Different Sampling Functions.

Sampling Method           Sampling Function (pdf)   Variance    Samples Needed for Standard Error of 0.08
importance (poor)         (6-x)/16                  56.8/N      8875
uniform                   1/4                       21.3/N      3328
importance (ideal)        x/8                       0           1
importance (near ideal)   (0.1+0.95x)/8             0.1645/N    26
stratified                1/4                       21.3/N^3    15

Note that all sampling functions p(x) in this table meet the requirement of a probability density function, $\int_{0}^{4} p(x)\,dx = 1$.
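The short program below numerically reproduces two rows of Table 4.1 under stated assumptions: it estimates $I = \int_0^4 x\,dx$ with the uniform pdf and with the near-ideal pdf $(0.1 + 0.95x)/8$, sampled by inverting its CDF, and reports the per-sample variance of f/p (the table lists this value divided by N). The sample count and seed are arbitrary.

#include <cmath>
#include <cstdio>
#include <random>

int main() {
    const int N = 200000;
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> u(0.0, 1.0);

    double sumU = 0, sumU2 = 0, sumI = 0, sumI2 = 0;
    for (int i = 0; i < N; ++i) {
        // Uniform sampling on [0, 4]: p(x) = 1/4.
        double xu = 4.0 * u(rng);
        double wu = xu / 0.25;                                   // f(x)/p(x)
        sumU += wu; sumU2 += wu * wu;

        // Near-ideal pdf p(x) = (0.1 + 0.95x)/8, sampled by inverting its CDF
        // P(x) = (0.1x + 0.475x^2)/8, i.e. solving 0.475x^2 + 0.1x - 8v = 0.
        double v = u(rng);
        double xi = (-0.1 + std::sqrt(0.01 + 4.0 * 0.475 * 8.0 * v)) / (2.0 * 0.475);
        double wi = xi / ((0.1 + 0.95 * xi) / 8.0);              // f(x)/p(x)
        sumI += wi; sumI2 += wi * wi;
    }
    double varU = sumU2 / N - (sumU / N) * (sumU / N);
    double varI = sumI2 / N - (sumI / N) * (sumI / N);
    std::printf("uniform:    mean %.3f, per-sample variance %.3f (theory 21.3)\n", sumU / N, varU);
    std::printf("near-ideal: mean %.3f, per-sample variance %.4f (theory 0.1645)\n", sumI / N, varI);
    return 0;
}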



Figure 4.3: Different Sampling Functions for Importance Sampling.

The comparison in Table 4.1 suggests that more samples should be placed in the "important" regions of the sample space, where f(x) has relatively high values. This is even more important for high-dimensional problems such as iterative ray tracing: uniformly sampling the entire space produces high variance when the reflectance and lighting are non-uniformly distributed. Typically, a good importance function p(x) may be obtained by fitting the sampled values to some known model. Notice that stratified sampling is also listed in Table 4.1 for comparison. Although the variance of this stratification of I is inversely proportional to the cube of the number of samples, there is no general result for the behavior of variance under stratification. For some functions, stratification performs worse than importance sampling; a BRDF/BTDF with significant peaks is one of these cases.

Simulation with Uniform Sampling, Cosine Sampling, and Ward Sampling

In the rendering equation $L(x,\omega_o) = \int_{H^2(n)} \mathrm{BRDF}(x,\omega_o,\omega_i)\, L(x,\omega_i)\cos\theta_i\, d\omega_i$, the integrand is a product of the scattering profile (BRDF), the incoming light (L), and a cosine term. Ideally, if a sampling function can be found that matches the distribution of this product, variance can be reduced efficiently via importance sampling. However, it is frequently difficult to find the distribution of this product, because the BRDF and L often have irregular shapes over different angles. Although finding an optimal importance sampling function for the product is hard, finding a function that is similar to one of the factors is helpful. One obvious choice is cosine sampling of the hemisphere, because the product of the BRDF and the incident radiance is weighted by a cosine term. Cosine sampling generates sample directions from a pdf that is proportional to the cosine of the incident angle, so more samples are generated towards the top of the hemisphere, where the cosine value is large near the normal, than towards the horizon. The simulation results in Figure 4.4 indicate that cosine sampling reduces the standard error more quickly than uniform sampling.
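A minimal sketch of cosine-weighted hemisphere sampling (Malley's method, which maps a uniform disk sample up onto the hemisphere) is shown below; the local +z axis is assumed to be the surface normal.

#include <cmath>

struct Dir { double x, y, z; };

// Cosine-weighted hemisphere sampling about +z: the pdf of the resulting
// direction is cos(theta)/pi, so samples concentrate near the normal.
// u1 and u2 are uniform random numbers in [0, 1).
Dir sampleCosineHemisphere(double u1, double u2) {
    const double kPi = 3.14159265358979323846;
    double r = std::sqrt(u1);                    // radius on the unit disk
    double phi = 2.0 * kPi * u2;                 // angle on the unit disk
    return {r * std::cos(phi), r * std::sin(phi), std::sqrt(1.0 - u1)};
}

// Matching pdf, used to weight BRDF * L * cos(theta) / pdf in the estimator.
double cosineHemispherePdf(double cosTheta) {
    const double kPi = 3.14159265358979323846;
    return cosTheta / kPi;
}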


Figure 4.4: Standard Error of Different Sampling Methods.

Another method is to sample according to the BRDF. Because the BRDF of a diffuser is obtained from measured data, the BRDF must first be fitted to a function before a sampling function can be derived. The Ward BRDF [28] is widely used to model scattering media with an elliptical Gaussian lobe. It uses only a few simple parameters, admits efficient sampling for Monte Carlo estimation, can model anisotropic surfaces, and fits certain measured BRDF data reasonably well [51]. The Gaussian gloss lobe of a Ward BRDF is defined as
\[
f_r(i,o) = \frac{\rho_s}{4\pi\alpha_x\alpha_y\cos\theta_i\cos\theta_o}\,
\exp\!\left[ -\tan^{2}\theta_h\left( \frac{\cos^{2}\phi_h}{\alpha_x^{2}} + \frac{\sin^{2}\phi_h}{\alpha_y^{2}} \right) \right],
\tag{4-14}
\]
where (i, o) is a pair of vectors denoting the incident and outgoing directions, h is the half-angle vector defined by i and o,
\[
h = \frac{i + o}{\left\| i + o \right\|},
\tag{4-15}
\]

ρs controls the magnitude of the lobe,

$\alpha_x$ and $\alpha_y$ control the width of the lobe in the two principal directions of anisotropy. Because the diffuser is isotropic, $\alpha_x = \alpha_y = \alpha$, and Equation (4-14) reduces to
\[
f_r(i,o) = \frac{\rho_s}{4\pi\alpha^{2}\cos\theta_i\cos\theta_o}\,
\exp\!\left( -\frac{\tan^{2}\theta_h}{\alpha^{2}} \right).
\tag{4-16}
\]
Using the objective function defined for BRDF fitting in [52],
\[
g(i,o) = \left( d(i,o) - f_r(i,o) \right)\cos\theta_i,
\tag{4-17}
\]
where d(i,o) is the measured BRDF at a pair of directions i and o, and $f_r(i,o)$ is the corresponding Ward BRDF. The parameters are estimated using the MATLAB function lsqnonlin(), which finds the $\rho_s$ and $\alpha$ that minimize the least-squares error between the Ward BRDF and the measured data:

\[
(\hat{\rho}_s, \hat{\alpha}) = \arg\min \sum_{k=1}^{N} g(i_k, o_k)^{2}.
\tag{4-18}
\]
Around 2000 pairs of incident/outgoing angles from the measured data are used to fit the Ward BRDF. Figure 4.5 shows the fitted Ward BRDF compared with the measured data at two incident angles (20° and 5°, $\phi$ = 0°). The peak of the fitted curve is lower than that of the measured data, but overall the curves fit the data well.


Figure 4.5: Fitted Ward BRDF vs. Measured Data.

A sampling function is given in [51] for Ward BRDFs. Given two uniform random variables u and v in the range 0 < u,v < 1, the correct sampling equations for half angle h are:

\[
\theta_h = \arctan\sqrt{-\alpha^{2}\log u},
\tag{4-19}
\]
\[
\phi_h = 2\pi v.
\tag{4-20}
\]
The samples of the half angle h are generated from u and v by the equations above. The outgoing direction o is then computed using
\[
o = 2(i \cdot h)\,h - i.
\tag{4-21}
\]
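A minimal sketch of Equations (4-19) through (4-21) for the isotropic case is shown below; the surface normal is assumed to be the local +z axis, and u and v are uniform random numbers in (0, 1).

#include <cmath>

struct Vec { double x, y, z; };

static double dot(const Vec& a, const Vec& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Sample a half-angle vector h from the isotropic Ward lobe of width alpha,
// then reflect the incident direction i about h to get the outgoing direction.
Vec sampleWard(const Vec& i, double alpha, double u, double v) {
    const double kPi = 3.14159265358979323846;
    double thetaH = std::atan(std::sqrt(-alpha * alpha * std::log(u)));   // Eq. (4-19)
    double phiH = 2.0 * kPi * v;                                          // Eq. (4-20)
    Vec h = {std::sin(thetaH) * std::cos(phiH),
             std::sin(thetaH) * std::sin(phiH),
             std::cos(thetaH)};
    double ih = dot(i, h);
    return {2.0 * ih * h.x - i.x,                                         // Eq. (4-21)
            2.0 * ih * h.y - i.y,
            2.0 * ih * h.z - i.z};
}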

Sampling according to the measured BTDF of the diffuser (Ward sampling) generally yields a lower standard error than cosine sampling or uniform sampling (see Figure 4.4). However, as the number of samples increases, the standard error of cosine sampling approaches or even falls below that of Ward sampling. The standard error reduction trend of cosine sampling is the smoothest of the three, which implies better numerical stability. Ward sampling may not necessarily outperform cosine sampling for a couple of reasons: 1) as mentioned before, the Monte Carlo integrand is a product of the scattering profile (BRDF), the incoming light (L), and a cosine term, and the impact of the BRDF is not always larger than that of the cosine term; and 2) the Ward BRDF may not be the best model for a diffuser, so Ward sampling may not be the optimal sampling method for the measured BTDF.


Summary

Path tracing is the ray tracing algorithm employed in this simulation tool. Other path integrators exist, but most of them are biased. For example, direct lighting [6] ignores diffuse reflections and refractions, which fails for a diffuser; irradiance caching [53] works only for diffuse surfaces and will not work for the display pixels. Path tracing is slow compared to other path integrators, but it does not exclude any possible light paths and thus is unbiased. For pixel-level simulation, path tracing finishes in several seconds on a 2 GHz personal computer, which is sufficiently fast while achieving high accuracy. Sampling strategy is critical for improving efficiency in Monte Carlo ray tracing. Stratified sampling and importance sampling are investigated in this chapter: stratified sampling produces well-distributed random samples, and importance sampling helps the Monte Carlo simulation converge faster. Different importance sampling methods are evaluated, and cosine sampling is chosen for its stable reduction in standard error as the number of samples increases.


CHAPTER V

SIMULATING DISPLAY PERFORMANCE

Introduction

The commonly used metrics for reflective display performance include color gamut, contrast ratio, and white illuminance, evaluated over a range of viewing angles and lighting conditions. In addition, readability under daylight illumination is proposed in [54]. For hypothetical optical designs, physically based ray tracing provides a means of accurately simulating these values. Although the PBRT ray tracer used in this dissertation is an image rendering engine capable of producing realistic images, for display design purposes we choose to output numerical results instead of rendered images, for a number of reasons:
1) Subtle changes in brightness and chromaticity are difficult to catch when comparing two rendered images produced by different design parameters; only significant differences in parameters result in noticeable differences in appearance. For example, the diffuser heights of the rendered images in Figure 2.12 (3) and Figure 2.12 (4) in Chapter 2 are 100 um and 100 mm, which differ by orders of magnitude.
2) No two display devices produce the same image, even when the source images are identical. As shown in the image pipeline (Figure 5.1), every display device has its own gamma and RGB color transform matrices, which makes the appearance of the same image device dependent.
3) It is difficult for engineers to communicate about a display's performance without objective numerical data.
4) Numerical results enable algorithms for optimization of display design parameters.


Figure 5.1: Color Image Pipeline. http://en.wikipedia.org/wiki/Color_image_pipeline

Software Architecture

The simulation software brings together the ambient light, front light, pixel array, diffuser, and FOS to model display performance. The system diagram is shown in Figure 5.2. Once the user clicks the "Run" button, the software proceeds as follows:
1) Load the IMOD panel design file (batch file).
2) Scan from 0° to 90° in 1° increments to obtain the reflectance SPD (spectral power distribution) for all colors (white, black, red, green, blue, cyan, yellow, magenta, etc.) and store these values in intermediate files.
3) Following the scene description file syntax of PBRT, construct a scene based on the user inputs:
   a. Lighting: geometry (θ, φ), ambient SPD (ilm file), ambient illuminance (lx), the split between direct and diffuse light, front light illuminance, and front light SPD;
   b. Viewer: viewing θ, φ, and distance;
   c. Diffuser BTDF: generated from the Radiant Imaging measurements;
   d. FOS absorption and reflectance: from a wavelength-dependent table.
4) Run Monte Carlo ray tracing: cast rays into the scene and incrementally construct ray paths based on the selected sampling method.
5) Calculate the SPD of the light that arrives at the viewer and the color metrics.


Figure 5.2: System Diagram of Simulation Software.

With this software, display engineers can change the design parameters, display structures, materials, and environment settings to evaluate pixel designs. Although the software was built for IMOD displays, the same architecture can be used for other types of displays. For a display with non-IMOD pixels, all that is needed is the spectral and angular reflectance distribution stored in a compatible format; the same ray tracing framework can still calculate simulation results, including brightness and chromaticity, for that display. PBRT originally uses only tristimulus values for reflectance functions; in this software it has been expanded to the full visible spectrum to support interference. By default, the lighting and texture maps are also limited to tristimulus values, and these have been expanded to full spectrum as well. Bringing a full-spectrum representation into the entire ray tracing pipeline is crucial for accurate simulation of display performance. The simulation results are presented in the following sections.


Reflectance of Display Pixels

Reflectance is an important metric for display performance. The hemispherical-directional reflectance is a function that gives the total reflection in a given direction due to constant illumination over the hemisphere. For example, it is interesting to evaluate the diffuser's impact on display reflectance. The Monte Carlo ray tracer calculates the radiance captured at a specified viewing angle. It is possible to carefully set the illuminance level of the hemispherical lighting such that the captured radiance at that angle is exactly equal to the directional reflectance. Compare the definition of hemispherical-directional reflectance

\[
\rho_{hd}(\omega_o) = \int_{H^2(n)} \mathrm{BRDF}(\omega_o,\omega_i)\cos\theta_i\, d\omega_i
\tag{5-1}
\]
with the light transport equation, which computes the observed radiance at angle $\omega_o$,
\[
L(x,\omega_o) = \int_{H^2(n)} \mathrm{BRDF}(x,\omega_o,\omega_i)\, L(x,\omega_i)\cos\theta_i\, d\omega_i.
\tag{5-2}
\]

If L(x,ωi) is a constant value of 1, then the reflectance at ωo apparently equals the

radiance observed at $(x,\omega_o)$. This equivalence allows us to calculate reflectance using the same ray tracer that computes observed radiance. The radiance of the incoming light must be 1 at all angles. This requirement is equivalent to having Lambertian diffuse light over the hemisphere, whose illuminance value I in lx (cd·sr·m⁻²) is equal to
\[
I = \pi L,
\tag{5-3}
\]

where L is the light’s luminance value at all angles. The luminance of a given SPD can be calculated using the standard luminosity function

\[
L = 683.002 \int_{0}^{\infty} y(\lambda)\, J(\lambda)\, d\lambda,
\tag{5-4}
\]
where L is the luminance in cd·m⁻², J(λ) is the spectral power distribution of the radiance (power per solid angle per area per unit wavelength) in W/sr/m³,

y(λ) is the standard luminosity function, which is dimensionless, and λ is wavelength in m. When J (λ) = 1 for all wavelengths,


\[
L = 683.002 \int_{0}^{\infty} y(\lambda)\, d\lambda = 683.002,
\tag{5-5}
\]
because the standard luminosity function y(λ) integrates to one.


Figure 5.3: Standard Luminosity Function.

The above derivation concludes that for a diffuse light source with constant radiance that is equal to 1 at all angles, the total illuminance is 683.002π ≈ 2145.7141 lx.

Color Gamut and Contrast Ratio

Path tracing returns the SPD of the radiance at the detector for the different color pixels. According to tristimulus theory, each SPD can be transformed into three color values, X, Y, and Z, which represent human color perception, using the three spectral matching curves x(λ), y(λ), and z(λ). Given an SPD(λ),
\[
X = \int_{0}^{\infty} \mathrm{SPD}(\lambda)\, x(\lambda)\, d\lambda,
\tag{5-6}
\]
\[
Y = \int_{0}^{\infty} \mathrm{SPD}(\lambda)\, y(\lambda)\, d\lambda,
\tag{5-7}
\]
\[
Z = \int_{0}^{\infty} \mathrm{SPD}(\lambda)\, z(\lambda)\, d\lambda.
\tag{5-8}
\]
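A minimal numerical sketch of Equations (5-6) through (5-8) is shown below, assuming the SPD and the color matching functions are tabulated on the same wavelength grid; the function name and grid step are illustrative.

#include <vector>

struct Xyz { double X, Y, Z; };

// Integrate an SPD against the tabulated color matching functions using a
// simple Riemann sum with step deltaLambdaNm.
Xyz spdToXyz(const std::vector<double>& spd,
             const std::vector<double>& xBar,
             const std::vector<double>& yBar,
             const std::vector<double>& zBar,
             double deltaLambdaNm) {
    Xyz out = {0.0, 0.0, 0.0};
    for (size_t i = 0; i < spd.size(); ++i) {
        out.X += spd[i] * xBar[i] * deltaLambdaNm;   // Equation (5-6)
        out.Y += spd[i] * yBar[i] * deltaLambdaNm;   // Equation (5-7)
        out.Z += spd[i] * zBar[i] * deltaLambdaNm;   // Equation (5-8)
    }
    return out;
}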



Figure 5.4: Color Matching Functions.

The colorimetric values are output in tables and plots in the simulation software GUI (graphical user interface). The X, Y, Z values can be transformed to other color spaces such as Lab and Luv. A typical output from a simulation run is shown in Table 5.1.

Table 5.1: Colorimetric Output.

Name     White     Black     Red       Green     Blue
*dR      0.0133    0.114     0.154     0.0475    0.206
*Gamut   33.4      33.4      33.4      33.4      33.4
CR       24        1         4.26      19.6      2.06
Xcap     0.225     0.015     0.0701    0.151     0.0335
Ycap     0.242     0.0101    0.043     0.198     0.0208
Zcap     0.223     0.0283    0.0276    0.13      0.123
up       0.199     0.239     0.351     0.172     0.188
vp       0.481     0.361     0.485     0.508     0.262
x        0.326     0.281     0.498     0.316     0.189
y        0.351     0.189     0.306     0.414     0.118
z        0.324     0.53      0.196     0.271     0.693
Lst      56.3      9.07      24.6      51.6      15.9
ast      -2.36     17.5      34.5      -20.6     26.4
bst      6.67      -16       11.3      18.2      -41.6
R        91        91        255       64        22
G        91        26        25        123       17
B        74        137       44        56        183
Ynits    165       6.9       29.4      135       14.2


Figure 5.5 shows a color gamut comparison of a bare panel and a diffused panel, with the detector placed at 8°. The vertices of each triangle are the coordinates of the display's three primary colors in u'v' space, and the area enclosed by the triangle is the color gamut of the display, which indicates the colors the display can produce. Compared with a bare panel, adding a diffuser introduces a counter-clockwise rotation of the gamut triangle, which means that the colors are shifted toward the blue end. The simulated results match both the measured data and the physical analysis shown in the next section.

Figure 5.5: Color Gamuts of a Bare Panel and a Diffused Panel.

In this dissertation, color gamut is measured as the percentage ratio between the area covered by the primaries of a reflective display and the area covered by the sRGB primaries in the u’v’ chromaticity diagram. sRGB uses the ITU-R BT.709 primaries, the same as are used in studio monitors and HDTV [55].

\[
\mathrm{ColorGamut} = \frac{\mathrm{ColorArea}_{display}}{\mathrm{ColorArea}_{sRGB}} \times 100\%.
\tag{5-9}
\]


Contrast ratio (CR) is defined as the luminance of the white pixel divided by the luminance of the black pixel.

\[
CR = \frac{L_{white}}{L_{black}}.
\tag{5-10}
\]
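A minimal sketch of Equations (5-9) and (5-10) is shown below; it assumes the (u', v') chromaticities of the red, green, and blue primaries are already available and uses the shoelace formula for the triangle areas.

#include <cmath>

struct UvPoint { double u, v; };

// Shoelace formula for the area of a chromaticity triangle.
static double triangleArea(UvPoint r, UvPoint g, UvPoint b) {
    return 0.5 * std::fabs((g.u - r.u) * (b.v - r.v) - (b.u - r.u) * (g.v - r.v));
}

// Equation (5-9): display primaries' area relative to the sRGB primaries' area, in percent.
double colorGamutPercent(UvPoint r, UvPoint g, UvPoint b,
                         UvPoint rS, UvPoint gS, UvPoint bS) {
    return triangleArea(r, g, b) / triangleArea(rS, gS, bS) * 100.0;
}

// Equation (5-10): white luminance over black luminance.
double contrastRatio(double lumWhite, double lumBlack) {
    return lumWhite / lumBlack;
}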

Simulation Results and Measured Data

The simulation results are verified against measured data and physical theory. Figure 5.6 shows the comparison between simulated results and measured data. The studies are conducted on both a bare display panel and a panel with a diffuser attached, under diffuse illumination. Figure 5.6 (a) shows the simulated results: the solid lines are the SPDs of a bare panel (without a diffuser), and the dashed lines are the SPDs of a panel with a diffuser. Figure 5.6 (b) shows the measured results. The simulation captures several physically correct effects of a diffuser: decreased reflectance, reduced color gamut, and a shifted spectrum. Decreased reflectance is caused by the scattering loss of the diffuser at larger viewing angles. Recall that the magnitude of the BTDF of a diffuser decreases as the inclination of the incident angle increases (see Figure 3.8). A diffuser behaves as an angle-mixing component in the light path: it mixes the incoming light from all incident angles. In a perfectly diffuse lighting environment, the incoming radiance from all angles is equal. However, because of the Fresnel effect, at larger inclination angles more light is reflected off the diffuser and less light enters it to reach the display. The diffuser collects rays from all directions, and the rays that come from larger inclination angles are attenuated by it. Therefore, the accumulated radiance arriving at the viewer includes the losses caused by both the pixel and the diffuser. A bare panel, on the other hand, behaves as a mirror, which reflects the incoming light only at the specular angle, so bare panel reflectance depends only on the pixel reflectance at the specular angle; no angle mixing is involved. As a result, the reflectance of a diffused display panel is lower than that of a bare panel.


Simulated (bare panel vs. diffused panel, viewing at 8°): reflectance vs. wavelength, 400-700 nm.

(a) Simulated Results. Bare Panel in Solid Lines. Diffused Panel in Dashed Lines.

Measured (bare panel vs. diffused panel, viewing at 8°): reflectance vs. wavelength, 400-700 nm.

(b) Measured Results. Bare Panel in Solid Lines. Diffused Panel in Dashed Lines. Figure 5.6: Simulated and Measured Reflectance.

Reduced color gamut is caused by angle mixing. As a diffuser mixes the rays reflected by the display pixels, it adds the spectra of these rays together. Some rays have a blue-shifted spectrum compared with the specular angle; other rays have a red-shifted spectrum. When these rays are mixed together, the overall spectrum spreads out across a wider range of wavelengths. As a result, each primary pixel reflects light over a wider spectrum, which makes the color less saturated, and hence the color gamut shrinks. Color gamut is also a function of viewing angle: as the viewing angle increases, the gamut decreases in both the bare panel and diffused cases. Figure 5.7 shows the rotation of the color primaries and the color gamut shrinkage caused by a diffuser. Both simulated and measured results are shown for comparison. Qualitatively, the simulated results predict the correct rotation direction and gamut size change. Quantitatively, the simulated results match the measurements well, except for a slight discrepancy in the blue primary point, which is the bottom vertex of the solid red triangle in Figure 5.7 (a). Further investigation shows that the discrepancy could be due to imperfect modeling of the pixel reflectance over angles, which is outside the scope of this dissertation. The simulation plots in Figure 5.8 show that the color gamut decreases faster for a bare panel, as expected.

(a) Simulated Color Gamut (b) Measured Color Gamut Figure 5.7: Simulated and Measured Color Gamuts.


Figure 5.8: Color Gamut vs. Viewing Angle. Color gamut (% of EBU) plotted against viewing angle from 8° to 20° for the diffused and bare panels.

The shift of the SPD is a combined effect of the diffuser's angle mixing and the interferometric nature of the display. An interferometric modulator (IMOD) pixel uses thin film interference to produce color. A thin film structure is illustrated in Figure 5.9. The beam reflected from the top surface (ray AJ) and the beam reflected from the bottom surface (ray ABC) travel different distances. This path difference gives the two beams different phases, which results in interference.

Figure 5.9: Thin Film Interference Geometry [56].


Glassner [56] derived the reflectance of a thin film as

r = \frac{I_r}{I_i} = 4R_f \sin^2\!\left(\frac{2\pi w\, n_2}{\lambda\, n_1}\cos\theta_t\right),   (5-11)

where R_f is the reflectivity from the Fresnel effect, w is the thickness of the thin film, n_1 is the index of refraction of the top medium, n_2 is the index of refraction of the bottom medium, and \theta_t is the angle of the refracted ray inside the film. Using equation (5-11), the color produced by a soap bubble can be calculated as a function of viewing angle: as the viewing angle increases, the reflected spectrum shifts toward shorter wavelengths, so the color of the bubble shifts from red toward violet. The color shift caused by interference is shown in Figure 5.10. A display pixel is much more complicated than a single-layer thin film; it involves the optical properties of multiple layers of materials and their geometric structure. In principle, however, an IMOD pixel exhibits the same spectral shift behavior as a thin-film bubble.
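A short numerical sketch of equation (5-11) makes the angular behavior concrete. The film thickness (250 nm), indices (n1 = 1.0, n2 = 1.33), and Fresnel reflectivity R_f = 0.02 used below are illustrative soap-film-like values rather than IMOD pixel parameters; the only point is that the reflectance peak moves toward shorter wavelengths as the angle of incidence increases.

    import numpy as np

    def thin_film_reflectance(wavelength_nm, theta_i, w_nm=250.0, n1=1.0, n2=1.33, R_f=0.02):
        # Equation (5-11) with illustrative film parameters.
        cos_t = np.sqrt(1.0 - (n1 / n2 * np.sin(theta_i)) ** 2)   # refraction angle inside the film
        phase = 2.0 * np.pi * w_nm * n2 * cos_t / (wavelength_nm * n1)
        return 4.0 * R_f * np.sin(phase) ** 2

    wavelengths = np.linspace(400.0, 700.0, 301)
    for deg in (0, 10, 20, 30):
        r = thin_film_reflectance(wavelengths, np.radians(deg))
        print(f"{deg:2d} deg: peak near {wavelengths[np.argmax(r)]:.0f} nm")
    # The peak moves from about 443 nm at normal incidence to about 411 nm at 30 degrees.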

Figure 5.10: Thin Film Spectral Reflectance at Different Angles (0°, 10°, 20°, and 30°).


When the detector inclination angle is small, the diffuser brings into the viewing direction more rays that were reflected at larger angles than a bare panel would. A larger angle of reflection at a display pixel produces a blue shift of the SPD (see Figure 5.10). Therefore, in aggregate, the SPD observed at the detector shifts toward blue when the viewing angle is small. Figure 5.11 (a) shows the scenario with a small viewing angle (θ = 8°). With no diffuser, the viewer sees only the light reflected at the specular refraction angle, indicated by the blue line. When a diffuser is added, light rays from a wider range of angles reach the viewer: rays that come from smaller incident angles are colored green, and rays that come from larger angles are colored red. Because the viewing angle is small, the green rays are outnumbered by the red rays, so in the mixture of rays arriving at the viewer, more rays come from angles larger than the specular refraction angle. The aggregate result is a blue shift in the observed spectrum, which is visible in the SPDs of Figure 5.6. On the other hand, if the viewing angle is large, the diffuser brings more rays reflected at smaller angles into the viewing direction, and these rays produce a red shift of the observed SPD (see Figure 5.10). Figure 5.11 (b) shows the scenario with a larger viewing angle (θ = 20°): more rays (green) come from angles smaller than the specular refraction angle (blue). In addition, the Fresnel effect dictates that rays reflected at larger angles carry less energy. Therefore, at 20° a viewer observes a red shift in the spectrum compared with the bare panel.
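This mixing argument can also be illustrated numerically. In the sketch below, the angle-dependent pixel spectrum is replaced by the single thin film of equation (5-11) with illustrative parameters, and the diffuser is replaced by a hypothetical bell-shaped scattering lobe centered on the viewing direction; neither stands for the measured BTDF or the real pixel stack. Because the lobe gathers mostly rays from angles larger than the 8° viewing angle, the mixed spectrum peaks at a slightly shorter wavelength than the bare-panel (specular-only) spectrum.

    import numpy as np

    wavelengths = np.linspace(400.0, 700.0, 301)

    def pixel_spd(theta):
        # Stand-in for the angle-dependent pixel spectrum: the single thin film of
        # equation (5-11) with illustrative parameters, not a real IMOD pixel.
        cos_t = np.sqrt(1.0 - (np.sin(theta) / 1.33) ** 2)
        return 4.0 * 0.02 * np.sin(2.0 * np.pi * 250.0 * 1.33 * cos_t / wavelengths) ** 2

    def diffuser_weight(theta_in, theta_view, sigma=np.radians(15.0)):
        # Hypothetical bell-shaped scattering lobe around the viewing direction,
        # with a crude cosine roll-off standing in for the grazing-angle losses.
        return np.exp(-0.5 * ((theta_in - theta_view) / sigma) ** 2) * np.cos(theta_in)

    theta_view = np.radians(8.0)
    thetas = np.linspace(0.0, np.radians(60.0), 121)
    weights = diffuser_weight(thetas, theta_view)

    mixed = sum(w * pixel_spd(t) for w, t in zip(weights, thetas)) / weights.sum()
    bare = pixel_spd(theta_view)

    print("bare-panel peak:     %.0f nm" % wavelengths[np.argmax(bare)])
    print("diffused-panel peak: %.0f nm" % wavelengths[np.argmax(mixed)])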

Figure 5.11: Distribution of Rays at Different Viewing Angles. (a) Small viewing angle (θ = 8°). (b) Larger viewing angle (θ = 20°).


If we define average peak shift from bare panel to diffused panel at a particular viewing angle as

\Delta_{peak} = \frac{1}{3}\sum_{i=r,g,b}\left(\lambda_{peak,i,diffused} - \lambda_{peak,i,bare}\right),   (5-12)

then the average peak shift can be plotted against viewing angle as shown in Figure 5.13. When the viewing angle is at normal (0°), all the rays accumulated by the diffuser come from reflection angles larger than the bare-panel specular angle, so the largest blue shift is observed at 0°. As the viewing angle increases, the blue shift gradually changes to a red shift. At a small viewing angle (8°, as shown in Figure 5.6 (a)), the diffuser causes a blue shift; at a large viewing angle (20°), it causes a red shift (see Figure 5.12). The viewing angle at which the blue shift changes to a red shift depends on the diffuser haze and the pixel design.
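Equation (5-12) amounts to locating the peak wavelength of each primary's SPD and averaging the three differences. A minimal sketch, using toy Gaussian SPDs whose peak positions are taken from the 8° columns of Table 5.2 (the spectra themselves are not the simulated ones):

    import numpy as np

    def average_peak_shift(wl, spd_diffused, spd_bare):
        # Equation (5-12): mean over the r, g, b primaries of the peak-wavelength
        # difference between the diffused-panel SPD and the bare-panel SPD.
        return np.mean([wl[np.argmax(spd_diffused[c])] - wl[np.argmax(spd_bare[c])]
                        for c in ('r', 'g', 'b')])

    wl = np.linspace(400.0, 700.0, 301)
    gauss = lambda peak: np.exp(-0.5 * ((wl - peak) / 15.0) ** 2)       # toy SPD shape
    bare_8deg = {'r': gauss(690), 'g': gauss(528), 'b': gauss(437)}     # 8-degree peaks, Table 5.2
    diffused_8deg = {'r': gauss(683), 'g': gauss(520), 'b': gauss(421)}

    print(average_peak_shift(wl, diffused_8deg, bare_8deg))   # about -10.3 nm: a blue shift at 8 degrees

The same bookkeeping applied to the 20° columns of Table 5.2 gives a positive value, i.e., a red shift at the larger viewing angle.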

Figure 5.12: SPD Shift from a Bare Panel to a Diffused Panel at 20°. Modeled reflectance of the R, G, and B primaries for the bare and diffused panels vs. wavelength (400-700 nm).


Figure 5.13: Average Peak Shift vs. Angle. Average peak shift (nm) plotted against viewing angle from 0° to 25°.

Effectively, adding a diffuser to the display reduces the color shift with viewing angle. With no diffuser, the spectrum peaks of the R, G, and B pixels shift by 56.7 nm on average from 8° to 20°. With a diffuser, the average peak shift is reduced to 18 nm (Table 5.2).

Table 5.2: SPD Peak Shift of Bare Panel and Diffused Panel at 20° and 8°.

                    Bare Panel                             With Diffuser
          Peak 20°   Peak 8°   Peak Shift       Peak 20°   Peak 8°   Peak Shift
            (nm)       (nm)       (nm)            (nm)       (nm)       (nm)
Red          613        690        77              665        683        18
Green        472        528        56              498        520        22
Blue         400        437        37              407        421        14
Average                            56.7                                   18

Typical Lighting Conditions for Display Simulation

Lighting plays a critical role in display performance. Via Monte Carlo ray tracing, the simulation tool is not only able to evaluate the reflection of the display device in detail, but can also add arbitrary light sources with different incident angles, luminance levels, and spectra. However, because the number of possible illumination situations is unlimited, it is desirable to fix a set of illumination scenarios for characterizing display performance.


Common settings for reflective display measurements include:

1) the diffuse reflectance factor at near-normal incidence (8° to 10°) using a large integrating sphere or sampling sphere;
2) the diffuse reflectance factor at normal incidence using a ring-light source;
3) the directed light reflectance factor at some combination of source and detector angles.

While these methods are often used to describe the performance of reflective displays that exhibit substantially Lambertian reflection (e.g., electrophoretic displays and polymer dispersed liquid crystal displays), they are inadequate for describing display performance in a real ambient lighting environment. This limitation is particularly true for displays with significantly non-Lambertian reflectance, such as cholesteric LCDs and MEMS-based interferometric displays [41]. Both diffuse (spatially extended) and directed (point-like) light sources exist in most real ambient lighting environments.

The experimental results of Kubota et al. [57] are especially helpful in defining sensible representations of various ambient environments. In Kubota's study, a digital camera fitted with a fisheye lens is mounted on the user's forehead to measure the incident light falling on a display surface in various indoor and outdoor environments. In the experiments, the users are instructed to orient the portable devices to find the optimal viewing angle. The mean directional luminance for 42 outdoor locations and 70 indoor locations is shown in Figure 5.14. In the outdoor condition (Figure 5.14 (a)), the relative luminance peaks in the incident angle range from 30° to 45°; in the indoor condition (Figure 5.14 (b)), it peaks in the range from 15° to 30°. Based on these observations, we set the typical incident angle of the direct light source to 35° for outdoor conditions and 20° for indoor conditions. Azimuthally, in the indoor condition the relative luminance clearly peaks at 0° and decreases as the azimuth angle increases, which indicates a strong contribution from a direct light source; in the outdoor condition it stays reasonably flat across a large range of angles, which indicates a strong contribution from diffuse light.


Based on these observations, we set the typical directed/diffuse light composition for the outdoor and indoor conditions. Viewing angles are chosen to balance two factors: a) avoiding veiling glare from front surface reflection, and b) taking advantage of the gain of a specular display with a medium diffuser. For example, if the directed light comes from -35°, viewing at +35° gives the highest reflectance; however, the veiling glare of the specular reflection disturbs the visual experience significantly, and a user usually avoids the specular angle in this case. Therefore +20° is chosen.

In summary, the measurement geometry and directed/diffuse light composition are fixed for three representative lighting conditions:

1) Indoor: ambient light is composed of 50% diffuse plus 50% directed light. The directed light source is held at -20° and the viewer at +8° with respect to the display normal.
2) Outdoor, sunny day: ambient light is composed of 20% diffuse plus 80% directed light. The directed light source is held at -35° and the viewer at +20° with respect to the display normal.
3) Outdoor, overcast day: ambient light is 100% diffuse. The viewer is at +8° with respect to the display normal.

These settings can be captured in a small configuration structure, as sketched below.
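A minimal sketch of how these three conditions might be encoded for the simulator follows; the field names are illustrative and do not correspond to the tool's actual configuration format.

    # Representative lighting conditions for reflective display characterization.
    LIGHTING_CONDITIONS = {
        "indoor": {
            "diffuse_fraction": 0.50, "directed_fraction": 0.50,
            "directed_angle_deg": -20.0, "viewer_angle_deg": +8.0,
        },
        "outdoor_sunny": {
            "diffuse_fraction": 0.20, "directed_fraction": 0.80,
            "directed_angle_deg": -35.0, "viewer_angle_deg": +20.0,
        },
        "outdoor_overcast": {
            "diffuse_fraction": 1.00, "directed_fraction": 0.00,
            "directed_angle_deg": None, "viewer_angle_deg": +8.0,
        },
    }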


Figure 5.14: Mean Directional Luminance for 42 Outdoor Locations and 70 Indoor Locations [57]. (a) Outdoor. (b) Indoor.


Impact of Front Surface Reflection

As introduced in the FOS section, the layers in a FOS stack introduce front surface reflection toward the viewer. The total FSR (front surface reflection) is a function of the reflection and absorption properties of the layers. These properties can be stored in a spectral profile and loaded into the simulation tool to evaluate the impact of different levels of FSR. Simulation results are shown in Figure 5.15. In all lighting conditions, lower FSR is desired for better display performance: low FSR leads to a higher contrast ratio and a larger color gamut. Of the three simulated lighting scenarios, FSR has the greatest impact under diffuse lighting. In diffuse lighting, regardless of the viewing angle, the user always sees the front surface reflection from the FOS. Although the white luminance increases with higher FSR, this increase does nothing favorable for display performance; instead, the light reflected from the FOS competes with the light reflected from the display, which lowers the contrast ratio and shrinks the color gamut. In lighting environments where the directed light component accounts for a significant portion, the FSR has a smaller adverse impact, because the user generally rotates the display away from the specular angle, so that the FOS does not reflect the directed light toward the user.

Figure 5.15: Simulated FSR Impact in Different Lighting Conditions.
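The competition between FOS reflection and display reflection described above can be illustrated with a simple veiling-luminance model, in which the front surface adds the same luminance to the white and the black states. This is an illustrative formulation, not the simulator's internal computation; ambient_contrast_ratio and the luminance values below are hypothetical.

    def ambient_contrast_ratio(L_white, L_black, L_fsr):
        # Contrast ratio when the front surface adds the same veiling luminance
        # L_fsr (cd/m^2) to both the white state and the black state.
        return (L_white + L_fsr) / (L_black + L_fsr)

    # Hypothetical display luminances under diffuse lighting: raising the FSR term
    # increases the total "white" luminance but lowers the contrast ratio.
    for L_fsr in (0.0, 50.0, 150.0):
        print(L_fsr, round(ambient_contrast_ratio(1000.0, 50.0, L_fsr), 2))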

Diffuser with Different Haze Values

As shown in Figure 5.16, diffusers with lower haze values scatter light into a smaller viewing cone. The simulation results with the two measured diffusers are consistent with this behavior. Contrast, gamut, and white reflectance all change more drastically with respect to viewing angle in the Haze 65 case. In both cases, the white reflectance peaks at the specular angle of 20°; however, the H65 peak is much higher than the H78 peak, which is consistent with the scattering profiles of these diffusers. Over the typical range of viewing angles, H65 has a higher contrast ratio and a larger color gamut than H78. Only when the detector angle is near or above 50° does the H78 diffuser start to perform better. However, with H65 the white reflectance increases at a steep rate from 0° to 10° (the optimal viewing angle range), which may create discomfort for the user, because the luminance variation can be noticed with a slight rotation of the display.
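Haze is commonly defined (for example, in ASTM D1003) as the fraction of transmitted light scattered by more than 2.5° away from the incident beam, so a lower haze value implies that more of the transmitted light stays close to the specular direction. The sketch below estimates a haze-style number from an in-plane BTDF profile at normal incidence; the profiles, the rotational-symmetry assumption, and the helper haze_from_btdf are all illustrative and do not correspond to the measurement procedure used for the H65 and H78 diffusers.

    import numpy as np

    def haze_from_btdf(theta_deg, btdf):
        # Rough haze-style estimate from an in-plane BTDF profile at normal incidence,
        # assuming the scattering lobe is rotationally symmetric about the surface normal.
        theta = np.radians(np.abs(theta_deg))
        w = btdf * np.cos(theta) * np.sin(theta)      # projected solid-angle weight
        return 100.0 * np.sum(w[np.abs(theta_deg) > 2.5]) / np.sum(w)

    # Two hypothetical profiles: a narrow near-specular spike plus a broad lobe.
    # The profile with the stronger spike keeps more light within 2.5 degrees of the
    # specular direction and therefore yields the lower haze number.
    theta_deg = np.linspace(-50.0, 50.0, 2001)
    spike = np.exp(-0.5 * (theta_deg / 1.0) ** 2)
    lobe = np.exp(-0.5 * (theta_deg / 18.0) ** 2)
    print(haze_from_btdf(theta_deg, 180.0 * spike + lobe))   # stronger spike -> lower haze
    print(haze_from_btdf(theta_deg, 90.0 * spike + lobe))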

Figure 5.16: Scattering Profile of Haze 78 and Haze 65 Diffusers at Normal Angle. BTDF (1/sr, logarithmic scale) of the H78 and H65 diffusers plotted against scattering angle from -50° to +50°.


Figure 5.17: Simulated Diffusers with Different Haze Values in Indoor Lighting.

Auxiliary Lighting (Front Light)

Under dark ambient lighting, supplemental illumination is required to provide light for a reflective display. The pixels of a reflective display are usually opaque, which blocks light coming from the back of the display and eliminates the possibility of using a conventional backlight. Any artificial illumination must therefore be provided on the front side of the display, between the display and the viewer; hence the term front light. A front light is placed under the FOS layers for more efficient use of the light.

Figure 5.18: Front Light of Reflective Display.


In a front light design, light from LEDs is brought in from the sides of a reflective display, scattered by a light guide layer toward the display, and then reflected back to the viewer. The light guide generally scatters light toward the display uniformly in all directions. With this assumption, the front light can be modeled as an additional diffuse light source. The difference between a front light and diffuse ambient light is that the front light layer sits under the FOS: the front light is attenuated by the FOS only once, whereas ambient light is attenuated by the FOS twice. To account for this difference, the front light source strength is scaled up by 1/(1 − R_{Total,λ} − A_{Total,λ}), and it can then be modeled as an additional diffuse light source in the scene.

The spectrum of the front light can also be chosen, and the simulator's capability of assigning a full-spectrum SPD to a light source becomes valuable for front light selection. Figure 5.19 shows a comparison between different front light spectra. When a white LED front light is used, the color gamut is smaller. When RGB LEDs are used, the narrow bandwidth of each individual LED enlarges the color gamut and significantly improves the saturation of red.
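The scaling described above can be written directly. In the sketch below, R_total and A_total stand for the per-wavelength FOS reflection and absorption spectra and led_spd for the front-light SPD; the arrays are illustrative placeholders, not measured data, and front_light_scale is a hypothetical helper rather than part of the simulator's API.

    import numpy as np

    def front_light_scale(R_total, A_total):
        # Per-wavelength factor 1 / (1 - R_Total - A_Total): it cancels one of the two
        # FOS passes applied to an external diffuse source, so the net attenuation
        # matches a real front light that passes through the FOS only once.
        return 1.0 / (1.0 - R_total - A_total)

    wavelengths = np.array([450.0, 550.0, 650.0])    # coarse illustrative grid (nm)
    R_total = np.array([0.05, 0.04, 0.05])           # placeholder FOS reflection spectrum
    A_total = np.array([0.02, 0.02, 0.03])           # placeholder FOS absorption spectrum
    led_spd = np.array([0.8, 1.0, 0.6])              # hypothetical front-light SPD

    equivalent_diffuse_spd = led_spd * front_light_scale(R_total, A_total)
    print(equivalent_diffuse_spd)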


Figure 5.19: Simulated Auxiliary Lighting with Different SPDs. (a)-(c); (d) u'v' chromaticity diagram (d/8 geometry, Haze 78 diffuser) showing the spectrum locus, D65, the RGB LED front light, the white LED front light, and sRGB.

Daylight Readability

Readability depends on the contrast, the luminance, the character height, and the age of the reader in a complex way. The perception of images is even more complex and beyond the scope of this research. Consider only black and white text on a display. As the experimental results in [54] show, an LCD screen that exhibits a large contrast ratio of J_W/J_K = 709 in a dark room achieves an ambient contrast ratio of only D = 1.7 under daylight conditions. The contribution of the diffuse reflectance causes a significant reduction in contrast: if only a sun-level source at 45° were present, the contrast ratio would be 5:1; if only a bright sky were present, the contrast ratio would be 1.5:1. This example shows that simply pointing a sun-level source at a display gives an incomplete picture of sunlight readability.


The diffuse contribution is not negligible. Therefore, a combination of direct and diffuse light sources is essential to simulate daylight readability. The simulator built in this dissertation provides this capability, and the simulation results are in line with those in the literature. Kelley compared the daylight readability of LCD and transflective displays in [54], where sky light is modeled as uniform diffuse illumination at an illuminance level of E_sky = 10^4 lx, and direct sunlight as directed illumination at an illuminance level of E_sun = 10^5 lx. The daylight readability of a prototype IMOD reflective display is simulated with our software using the same illumination settings. The simulated K_W (luminance of white) and K_K (luminance of black) values of this reflective display are 2973 cd/m² and 277 cd/m². Using the definition in [54] of the local average luminance of black text on a white background,

L_{ave} = 0.75K_W + 0.25K_K = 0.75 \times 2973 + 0.25 \times 277 = 2299 \text{ cd/m}^2.   (5-13)

The contrast is defined in [58] as

C = \frac{L_{AVE} - K_K}{L_{AVE}} = \frac{K_W - K_K}{K_W + (K_K/3)} = \frac{2973 - 277}{2973 + (277/3)} = 0.879.   (5-14)
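Equations (5-13) and (5-14) reduce to two lines of arithmetic, which also reproduce the L_ave and C entries of Table 5.3; a minimal sketch with a hypothetical helper:

    def readability_metrics(K_w, K_k):
        # Equations (5-13) and (5-14): local average luminance and contrast of
        # black text on a white background.
        L_ave = 0.75 * K_w + 0.25 * K_k
        C = (K_w - K_k) / (K_w + K_k / 3.0)
        return L_ave, C

    print(readability_metrics(2973, 277))   # (2299.0, ~0.879), as in the text
    print(readability_metrics(4030, 359))   # (3112.25, ~0.88), last column of Table 5.3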

With the calculated L_ave and C, relative visual performance (RVP) values for various ages can be looked up in the charts in Figure 5.20. The RVP value P lies between zero and one (0 ≤ P ≤ 1), assuming the critical detail size is set to 1.5' of arc, which represents the size of the stroke widths, diacritics, and punctuation of the small font sizes common on many electronic displays. For a 400 mm viewing distance, a 1.5' mark would be 0.17 mm high, and a typical character height might be 1.7 to 2.3 mm (15' to 20'). P = 1, or 100%, means that reading is 100% accurate.
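For reference, the 0.17 mm figure follows from the small-angle relation between angular size and linear size at the stated 400 mm viewing distance (a consistency check, not an additional result):

h \approx d\,\theta = 400\ \text{mm} \times \frac{1.5}{60} \times \frac{\pi}{180} \approx 0.17\ \text{mm}, \qquad 400\ \text{mm} \times 15' \approx 1.7\ \text{mm}, \quad 400\ \text{mm} \times 20' \approx 2.3\ \text{mm}.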


Figure 5.20: CIE RVP for 20-, 50-, and 75-year-old Adults as a Function of Contrast and Average Luminance [58].

Simulated results for a prototype IMOD display are shown in the last two columns of Table 5.3. For comparison, the third column shows the results with the same lighting (45°) and viewing (0°) angles as the measurements in [54]. However, these angles are not the optimal usage configuration for a mobile display: an e-reader user typically orients the reflective display to best read the information on the screen. If the user adjusts the display to receive direct light from 35° and reads it at 20°, the daylight readability is even better; only 1/10 of the original sunlight level is needed to achieve similar readability. Setting E_sky = 10^3 lx and E_sun = 10^4 lx, the simulated K_W (luminance of white) is 4030 cd/m² and K_K (luminance of black) is 359 cd/m². The readability for a standard 20-, 50-, and 70-year-old reader is 100%, 89%, and 60%, respectively. If the sunlight level had been set to the same value as in Kelley's measurements, the readability would have been even better; however, the values would then have fallen outside the range of the charts, which is why E_sky = 10^3 lx and E_sun = 10^4 lx are used in the optimal viewing angle simulation. The simulated results are consistent with Kelley's finding that reflective displays outperform emissive displays under bright sunlight.


Table 5.3: Readability Results of Various Displays

                        LCD           Transflective    Reflective            Reflective
                                                       (45° direct,          (35° direct, 20° view,
                                                        0° view)              1/10 illumination)
Kw (cd/m²)              1291              6331            2973                   4030
Kk (cd/m²)               749              1487             277                    359
D                        1.7               4.3           10.08                  11.22
Lave (cd/m²)            1156              5120            2299                3112.25
C                       0.35              0.71            0.88                   0.88
P (20y, 50y, 70y)   82%, 67%, 30%    100%, 87%, 53%   100%, 87%, 59%        100%, 89%, 60%

Summary

In this chapter, simulation results for display performance, including reflectance, color gamut, contrast ratio, and daylight readability, are presented. The impacts of different lighting conditions, diffusers, and FOS designs are studied. Simulation results are compared with measurement data for validation, and physics-based analyses of the simulations are also performed. The simulation results are shown to be correct both qualitatively and quantitatively. The simulation tool provides the desired accuracy and predictability for display design over a wide range of lighting conditions. The simulations show that a reflective display has superior daylight readability compared with an emissive display. The performance of a reflective display can be improved by reducing the angular dependence of the spectrum, reducing front surface reflection, increasing reflectivity, providing auxiliary lighting, and so on. Considering the various constraints of materials, structures, and systems, display designers can leverage the predictability of this simulation tool to find the optimal solution for a set of desired usage conditions, such as indoor, outdoor-overcast, and outdoor-sunny days.


CHAPTER VI

CONCLUSIONS AND FUTURE WORK

Summary of Contributions

In this dissertation, a simulation tool is built for accurate simulation of the optical performance of reflective displays in a wide range of environments. A reflective display is modeled as a specific type of layered material, with a FOS layer at the top, a diffusive layer (diffuser) beneath it, a transparent layer (glass) in the middle, and a wavelength-dependent reflective layer (pixel array) at the bottom, as shown in Figure 3.1. This specific material has its own set of light transport events that no previous model has completely described. These events include reflection and refraction, both diffusive and specular, at the air-diffuser interface, the diffuser-glass interface, and the glass-pixel array interface. The scattering of light in the diffuser and the wavelength-dependent reflection at the pixel array are particularly interesting. Accurate simulation of all these events is crucial for modeling the correct visual performance of a reflective display. Although analytical and empirical appearance models exist for general applications, specific models are necessary to account for the subtleties of certain materials; examples are finished wood [59], contaminated glass [24], rough microfacets [60], dual microfacets [25], car paint [34], and human skin [61-63]. Although individual light transport events related to a reflective display can be found in previous models, no existing model systematically integrates all reflection and scattering events of a reflective display. The light interactions in the multiple layers of a reflective display make it a unique material. The study of material appearance can be roughly categorized into measurement, representation, modeling, and rendering; this dissertation focuses on measurement and modeling. The contributions of this dissertation can be summarized as follows:

1) A reflective display simulation framework is proposed and implemented (Chapter 5, Section 2).
2) The display is modeled as a layered material. Reflectance models of the FOS, diffuser, and pixel are developed to address both accuracy and speed (Chapter 3).


3) A rapid BRDF measurement method is developed. Different measurement tools are evaluated and compared. A two-step measurement is taken to obtain the angular and spectral optical response, and a mathematical method is developed to combine the two measurements for high angular and spectral resolution of the BRDF (Chapter 3, Section 4).
4) Monte Carlo ray tracing is applied to compute light transport. Different sampling strategies are evaluated; best candidate sampling and importance sampling are chosen for high efficiency (Chapter 4).
5) Simulation results are validated by both measurement and theoretical analysis (Chapter 5, Section 5).
6) A set of representative lighting conditions is proposed to characterize reflective display performance (Chapter 5, Section 6).
7) The capability of simulating color gamut, contrast ratio, and white reflectance with respect to diffuser haze, FSR, and lighting geometry and spectrum is demonstrated (Chapter 5).

The accomplishment of these tasks provides a solution for predictive modeling of reflective displays.

Future Work

Spatial dithering [64], color mapping [65], and halftoning [66] are commonly used image processing techniques for image quality enhancement in printing and displays. These methods are generally based on global or neighborhood information of an image. Simulating the effects of these spatial-domain techniques requires image rendering over a larger area of a display. Merging such image processing with a complex display design for a complete simulation is an interesting and challenging task.

New techniques for importance sampling the products of complex functions using wavelets have been developed. Such methods account for both the BRDF and the lighting when generating ray samples. In [67], Clarberg et al. present a hierarchical sample warping algorithm that generates high-quality point distributions which match the wavelet representation exactly. This sampling technique was applied to rendering objects with measured BRDFs illuminated by complex distant lighting, and an efficiency improvement of more than an order of magnitude was shown. These methods can be explored to improve the sampling efficiency of reflective display simulation.


Physically based rendering with ray tracing enables accurate simulation of light fields and final photographs from hypothetical optical designs. In [68], Ren Ng explores the performance of a prototype light field camera by using PBRT [6] to compute the irradiance distribution that would appear on the photosensor. High fidelity was achieved in his experiments, which illustrates the capability of synthetic MTF (modulation transfer function) analysis with ray tracing. In reflective display design, a similar goal is to evaluate the system blur introduced by the optical stacks; ray tracing over an area of display pixels would be able to simulate the blur characteristics.

For more complicated lighting conditions, light maps [69] can be used. Environmental light maps of indoor and outdoor conditions can be found in databases from USC [70] and MIT [71]. These light maps are in HDR (high dynamic range) format and are created by combining omnidirectional pictures at multiple exposures. If higher spectral resolution is desired, a lighting environment can be carefully designed in which the SPDs of the luminaires are known from precise measurements. By combining the spatial resolution of light maps and the spectral resolution of SPD data, complex lighting with high spectral accuracy can be achieved in lighting simulation.

Auxiliary lighting plays an important role when a reflective display is used in dark environments. Currently the simulation model assumes a perfectly diffuse front light. It is possible to design the angular distribution of a front light to use the illumination power more efficiently. Sophisticated front light designs can be developed in optical design engines such as ASAP™ [72], and the resulting front light angular distribution can be imported into the simulation model for more precise modeling.

The path tracing algorithm is suitable for light transport simulation at the pixel level. However, rendering a complete picture with a large number of pixels on a display requires a significant amount of computation, and path tracing becomes too expensive for such purposes. Although pixel-level simulation addresses the primary design concerns for a reflective display, having a complete rendering result can be helpful for evaluating the color processing of various image contents, spatial artifact reduction, and full image visualization. One possibility is to run the light transport simulation with path tracing on the designed color pixels and then synthesize the entire picture, assuming that the same SPD is perceived from all pixels of the same primary color.


Other approaches include employing fast rendering methods such as PRT (precomputed radiance transfer) [73], in which the illumination of a point is precomputed as a linear combination of incident irradiance, encoded either in spherical harmonics [74] or in non-linear wavelets [75].


REFERENCES

[1] D. Gutierrez, H. W. Jensen, S. Narasimhan, and W. Jarosz, "Scattering course notes," proceedings of SIGGRAPH Asia, 2008.
[2] M. Born and E. Wolf, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, sixth (corrected) ed. New York: Pergamon Press, 1986.
[3] Qualcomm MEMS Technologies, "How Mirasol Displays Work: Micro-electro-mechanical Systems (MEMS) Drive IMOD Reflective Technology," 2010.
[4] J. T. Kajiya, "The rendering equation," proceedings of SIGGRAPH, 1986. pp. 143-150.
[5] R. Green, "Spherical harmonic lighting: The gritty details," proceedings of Game Developers Conference, Jan 2003. pp. 1-47.
[6] M. Pharr and G. Humphreys, Physically Based Rendering: From Theory to Implementation, Second ed: Morgan Kaufmann, 2010.
[7] F. A. Jenkins and H. E. White, Fundamentals of Optics, 4th ed: Mcgraw-Hill College, 1976.
[8] M. Ashikhmin, S. Premoze, and P. Shirley, "A microfacet based BRDF generator," proceedings of SIGGRAPH, 2000. pp. 65-74.
[9] G. M. Morris, T. R. M. Sales, S. Chakmakjian, and D. J. Schertler, "Engineered diffusers™ for display and illumination systems: Design, fabrication, and applications," proceedings of Yountville Conference, Yountville, California, 2006.
[10] F. E. Nicodemus, J. C. Richmond, J. J. Hsia, I. W. Ginsberg, and T. Limperis, "Geometrical Considerations and Nomenclature of Reflectance," National Bureau of Standards, 1977.
[11] S. Chandrasekhar, Radiative Transfer: Oxford University Press, 1960.
[12] L. G. Henyey and J. L. Greenstein, "Diffuse radiation in the galaxy," Astrophysical Journal, vol. 93, pp. 70-83, 1941.
[13] M. Pharr and P. Hanrahan, "Monte Carlo evaluation of non-linear scattering equations for subsurface reflection," proceedings of Computer Graphics: Annual Conference Series, 2000.


[14] J. Stam, "Multiple scattering as a diffusion process," proceedings of Eurographics Rendering Workshop, 1995.
[15] A. Ishimaru, Wave Propagation and Scattering in Random Media: Oxford University Press, 1978.
[16] H. W. Jensen, S. R. Marschner, M. Levoy, and P. Hanrahan, "A practical model for subsurface light transport," proceedings of SIGGRAPH, 2001. pp. 511-518.
[17] H. W. Jensen and J. Buhler, "A rapid hierarchical rendering technique for translucent materials," proceedings of SIGGRAPH, 2002. pp. 576-581.
[18] A. Arbree, B. Walter, and K. Bala, "Single-pass scalable subsurface rendering with lightcuts," proceedings of Computer Graphics Forum, 2008. pp. 507-516.
[19] C. Donner and H. W. Jensen, "Light diffusion in multi-layered translucent materials," ACM Transactions on Graphics, vol. 24, pp. 1032-1039, 2005.
[20] K. Li, F. Pellacini, and K. Torrance, "A hybrid Monte Carlo method for accurate and efficient subsurface scattering," proceedings of Rendering Techniques: Eurographics Rendering Symposium, 2005. pp. 283-290.
[21] T. Haber, T. Mertens, P. Bekaert, and F. V. Reeth, "A computational approach to simulate subsurface light diffusion in arbitrarily shaped objects," proceedings of Graphics Interface, 2005. pp. 79-86.
[22] X. Tong, J. Wang, S. Lin, B. Guo, and H. Y. Shum, "Modeling and rendering of quasi-homogeneous materials," ACM Transactions on Graphics, vol. 24, pp. 1054-1061, 2005.
[23] J. Wang, S. Zhao, X. Tong, S. Lin, Z. Lin, Y. Dong, B. Guo, and H. Y. Shum, "Modeling and rendering of heterogeneous translucent materials using the diffusion equation," ACM Transactions on Graphics, vol. 27, pp. 9:1-9:18, 2008.
[24] J. Gu, R. Ramamoorthi, P. Belhumeur, and S. Nayar, "Dirty glass: Rendering contamination on transparent surfaces," proceedings of Eurographics Symposium on Rendering, 2007. pp. 159-170.
[25] Q. Dai, J. Wang, Y. Liu, J. Snyder, E. Wu, and B. Guo, "The dual-microfacet model for capturing thin transparent slabs," proceedings of Computer Graphics Forum (Pacific Graphics), 2009. pp. 1917-1925.


[26] J. F. Blinn, "Models of light reflection for computer synthesized pictures," proceedings of SIGGRAPH, 1977. pp. 192-198.
[27] R. L. Cook and K. E. Torrance, "A reflectance model for computer graphics," ACM Transactions on Graphics, vol. 1, pp. 7-24, 1982.
[28] G. J. Ward, "Measuring and modeling anisotropic reflection," proceedings of SIGGRAPH, 1992. pp. 265-272.
[29] E. P. F. Lafortune, S. C. Foo, K. E. Torrance, and D. P. Greenberg, "Non-linear approximation of reflectance functions," proceedings of SIGGRAPH, 1997. pp. 117-126.
[30] P. Hanrahan and W. Krueger, "Reflection from layered surfaces due to subsurface scattering," proceedings of SIGGRAPH, 1993. pp. 83-90.
[31] J. Stam, "An illumination model for a skin layer bounded by rough surfaces," proceedings of Rendering Techniques 2001: 12th Eurographics Workshop on Rendering, 2001. pp. 39-52.
[32] J. J. Koenderink and S. C. Pont, "The secret of velvety skin," Machine Vision and Applications, vol. 14, pp. 260-268, 2003.
[33] S. Ershov, K. Kolchin, and K. Myszkowski, "Rendering pearlescent appearance based on paint-composition modeling," proceedings of Computer Graphics Forum (Eurographics), 2001. pp. 227-238.
[34] M. Rump, G. Muller, R. Sarlette, D. Koch, and R. Klein, "Photo-realistic rendering of metallic car paint from image-based measurements," proceedings of Computer Graphics Forum, 2008. pp. 527-536.
[35] A. Ozturk, M. Kurt, A. Bilgili, and C. Gungor, "Linear approximation of bidirectional reflectance distribution functions," Computers and Graphics, vol. 32, pp. 149-158, 2008.
[36] C. Donner, J. Lawrence, R. Ramamoorthi, T. Hachisuka, H. W. Jensen, and S. Nayar, "An empirical BSSRDF model," proceedings of SIGGRAPH, 2009. pp. 30:1-30:10.
[37] W. Matusik, H. Pfister, M. Brand, and L. McMillan, "A data-driven reflectance model," proceedings of SIGGRAPH, 2003. pp. 759-769.


[38] J. Lawrence, A. Ben-Artzi, C. DeCoro, W. Matusik, H. Pfister, R. Ramamoorthi, and S. Rusinkiewicz, "Inverse shade trees for non-parametric material representation and editing," ACM Transactions on Graphics, vol. 25, 2006.
[39] W. Matusik, M. Zwicker, and F. Durand, "Texture design using a simplicial complex of morphable textures," ACM Transactions on Graphics, vol. 24, pp. 787-794, 2005.
[40] S. G. Narasimhan, M. Gupta, C. Donner, R. Ramamoorthi, S. Nayar, and H. W. Jensen, "Acquiring scattering properties of participating media by dilution," proceedings of SIGGRAPH, 2006. pp. 1003-1012.
[41] T. G. Fiske, "MEMS-based reflective display measurements in ambient use conditions," SID Symposium Digest of Technical Papers, vol. 42, pp. 954-956, June 2011.
[42] M. E. Becker, "Display reflectance: Basics, measurement, and rating," Journal of the Society for Information Display, vol. 14, pp. 1003-1017, 2006.
[43] H. Li, S. C. Foo, K. E. Torrance, and S. H. Westin, "Automated three-axis gonioreflectometer for computer graphics applications," Optical Engineering, vol. 45, pp. 043605, 2006.
[44] R. Rykowski and J. Lee, "Novel technology for view angle performance measurement," IMID/IDMC/Asia Display Digest, October 2008.
[45] T. Weyrich, J. Lawrence, H. P. A. Lensch, S. Rusinkiewicz, and T. Zickler, "Principles of appearance acquisition and representation," Foundations and Trends® in Computer Graphics and Vision, vol. 4, pp. 75-191, October 2009.
[46] R. Rykowski, "BSDF Data Interchange File Format Specification." http://www.zemax.com/kb/articles/229/2/BSDF-Data-Interchange-File-Format-Specification/Page1.html: ZEMAX, May 2008.
[47] J. C. Stover, Optical Scattering: Measurement and Analysis, Second ed. Bellingham WA: SPIE, 1995.
[48] H. W. Jensen, J. Arvo, P. Dutré, A. Keller, A. Owen, M. Pharr, and P. Shirley, "Monte Carlo ray tracing," ACM SIGGRAPH Course Notes, 2003.
[49] E. Veach, Robust Monte Carlo Methods for Light Transport Simulation. Ph.D. thesis, Stanford University, 1997.


[50] T. Kollig and A. Keller, "Efficient multidimensional sampling," Computer Graphics Forum (Proceedings of Eurographics), vol. 21, pp. 557-563, 2002.
[51] B. Walter, "Notes on the Ward BRDF," Technical Report PCG-05-06, Cornell Program of Computer Graphics, April 2005.
[52] D. Geisler-Moroder and A. Dür, "A new Ward BRDF model with bounded albedo," Computer Graphics Forum, vol. 29, pp. 1391-1398, 2010.
[53] G. J. Ward and P. Heckbert, "Irradiance gradients," proceedings of 3rd Eurographics Workshop on Rendering, 1992. pp. 85-98.
[54] E. F. Kelley, M. Lindfors, and J. Penczek, "Display daylight ambient contrast measurement methods and daylight readability," Journal of the Society for Information Display, vol. 14, pp. 1019-1030, November 2006.
[55] C. A. Poynton, Digital Video and HDTV: Algorithms and Interfaces: Morgan Kaufmann, 2003.
[56] A. Glassner, "Soap bubbles: Part 2," IEEE Computer Graphics and Applications, vol. 20, pp. 99-109, 2000.
[57] S. Kubota, S. Okada, E. Sakai, and T. Fujioka, "Measurement of light incident on mobile displays in various environments," Journal of the Society for Information Display, vol. 14, pp. 999-1002, November 2006.
[58] CIE 145:2002, The Correlation of Models for Vision and Visual Performance, 2002.
[59] S. R. Marschner, S. H. Westin, A. Arbree, and J. T. Moon, "Measuring and modeling the appearance of finished wood," ACM Transactions on Graphics, 2005.
[60] B. Walter, S. R. Marschner, H. Li, and K. E. Torrance, "Microfacet models for refraction through rough surfaces," proceedings of Eurographics Symposium on Rendering, 2007. pp. 195-206.
[61] C. Donner and H. W. Jensen, "A spectral BSSRDF for shading human skin," proceedings of Rendering Techniques: 17th Eurographics Workshop on Rendering, 2006. pp. 409-417.
[62] A. Ghosh, T. Hawkins, P. Peers, S. Frederiksen, and P. Debevec, "Practical modeling and acquisition of layered facial reflectance," proceedings of SIGGRAPH Asia, 2008. pp. 139:1-139:10.
[63] C. Donner, T. Weyrich, E. d'Eon, R. Ramamoorthi, and S. Rusinkiewicz, "A layered, heterogeneous reflectance model for acquiring and rendering human skin," proceedings of SIGGRAPH Asia, 2008. pp. 140:1-140:12.


[63] C. Donner, T. Weyrich, E. d'Eon, R. Ramamoorthi, and S. Rusinkiewicz, "A layered, heterogeneous reflectance model for acquiring and rendering human skin," proceedings of SIGGRAPH Asia, 2008. pp. 140:1-140:12. [64] R. W. Floyd and L. Steinberg, "An adaptive algorithm for spatial grey scale," Proceedings of the Society of Information Display vol. 17, pp. 75-77, 1976. [65] M. B. Chorin, "Improving LCD TV color using Multi-Primary technology," Paper PC2-2, FPD International Forum, 2005. [66] C. A. Bouman, "Digital Halftoning." https://engineering.purdue.edu/~bouman/ece637/notes/pdf/Halftoning.pdf, 2011. [67] P. Clarberg, W. Jarosz, T. A. Moller, and H. W. Jensen, "Wavelet importance sampling: efficiently evaluating products of complex functions," in SIGGRAPH, 2005. [68] R. Ng, Digital Light Field Photography. PhD thesis Stanford University, 2006. [69] P. E. Debevec, "Rendering synthetic objects into real scenes: Bridging traditional and image-based graphics with and high dynamic range photography," proceedings of SIGGRAPH, July 1998. [70] P. E. Debevec, "High-Resolution Light Probe Image Gallery," http://gl.ict.usc.edu/Data/HighResProbes/. [71] S. Teller, "MIT City Scanning Project: Fully Automated Model Acquisition in Urban Areas," http://city.csail.mit.edu/. [72] Breault Research Organization Inc., "Advanced Systems Analysis Program (ASAP) [computer software]." Tucson, Arizona, 2010. [73] J. Kautz, P.-P. Sloan, and J. Lehtinen, "Precomputed radiance transfer: Theory and practice," SIGGRAPH Courses, 2005. [74] P.-P. Sloan, J. Kautz, and J. Snyder, "Precomputed radiance transfer for real-time rendering in dynamic, low-frequency lighting environments," proceedings of SIGGRAPH, 2002. pp. 527-536. [75] R. Ng, R. Ramamoorthi, and P. Hanrahan, "All-frequency shadows using non- linear wavelet lighting approximation," ACM Transactions on Graphics, vol. 22, pp. 376–381, 2003.
