A Terrain Rendering Primer
Andrew Liounis and John Christian
LSIC Workshop on Lunar Mapping for Precision Landing
March 4, 2021

Outline

• Purpose
• Rendering Situations
• Rendering Types
• Optics Properties
• Geometric Camera Models
• Illumination Conditions
• General Considerations
• Conclusion


We want to equip you to make informed decisions about rendering software.

• Give a basic understanding of how different rendering techniques work.
• Clarify how to determine what you need from a renderer.
• Point out potential issues when choosing/using a renderer that are commonly overlooked.

• NOT to recommend one rendering technique (or software) over another.
  • Different techniques excel in different areas.

We will use these terms in the presentation.

• Representative
  • Able to generate images that look realistic, resembling what is expected to be "seen"
• Predictive
  • Able to generate images that show what is expected to be "seen"

(Example images: Representative | Predictive | Real)


There are numerous situations in which we need to render synthetic images.

• Training human operators
  • Speed
  • Responsive
  • Representative
  • *Virtual Reality
  (Image from https://spacecenter.org/how-nasa-uses-virtual-reality-to-train-astronauts/)
• Generating simulation data for training/testing algorithms
  • Representative
  • Truth scene
  • *Speed
  • *Responsiveness
• Generating predictive maps for navigation
  • Predictive
  • Scene control
  • Camera modelling
  • *Speed
  (Mars 2020 TRN map (sample); OSIRIS-REx feature map. Mars image from https://astrogeology.usgs.gov/search/map/Mars/Mars2020/JEZ_ctx_B_soc_008_orthoMosaic_6m_Eqc_latTs0_lon0)


There are 3 primary categories for rendering synthetic images.

• Rasterization
  • Very fast, especially with GPU acceleration
  • Produces decently representative images, especially from a human perspective
  • Does not handle shadows well (though there are techniques to handle them better)
  • Can sometimes create artifacts with surfaces that intersect each other
• Single Bounce Ray Tracing (ray casting) (a minimal sketch follows this list)
  • Good at shadows (especially with collimated light)
  • Can produce Predictive images, especially for atmosphere-less bodies
  • Can be computationally expensive, leading to slow rendering speeds
  • Occasionally creates rendering artifacts without the proper settings
• Multi Bounce Ray Tracing (path tracing)
  • Good at shadows (including "soft" shadows)
  • Handles reflections
  • Can handle bodies with atmospheres
  • Very computationally expensive
  • Can create rendering artifacts without proper settings
  (Image rendered using GSFC Freespace)
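To make the single-bounce idea concrete, here is a minimal ray-casting sketch (Python/NumPy): one ray per pixel from a pinhole camera, intersected with a spherical body and shaded with a Lambertian model under collimated sunlight. This is an illustration only; the function name and all parameter values are invented, and a real terrain renderer would intersect a shape model or DEM rather than a sphere.

```python
# Minimal single-bounce ray caster (illustrative sketch, not from the presentation).
# Assumes a spherical body, a pinhole camera, collimated sunlight, and a
# Lambertian surface; all names and values here are placeholders.
import numpy as np

def render_sphere(n_pix=256, fov_deg=10.0,
                  cam_pos=np.array([0.0, 0.0, 2000.0]),   # km, camera position
                  sun_dir=np.array([1.0, 0.0, 1.0]),       # direction TO the sun
                  radius=1737.4,                           # km, lunar radius
                  albedo=0.12):
    sun_dir = sun_dir / np.linalg.norm(sun_dir)
    img = np.zeros((n_pix, n_pix))
    # focal length in pixels for the given field of view
    f = (n_pix / 2.0) / np.tan(np.radians(fov_deg) / 2.0)
    for row in range(n_pix):
        for col in range(n_pix):
            # ray through the pixel center; camera boresight along -z
            d = np.array([col - n_pix / 2.0, row - n_pix / 2.0, -f])
            d /= np.linalg.norm(d)
            # ray/sphere intersection: |cam_pos + t d|^2 = radius^2
            b = np.dot(cam_pos, d)
            disc = b * b - (np.dot(cam_pos, cam_pos) - radius ** 2)
            if disc < 0.0:
                continue                    # ray misses the body
            t = -b - np.sqrt(disc)          # nearest intersection
            if t <= 0.0:
                continue
            p = cam_pos + t * d             # surface point
            n = p / np.linalg.norm(p)       # outward surface normal
            cos_i = np.dot(n, sun_dir)      # cosine of the incidence angle
            img[row, col] = max(0.0, albedo / np.pi * cos_i)  # Lambert reflectance
    return img

image = render_sphere()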


In addition to geometry, we need to consider the optical system of the camera we are modelling.

• In a perfect system, all light coming from a single direction would be focused onto an infinitesimal point on a detector.
• Cameras are not perfect systems, and light from a single direction is spread out over an area on the detector, typically multiple pixels, leading to an effective loss of resolution.
• This is represented by the Modulation Transfer Function (MTF) in the frequency domain and the Point Spread Function (PSF) in the spatial domain.

(Image taken from https://www.edmundoptics.com/knowledge-center/application-notes/optics/introduction-to-modulation-transfer-function/)
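A common way to account for this when rendering is to blur the ideal (pinhole-sharp) image with the camera's PSF. The sketch below assumes a simple symmetric Gaussian PSF purely for illustration; a real camera's PSF would be measured or derived from its MTF.

```python
# Illustrative sketch of applying a point spread function (PSF) to an ideal
# rendered image. The Gaussian shape and sigma value are assumptions.
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(sigma_pix, half_width=8):
    """Discrete Gaussian PSF kernel, normalized to unit volume."""
    ax = np.arange(-half_width, half_width + 1)
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma_pix ** 2))
    return psf / psf.sum()

def apply_psf(ideal_image, sigma_pix=1.2):
    """Blur an ideal rendered image by the camera PSF."""
    return fftconvolve(ideal_image, gaussian_psf(sigma_pix), mode="same")

blurred = apply_psf(np.random.rand(256, 256))  # e.g., any ideal rendered image
```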

In addition to geometry, we need to consider the optical system of the camera we are modelling.

• We have 2 options for intensity calculations:
  • Use a full model of the electronics of the camera
    • Predicts the actual "DN" values the detector will produce (intensity of each pixel)
  • Neglect the electronics of the camera
    • Gives the relative intensity of each pixel (typically stretched to make use of the full dynamic range of the camera)
• The first option is generally only necessary for determining required exposure times/detector gain settings.
• The second option is generally satisfactory for most other applications.
  • Most algorithms normalize intensity gradients anyway.
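The second option amounts to a simple linear stretch of the rendered relative intensities onto the detector's digital range. A rough sketch (the 8-bit depth is an assumption for the example):

```python
# Sketch of the "relative intensity" option: stretch rendered radiances to fill
# the detector's dynamic range instead of modeling the camera electronics.
import numpy as np

def stretch_to_dn(rel_intensity, bit_depth=8):
    """Linearly map relative intensities onto [0, 2**bit_depth - 1]."""
    lo, hi = rel_intensity.min(), rel_intensity.max()
    scaled = (rel_intensity - lo) / max(hi - lo, 1e-12)
    return np.round(scaled * (2 ** bit_depth - 1)).astype(np.uint16)

dn_image = stretch_to_dn(np.random.rand(256, 256))  # e.g., any rendered image
```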

In addition to geometry, we need to consider the optical system of the camera we are modelling.

• When ray tracing (single or multi-bounce), it is beneficial to subsample each pixel.
  • Pixels integrate all light within their individual FOV.
  • With ray tracing, we are essentially doing a Riemann sum.
  • More samples -> more accurate approximation.
• 2 cases where this is particularly necessary:
  • Model resolution >> camera GSD
  • Multi-bounce stochastic ray tracing
• Not using subsampling can lead to speckling artifacts, especially for rough surfaces.

(Image: "By Qutorial - Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=49995993")
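A simple way to subsample is to cast a regular grid of rays through each pixel footprint and average them, which is exactly the Riemann-sum approximation described above. In this sketch, `trace_ray` is a hypothetical function returning the radiance along a single ray:

```python
# Sketch of pixel subsampling: approximate each pixel's value as the mean of
# several rays cast through the pixel footprint. 'trace_ray(row, col)' is a
# hypothetical single-ray radiance function supplied by the renderer.
def render_pixel(row, col, trace_ray, n_sub=4):
    total = 0.0
    for i in range(n_sub):
        for j in range(n_sub):
            # offsets place the sub-rays on a regular grid inside the pixel
            dr = (i + 0.5) / n_sub - 0.5
            dc = (j + 0.5) / n_sub - 0.5
            total += trace_ray(row + dr, col + dc)
    return total / n_sub ** 2   # average of the sub-samples

# usage (hypothetical): value = render_pixel(120, 200, trace_ray=my_ray_tracer)
```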


We can model most camera systems using the simple pinhole camera model.
(Camera schematics modeled after: Christian, J.A., "A Tutorial on Horizon-Based Optical Navigation and Attitude Determination with Space Imaging Systems," IEEE Access, Vol. 9, 2021, pp. 19819-19853.)

• As an example, consider a simple Gauss lens.
• Collimated light arriving along the boresight direction is focused to a point on the focal plane.
• Collimated light arriving from another direction is focused to a different point on the focal plane.
• This allows us to identify the locations of the entrance pupil, exit pupil (not shown), and rear principal plane.
• To get to the pinhole camera model, we replace the full system with a thin lens.
  • The thin lens has the diameter of the entrance pupil.
  • It appears to be at the center of the entrance pupil to an observer outside the camera, but at the rear principal plane to an observer inside the camera.
  • The center ray through the thin lens is undistorted and is the so-called pinhole.
• The sophisticated analyst will realize that it's often easier to work in the image plane, completely abstracting away the inside of the camera. For details, see the OPNAV Tutorial.

(Camera schematic labels: Lenses, Aperture Stop, Entrance Pupil, Focal Plane, Rear Principal Plane, Effective Focal Length, Image Plane)
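Once the camera is reduced to a pinhole, projecting a point expressed in the camera frame onto the detector is straightforward. A minimal sketch (the focal length and principal point values are placeholders, not from any particular camera):

```python
# Sketch of the ideal pinhole projection: camera frame has +z along the
# boresight; f_pix is the effective focal length in pixels, (cx, cy) the
# principal point. All numeric values are invented for the example.
import numpy as np

def pinhole_project(p_cam, f_pix=2500.0, cx=512.0, cy=512.0):
    """Project a 3D point in the camera frame to pixel coordinates."""
    x, y, z = p_cam
    if z <= 0.0:
        raise ValueError("point is behind the camera")
    u = f_pix * x / z + cx   # column
    v = f_pix * y / z + cy   # row
    return u, v

u, v = pinhole_project(np.array([10.0, -5.0, 1000.0]))
```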

All real cameras have systematic deviations from the pinhole geometry due to optical aberrations and manufacturing imperfections.

The most common way to capture these effects is with the Brown distortion model (this is what is used in MATLAB, OpenCV, and many other popular contemporary software packages).

(Image Credit: Christian, J.A., Benhacine, L., Hikes, J., and D'Souza, C., "Geometric Calibration of the Orion Optical Navigation Camera using Star Field Images," The Journal of the Astronautical Sciences, Vol. 63, No. 4, 2016, pp. 335-353.)

All real cameras have systematic deviations from the pinhole geometry due to optical aberrations and manufacturing imperfections.

The Brown model is not the only one, and a number of variations exist (e.g., the model used by JPL for OPNAV).

We need not use a physics-inspired model. For example, we could also model the distortion map using a set of basis functions (e.g., Legendre, Zernike) and find the coefficients to transform between distorted and undistorted images.

When rendering, there are (at least) two ways to model this effect:
1. Distort the rays prior to ray casting (best performance, but not always easy to do)
2. Render an undistorted image and then distort the result (not as good, but easy to do)
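For reference, here is a sketch of the Brown model in the common OpenCV/MATLAB form (radial terms k1, k2, k3 and tangential terms p1, p2), mapping undistorted normalized image coordinates to distorted ones. The coefficient values are invented for the example, and the exact parameterization in the cited papers may differ. To distort rays when rendering (option 1 above), this forward mapping is typically inverted, e.g. iteratively.

```python
# Sketch of the Brown distortion model, OpenCV/MATLAB convention.
# Coefficient values are placeholders for illustration only.
def brown_distort(x, y, k1=-1e-2, k2=2e-4, k3=0.0, p1=1e-4, p2=-5e-5):
    """Map undistorted normalized coordinates (x, y) to distorted ones."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d

xd, yd = brown_distort(0.05, -0.03)
```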


The scene illumination conditions can have a significant impact on terrain appearance.

For an airless body (like the Moon), we can usually approximate the incoming sunlight as a collimated light source. The same is true for Earthshine.

For a body with an atmosphere (like Earth or Mars), the atmosphere scatters sunlight. Accurately modeling scattering in a heterogeneous atmosphere can be complicated, and the appearance of terrain can change considerably with local weather conditions.

In addition to the scene illumination conditions, terrain appearance depends on how the surface reflects light.

• For rendering the image, we are interested in finding the amount of incident light at each point on the surface that is reflected in the direction of the camera. This may be computed using the bidirectional reflectance distribution function (BRDF).
• Simple BRDFs include: Lambertian, Lommel-Seeliger, McEwen/Lunar-Lambert.
• To account for details such as grain size, porosity, and other effects, we need a more complicated model, such as Hapke (physics based) or Phong (empirical).
• In stochastic ray tracing, we require a model for the probability of a particular bounce direction.

(Image Credit: Christian, J.A., "Accurate Planetary Limb Localization for Image-Based Spacecraft Navigation," Journal of Spacecraft and Rockets, Vol. 54, No. 3, 2017, pp. 708-730.)
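As a concrete illustration, the two simplest models above can be written directly as functions of the incidence and emission angles, using the standard forms r_L = (A_L/pi) cos i and r_LS = (w/4 pi) cos i / (cos i + cos e). The albedo values in this sketch are placeholders.

```python
# Sketch of two simple reflectance models as functions of incidence/emission
# angle cosines. A_L (Lambert albedo) and w (single-scattering albedo) are
# placeholder values for the example.
import numpy as np

def lambert(cos_i, albedo=0.12):
    """Lambertian reflectance: r = (A_L / pi) * cos(i)."""
    return np.maximum(0.0, albedo / np.pi * cos_i)

def lommel_seeliger(cos_i, cos_e, w=0.3):
    """Lommel-Seeliger reflectance: r = (w / 4 pi) * cos(i) / (cos(i) + cos(e))."""
    cos_i = np.maximum(0.0, cos_i)
    return w / (4.0 * np.pi) * cos_i / np.maximum(cos_i + cos_e, 1e-12)

# cos(i) = n . s and cos(e) = n . v for unit surface normal n,
# unit direction to the sun s, and unit direction to the camera v.
```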


There are a variety of general considerations to think about when developing a rendering pipeline:

Does the tool allow for scripting, and does it have a well-tested API in your preferred programming language (e.g., Python, C/C++, MATLAB)?

What are the tool inputs/outputs? In particular, what are the specific file types and how compatible are these with the other tools in your pipeline?

What is the scope and scale required? How large are the models that you must input? Can your software and hardware hold everything in memory at once, or do you need to load things in pieces as they are needed?
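One common pattern when a model is too large for RAM is to memory-map the terrain file and read only the tiles you need. A rough sketch (the file format, shape, and data type are assumptions for the example):

```python
# Sketch of tiled access to a large DEM via memory mapping; only the pages
# that are sliced are actually read from disk. File layout is assumed to be a
# raw row-major float32 grid, purely for illustration.
import numpy as np

def open_dem(path, shape, dtype=np.float32):
    """Memory-map a large DEM file; data is read lazily when sliced."""
    return np.memmap(path, dtype=dtype, mode="r", shape=shape)

def get_tile(dem, row0, col0, size=1024):
    """Copy one tile into RAM."""
    return np.asarray(dem[row0:row0 + size, col0:col0 + size])

# usage (hypothetical file and dimensions):
# dem = open_dem("lunar_dem.f32", shape=(65536, 131072))
# tile = get_tile(dem, 32000, 64000)
```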

What is your required rendering speed and on what hardware?

Your rendering pipeline is only as good as your models: garbage in, garbage out (GIGO).


Closing Thoughts

• Consider what your needs are.
• Go for a balance of speed and performance.
  • We generally have plenty of time for rendering on the ground. Speed is not necessarily king.
• One tool may not (likely will not) meet all your needs.
  • In fact, it is generally desirable to have the renderer for your test images be different from the renderer for your navigation maps, to reflect the fact that we cannot perfectly model cameras.
• There are a lot of details that we are not able to cover in this presentation.
  • Make sure you research what you choose.
• Encouragement: Choosing the "wrong" renderer is unlikely to lead to a mission-critical failure.
  • However, it may lead to headaches and/or leaving performance on the table.
