Scientific image rendering for space scenes with the SurRender software
R. Brochard, J. Lebreton*, C. Robin, K. Kanani, G. Jonniaux, A. Masson, N. Despré, A. Berjaoui
Airbus Defence and Space, 31 rue des Cosmonautes, 31402 Toulouse Cedex, France, [email protected]
*Corresponding Author
Abstract
The autonomy of spacecraft can advantageously be enhanced by vision-based navigation (VBN) techniques.
Applications range from manoeuvers around Solar System objects and landing on planetary surfaces, to in-orbit servicing or space debris removal, and even ground imaging. The development and validation of VBN algorithms for space exploration missions relies on the availability of physically accurate, relevant images. Yet archival data from past missions can rarely serve this purpose, and acquiring new data is often costly. Airbus has developed the image rendering software SurRender, which addresses the specific challenges of realistic image simulation with a high level of representativeness for space scenes. In this paper we introduce the software SurRender and show how its unique capabilities have proved successful for a variety of applications. Images are rendered by raytracing, which implements the physical principles of geometrical light propagation. Images are rendered in physical units using a macroscopic instrument model and scene-object reflectance functions. The software is specially optimized for space scenes, with huge distances between objects and scenes up to Solar System size. Raytracing conveniently tackles effects that are important for VBN algorithms: image quality, eclipses, secondary illumination, subpixel limb imaging, etc. From a user standpoint, a simulation is easily set up using the available interfaces (MATLAB/Simulink, Python, and more) by specifying the positions of the bodies (Sun, planets, satellites, …) over time, complex 3D shapes and material surface properties, before positioning the camera. SurRender comes with its own modelling tool, SuMoL, which makes it possible to go beyond the existing models for shapes, materials and sensors (projection, temporal sampling, electronics, etc.). SurRender is natively designed to simulate different kinds of sensors (visible, LIDAR, …). Additional tools are available for manipulating huge datasets ("giant textures") used to store albedo maps and digital elevation models (up to 256 TB), or for procedural (fractal) texturing that generates high-quality images over a large range of observing distances (from millions of km to touchdown). We illustrate SurRender performances with a selection of case studies. We place particular emphasis on a recent Moon landing simulation that represents 40 GB of data and a 900-km flyby. The SurRender software can be made available to interested readers upon request.
Keywords: computer vision, navigation, image rendering, space exploration, raytracing

Acronyms/Abbreviations
ADS: Airbus Defence and Space
BRDF: Bidirectional Reflectance Distribution Function
DEM: Digital Elevation Model
GNC: Guidance, Navigation & Control
GPS: Global Positioning System
GSD: Ground Sample Distance
IMU: Inertial Measurement Unit
JUICE: JUpiter ICy moons Explorer
LIDAR: LIght Detection And Ranging
LSB: Least Significant Bit
PDS: Planetary Data System
PSF: Point Spread Function
RAM: Random Access Memory
R&T: Research and Technology
VBN: Vision-Based Navigation

1. Introduction
Solar System exploration probes as well as in-orbit robotic spacecraft require a high level of autonomy to perform their missions. The applications range from manoeuvers and landing around Solar System objects, to in-orbit robotic servicing missions, or space debris removal. Autonomy is an asset when manoeuvers cannot rely on the ground, both to avoid transmission delays and because a very high reactivity maximizes the science return of the mission during high relative velocity phases. Traditionally, Guidance, Navigation & Control (GNC) filters hybridize the input from several sensors, such as IMUs, GPS (in Earth orbit), radars and altimeters. Over the last decades, it has become evident that the addition of measurements from Vision-Based Navigation (VBN) systems greatly improves the performances of autonomous navigation solutions [1, 2, 3]. ADS has developed a portfolio of VBN algorithms including relative navigation, absolute navigation, model-based navigation as well as perception and reconstruction techniques. These computer vision techniques rely on the availability of abundant test images to ensure their development and validation.

Archival data from past missions can be used to test algorithms but, when available, they are generally sensor-specific and lack exhaustiveness, representativeness, and often ground truth references. Some experiments are carried out on ground, thanks to test benches equipped with robotic arms handling sensors
and target mock-ups. Their representativeness is limited in terms of range, spatial irradiance and available scenarios, inter alia, and these facilities have high operating costs, so that only a few test cases can be run.

To assess the performances and robustness of VBN solutions, computer-generated data are highly recommended as they allow testing solutions in exhaustive cases and provide ground truth comparisons. The objective of the image processing part of the VBN, derived from computer vision techniques, is often to extract information from the image geometry: track features, match edges, etc. In this context, the capability of the algorithms to extract such information depends on notions such as texture, contrast and noise, which all come from a broader field called radiometry. The same challenges exist for vision sensor design.

Some simulator software packages are well known to the public: examples include the Celestia and Blender software, or rendering engines used for video games or by animation studios. Although visually impressive, these simulators lack the realism needed for advanced image processing techniques. Typically they are designed to cope with human vision, but they do not have the photometric accuracy that actual sensors are sensitive to.

Space applications require specialized software. For instance the PANGU [4, 5] utility is commonly used in the European space sector. Our team at Airbus has been developing since 2011 the image rendering software SurRender, which addresses the specific challenges of physical image simulation (raytracing) with a high level of representativeness for space scenes. Even though it is based on classical 3D rendering techniques, it adapts those to specific space properties: high variability of object sizes, very large ranges between them, specific optical properties. The software recently went through a formal qualification process, and it has been used in many R&D and technological development projects for ESA (JUICE, GENEVIS), CNES, for the European Community (NEOShield-2) and in internal projects (e.g. SpaceTug).

Section 2 of this paper presents the methods that compose the SurRender software, from first principles to computational implementation. In Section 3, we introduce the user interfaces, the embedded modelling language SuMoL and sensor models. Section 4 presents a variety of tests that have been made for its formal validation. In Section 5 we demonstrate the performances of SurRender by focusing on the example of a Moon landing application. In Section 6 we list additional examples of applications. Finally, in Section 7 we discuss future ambitions and collaboration opportunities.

2. Methods
2.1 Physical principles
SurRender implements the physical principles of light propagation (see Fig. 1). It solves geometrical optics equations with (backward-)raytracing techniques. The content of a pixel is determined by casting several rays originating from this pixel, and finding which real-world objects each ray encounters until it finally intersects with a light source (Sun/stars, planets or artificial light sources). Physical optics-level effects, such as diffraction of the light by the aperture of the imaging instrument, are taken into account at macroscopic level using a Point Spread Function (PSF). The light flux is stochastically sampled within the pixels (including the probability density function defined by the PSF).

The raytracer handles multiple diffusions and reflections (this recursive raytracing technique is called pathtracing). The interaction of light with the surface of the scene objects is modelled in terms of a Bidirectional Reflectance Distribution Function (BRDF). The objects themselves can take arbitrarily complex shapes. The geometry of Solar System objects is described by Digital Elevation Maps (DEM) and spheroid models, which are the basic inputs needed to calculate light scattering. Artificial objects are described by 3D meshes.

Provided the incoming irradiance is known, the image is naturally rendered in physical units: each pixel contains an irradiance value expressed in W/m². Provided the number of rays is large enough, realism is ensured at subpixel level. Provided the models are accurate enough, SurRender images are virtually indistinguishable from actual photographs.

In its standard implementation SurRender produces visible (scattered light) images (although it is potentially able to make thermal infrared images, see Sec. 6.1). In addition to images, useful output is generated such as depth maps (range imaging) and normal maps (slope mapping, hazard detection and avoidance). Active optical sensors such as LIDARs can also be modelled with SurRender (Sec. 3.3).

Fig. 1: Principle of backward raytracing (light-source sampling through the optics and image plane/sensor, with light bounces between scene objects).
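To make the principle of Fig. 1 concrete, the following minimal Python sketch (illustrative only, not SurRender code; the scene, constants and Gaussian PSF are hypothetical) casts stochastic rays from one pixel, jitters their directions according to a PSF, intersects them with a single Sun-lit Lambertian sphere, and averages the reflected radiance:

import numpy as np

# --- Hypothetical mini-scene: one Lambertian sphere lit by the Sun ---
SUN_DIR = np.array([0.0, 0.0, 1.0])          # unit vector toward the Sun
E_SUN   = 1361.0                             # solar irradiance [W/m^2]
ALBEDO  = 0.12                               # Lambertian albedo (Moon-like)
CENTER  = np.array([0.0, 0.0, 10.0])         # sphere centre [m]
RADIUS  = 1.0                                # sphere radius [m]

def hit_sphere(origin, direction):
    """Return the closest positive intersection distance, or None."""
    oc = origin - CENTER
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - RADIUS**2
    disc = b * b - c
    if disc < 0.0:
        return None
    t = -b - np.sqrt(disc)
    return t if t > 0.0 else None

def pixel_radiance(pixel_dir, n_rays=256, psf_sigma=1e-3):
    """Backward raytracing of one pixel: average the radiance carried by
    n_rays stochastic rays whose directions are jittered by a Gaussian PSF.
    Multiplying by the pixel solid angle and aperture area would give the
    collected flux; this sketch stops at the mean radiance [W/m^2/sr]."""
    rng = np.random.default_rng(0)
    total = 0.0
    for _ in range(n_rays):
        d = pixel_dir + rng.normal(0.0, psf_sigma, 3)   # PSF-weighted sampling
        d /= np.linalg.norm(d)
        t = hit_sphere(np.zeros(3), d)
        if t is None:
            continue                                    # ray escapes to space
        p = t * d
        n = (p - CENTER) / RADIUS                       # outward surface normal
        cos_i = max(np.dot(n, SUN_DIR), 0.0)            # incidence factor
        # Lambertian BRDF = albedo/pi; radiance reflected toward the camera:
        total += (ALBEDO / np.pi) * E_SUN * cos_i
    return total / n_rays

print(pixel_radiance(np.array([0.0, 0.0, 1.0])))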
2.2 Computer implementation
SurRender enables two distinct modes of image rendering: raytracing as presented above, and OpenGL (Open Graphics Library). The latter runs on GPU and allows very fast image generation, compatible with real-time processing, but at the cost of physical accuracy and
fine optics/sensor modelling. This rendering mode is thus suitable for real-time setups, which do not focus on fine functional performance. In this paper we focus on the raytracing mode.

The rendering engine is written in C++; it runs on CPU and on multiple cores. It is currently in its version 5. The raytracer is implemented using classical acceleration structures (kd-tree, Bounding Volume Hierarchy) optimized for sparse scenes, as well as relief mapping with more specific ray-marching techniques [6, 7]. In addition to a number of optimizations detailed in Section 2.3, SurRender is characterized by a highly efficient management of cache memory.
To that purpose, specific data formats were developed that are specially designed for space applications. A first pre-processing is applied to DEMs to calculate cone maps and height maps. Cone maps represent, for each given point, the cone in which a ray can be traced without encountering an obstacle. Height maps represent the height with respect to a reference elevation. So-called giant textures are used to store them. Giant textures are memory-mapped and loaded in RAM on demand: a pyramid scheme is used in such a way that only the level of detail needed (given the size and resolution at stake for a given scene) is loaded. The textures can be generated from actual space mission data using common formats (PNG, JPEG, etc.) or from PDS data*. They can also be enriched using a procedural fractal generator to create additional details at small scales. This data management scheme is up to 50 times more efficient than manipulating meshes for planetary surfaces.
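For illustration, here is a minimal cone-stepping sketch over a 1D heightfield (not SurRender code; the terrain, pre-computation and step logic are simplified assumptions): the cone map bounds, at each step, how far a descending ray can safely advance without skipping terrain.

import numpy as np

# Hypothetical 1D terrain: height[i] is the elevation at abscissa i.
height = np.array([0., 2., 1., 4., 3., 2., 5., 1., 0., 0.])

def build_cone_map(h):
    """cone[i] = tangent (run per unit rise) of the widest upward-opening
    cone at (i, h[i]) that contains no terrain point. Brute force O(n^2)."""
    tan_cone = np.full(len(h), np.inf)
    for i in range(len(h)):
        for j in range(len(h)):
            rise = h[j] - h[i]
            if j != i and rise > 0.0:
                tan_cone[i] = min(tan_cone[i], abs(j - i) / rise)
    return np.minimum(tan_cone, 1e6)     # cap 'no obstacle above' at finite value

def cone_step_trace(x0, z0, dx, dz, tan_cone, h, max_steps=64):
    """March a descending ray (dz < 0) from (x0, z0) along (dx, dz).
    Returns the hit abscissa, or None if the ray leaves the DEM footprint."""
    x, z = x0, z0
    for _ in range(max_steps):
        i = int(round(x))
        if i < 0 or i >= len(h):
            return None                   # left the DEM footprint
        if z <= h[i]:
            return x                      # intersection found
        gap = z - h[i]
        # Largest parameter step that keeps the ray inside the safe cone:
        t_safe = gap * tan_cone[i] / (abs(dx) + tan_cone[i] * abs(dz) + 1e-12)
        t = max(t_safe, 1e-3)             # epsilon avoids stalling near surface
        x += dx * t
        z += dz * t
    return None

cones = build_cone_map(height)
print(cone_step_trace(0.0, 8.0, 1.0, -1.0, cones, height))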
Computations are generally performed in double precision, which is essential for planet-size objects. Indeed, the typical surface elevation variations of a Solar System object compared to its radius (typically 1 m vs 1000 km) are of the order of magnitude of the LSB of single-precision floating-point numbers. However, height maps and cone maps can be stored respectively in compact float and half float formats for efficiency†.
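This precision argument can be checked in a few lines (illustrative numbers only): NumPy's spacing function returns the LSB of a floating-point value at a given magnitude.

import numpy as np

radius = 1.0e6  # 1000 km expressed in metres
print(np.spacing(np.float32(radius)))   # ~0.0625 m: single precision quantizes
                                        # a 1 m relief into only ~16 levels
print(np.spacing(np.float64(radius)))   # ~1.2e-10 m: double precision is ample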
The size of the giant textures can far exceed the RAM/cache memory available on the computer. To that purpose, SurRender can be used with the FUSE software to create arbitrarily large virtual files to manage the fractal textures.

Generating an image using raytracing lasts from a few seconds to several minutes depending on the desired image quality and the scene content.

2.3 Optimizations
The implementation of raytracing techniques in the context of space exploration poses several challenges. SurRender generates images using backward raytracing. Raytracing is a stochastic process, in which rays are cast recursively from pixel to light source. In order to reduce the noise intrinsic to this stochastic process, a great number of rays must be traced. This process is computationally intensive, and the raytracing software must be heavily optimized to ensure the maximum image quality (lowest noise) within a minimum rendering time. Space scenes are generally sparse: randomly sampling the volume of space would prove highly inefficient, since most rays back-propagated from the detector pixels would not intercept any object. This difficulty is addressed by explicitly targeting the rays toward scene objects, thus avoiding a lot of unnecessary computation. The ray casting is focused in the directions that contribute to the final image. Small objects are rendered efficiently by explicitly targeting them at a sub-pixel level, which allows simulating a rendezvous mission continuously from infinity to contact. SurRender also implements forward raytracing functionalities, enabling photon mapping, a process that provides maps of the received surface radiances for all objects in the scene. All these optimizations, among others, make it possible to simulate scenes at Solar System scales.
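As an illustration of targeted ray casting (a generic importance-sampling scheme under our own assumptions, not SurRender's actual implementation), rays can be drawn uniformly within the small cone subtended by a distant object instead of over the whole sphere of directions:

import numpy as np

def sample_directions_toward(target_center, target_radius, origin, n, rng):
    """Draw n unit vectors uniformly inside the cone subtended by a sphere
    of radius target_radius centred at target_center, seen from origin."""
    axis = target_center - origin
    dist = np.linalg.norm(axis)
    axis = axis / dist
    cos_max = np.sqrt(max(0.0, 1.0 - (target_radius / dist) ** 2))  # aperture
    u = rng.uniform(cos_max, 1.0, n)            # cos(theta), uniform on the cap
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    sin_t = np.sqrt(1.0 - u ** 2)
    local = np.stack([sin_t * np.cos(phi), sin_t * np.sin(phi), u], axis=1)
    # Orthonormal basis (t1, t2, axis) to rotate samples into the world frame.
    helper = np.array([1., 0., 0.]) if abs(axis[0]) < 0.9 else np.array([0., 1., 0.])
    t1 = np.cross(axis, helper); t1 /= np.linalg.norm(t1)
    t2 = np.cross(axis, t1)
    return local @ np.stack([t1, t2, axis])

# Each sample carries the cone solid angle / n, i.e. 2*pi*(1 - cos_max)/n,
# as its weight, so the Monte-Carlo estimator stays unbiased.
rng = np.random.default_rng(1)
print(sample_directions_toward(np.array([0., 0., 384_400e3]), 1_737e3,
                               np.zeros(3), 4, rng))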
3. How to use SurRender
3.1 User interfaces
The main application runs as a server that can be located on the same computer as the client part, or on a remote computer. Users interact with the server using the SurRender client. The server receives commands from the client through a TCP/IP link, and sends the resulting image back. If sent over a network, images can be compressed to reduce the transmission duration. A specialized mode exists that emulates a camera by redirecting the data flux towards an optical stimulator, for use with real sensors on test benches (hardware-in-the-loop), or towards another client such as an Ethernet camera.

For the user, SurRender is instantiated using external scripts. Interfaces are available in a variety of languages: MATLAB, Simulink, Python 3, C++, Lua, Java. Cloud computing capabilities were recently implemented, enabling a huge gain in performances through a higher level of parallelization.
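The client/server pattern can be sketched as follows (the actual SurRender wire protocol is proprietary; the port number and length-prefixed framing below are assumptions made for illustration only):

import socket
import struct

HOST, PORT = "127.0.0.1", 5151   # placeholder address of the rendering server

def request_image(command: bytes) -> bytes:
    """Send one command over TCP/IP and read the rendered image back,
    assuming a simple length-prefixed framing (an illustrative assumption,
    not the actual SurRender format)."""
    with socket.create_connection((HOST, PORT)) as sock:
        sock.sendall(struct.pack("!I", len(command)) + command)
        (size,) = struct.unpack("!I", sock.recv(4))       # image byte count
        buf = bytearray()
        while len(buf) < size:
            chunk = sock.recv(size - len(buf))
            if not chunk:
                raise ConnectionError("server closed the connection")
            buf.extend(chunk)
        return bytes(buf)

# e.g. raw = request_image(b"render")   # placeholder command string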
The scripts describe the scene and its objects: camera, light sources, stars, planets, satellites, their positions, attitudes, shapes and surface properties. There is no intrinsic limitation in the complexity of the scene within this framework. A number of default models are provided that can easily be modified, parameterized or complemented by the user using the SuMoL language (see Sec. 3.2).
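As a flavour of such a script, the hypothetical scene description below gathers the elements listed above; the key names, file name and structure are illustrative placeholders, not the actual SurRender client API.

# Illustrative only: a scene description in the spirit of a SurRender
# client script (names and structure are hypothetical).
scene = {
    "camera": {
        "position_m": [0.0, 0.0, 0.0],
        "quaternion_wxyz": [1.0, 0.0, 0.0, 0.0],   # camera attitude
        "fov_deg": 5.0,
        "image_size": [1024, 1024],                # pixel matrix
    },
    "sun": {"direction": [0.0, 0.0, 1.0], "irradiance_w_m2": 1361.0},
    "objects": [
        {"name": "moon", "shape": "spheroid", "radius_m": 1_737_400.0,
         "dem": "moon_dem.big",          # hypothetical giant-texture DEM file
         "brdf": {"model": "hapke"}},    # BRDF taken from the model library
    ],
}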
* "Planetary Data System", https://pds.nasa.gov.
† Floats are stored on 32 bits, doubles on 64 bits. Half floats and compact floats (SurRender-specific) are encoded on only 16 and 17 bits respectively (reduced precision; compact floats are almost as precise as 32-bit floats).
External tools are provided in the package in order to prepare the datasets (formatting, pre-calculation of cone maps, fractal generation, etc.). Using these tools, it is very easy to import data in usual formats such as the NASA PDS format.
3.2 SuMoL
SurRender embeds its own language, SuMoL
(standing for SurRender Modelling Language), a high-level programming language derived from C. SuMoL is used to model the various components of a simulation. The list of currently available models includes:
- Projection models (pinhole, fisheye, orthographic)
- Geometrical objects (spheres, spheroids, planar & spherical DEMs)
- BRDFs (Lambertian/matte, mirror, Hapke [8], Phong [9], Oren-Nayar [10])
- PSFs (can be field-dependent and chromatic)
- Sampling (sequential, rolling shutter, snapshot, etc.)
- Sensor properties (Section 3.3).
They implement analytical (or numerical) models, and they can be interfaced with the client "on the fly", i.e. without recompiling the whole software. They make SurRender a very versatile rendering tool, as new or specific sensor and material behaviors can easily be described by the user without changing the rendering engine. A SuMoL editor is provided as an external tool so that users can easily implement their own models if needed. For illustration, two of these BRDF families are sketched below.
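Since SuMoL itself is proprietary, the following Python sketch only illustrates the kind of model the BRDF library expresses; the Lambertian and Phong [9] forms are standard textbook formulations, not SurRender source code.

import numpy as np

def brdf_lambert(albedo):
    """Lambertian (matte) BRDF: constant, independent of geometry."""
    def f(n, l, v):
        return albedo / np.pi
    return f

def brdf_phong(kd, ks, shininess):
    """Classical Phong model [9]: diffuse term plus a specular lobe
    around the mirror direction r = 2(n.l)n - l."""
    def f(n, l, v):
        r = 2.0 * np.dot(n, l) * n - l          # mirror reflection of l
        spec = max(np.dot(r, v), 0.0) ** shininess
        return kd / np.pi + ks * spec
    return f

def radiance(brdf, n, l, v, e_sun):
    """Radiance reflected toward the viewer for incident irradiance e_sun."""
    return brdf(n, l, v) * e_sun * max(np.dot(n, l), 0.0)

n = np.array([0.0, 0.0, 1.0])                   # surface normal
l = np.array([0.0, 0.0, 1.0])                   # unit vector toward the Sun
v = np.array([0.0, np.sin(0.3), np.cos(0.3)])   # unit vector toward the camera
print(radiance(brdf_lambert(0.12), n, l, v, 1361.0))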
3.3 Sensor models
By default, "ideal" images are produced, not altered by any real-life optical effects, in irradiance units (W/m²)‡. The images can be retrieved in greyscale or in multiple wavelength channels: the bandwidth is specified by the user, and the BRDFs and stellar spectra are chromatic. These images already account for the sensor field of view (FOV) and the size of the pixel matrix. A PSF can be specified in order to emulate diffraction by the optics.
Importantly, fine sensor models can be implemented using external scripts and SuMoL wrappers. Various sensor properties and setups can be tuned: integration time, readout noise, photon noise, transmission, gain, pupil diameter, motion blur, defocus, etc. The acquisition mode can also be simulated: rolling or global shutter, push-broom, snapshot, windowing (accounting for detector motion during the acquisition). Recently, the possibility of a PSF varying across the field of view was implemented, as well as chromatic aberrations. All these effects are natural byproducts of the raytracer. Available sensors include a "generic sensor" (classical noises and global shutter), the HAS2 and HAS3 sensors [11], which implement windowing and rolling shutter (see Fig. 2), and more.

Active optical sensors such as LIDARs or time-of-flight cameras can also be used. The simulation includes the light emission (and reflections) from the spot or the laser.

Fig. 2: HAS3 sensor simulation. Sensor sub-windows at different integration times enable simultaneous observation of faint stars and bright objects.
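As an illustration of such a radiometric chain (generic formulas with hypothetical parameter values; this is not the SurRender "generic sensor" itself), an ideal irradiance map can be converted into noisy digital numbers as follows:

import numpy as np

def sensor_model(irradiance, t_int=5e-3, pixel_area=(5e-6)**2,
                 qe=0.6, gain_e_per_dn=10.0, read_noise_e=20.0,
                 full_well_e=60_000, bits=12, wavelength=550e-9, rng=None):
    """Convert an ideal irradiance map [W/m^2] into digital numbers.
    All parameter values are illustrative, not those of a real detector."""
    rng = rng or np.random.default_rng()
    h, c = 6.626e-34, 3.0e8
    photon_energy = h * c / wavelength                  # [J]
    # Mean photo-electrons collected during the integration time:
    electrons = irradiance * pixel_area * t_int / photon_energy * qe
    electrons = rng.poisson(electrons).astype(float)    # photon (shot) noise
    electrons += rng.normal(0.0, read_noise_e, electrons.shape)  # readout noise
    electrons = np.clip(electrons, 0.0, full_well_e)    # saturation
    dn = electrons / gain_e_per_dn                      # conversion gain
    return np.clip(np.round(dn), 0, 2**bits - 1).astype(np.uint16)

ideal = np.full((4, 4), 0.05)   # e.g. 0.05 W/m^2 on each pixel
print(sensor_model(ideal))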
4. Validation and performances
4.1 Analytical validation
SurRender has passed a formal qualification review internal to Airbus. Its physical accuracy has been tested in specific contexts that can be studied analytically, allowing quantitative assessments. Both geometric and radiometric correctness have been controlled. We verified the (subpixel) position, orientation and irradiance of objects (points, spheroids, textures) in the image plane (with pinhole model, distortions, rolling shutter), including BRDFs (Lambertian sphere or plane) and the PSF effect. The tests demonstrated exquisite accuracy and precision. The report of this analysis can be provided to an interested reader upon request.

This analytical validation was completed by a qualitative study of rendered images of complex scenes against real images.
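One example of the kind of analytical check described above (a sketch using the textbook Lambert-sphere phase law; the tolerance and the stand-in rendered value are hypothetical) compares a rendered flux with a closed-form prediction:

import numpy as np

def lambert_sphere_irradiance(e_sun, albedo, radius, distance, phase_rad):
    """Closed-form irradiance received from a Lambertian sphere.
    The geometric albedo of a Lambert sphere is (2/3)*albedo and its phase
    law is Phi(a) = (sin a + (pi - a) cos a) / pi."""
    p = (2.0 / 3.0) * albedo
    phi = (np.sin(phase_rad) + (np.pi - phase_rad) * np.cos(phase_rad)) / np.pi
    return e_sun * p * phi * (radius / distance) ** 2

# Hypothetical check: the summed pixel flux of a rendered sphere image
# should match the prediction within a chosen tolerance.
expected = lambert_sphere_irradiance(1361.0, 0.12, 1_737e3, 384_400e3, 0.0)
rendered_flux = expected * 1.0001        # stand-in for a measured value
assert abs(rendered_flux - expected) / expected < 1e-3
print(f"expected irradiance: {expected:.3e} W/m^2")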
4.2 Validation with real scenes
We conducted a classical test in computer graphics: the accuracy of the "Cornell box" rendering. Fig. 3 allows qualitative comparisons of SurRender raytracing results with real data photographed with a CCD camera. The visual properties of the image are correctly simulated, with various optical effects such as shadows, BRDFs, inter-reflections, etc., even though in this test some pieces of information were missing (PSF, post-processing, precise camera pose, light spectrum). This particular simulation is very demanding due to the non-sparsity of the scene and the multiple reflections involved.