
Scientific image rendering for space scenes with the SurRender software

R. Brochard, J. Lebreton*, C. Robin, K. Kanani, G. Jonniaux, A. Masson, N. Despré, A. Berjaoui
Airbus Defence and Space, 31 rue des Cosmonautes, 31402 Toulouse Cedex, France, [email protected]
* Corresponding Author

Abstract
The autonomy of spacecraft can advantageously be enhanced by vision-based navigation (VBN) techniques. Applications range from manoeuvers around Solar System objects and landing on planetary surfaces, to in-orbit servicing or space debris removal, and even ground imaging. The development and validation of VBN algorithms for these missions rely on the availability of physically accurate, representative images. Yet archival data from past missions can rarely serve this purpose and acquiring new data is often costly. Airbus has developed the image rendering software SurRender, which addresses the specific challenges of realistic image simulation with a high level of representativeness for space scenes. In this paper we introduce the software SurRender and show how its unique capabilities have proved successful for a variety of applications. Images are rendered by raytracing, which implements the physical principles of geometrical light propagation, and they are expressed in physical units thanks to a macroscopic instrument model and reflectance functions for the scene objects. The software is specially optimized for space scenes, with huge distances between objects and scenes up to the size of the Solar System. Raytracing conveniently tackles effects that matter for VBN algorithms: image quality, eclipses, secondary illumination, subpixel limb imaging, etc. From a user standpoint, a simulation is easily set up using the available interfaces (MATLAB/Simulink, Python, and more) by specifying the positions of the bodies (stars, planets, satellites, …) over time, complex 3D shapes and material surface properties, before positioning the camera. SurRender comes with its own modelling tool, SuMoL, enabling the user to go beyond the existing models for shapes, materials and sensors (projection, temporal sampling, electronics, etc.). SurRender is natively designed to simulate different kinds of sensors (visible, infrared, …). Additional tools are available for manipulating huge datasets ("giant textures") used to store albedo maps and digital elevation models (up to 256 TB), or for procedural (fractal) texturing that generates high-quality images over a large range of observing distances (from millions of km to touchdown). We illustrate SurRender performances with a selection of case studies, with particular emphasis on a Moon landing simulation we recently computed, which represents 40 GB of data and a 900-km flyby. The SurRender software can be made available to interested readers upon request.
Keywords: computer vision, navigation, image rendering, space exploration, raytracing

Acronyms/Abbreviations
ADS: Airbus Defence and Space
BRDF: Bidirectional Reflectance Distribution Function
DEM: Digital Elevation Model
GNC: Guidance, Navigation & Control
GPS: Global Positioning System
GSD: Ground Sample Distance
IMU: Inertial Measurement Unit
JUICE: JUpiter ICy moons Explorer
LIDAR: LIght Detection And Ranging
LSB: Least Significant Bit
PDS: Planetary Data System
PSF: Point Spread Function
RAM: Random Access Memory
R&T: Research and Technology
VBN: Vision-Based Navigation

1. Introduction
Solar System exploration probes as well as in-orbit robotic spacecraft require a high level of autonomy to perform their missions. The applications range from manoeuvers and landing around Solar System objects, to in-orbit robotic servicing missions, or space debris removal. Autonomy is an asset when manoeuvers cannot rely on the ground, to avoid transmission delays, and also because a very high reactivity maximizes the science return of the mission during high relative velocity phases.
Traditionally, Guidance, Navigation & Control (GNC) filters hybridize the input from several sensors, such as IMUs, GPS (in orbit), radars and altimeters. Over the last decades, it has become evident that the addition of measurements from Vision-Based Navigation (VBN) systems greatly improves the performances of autonomous navigation solutions [1, 2, 3]. ADS has developed a portfolio of VBN algorithms including relative navigation, absolute navigation, model-based navigation as well as perception and reconstruction techniques. These computer vision techniques rely on the availability of abundant test images to ensure their development and validation.
Archival data from past missions can be used to test algorithms but, when they are available, they are generally sensor-specific, and they lack exhaustiveness, representativeness, and often ground truth references. Some experiments are carried out on the ground, thanks to test benches equipped with robotic arms handling sensors


and target mock-ups. Their representativeness is limited in terms of range, spatial irradiance and available scenarios, inter alia, and these facilities have high operating costs, so that only a few test cases can be run.
To assess the performances and robustness of VBN solutions, computer-generated data are highly recommended, as they allow testing solutions in exhaustive cases and provide ground truth comparisons. The objective of the image processing part of the VBN, derived from computer vision techniques, is often to extract information from the image geometry: track features, match edges, etc. In this context, the capability of the algorithms to extract such information depends on notions such as texture, contrast and noise, which all come from a broader field called radiometry. The same challenges exist for vision sensor design.
Some simulation softwares are well known to the public: examples include the Celestia and Blender software, or the rendering engines used for video games or by animation studios. Although visually impressive, these simulators lack the realism needed for advanced image processing techniques. Typically they are designed to cope with human vision, but they do not have the photometric accuracy that actual sensors are sensitive to. Space applications require specialized software. For instance the PANGU [4, 5] utility is commonly used in the European space sector. Our team at Airbus has been developing the image rendering software SurRender since 2011; it addresses the specific challenges of physical image simulation (raytracing) with a high level of representativeness for space scenes. Even if it is based on classical 3D rendering techniques, it adapts those to specific space properties: high variability of object sizes, very high ranges between them, specific optical properties. The software recently went through a formal qualification process; it has been used in many R&D and technological development projects for ESA (JUICE, GENEVIS), CNES, the European Community (NEOShield-2) and internal projects (e.g. SpaceTug).
Section 2 of this paper presents the methods that compose the SurRender software, from first principles to computational implementation. In Section 3, we introduce the user interfaces, the embedded modelling language SuMoL and the sensor models. Section 4 presents a variety of tests that have been made for its formal validation. In Section 5 we demonstrate the performances of SurRender by focusing on the example of a Moon landing application. In Section 6 we list additional examples of applications. Finally, in Section 7 we discuss future ambitions and collaboration opportunities.

2. Methods
2.1 Physical principles
SurRender implements the physical principles of light propagation (see Fig. 1). It solves geometrical optics equations with (backward-) raytracing techniques. The content of a pixel is determined by casting several rays originating from this pixel, and finding which real-world objects each ray encounters until it finally intersects with a light source (Sun/stars, planets or artificial light sources). Wave-optics effects, such as the diffraction of light by the aperture of the imaging instrument, are taken into account at a macroscopic level using a Point Spread Function (PSF). The light flux is stochastically sampled within the pixels (including the probability density function defined by the PSF).
The raytracer handles multiple diffusions and reflections (this recursive raytracing technique is called pathtracing). The interaction of light with the surface of the scene objects is modelled in terms of a Bidirectional Reflectance Distribution Function (BRDF). The objects themselves can take arbitrarily complex shapes. The geometry of Solar System objects is described by Digital Elevation Maps (DEM) and spheroid models, which are the basic inputs needed to calculate light scattering. Artificial objects are described by 3D meshes.
Provided the incoming irradiance is known, the image is naturally rendered in physical units: each pixel contains an irradiance value expressed in W/m². Provided the number of rays is large enough, realism is ensured at subpixel level. Provided the models are accurate enough, SurRender images are virtually indistinguishable from actual photographs.
In its standard implementation SurRender produces visible (scattered light) images (although it is potentially able to produce thermal infrared images, see Sec. 7.2). In addition to images, useful outputs are generated such as depth maps (range imaging) and normal maps (slope mapping, hazard detection and avoidance). Active optical sensors such as LIDARs can also be modelled with SurRender (Sec. 3.3).

Fig. 1: Principle of backward raytracing (labels in the figure: Sun / light source, scene objects, light bounces, sampling, optics, image plane / sensor).
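To make the pixel sampling of Sec. 2.1 concrete, here is a minimal, self-contained sketch in Python under strong simplifying assumptions (a single Lambertian sphere lit by the Sun, a Gaussian PSF used as the sampling density). It illustrates the principle only; it is not SurRender code.

```python
import numpy as np

rng = np.random.default_rng(0)

def trace(origin, direction, center, radius, sun_dir, albedo):
    """Radiance [W/m^2/sr] reaching the camera along one (unit) ray."""
    oc = origin - center
    b = np.dot(direction, oc)
    disc = b * b - (np.dot(oc, oc) - radius ** 2)
    if disc < 0.0:
        return 0.0                               # ray misses: dark sky
    t = -b - np.sqrt(disc)
    if t < 0.0:
        return 0.0                               # sphere behind the camera
    normal = (origin + t * direction - center) / radius
    mu = max(np.dot(normal, sun_dir), 0.0)       # illumination cosine
    E_sun = 1361.0                               # solar constant [W/m^2]
    return albedo / np.pi * E_sun * mu           # Lambertian reflection

def render_pixel(cam, pix_dir, du, dv, psf_sigma, n_rays, **scene):
    """Monte-Carlo estimate of the radiance seen by one pixel."""
    total = 0.0
    for _ in range(n_rays):
        # stochastic sub-pixel offset drawn from the Gaussian PSF density
        ox, oy = rng.normal(0.0, psf_sigma, 2)
        d = pix_dir + ox * du + oy * dv
        total += trace(cam, d / np.linalg.norm(d), **scene)
    return total / n_rays

# Example: a 1 km sphere 10 km away, Sun behind the camera
L = render_pixel(cam=np.zeros(3), pix_dir=np.array([0.0, 0.0, 1.0]),
                 du=np.array([1e-5, 0.0, 0.0]), dv=np.array([0.0, 1e-5, 0.0]),
                 psf_sigma=0.5, n_rays=256,
                 center=np.array([0.0, 0.0, 1.0e4]), radius=1.0e3,
                 sun_dir=np.array([0.0, 0.0, -1.0]), albedo=0.12)
```

Averaging many such stochastic rays per pixel is what reduces the Monte-Carlo noise mentioned in Sec. 2.3; the real renderer adds recursive bounces, targeted sampling and full sensor models on top of this principle.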


2.2 Computer implementation
SurRender enables two distinct modes of image rendering: raytracing, as presented above, and OpenGL (Open Graphics Library). The latter runs on GPU and allows very fast image generation, compatible with real-time processing, but at the cost of physical accuracy and fine optics/sensor modelling. This rendering mode is thus suitable for real-time setups which do not focus on fine functional performance. In this paper we focus on the raytracing mode.
The rendering engine is written in C++; it runs on CPU and on multiple cores. It is currently in its version 5. The raytracer is implemented using classical acceleration structures (Kd-tree, Bounding Volume Hierarchy) optimized for sparse scenes, as well as relief mapping with more specific ray-marching techniques [6, 7]. In addition to a number of optimizations detailed in Section 2.3, SurRender is characterized by a highly efficient management of cache memory.

2.3 Optimizations
The implementation of raytracing techniques in the context of space exploration poses several challenges. SurRender generates images using backward raytracing. Raytracing is a stochastic process, in which rays are cast recursively from pixel to light source. In order to reduce the noise intrinsic to this stochastic process, a great number of rays must be traced. This process is computationally intensive, and the raytracing software must be heavily optimized to ensure the maximum image quality (lowest noise) within a minimum rendering time. Space scenes are generally sparse. Randomly sampling the volume of space would therefore prove highly inefficient, since most rays back-propagated from the detector pixels would not intercept any object. This difficulty is addressed by explicitly targeting the rays toward scene objects, thus avoiding a lot of unnecessary computation. The ray casting is focused in directions that contribute to the final image. Small objects are rendered efficiently by explicitly targeting them at a sub-pixel level. This allows simulating a rendezvous mission continuously from infinity to contact. SurRender also implements forward raytracing functionalities, enabling photon mapping, a process that provides maps of received surface radiances for all objects in the scene. All these optimizations - among others - enable the simulation of scenes at Solar System scales.
To that purpose, specific data formats were developed that are specially designed for space applications. A first pre-processing is applied to DEMs to calculate cone maps and height maps. Cone maps represent, for each given point, the cone in which a ray can be traced without encountering an obstacle; a sketch of this idea is given below. Height maps represent the height with respect to a reference elevation. So-called giant textures are used to store them. Giant textures are memory-mapped and loaded in RAM on demand: a pyramid scheme is used in such a way that only the level of detail needed (given the size and resolution at stake for a given scene) is loaded. The textures can be generated from actual space mission data using common formats (PNG, JPEG, etc.) or from PDS data*. They can also be enriched using a procedural fractal generator to create additional details at small scales. This data management scheme is up to 50 times more efficient than manipulating meshes for planetary surfaces.
* "Planetary Data System", https://pds.nasa.gov.
Computations are generally performed with double precision, which is essential for planetary-size objects. Indeed, the ratio between the typical surface elevation of a Solar System object and its radius is about the order of magnitude of the LSB of single-precision floating point numbers (typically 1 m vs 1000 km). However, height maps and cone maps can be stored respectively in compact float and half float formats for efficiency†.
† Floats are stored on 32 bits, doubles on 64 bits. Half floats and compact floats (SurRender-specific) are encoded on only 16 bits (reduced precision), respectively 17 bits (almost as precise as 32-bit floats).
The size of the giant textures can far exceed the RAM/cache memory available on the computer. To that purpose SurRender can be used with the FUSE software to create arbitrarily large virtual files to manage the fractal textures.
Generating an image using raytracing lasts from a few seconds to several minutes, depending on the desired image quality and the scene content.
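To illustrate the cone maps introduced above, here is a deliberately simplified 1-D sketch (a terrain profile instead of a 2-D height map, brute-force O(n²) pre-processing). SurRender's actual data structures and algorithms are more elaborate; this only shows why a precomputed cone allows large, safe marching steps.

```python
import numpy as np

def precompute_cone_map(height, dx=1.0, c_max=1e6):
    """For each sample, the widest obstacle-free cone ratio (width/height)."""
    n = len(height)
    cone = np.full(n, c_max)
    for i in range(n):
        for j in range(n):
            rise = height[j] - height[i]
            if j != i and rise > 0.0:
                # steepest line of sight to a higher sample narrows the cone
                cone[i] = min(cone[i], abs(j - i) * dx / rise)
    return cone

def cone_step_trace(height, cone, x, z, dxdt, dzdt, dx=1.0, eps=1e-4):
    """March a descending ray (dzdt < 0) until it hits the terrain."""
    x_max = (len(height) - 1) * dx
    while 0.0 <= x < x_max:
        i = int(x / dx)
        if z <= height[i] + eps:
            return x                      # intersection abscissa
        c = cone[i]
        # largest step that keeps the ray inside the empty cone above i
        t = c * (z - height[i]) / (abs(dxdt) + c * abs(dzdt))
        x += dxdt * max(t, eps)
        z += dzdt * max(t, eps)
    return None                           # ray left the terrain extent
```

Over flat regions the cone is wide and the ray advances in a few large steps; near steep relief the cone narrows and the march automatically refines, which is what makes the technique artefact-free at a modest cost.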
3. How to use SurRender
3.1 User interfaces
The main application runs a server that can be located on the same computer as the client part, or on a remote computer. Users interact with the server using the SurRender client. The server receives commands from the client through a TCP/IP link, and sends the resulting image back. If sent over a network, images can be compressed to reduce the transmission duration. A specialized mode exists that emulates a camera by redirecting the data flux towards an optical stimulator for use with real sensors on test benches (hardware-in-the-loop), or to another client like an Ethernet camera.
For the user, SurRender is instantiated using external scripts. Interfaces are available in a variety of languages: MATLAB, Simulink, Python 3, C++, Lua, Java. Cloud computing capabilities were recently implemented, enabling a huge gain in performances through a higher level of parallelization.
The scripts describe the scene and its objects: camera, light sources, stars, planets, satellites, their positions, attitudes, shapes and surface properties. There is no intrinsic limitation on the complexity of the scene within this framework. A number of default models are provided that can easily be modified, parameterized or complemented by the user using the SuMoL language (see Sec. 3.2).
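For illustration, a Python client script for a simple scene might look like the sketch below. The module and every method name are assumptions made for this example, not the documented SurRender client API; refer to the SurRender user manual for the actual calls.

```python
# Hypothetical client sketch: names illustrate the workflow only
# (connect to the server, describe the scene, render an image).
from surrender_client import SurRenderClient   # assumed module/class

s = SurRenderClient()
s.connect("127.0.0.1", 5151)                   # TCP/IP link to the server

# camera: resolution, field of view, pose
s.set_image_size(1024, 1024)
s.set_camera_fov_deg(5.0)
s.set_camera_pose(position=[0.0, 0.0, -2.0e6],       # 2000 km away [m]
                  quaternion=[1.0, 0.0, 0.0, 0.0])

# scene: Sun plus a textured spheroid (e.g. the Moon)
s.set_sun_direction([1.0, 0.0, 0.0])
s.create_spheroid("moon", radii=[1.7374e6] * 3)      # mean lunar radius [m]
s.attach_dem("moon", "lola_dem.big")                 # giant-texture file
s.set_brdf("moon", "hapke", params={"w": 0.33})      # placeholder parameters

img = s.render()                                     # irradiance map [W/m^2]
```

The point is the division of labour: the script only states *what* is in the scene (bodies, poses, materials, camera), while the server decides *how* to render it.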


External tools are provided in the package in order to prepare the datasets (formatting, pre-calculation of cone maps, fractal generator, etc.). Using these tools, it is very easy to import data in usual formats such as the NASA PDS format.

3.2 SuMoL
SurRender embeds its own language, SuMoL (standing for SurRender Modelling Language), a high-level programming language derived from C. SuMoL is used to model various components of a simulation. The list of currently available models we provide includes:
- Projection models (pinhole, fisheye, orthographic)
- Geometrical objects (spheres, spheroids, planar & spherical DEMs)
- BRDFs (Lambertian / matte, mirror, Hapke [8], Phong [9], Oren-Nayar [10])
- PSFs (can be field-dependent and chromatic)
- Sampling (sequential, rolling shutter, snapshot, etc.)
- Sensor properties (Sec. 3.3).
These components implement analytical (or numerical) models, and they can be interfaced with the client "on-the-fly", i.e. without recompiling the whole software. They make SurRender a very versatile rendering tool, as new or specific sensor and material behaviors can be easily described by the user without changing the rendering engine. A SuMoL editor is provided as an external tool so that a user can easily implement his/her own models if needed.

3.3 Sensor models
By default, "ideal" images are produced, not altered by any real-life optical effects, in irradiance units (W/m²)‡. The images can be retrieved in greyscale or in multiple wavelength channels: the bandwidth is specified by the user, and the BRDFs and stellar spectra are chromatic. These images already account for the sensor field of view (FOV) and the size of the pixel matrix. A PSF can be specified in order to emulate diffraction by the optics.
‡ Irradiance is defined as the radiant flux received by a surface per unit area (W/m²). It corresponds to the radiance (W·m⁻²·sr⁻¹) received from the scene, modulated by the solid angle of each pixel.
Importantly, fine sensor models can be implemented using external scripts and SuMoL wrappers. Various sensor properties and setups can be tuned: integration time, readout noise, photon noise, transmission, gain, pupil diameter, motion blur, defocus, etc. The acquisition mode can also be simulated: rolling or global shutter, push-broom, snapshot, windowing (accounting for detector motion during the acquisition). Recently, the possibility of a PSF varying in the field of view was implemented, as well as chromatic aberrations. All these effects are natural byproducts of the raytracer. Available sensors include a "generic sensor" (classical noises and global shutter), the HAS2 and HAS3 sensors [11] - which implement windowing and rolling shutter (see Fig. 2) - and more.
Active optical sensors such as LIDARs or time-of-flight cameras can also be used. The simulation includes the light emission (and reflections) from the spot or the laser.
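As an illustration of the analytical BRDF models listed in Sec. 3.2, the sketch below evaluates the Lambertian and Phong models in their standard textbook forms [9]; the SuMoL implementations may differ in conventions and normalization.

```python
import numpy as np

def brdf_lambert(albedo):
    """Lambertian (matte) BRDF: constant, view-independent."""
    return albedo / np.pi

def brdf_phong(n, l, v, kd, ks, alpha):
    """Phong-type BRDF: diffuse term plus a specular lobe around the
    mirror direction. n, l, v are unit normal, light and view vectors."""
    r = 2.0 * np.dot(n, l) * n - l                 # mirror direction of l
    spec = max(np.dot(r, v), 0.0) ** alpha
    # (alpha + 2)/(2*pi) is a common energy-normalized variant of the lobe
    return kd / np.pi + ks * (alpha + 2.0) / (2.0 * np.pi) * spec

# Radiance toward the camera for collimated sunlight of irradiance E_sun:
#   L = brdf(n, l, v) * E_sun * max(np.dot(n, l), 0.0)
```

Because a BRDF is just a function of the local geometry, swapping materials (e.g. Hapke for a regolith, Phong for MLI) changes only this function, not the rendering engine.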
Fig. 2: HAS3 sensor simulation. Sensor sub-windows at different integration times enable the simultaneous observation of faint stars and bright objects.

4. Validation and performances
4.1 Analytical validation
SurRender has passed a formal qualification review internal to Airbus. Its physical accuracy has been tested in specific contexts that can be studied analytically, allowing quantitative assessments. Both geometric and radiometric correctness have been controlled. We verified the (subpixel) position, orientation and irradiance of objects (points, spheroids, textures) in the image plane (with pinhole model, distortions, rolling shutter), including BRDFs (Lambertian sphere or plane) and the PSF effect. The tests demonstrated exquisite accuracy and precision. The report of this analysis can be provided to an interested reader upon request.
This analytical validation was completed by a qualitative study of rendered images of complex scenes against real images.

4.2 Validation with real scenes
We conducted a classical test in computer graphics: the accuracy of the "Cornell box" rendering. Fig. 3 allows qualitative comparisons of SurRender raytracing results with real data photographed with a CCD camera. The visual properties of the image are correctly simulated, with various optical effects such as shadows, BRDFs, inter-reflections, etc., even though in this test some pieces of information were missing (PSF, post-processing, precise camera pose, light spectrum). This particular simulation is very demanding due to the non-sparsity of the scene and the multiple reflections involved.


SurRender has also been validated against real space images. In Fig. 4 we show a comparison with real data from New Horizons' LORRI imager. The scene includes Io and Europa. The secondary illumination of Io by Jupiter is correctly simulated; the only visible difference is the presence of volcanos around Io, which are not yet implemented. In Fig. 4 we also show how the subpixel accuracy was validated against real images with a zoom on the limb of Ganymede, using a simple PSF model and actual data from New Horizons / LORRI.

Fig. 4: A view of the Jupiter moons. Left: SurRender; right: real images (New Horizons). Top panels: Io and Europa. Note the illumination of Io by Jupiter, and the presence of a volcano on the real image that is not yet simulated. Bottom panels: the limb of Ganymede.

Fig. 3: The Cornell box test. Left: SurRender image, right: real image, both at 500 nm. Note the soft shadows and the multiple light reflections between objects.

Another study was performed to qualitatively and quantitatively validate the performances against real data, this time in the framework of the European Commission project NEOShield-2§ [12]. Simulated images were compared to actual space images from the Hayabusa / AMICA imager, and to images acquired on a test bench by industrial partners with a real camera and a mock-up of the scene. The 3D shape of the numerical model and of the mock-up was constructed from AMICA images [13]. A uniform albedo and a Lambertian BRDF were used, and the illumination direction was close to the camera. A thorough analysis of the geometric precision and the radiometric precision was performed.
§ Science and Technology for Near-Earth Object Impact Prevention, http://www.neoshield.eu/

Fig. 5: Itokawa, as seen by the Hayabusa probe (top panels) and simulated with SurRender (bottom panels). Left panels: greyscale images; right panels: logarithmic images.

As illustrated in Fig. 5, the visual inspection of the data is very convincing. Geometric tests reveal errors typically smaller than one pixel on average between the three image series, based on different metrics. Radiometric tests reveal that SurRender and Hayabusa match within a few percent in their radiometric level. Only the residual background noise slightly differs from the actual image, as can be seen on the right panels of Fig. 5. Indeed, we used the official AMICA sensor specification [14], which did not document this residual pattern. The SurRender images are only as good as the detector model. In sum, despite model inaccuracies and realisation errors, the simulated images are very close to reality. In fact, SurRender proved more practical and to some extent more realistic than the physical test bench, as it enables the simulation of a variety of scene configurations without the limitations of the laboratory. Another test consisted in observing the results of applying an image processing algorithm, namely template matching, to the different image sets. It was especially important as its success demonstrated the viability of SurRender simulations to qualify VBN algorithms.
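As an indication of what such a test looks like, the sketch below runs normalized cross-correlation template matching with OpenCV. OpenCV is our choice for illustration; the toolchain and parameters actually used in the study are not specified here, and the file names are placeholders.

```python
import cv2

# Illustrative template-matching test on a rendered or real frame.
image = cv2.imread("rendered_or_real_frame.png", cv2.IMREAD_GRAYSCALE)
patch = cv2.imread("surface_feature_template.png", cv2.IMREAD_GRAYSCALE)

# normalized cross-correlation is insensitive to global gain/offset,
# which helps when comparing simulated and real radiometric levels
scores = cv2.matchTemplate(image, patch, cv2.TM_CCOEFF_NORMED)
_, best, _, (x, y) = cv2.minMaxLoc(scores)

print(f"feature matched at ({x}, {y}) with correlation {best:.3f}")
```

Running the same matcher on the simulated, bench and flight image sets, and comparing the matched locations and scores, is the kind of cross-check that demonstrates the simulations are representative enough for VBN qualification.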


Fig. 6: A flyby of the Moon towards the South Pole simulated with SurRender. The rendering is made at a variety of distances: from thousands of km (top-left), about 100 km (top-right), a few tens of km (bottom-left) and finally a few km (bottom-right).

5. Case study: Moon landing
Recently the software was used to tackle a difficult challenge. In the context of the ESA R&T study GENEVIS**, simulated images were needed for a ~900 km-long flyby along a meridian of the Moon down to the South Pole. To that purpose we downloaded public data from the NASA PDS archive. We used a lunar DEM based on data from NASA's Lunar Reconnaissance Orbiter (LRO) / Lunar Orbiter Laser Altimeter (LOLA) instrument [15]. The GSD is as small as 118 m at the equator and the reported vertical accuracy is 1 m on average. In addition, we used a reflectance map at 750 nm from the JAXA / Kaguya Multiband Imager [16], which has a GSD down to 237 m. We assume that the BRDF of the Moon is best described by a Hapke model. The ultimate resolution that can be achieved here is limited by the data, not by the capacities of SurRender.
** Generic Vision-Based Technology Building Blocks for Space Applications. Disclaimer: the view expressed herein can in no way be taken to reflect the official opinion of the European Space Agency.
The DEM tiles are preprocessed to generate cone maps and height maps. The reflectance map has some voids that we eliminate using extrapolations with a Gaussian filter. A procedural fractal generator is used to enhance the data with additional details at smaller scales. The PDS DEM natively uses bilinear interpolations to fill gaps. Surface normals are computed using a continuous differentiation scheme in order to avoid artefacts. Intersections with the model are computed from the cone maps using the step mapping technique, with sub-centimeter accuracy preventing visible artefacts when the viewpoint changes. After compression to half float and compact float formats, the full input weighs as much as 36 gigabytes. Yet the efficient giant texture management enables the simulations to run with a framerate of about 0.2 Hz on CPU for an image size of 1024x1024 pixels and 128 rays per pixel. We used cloud computing to distribute the workload over 10 machines with 16 cores each, rendering 46,000 images in about 3 days.
For this simulation, a "generic sensor" was used. The model includes pupil and lens size, quantum efficiency, lens transmittance, gain, bandwidth and integration time,


readout noise, a global shutter with a certain line addressing time, etc. Photon noise and a (single) PSF are also accounted for. A selection of images is presented in Fig. 6, embracing various scales. These images have the scientific quality required for the testing of computer vision algorithms.
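To fix ideas, here is a minimal sketch of such a radiometric chain. All constants (pixel pitch, quantum efficiency, gain, etc.) are illustrative placeholders, the pupil/optics terms are assumed already folded into the rendered irradiance, and the whole thing is a simplification rather than the exact SurRender generic sensor.

```python
import numpy as np

rng = np.random.default_rng(42)

def generic_sensor(E, t_int=5e-3, pix_pitch=5e-6, qe=0.6,
                   transmission=0.9, gain=0.2, ron_e=10.0,
                   wavelength=550e-9, full_well=60e3, adc_bits=12):
    """Focal-plane irradiance E [W/m^2] -> noisy digital counts."""
    h, c = 6.62607e-34, 2.99792e8            # Planck constant, speed of light
    e_photon = h * c / wavelength            # photon energy [J]
    # mean photo-electrons per pixel over the integration time
    mean_e = E * pix_pitch**2 * t_int * transmission * qe / e_photon
    electrons = rng.poisson(mean_e).astype(float)    # photon (shot) noise
    electrons += rng.normal(0.0, ron_e, E.shape)     # readout noise
    electrons = np.clip(electrons, 0.0, full_well)   # pixel saturation
    dn = np.round(electrons * gain)                  # gain [DN/e-]
    return np.clip(dn, 0, 2**adc_bits - 1)           # ADC quantization

# Example: a uniform 0.1 W/m^2 frame
img = generic_sensor(np.full((1024, 1024), 0.1))
```

Each step maps onto one of the tunable properties listed in Sec. 3.3, which is why such sensor models can also serve for sensor specification (e.g. predicting the signal level of a target).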

6. Additional case studies
6.1 Asteroids: Itokawa, 67P/CG
SurRender is particularly useful to simulate the approach of small Solar System bodies, which involves a wide distance dynamics, from far-range imaging to touchdown. We have developed a number of scenarios for asteroids and comets based on public data, including 67P/Churyumov-Gerasimenko, Eros and Itokawa. In Fig. 7, we display images of 67P/CG produced with SurRender. At large distances, a 3D model of the body is used based on a mesh shape from Hayabusa / AMICA data [13]. At close range, a DEM model of the Philae landing site "Agilkia" is loaded: it corresponds to the hybridization of descent imaging and a fractal model of craters [17], and it has a spatial resolution as small as 5 mm. A uniform albedo and a Hapke BRDF are assumed. For reference, the dimensions of 67P/CG's small lobe are 2.5x2.5x2.0 km and the dimensions of the large lobe are 4.1x3.2x1.3 km. The sensor model consists of a simple Gaussian PSF. The simulation ensures a continuous rendering from millions of kilometers to sub-meter distances with an adequate level of detail. These results were obtained in the context of the LoVa†† project.
†† Localisation Visuelle Approche astéroïde, ADS/CNES/LAM

Fig. 7: Simulated images of comet 67P/CG ("Chury") at thousands of kilometers, then tens of kilometers.

6.2 Mars (landscape)
Other useful outputs of SurRender consist of depth maps and normal maps. They are especially useful for precision landing applications, or for surface navigation (rovers) that requires hazard detection functions. SurRender can be used to produce slope maps (or equivalently normal maps), and to simulate obstacles or irregular terrains. In Fig. 8, we illustrate these applications in the case of Mars. This scene is generated using a digital elevation model and an albedo map, assuming a Lambertian BRDF. Topographic maps of Mars produced with MRO/HiRISE stereo images were used [18].
These simulations were performed for an R&T study focusing on the fusion of LiDAR and camera images. Indeed, depth maps are the basis for the simulation of LiDAR acquisition; a complete simulation would also account for the laser beam propagation, which is perfectly feasible. Similarly, SurRender can be used to simulate time-of-flight cameras.
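As an example of how such products can be exploited, the sketch below derives a slope map (and a simple hazard mask) from a DEM by finite differences; the threshold and array sizes are illustrative.

```python
import numpy as np

def slope_map(dem, gsd):
    """dem: 2-D height array [m]; gsd: ground sample distance [m].
    Returns the per-pixel slope in degrees."""
    dz_dy, dz_dx = np.gradient(dem, gsd)       # central differences
    # the surface normal is (-dz_dx, -dz_dy, 1) normalized; the slope is
    # the angle between that normal and the local vertical
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# Example: a tilted plane has a uniform slope of ~11.3 deg
yy, xx = np.mgrid[0:64, 0:64]
dem = 0.2 * xx                                 # 0.2 m rise per 1 m
hazard = slope_map(dem, gsd=1.0) > 15.0        # no-go mask for landing
```

A hazard detection and avoidance function would combine such a slope mask with obstacle maps derived from the depth images.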

Fig. 8: Synthetic image of the surface of Mars generated with SurRender (left) using MRO/HiRISE images and DEM and the associated depth map (center) and slope map (right).


6.3 The Jovian system
SurRender is being used for the validation of autonomous navigation solutions for the JUICE spacecraft, ESA's future large mission that will visit the Galilean satellites of Jupiter (Europa, Ganymede and Callisto). The image processing is based on the precise localization of the Jovian moons' limbs during flybys [19]. To that purpose we designed high-fidelity simulations of the Jovian system. The simulation includes a detailed model of the navigation camera, including multiple physical effects of the optics (blooming, chromaticity, ghosts, variable PSF) and of the detector (noises, cross-talk, variable integration time). Due to the high (subpixel) precision required to render the limbs, an accurate model of the moons' BRDF is required. This problem is being tackled by Belgacem et al. 2018 [20], who revisit archival data from the Voyager and New Horizons spacecraft. The results will be implemented in the simulations.
Fig. 9 shows an image of Jupiter and its moons as rendered by SurRender. For this particular illustration, the positions and sizes of the moons were fixed arbitrarily based on an esthetic criterion. Jupiter and its moons are modelled with ellipsoids. The textures are composite images (2048x4096) processed from Voyager and Galileo data [21]‡‡. The raytracer natively produces effects that were not thought of ahead: for example, one can notice that the shadow of Io is visible on Jupiter and has fuzzy edges where the eclipse is only partial. In a similar manner, some ghost effects may have impacted the image processing performance, but they were tackled adequately thanks to adaptations of the algorithm to these effects. Eventually these simulations will be run on a test bench using an optical stimulator. The navigation camera will then be tested under the real sky to further validate the algorithms and the assumptions before the launch, scheduled in 2022.
‡‡ http://stevealbers.net/albers/sos/sos.html

Fig. 9: Jupiter, Europa, Ganymede and Callisto simulated with SurRender. The scene (planet positions) represents an artist's view. An actual camera model is used with an exaggerated setup to highlight chromatism. The shadow of Io can be seen on Jupiter; chromatic aberrations can be seen on Callisto (right).

6.4 Artificial objects
SurRender is also able to simulate artificial objects such as satellites and spacecraft. It is routinely used at ADS for the design and validation of rendezvous missions, including the planned in-orbit servicing vehicle SpaceTug, the space debris removal demonstrator RemoveDebris [22] and the future ESA milestone Mars Sample Return (MSR) [23]. It has even been used to simulate urban landscapes with a DEM, buildings and vehicles for ground-based applications. Fig. 10 shows examples of images produced with SurRender. The 3D model may include moving parts, such as solar panels, or even a robotic arm. The instrument model may also include active sensors, as in the example of the SpaceTug depicted in Fig. 10, which includes a light spot.
The scene configuration requires a 3D model of the target: mesh models can easily be imported using standard data formats such as the OBJ, 3DS and Collada formats. The main difficulty is to set up a BRDF model for each surface. Default materials available in SurRender such as the Lambertian BRDF (matte surface) or the Phong BRDF (plastic, MLI) properly describe satellite materials, and SuMoL enables the addition of new materials.
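To give a flavor of the limb-based measurements of Sec. 6.3, here is a toy sketch: limb points are extracted as brightness edges along image rows, and a circle is fitted by least squares (the Kasa method) to estimate the apparent center of a moon. The real processing of [19] is considerably more sophisticated (subpixel edges, ellipsoidal limbs, PSF handling).

```python
import numpy as np

def limb_points(img, threshold):
    """Return (x, y) of the steepest brightness rise along each row."""
    pts = []
    for y, row in enumerate(img):
        g = np.diff(row.astype(float))
        x = int(np.argmax(g))
        if g[x] > threshold:
            pts.append((x + 0.5, y))     # edge lies between x and x+1
    return np.array(pts)

def fit_circle(pts):
    """Kasa least-squares fit: solves a*x + b*y + c = -(x^2 + y^2)."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    a, b, c = np.linalg.lstsq(A, -(x**2 + y**2), rcond=None)[0]
    cx, cy = -a / 2.0, -b / 2.0
    r = np.sqrt(cx**2 + cy**2 - c)
    return cx, cy, r                     # apparent center and radius [px]
```

The estimated center in pixels, combined with the camera model and ephemerides, yields the line-of-sight measurement that feeds the navigation filter.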



Fig. 10: Top panels: the SPOT 5 and ENVISAT satellites rendered with SurRender on an Earth background. Bottom panels: simulation of a rendezvous scenario, where a telecom satellite is approached by the SpaceTug. Note the shadow of the tug on the satellite. On the right panel, the tug's light spot illuminates the area that lies in the Sun's shadow.

7. Discussion
The current capabilities of the SurRender software undoubtedly put it among the most powerful space simulators available on the market. Airbus and its partners have used and are using it for countless projects. Within its fields of application, it is virtually unlimited. New scenarios and models can easily be set up and simulated in no time, benefiting from the accessible interfaces and from SuMoL. The computing performances of SurRender are exquisite: it can simulate highly realistic ray-traced images in a matter of seconds (depending on scene complexity), and in real time in OpenGL mode. Cloud computing capabilities are also being tested to speed up the rendering of even more complex scenes; the rendering engine will be updated to adapt to these new possibilities. SurRender is definitely an asset for the qualification of VBN solutions. It can also be used for sensor specification, by providing representative target signal levels.

7.1 Why SurRender?
SurRender is a unique tool and it offers a lot of advantages due to its very conception.
- Planetary bodies are described using geometrical models & giant textures rather than meshes. This yields a huge increase in performances in rendering time and image quality (no geometrical defects).
- Pathtracing spontaneously generates natural effects such as the secondary illumination of Io by Jupiter (Fig. 4), geometrically correct shadows / eclipses, reflections, etc. These effects do not need to be manually set up.
- The raytracer accurately renders images with subpixel accuracy. This is due to the physical nature of the algorithm, which does not rely on any spatial sampling (unlike OpenGL).
- Similarly, it naturally supports temporal detector sampling (rolling shutter, pushbroom).
- SurRender runs on CPU, allowing virtually unlimited memory use, notably using cloud computing.


- SurRender is designed to simulate a continuous approach with distances varying by several orders of magnitude, with a constant SNR and without any threshold effects.
- The PSF implementation obeys the laws of physical optics and naturally accounts for the PSF trail to simulate effects such as blooming, even from out-of-field objects.
- SuMoL is a versatile add-on that gives limitless possibilities to the user in terms of scene description, sensor design, etc.
- Real-time rendering is supported through OpenGL with the same scene description/models as the raytracer.

7.2 Future improvements
At the moment SurRender 5 is designed to render optical (visual/near-infrared) images; it does not natively handle thermal infrared images. Yet it is possible to input emission efficiencies to mimic infrared light. At this stage only equilibrium temperatures can be modelled that way (no time dependence). It is perfectly possible to couple the software with a thermal simulator that would provide temperature or emission maps. In future versions, these features could become standard.
Furthermore, we are planning to develop LIDAR modelling capabilities. Currently SurRender produces essentially depth maps, and it can include the illumination from the spot or laser. Some work is needed (SuMoL interfaces, optical paths) to increase the realism, for example by simulating complex equations of light propagation to generate interferences (speckles). We are also testing the ability of SurRender to handle moving parts, such as a robotic arm. Although feasible, this poses some challenges (computing performance, conventions for kinematics) that are the topic of new R&T projects.
The current limitation for Earth applications is the lack of a model for transparent media. Therefore atmospheres cannot be accurately rendered. Yet they can be mimicked using the BRDF of a hologram, an approximation that holds as long as the observer is located outside of the atmosphere. This addition will make it possible to simulate (semi-)transparent media like lenses, oceans, ice (subsurface diffusion), etc.
SurRender does not simulate the path of light through refractive or reflective optics. Instead, a projection model is used and some limitations exist owing to the optical models. We are continuously improving SurRender to include more physical effects.

7.3 Collaboration opportunities
The remaining limitation of SurRender is the available manpower to import new data and develop ever more scenarios. Our teams and partners develop new scenes and models whenever motivated by a project. For Solar System exploration, we have developed models and scenes for the Moon, Mars, the Jovian system, Itokawa, Eros and 67P/CG. SurRender is being used for in-orbit applications such as SpaceTug and RemoveDebris, for MSR and for ExoMars. Various sensor models are already available and we are building a camera model that will include a great number of optical defects, as well as active sensors. The limit is the level of detail of the detector and object models themselves. We are open to collaborations to enrich the possibilities of SurRender. An instrument manufacturer or an academic could use SurRender to test his/her instrument or scene model, for example to generate new scenes or even write a complementary scene generator. The interfaces are easily accessible to a trained engineer or scientist, and they are compatible with standard data formats. Interested readers are invited to contact us via the SurRender website: www.airbus.com/SurRenderSoftware.html.

Acknowledgements
We thank the institutions and partners who placed trust in SurRender, enabling the support of its development, in particular the European Space Agency (ESA), the Centre National d'Etudes Spatiales (CNES) and the European Commission (EU). Some results presented in this paper were carried out under a programme of, and funded by, the European Space Agency (ESA Contract No. 4000115365/15/NL/GLC/fk). Some results were carried out under CNES R&T program R-S16/BS-0005-032 and some others under EU's Horizon 2020 grant agreement No 640351.

This paper was presented at the 69th International Astronautical Congress (IAC), Bremen, Germany, 1-5 October 2018. Contribution reference: IAC-18,A3,2A,x43828. Published by the IAF, with permission and released to the IAF to publish in all forms.

References
[1] G. Flandin, N. Perrimon, B. V. P. Polle, C. Philippe and R. Drai, "Vision Based Navigation for Space Exploration," IFAC Proceedings Volumes, vol. 43, no. 15, pp. 285-290, 2010.
[2] J. Gil-Fernandez and G. Ortega-Hernando, "Autonomous vision-based navigation for proximity operations around binary asteroids," CEAS Space Journal, vol. 10, no. 2, 2018.
[3] K. Kanani, A. Petit, E. Marchand, T. Chabot and B. Gerber, "Vision Based Navigation for Debris Removal Missions," in 63rd International Astronautical Congress, Naples, 2012.
[4] S. Parkes, I. Martin and M. Dunstan, "Planet Surface Simulation with PANGU," in Eighth International Conference on Space Operations, Montréal, 2004.


[5] N. Rowell, S. Parkes, M. Dunstan and O. Dubois-Matra, "PANGU: Virtual spacecraft image generation," in 5th Int. Conf. on Astrodynamics, ICATT, ESA ESTEC, Noordwijk, The Netherlands, 2012.
[6] L. Baboud and X. Décoret, "Rendering Geometry with Relief Textures," in Graphics Interface '06, Quebec, Canada, 2006.
[7] H. Nguyen, GPU Gems 3, Addison-Wesley Professional, 2007.
[8] B. Hapke, Theory of Reflectance and Emittance Spectroscopy, Cambridge University Press, 1992.
[9] B. T. Phong, "Illumination for Computer Generated Pictures," Communications of the ACM, vol. 18, no. 6, 1975.
[10] M. Oren and S. K. Nayar, "Generalization of Lambert's Reflectance Model," SIGGRAPH, pp. 239-246, 1994.
[11] M. Innocent, "HAS3: A radiation tolerant CMOS image sensor for space applications," in 49th International Workshop on Image Sensors, 2017.
[12] M. Chapuy, N. Despré, P. Hyounet, F. Capolupo and R. Brochard, "NEOShield-2: Design and End-to-End Validation of an Autonomous Closed-Loop GNC System for Asteroid Kinetic Impactor Missions," in 10th International ESA Conference on Guidance, Navigation & Control Systems, Salzburg, 2017.
[13] O. Barnouin and E. Kahn, "Hayabusa AMICA Images with Geometry Backplanes V1.0," NASA Planetary Data System, id. HAY-A-AMICA-3-AMICAGEOM-V1.0, 2012.
[14] M. Ishiguro, R. Nakamura and D. Tholen, "The Hayabusa Spacecraft Asteroid Multi-band Imaging Camera (AMICA)," Icarus, vol. 207, no. 2, pp. 714-731, 2010.
[15] M. Barker, E. Mazarico and G. Neumann, "A new lunar digital elevation model from the Lunar Orbiter Laser Altimeter and SELENE Terrain Camera," Icarus, vol. 273, pp. 346-355, 2016.
[16] M. Ohtake, C. M. Pieters and P. Isaacson, "One Moon, Many Measurements 3: Spectral reflectance," Icarus, vol. 226, no. 1, pp. 364-374, 2013.
[17] L. Jorda, R. Gaskell and C. Capanna, "The global shape, density and rotation of Comet 67P/Churyumov-Gerasimenko from preperihelion Rosetta/OSIRIS observations," Icarus, vol. 277, pp. 257-278, 2016.
[18] R. Kirk, E. Howington-Kraus and M. Rosiek, "Ultrahigh resolution topographic mapping of Mars with MRO HiRISE stereo images: Meter-scale slopes of candidate Phoenix landing sites," J. Geophys. Res., vol. 113, no. E00A24, 2008.
[19] G. Jonniaux and D. Gherardi, "Robust extraction of navigation data from images for planetary approach and landing," in ESA GNC, Porto, 2014.
[20] I. Belgacem, F. Schmidt and G. Jonniaux, "Estimation of Hapke's Parameters on Selected Areas of Europa Using a Bayesian Approach," in Lunar and Planetary Science Conference, The Woodlands, Texas, 2018.
[21] S. C. Albers, A. E. MacDonald and D. Himes, "Displaying Planetary and Geophysical Datasets on NOAA's Science On a Sphere (TM)," in American Geophysical Union, Fall Meeting 2005, 2005.
[22] J. Forshaw, G. Aglietti and N. Navarathinam, "RemoveDEBRIS: An in-orbit active debris removal demonstration mission," Acta Astronautica, vol. 127, pp. 448-463, 2016.
[23] S. Vijendran, J. Huesing, F. Beyer and A. McSweeney, "Mars Sample Return Earth Return Orbiter mission overview," in 2nd International Mars Sample Return Conference, Berlin, 2018.
