
Physically Based Shader Design in Arnold
Anders Langlands

Introduction

alShaders is an open-source, production shader library for Arnold. It was created as a hobby project, both as a learning exercise for the author to get to grips with the Arnold SDK and to fill a gap in the Arnold toolset, since no production-quality library existed that was available or fully functional across all of the DCC (Digital Content Creation) applications supported by Arnold. It was made open source so that others might learn from it and hopefully contribute to its development. Since its inception in 2012, it has seen action in many studios around the world and across a wide variety of work.

In this document, we will examine what constitutes a production shader library and examine the design choices that shaped the form alShaders would take. As we will see, many of those choices follow naturally from the design of the renderer itself, so we will also take a brief look at the design of Arnold. We will primarily focus on the design of the surface shader, alSurface, examining the way it is structured in order to create a simple-to-use, physically plausible shader. We will cover the outputs it generates and how they are intended to be used within a visual effects pipeline. We will also look at some of the tricks employed to reduce noise and render faster, even if it sometimes means breaking physical correctness. Finally, we will see what other choices could have been made in its design, along with potential areas for improvement in the future, including potentially fruitful research avenues based on recent work within the graphics community.

Figure 1: Examples of work using alShaders. Images used with permission courtesy of (clockwise, from top-left): Nozon, Storm, Brett Sinclair, Phil Amelung & Daniel Hennies, Psyop, Pascal Floerks.

What Makes a Production Shader Library?

A full-featured shader library should be able to correctly shade the vast majority of assets that come through a typical production pipeline. This means that we need surface shaders to handle the most common materials: skin, fibers, metal, plastics, glass, etc. Ideally these materials should be handled with as few different shaders as possible, so that users do not have to learn multiple interfaces (or make a judgment call about which shader is best for a particular material), and to reduce the amount of code maintenance required. In alShaders all of these materials are handled by alSurface, with the notable exception of fibers, for which we provide alHair. Multiple surface shaders can be layered together with alLayer to create more complex materials.

In order to create realistic materials, users must be able to generate patterns to drive material parameters, either by reading from textures or through procedural noise functions. In most DCC applications such as Maya, file-based textures are fairly deeply ingrained within the application's workflow, so we rely on translating the application's native functionality. For procedurals we provide several different noise types such as alFractal and alCellNoise.

These patterns must usually be remapped and blended together in a variety of ways in order to get the desired final appearance, so we provide nodes for basic mathematical operations on both colors and floats (alCombineColor/Float), as well as nodes for remapping their values in ways such as input/output range selection, contrast and gamma (alRemapColor/Float).
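To make the remapping discussion concrete, the following C++ sketch shows the kind of operation such a node performs on a float input: select an input range, apply gamma and contrast about a pivot, and expand into an output range. It is a minimal sketch only; the parameter names (inputMin, inputMax, gamma, contrast, contrastPivot, outputMin, outputMax) and the exact order of operations are assumptions made for illustration, not a verbatim description of alRemapFloat.

    // Minimal sketch of a float remap node. The controls and their ordering are
    // illustrative assumptions, not a verbatim description of alRemapFloat.
    #include <algorithm>
    #include <cmath>

    struct RemapParams
    {
        float inputMin  = 0.0f, inputMax  = 1.0f;  // input range selection
        float gamma     = 1.0f;                    // response curve on the normalized value
        float contrast  = 1.0f;                    // contrast scale about contrastPivot
        float contrastPivot = 0.5f;
        float outputMin = 0.0f, outputMax = 1.0f;  // output range expansion
    };

    inline float remapFloat(float x, const RemapParams& p)
    {
        // Normalize the input into [0,1] based on the selected input range.
        float t = (x - p.inputMin) / (p.inputMax - p.inputMin);
        t = std::clamp(t, 0.0f, 1.0f);

        // Gamma adjustment of the normalized value.
        t = std::pow(t, 1.0f / p.gamma);

        // Contrast: push the value away from (or pull it towards) the pivot.
        t = (t - p.contrastPivot) * p.contrast + p.contrastPivot;
        t = std::clamp(t, 0.0f, 1.0f);

        // Expand into the requested output range.
        return p.outputMin + t * (p.outputMax - p.outputMin);
    }

A color remap can be built the same way by applying the operation per channel; in a real shading network the input x would of course come from an upstream texture or noise connection.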
Several nodes, in particular the procedural patterns, include remapping controls directly so as to reduce the number of nodes required for a given network, for the sake of both performance and network legibility. Finally, more complicated shading setups might require knowledge of the scene geometry, such as surface curvature (alCurvature) or normal directions and point positions (alInputVector), or the ability to select or cache subnetworks for efficiency (alSwitch and alCache, respectively).

Figure 2: Different materials created using a combination of alSurface, alLayer, alHair, procedural noise, remapping and blending nodes. Head model by Lee Perry-Smith. Hair model by Cem Yuksel.

Design Philosophy

In this section we will examine the philosophy that guided the choices made in designing the shader library. Many of us in the visual effects industry have spent a significant amount of time trying to turn PRMan into a physically based renderer [1], often with limited success. That experience teaches us that it is better to play to a renderer's strengths (and manage its limitations) rather than trying to write a whole new renderer inside a shader.

[1] Interestingly, Pixar have now taken a leaf out of Arnold's book and gone fully raytraced as well.

Arnold

Arnold is an extremely fast, brute-force, unidirectional path tracer. It is targeted primarily at efficient rendering of the massive data sets typical in high-end visual effects production and is therefore tuned to offer the fastest possible ray tracing while keeping the lowest possible memory footprint. While brute-force Monte Carlo may seem decidedly "old school" compared to the myriad advanced algorithms available to choose from today, it fits perfectly with the guiding principle behind much of Arnold's design, namely that CPU time is cheaper than artist time. While algorithms such as photon mapping and irradiance caching can generate high-quality results in a short time frame, they rely on the user understanding and correctly setting a whole host of extra parameters that can interact in unintuitive ways. Managing these parameters requires extra artist time, which is expensive and could be better spent making a shot look good. Furthermore, setting these parameters incorrectly could make a long render unusable due to flickering or other artifacts.

Figure 3: Arnold continually reduces both render time and memory usage with each release.

In contrast, the brute-force approach presents the user with a simple choice: take fewer samples for a faster, noisier result, or take more samples for a slower, cleaner result. Tuning a scene to get the desired level of noise in the desired amount of time then becomes a matter of setting a small number of parameters that interact with each other in very predictable ways. These parameters can often be set per sequence (or even once for the entire show) and then forgotten about, leaving the lighting artist to concentrate on lighting.

To make the renderer fast enough for real production workloads without resorting to complex caching or interpolation cheats, Arnold concentrates on multiple importance sampling (MIS) and highly efficient sample distribution, reducing the number of samples needed for a clean result from any given scene. The API itself is C and fairly low level, giving shader writers the choice of either relying on the built-in integration and MIS functions or writing their own.
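For reference, the sketch below shows the standard way samples from two strategies (for example, light sampling and BSDF sampling) are weighted under MIS, using the balance and power heuristics from Veach's thesis. This is the generic, textbook form of the technique referred to above; it is not code taken from Arnold or alShaders.

    // Generic MIS weights for combining two sampling strategies, e.g. sampling
    // the lights vs. sampling the BSDF. nf/ng are the sample counts and
    // fPdf/gPdf the densities of the two strategies for the same direction.
    inline float balanceHeuristic(int nf, float fPdf, int ng, float gPdf)
    {
        const float f = nf * fPdf;
        const float g = ng * gPdf;
        return f / (f + g);
    }

    // The power heuristic (beta = 2) further suppresses low-probability samples
    // and is the usual default in production path tracers.
    inline float powerHeuristic(int nf, float fPdf, int ng, float gPdf)
    {
        const float f = nf * fPdf;
        const float g = ng * gPdf;
        return (f * f) / (f * f + g * g);
    }

A light-sample contribution would then be weighted by powerHeuristic(nLight, lightPdf, nBsdf, bsdfPdf) before being accumulated, and symmetrically for directions generated by sampling the BSDF. This is roughly the weighting a shader gets for free when it relies on the built-in integration and MIS functions mentioned above.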
Rays are traced strictly in order, and Arnold provides a message-passing API to allow communication between different vertices of a single path.

Figure 4: Examples of Solid Angle's published research on sampling techniques.

The Design of alShaders

These characteristics lead us naturally to a set of guiding design principles for the development of the library, ensuring that we use Arnold in the most effective way possible.

Don't Try to Turn Arnold Into Something It's Not: Understand the strengths of the renderer (raytracing speed, sampling efficiency, etc.) and use those features to solve the rendering equation. In particular, there should be no interpolation or precalculation of illumination.

Simplicity: Choose algorithms that use as few parameters as possible, or reparameterize to make the user interface simple.

Predictability: User parameters should drive changes in the shading result in a predictable way. If the user inputs a color value (say, with a texture map), then the shader result should be as close to that color as possible without introducing unexpected color shifts due to the algorithms used. Scalar parameters should be remapped wherever possible to produce a linear response to input.

Efficiency: Use multiple importance sampling for everything, and in particular use Arnold's built-in sampling routines wherever possible to allow the renderer to manage sample distribution efficiently. The user should not have to worry about edge cases: the renderer should perform reasonably well in every case, and the user should not have to remember dozens of rules such as "if it's this type of scene, I have to enable this parameter".

Scalability: We are aiming for production-quality rendering, so motion blur is a necessity. Consequently, we should use algorithms and techniques that work efficiently at high AA sample counts.

Plausibility: Finally, we take it for granted that all algorithms should be physically plausible. Moreover, Simplicity should apply here too, so that plausible materials are made as simple as possible. The user should not have to worry about whether they've achieved the look they want "in the right way". In other words, if it looks right, it is right.

The Layered Uber-Shader Model vs. The BSDF-Stack Model

The first question to answer when designing a new shader is: do you want it to be an uber shader or a stack shader? In recent times, BSDF-stack models have become more fashionable, while uber-shader models have fallen out of favour. In this section we will examine the pros and cons of each model as they relate to our stated design philosophy.

The BSDF-Stack Model

The BSDF stack is normally implemented as a series of BSDF shading nodes connected to a Stack shader that is responsible for handling the light and sampling loops and accumulating the results from each connected BSDF node. This model offers a number of attractive features, namely:

Flexibility: The design is extremely flexible, as the user can combine whatever BSDFs they like in order to create their materials.