Ray Tracing on Graphics Hardware
Total Pages: 16
File Type: pdf, Size: 1020 KB
Recommended publications
Ray-Tracing in Vulkan.Pdf
Ray-tracing in Vulkan: a brief overview of the provisional VK_KHR_ray_tracing API. Jason Ekstrand, XDC 2020.
Who am I? Name: Jason Ekstrand. Employer: Intel. First freedesktop.org commit: wayland/31511d0e, dated Jan 11, 2013. What I work on: everything Intel except the OpenGL front-end (src/intel/*, src/compiler/nir, src/compiler/spirv, src/mesa/drivers/dri/i965, src/gallium/drivers/iris).
Vulkan ray-tracing history: On March 19, 2018, Microsoft announced DirectX Ray-tracing (DXR). On September 19, 2018, Vulkan 1.1.85 included VK_NVX_ray_tracing for ray-tracing on Nvidia RTX GPUs. On March 17, 2020, Khronos released the provisional cross-vendor extensions VK_KHR_ray_tracing and SPV_KHR_ray_tracing. Final cross-vendor Vulkan ray-tracing extensions are still in progress within the Khronos Vulkan working group.
Overview: my objective with this presentation is mostly educational: a quick overview of ray-tracing concepts and a walk through how it all maps to the Vulkan ray-tracing API, focusing on the provisional VK/SPV_KHR_ray_tracing extensions. Several details will likely be different in the final extension; none of that is public yet, sorry, but the general shape should be roughly the same between provisional and final. Not going to discuss details of ray-tracing on Intel GPUs.
What is ray-tracing? All 3D rendering is a simulation of physical light. Why don't we do all 3D rendering this way? The primary problem here is wasted rays: the chances of a random photon from the sun hitting your scene are tiny, about 1 in …
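The wasted-rays argument above is why practical ray tracers work backwards: instead of simulating photons leaving the light, they shoot a ray per pixel from the camera into the scene, so every traced ray contributes to the image. Below is a minimal C++ sketch of that idea; the trace() function is a placeholder for real scene intersection and shading, and none of these names come from the Vulkan API:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Placeholder: a real renderer would intersect the ray with the scene and shade the hit.
static Vec3 trace(Vec3 /*origin*/, Vec3 dir) {
    // Map the ray direction to a colour so the example produces visible output.
    return {0.5f * (dir.x + 1.0f), 0.5f * (dir.y + 1.0f), 0.5f * (dir.z + 1.0f)};
}

int main() {
    const int width = 64, height = 48;
    const float aspect = float(width) / float(height);
    const Vec3 eye = {0.0f, 0.0f, 0.0f};

    std::printf("P3\n%d %d\n255\n", width, height);  // tiny PPM image on stdout
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            // One primary ray per pixel, from the eye through the image plane.
            // Every ray traced this way matters for the final image, unlike photons
            // emitted randomly from a light source, most of which never reach the camera.
            float u = (2.0f * (x + 0.5f) / width - 1.0f) * aspect;
            float v = 1.0f - 2.0f * (y + 0.5f) / height;
            Vec3 dir = normalize({u, v, -1.0f});
            Vec3 c = trace(eye, dir);
            std::printf("%d %d %d\n", int(c.x * 255), int(c.y * 255), int(c.z * 255));
        }
    }
    return 0;
}
```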
DirectX® Ray Tracing
DirectX® Raytracing 1.1, Rys Sommefeldt (AMD).
Intro: DirectX® Raytracing 1.1 in DirectX® 12 Ultimate, with AMD RDNA™ 2 PC performance recommendations.
What is DXR 1.1? It adds inline raytracing, gives more direct control over raytracing workload scheduling, allows raytracing from all shader stages, and is particularly well suited to solving secondary visibility problems.
New Ray Accelerator.
DXR 1.1 best practices: Trace as few rays as possible to achieve the right level of quality. Content- and scene-dependent techniques work best, with positive results when driving your raytracing system from a scene classifier. One ray per pixel can generate high quality results, especially when combined with techniques and high quality denoising systems, and lets you judiciously spend your ray tracing budget right where it will pay off.
Using DXR 1.1 to trace rays: DXR 1.1 lets you call TraceRay() from any shader stage, but the best performance is found when you use it in compute shaders dispatched on a compute queue, matching the existing asynchronous compute techniques you're already familiar with. Always have just one active RayQuery object in scope at any time.
Resource balancing: part of our ray tracing system …
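The Ray Accelerator mentioned above is the RDNA 2 hardware block that speeds up the ray/box and ray/triangle intersection tests used during BVH traversal. As a rough illustration of the kind of work it offloads (purely a sketch, not AMD's implementation), here is the classic slab test for intersecting a ray with an axis-aligned bounding box in C++:

```cpp
#include <algorithm>
#include <cstdio>
#include <utility>

struct Vec3 { float x, y, z; };

// Slab test: does origin + t * dir enter the box for some t in [tMin, tMax]?
bool rayBoxIntersect(Vec3 origin, Vec3 invDir, Vec3 boxMin, Vec3 boxMax,
                     float tMin, float tMax) {
    const float o[3]  = {origin.x, origin.y, origin.z};
    const float id[3] = {invDir.x, invDir.y, invDir.z};
    const float lo[3] = {boxMin.x, boxMin.y, boxMin.z};
    const float hi[3] = {boxMax.x, boxMax.y, boxMax.z};
    for (int axis = 0; axis < 3; ++axis) {
        float t0 = (lo[axis] - o[axis]) * id[axis];
        float t1 = (hi[axis] - o[axis]) * id[axis];
        if (t0 > t1) std::swap(t0, t1);   // handle rays travelling in the negative direction
        tMin = std::max(tMin, t0);
        tMax = std::min(tMax, t1);
        if (tMin > tMax) return false;    // the slabs no longer overlap: the ray misses
    }
    return true;
}

int main() {
    Vec3 origin = {0.0f, 0.0f, -5.0f};
    Vec3 dir    = {0.0f, 0.0f, 1.0f};
    // Precomputing 1/dir is the usual trick; division by zero yields +/-inf, which the test tolerates.
    Vec3 invDir = {1.0f / dir.x, 1.0f / dir.y, 1.0f / dir.z};
    bool hit = rayBoxIntersect(origin, invDir, {-1, -1, -1}, {1, 1, 1}, 0.0f, 1e30f);
    std::printf("hit = %s\n", hit ? "yes" : "no");
    return 0;
}
```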
Visualization Tools and Trends: A Resource Tour
Visualization Tools and Trends: a resource tour.
The obligatory disclaimer: This presentation is provided as part of a discussion on transportation visualization resources. The Atlanta Regional Commission (ARC) does not endorse nor profit, in whole or in part, from any product or service offered or promoted by any of the commercial interests whose products appear herein. No funding or sponsorship, in whole or in part, has been provided in return for displaying these products. The products are listed herein at the sole discretion of the presenter and are principally oriented toward transportation practitioners as well as graphics and media professionals. The ARC disclaims and waives any responsibility, in whole or in part, for any products, services or merchandise offered by the aforementioned commercial interests or any of their associated parties or entities. You should evaluate your own individual requirements against available resources when establishing your own preferred methods of visualization.
What is visualization? As described on Wikipedia: illustration; information graphics (visual representations of information, data or knowledge); mental image (imagination); spatial visualization (the ability to mentally manipulate 2-dimensional and 3-dimensional figures); computer graphics; interactive imaging; music visualization.
IEEE on visualization: "Traditionally the tool of the statistician and engineer, information visualization has increasingly become a powerful new medium for artists and designers as well. Owing in part to the mainstreaming …
Real-Time Rendering Techniques with Hardware Tessellation
Computer Graphics Forum, Volume 34 (2015), Number x, pp. 0–24. Real-time Rendering Techniques with Hardware Tessellation. M. Nießner (Stanford University), B. Keinert (University of Erlangen-Nuremberg), M. Fisher (Stanford University), M. Stamminger (University of Erlangen-Nuremberg), C. Loop (Microsoft Research), and H. Schäfer (University of Erlangen-Nuremberg).
Abstract: Graphics hardware has been progressively optimized to render more triangles with increasingly flexible shading. For highly detailed geometry, interactive applications restricted themselves to performing transforms on fixed geometry, since they could not incur the cost required to generate and transfer smooth or displaced geometry to the GPU at render time. As a result of recent advances in graphics hardware, in particular the GPU tessellation unit, complex geometry can now be generated on-the-fly within the GPU's rendering pipeline. This has enabled the generation and displacement of smooth parametric surfaces in real-time applications. However, many well-established approaches in offline rendering are not directly transferable due to the limited tessellation patterns or the parallel execution model of the tessellation stage. In this survey, we provide an overview of recent work and challenges in this topic by summarizing, discussing, and comparing methods for the rendering of smooth and highly-detailed surfaces in real-time.
1. Introduction. Graphics hardware originated with the goal of efficiently rendering geometric surfaces. GPUs achieve high performance by using a pipeline where large components are performed independently and in parallel. Hardware tessellation has attained widespread use in computer games for displaying highly-detailed, possibly animated, objects. In the animation industry, where displaced subdivision surfaces are the typical modeling and rendering primitive, hardware tessellation has also been identified as a …
GeForce® RTX 2070 Overclocked Dual
GAMING. GeForce RTX™ 2070 8GB XLR8 Gaming Overclocked Edition.
Up to 6X faster performance: experience 6X the performance of previous-generation graphics cards combined with maximum power efficiency. Real-time ray tracing in games: GeForce RTX™ 2070 is light years ahead of other cards, delivering truly unique real-time ray-tracing technologies for cutting-edge, hyper-realistic graphics. Latest AI-enhanced graphics: powered by NVIDIA Turing, GeForce™ RTX 2070 brings the power of AI to games.
GRAPHICS REINVENTED. The powerful new GeForce® RTX 2070 takes advantage of the cutting-edge NVIDIA Turing™ architecture to immerse you in incredible realism and performance in the latest games. The future of gaming starts here. GeForce® RTX graphics cards are powered by the Turing GPU architecture and the all-new RTX platform. This gives you up to 6x the performance of previous-generation graphics cards and brings the power of real-time ray tracing and AI to your favorite games. When it comes to next-gen gaming, it's all about realism. GeForce RTX 2070 is light years ahead of other cards, delivering truly unique real-time ray-tracing technologies for cutting-edge, hyper-realistic graphics.
PRODUCT SPECIFICATIONS:
NVIDIA CUDA Cores: 2304
Clock Speed: 1410 MHz
Boost Speed: 1710 MHz
Memory Speed: 14 Gbps
Memory Size: 8GB GDDR6
Memory Interface: 256-bit
Memory Bandwidth: 448 GB/s
TDP: 185 W
NVLink: Not Supported
Outputs: DisplayPort 1.4 (x2), HDMI 2.0b, USB Type-C
Multi-Screen: Yes
Max Resolution: 7680 x 4320 @ 60Hz (Digital)
KEY FEATURES / SYSTEM REQUIREMENTS: Power …
NVIDIA Quadro Technical Specifications
The NVIDIA Quadro® family of professional solutions for workstations delivers the fastest application performance and the highest quality graphics. Raw performance and quality are only the beginning. The NVIDIA … In addition to a full line-up of 2D and 3D workstation graphics solutions, the NVIDIA Quadro professional products include a set of specialty solutions that meet the needs of a wide range of industry professionals. These specialty …
NVIDIA Quadro Workstation GPU: full 128-bit floating point precision pipeline; 12-bit subpixel precision; hardware-accelerated antialiased points and lines; hardware OpenGL overlay planes; hardware-accelerated two-sided lighting; hardware-accelerated clipping planes; third-generation occlusion culling; 16 textures per pixel; OpenGL quad-buffered stereo (3-pin sync connector).
High-resolution Antialiasing: up to 16x full-scene antialiasing (FSAA) at resolutions up to 1920 x 1200; 12-bit subpixel sampling precision enhances AA quality; rotated-grid FSAA significantly increases color accuracy and visual quality for edges, while maintaining performance.
Memory: high-speed memory (up to 512MB GDDR3); advanced lossless compression; …
Applications: Dassault CATIA; ESRI ArcGIS; ICEM Surf; MSC.Nastran, MSC.Patran; PTC Pro/ENGINEER Wildfire, 3Dpaint, CDRS; SolidWorks; UGS NX Series, I-deas, SolidEdge, Unigraphics, SDRC; and many more… Digital Content Creation (DCC): Alias Maya, MOTIONBUILDER; NewTek Lightwave 3D; Autodesk Media and Entertainment …
TracePro's Accurate LED Source Modeling Improves the Performance of Optical Design Simulations
TECHNICAL ARTICLE, March 2015. TracePro's accurate LED source modeling improves the performance of optical design simulations.
Modern optical modeling programs allow product design engineers to create, analyze, and optimize LED sources and LED optical systems as a virtual prototype prior to manufacturing the actual product. The precision of these virtual models depends on accurately representing the components that make up the model, including the light source. This paper discusses the physics behind light source modeling, source modeling options in TracePro, selection of the best modeling method, comparing modeled vs measured results, and source model reporting.
Ray Tracing Physics and Source Representation. In TracePro and other ray tracing programs, LED sources are represented as a series of individual rays, each with attributes of wavelength, luminous flux, and direction. To obtain the most accurate results, sources are typically represented using millions of individual rays. Source models range from simple (e.g. point source) to complex (e.g. 3D model) and can involve complex interactions completely within the source model. For example, a light source with an integrated TIR lens can effectively be represented as a "source" in TracePro. However, sources are usually integrated with other components to represent the complete optical system. We will discuss ray tracing in this context.
Ray tracing engines, in their simplest form, employ Snell's Law and the Law of Reflection to determine the path and characteristics of each individual ray as it passes through the optical system. Figure 1 illustrates a simple optical system with reflection and refraction. (Figure 1: Simple ray trace with refraction and reflection.) TracePro's sophisticated ray tracing engine incorporates specular transmission and reflection, scattered transmission and reflection, absorption, bulk scattering, polarization, fluorescence, diffraction, and gradient index properties. …
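To make the Snell's Law / Law of Reflection step concrete, here is a small C++ sketch, not TracePro code, that computes the reflected and refracted directions for a ray hitting a surface with a given normal and refractive-index ratio eta = n1/n2:

```cpp
#include <cmath>
#include <cstdio>
#include <optional>

struct Vec3 { double x, y, z; };

static Vec3 operator*(double s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Law of Reflection: the incident direction is mirrored about the surface normal.
Vec3 reflect(Vec3 incident, Vec3 normal) {
    return incident - 2.0 * dot(incident, normal) * normal;
}

// Snell's Law in vector form; eta = n1 / n2. Returns nothing on total internal reflection.
std::optional<Vec3> refract(Vec3 incident, Vec3 normal, double eta) {
    double cosI = -dot(incident, normal);
    double sinT2 = eta * eta * (1.0 - cosI * cosI);
    if (sinT2 > 1.0) return std::nullopt;            // total internal reflection
    double cosT = std::sqrt(1.0 - sinT2);
    return eta * incident + (eta * cosI - cosT) * normal;
}

int main() {
    Vec3 incident = {0.707107, -0.707107, 0.0};       // 45 degrees onto a horizontal surface
    Vec3 normal = {0.0, 1.0, 0.0};
    Vec3 r = reflect(incident, normal);
    auto t = refract(incident, normal, 1.0 / 1.5);    // air into glass (n = 1.5)
    std::printf("reflected: (%.3f, %.3f, %.3f)\n", r.x, r.y, r.z);
    if (t) std::printf("refracted: (%.3f, %.3f, %.3f)\n", t->x, t->y, t->z);
    return 0;
}
```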
Extending the Graphics Pipeline with Adaptive, Multi-Rate Shading
Extending the Graphics Pipeline with Adaptive, Multi-Rate Shading. Yong He, Yan Gu, Kayvon Fatahalian, Carnegie Mellon University.
Abstract: Due to complex shaders and high-resolution displays (particularly on mobile graphics platforms), fragment shading often dominates the cost of rendering in games. To improve the efficiency of shading on GPUs, we extend the graphics pipeline to natively support techniques that adaptively sample components of the shading function more sparsely than per-pixel rates. We perform an extensive study of the challenges of integrating adaptive, multi-rate shading into the graphics pipeline, and evaluate two- and three-rate implementations that we believe are practical evolutions of modern GPU designs. We design new shading language abstractions that simplify development of shaders for this system, and design adaptive techniques that use these mechanisms to reduce the number of instructions performed during shading by more than a factor of three …
… compute capability as a primary mechanism for improving the quality of real-time graphics. Simply put, to scale to more advanced rendering effects and to high-resolution outputs, future GPUs must adopt techniques that perform shading calculations more efficiently than the brute-force approaches used today. In this paper, we enable high-quality shading at reduced cost on GPUs by extending the graphics pipeline's fragment shading stage to natively support techniques that adaptively sample aspects of the shading function more sparsely than per-pixel rates. Specifically, our extensions allow different components of the pipeline's shading function to be evaluated at different screen-space rates and provide mechanisms for shader programs to dynamically determine (at …
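A toy CPU-side C++ sketch of the multi-rate idea (not the paper's pipeline extension or its shading language): an expensive, slowly varying shading term is evaluated once per 2x2 pixel block, while a cheap, high-frequency term still runs per pixel. The two terms here are arbitrary stand-ins:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Expensive, low-frequency shading term (stand-in for, say, an indirect lighting approximation).
static float expensiveTerm(float u, float v) {
    float s = 0.0f;
    for (int i = 1; i <= 64; ++i)                    // deliberately costly
        s += std::sin(6.28318f * i * u) * std::cos(6.28318f * i * v) / float(i * i);
    return 0.5f + 0.25f * s;
}

// Cheap, high-frequency term (stand-in for an albedo texture lookup).
static float cheapTerm(float u, float v) {
    return 0.5f + 0.5f * std::sin(40.0f * u) * std::sin(40.0f * v);
}

int main() {
    const int W = 8, H = 8;
    std::vector<float> image(W * H);

    // Coarse rate: one evaluation per 2x2 block, reused by the four pixels inside it.
    for (int by = 0; by < H; by += 2) {
        for (int bx = 0; bx < W; bx += 2) {
            float coarse = expensiveTerm((bx + 1.0f) / W, (by + 1.0f) / H);
            for (int dy = 0; dy < 2; ++dy)
                for (int dx = 0; dx < 2; ++dx) {
                    int x = bx + dx, y = by + dy;
                    // Fine rate: the cheap term still runs per pixel.
                    image[y * W + x] = coarse * cheapTerm((x + 0.5f) / W, (y + 0.5f) / H);
                }
        }
    }

    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) std::printf("%.2f ", image[y * W + x]);
        std::printf("\n");
    }
    return 0;
}
```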
Graphics Pipeline and Rasterization
Graphics Pipeline & Rasterization. MIT EECS 6.837, Matusik. (Image removed due to copyright restrictions.)
How do we render interactively? Use graphics hardware, via OpenGL or DirectX (OpenGL is multi-platform, DirectX is MS only). (Figure: OpenGL rendering vs. our ray tracer. © Khronos Group. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/help/faq-fair-use/.) Most global effects available in ray tracing will be sacrificed for speed, but some can be approximated.
Ray casting vs. GPUs for triangles. Ray casting: for each pixel (ray), for each triangle, does the ray hit the triangle? Keep the closest hit. GPU rasterization: for each triangle, for each pixel, does the triangle cover the pixel? Keep the closest hit. It's just a different order of the loops!
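A runnable C++ sketch of the loop-order contrast above. To stay short, each "triangle" is reduced to a screen-space box with a constant depth, so the hit/coverage test is trivial; the point is only that both loop orders produce the same depth buffer:

```cpp
#include <cstdio>
#include <limits>
#include <vector>

// For illustration only: a primitive is a screen-space box at constant depth.
struct Tri { int x0, y0, x1, y1; float depth; };

static bool covers(const Tri& t, int x, int y) {
    return x >= t.x0 && x < t.x1 && y >= t.y0 && y < t.y1;
}

// Ray casting: outer loop over pixels (rays), inner loop over scene primitives.
void rayCast(const std::vector<Tri>& scene, std::vector<float>& zbuf, int W, int H) {
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            float closest = std::numeric_limits<float>::infinity();
            for (const Tri& t : scene)
                if (covers(t, x, y) && t.depth < closest) closest = t.depth;  // keep closest hit
            zbuf[y * W + x] = closest;
        }
}

// GPU-style rasterization: outer loop over primitives, inner loop over pixels.
void rasterize(const std::vector<Tri>& scene, std::vector<float>& zbuf, int W, int H) {
    for (const Tri& t : scene)
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x)
                if (covers(t, x, y) && t.depth < zbuf[y * W + x])
                    zbuf[y * W + x] = t.depth;                                // z-buffer test
}

int main() {
    const int W = 8, H = 4;
    std::vector<Tri> scene = {{1, 1, 5, 3, 2.0f}, {3, 0, 8, 4, 1.0f}};
    std::vector<float> a(W * H, std::numeric_limits<float>::infinity());
    std::vector<float> b = a;
    rayCast(scene, a, W, H);
    rasterize(scene, b, W, H);
    std::printf("results match: %d\n", int(a == b));  // same image, different loop order
    return 0;
}
```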
Ray-Tracing Procedural Displacement Shaders
Ray-tracing Procedural Displacement Shaders. Wolfgang Heidrich and Hans-Peter Seidel, Computer Graphics Group, University of Erlangen.
Abstract: Displacement maps and procedural displacement shaders are a widely used approach of specifying geometric detail and increasing the visual complexity of a scene. While it is relatively straightforward to handle displacement shaders in pipeline based rendering systems such as the Reyes-architecture, it is much harder to efficiently integrate displacement-mapped surfaces in ray-tracers. Many commercial ray-tracers tessellate the surface into a multitude of small triangles. This introduces a series of problems such as excessive memory consumption and possibly undetected surface detail. In this paper we describe a novel way of ray-tracing procedural displacement shaders directly, that is, without introducing intermediate geometry. Affine arithmetic is used to compute bounding boxes for the shader over any range in the parameter domain. The method is comparable to the direct ray-tracing of Bézier surfaces and implicit surfaces using Bézier clipping and interval methods, respectively.
Keywords: ray-tracing, displacement-mapping, procedural shaders, affine arithmetic.
(Figure 1: A simple displacement shader showing a wave function with exponential decay. The underlying geometry used for this image is a planar polygon.)
… displacement shaders are resolution independent and storage efficient. Unfortunately, because displacement shaders modify the geometry, their use in many rendering systems is limited. Since ray-tracers cannot handle procedural displacements directly, many commercial ray-tracing systems, such as Alias [1], tessellate geometric objects with displacement shaders into a multitude of small triangles. Unlike in pipeline-based renderers, such as the Reyes-architecture [17], these triangles have to be kept around, and cannot be rendered independently.
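The central operation in the paper, computing a conservative bound on the displacement over a whole parameter range so the ray tracer can build bounding boxes without tessellating, can be illustrated with plain interval arithmetic. The paper uses the tighter affine arithmetic, and the displacement function below is made up for the example; this C++ sketch is only a simplified stand-in for the idea:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Simple interval arithmetic; affine arithmetic gives tighter bounds, but the principle
// (a guaranteed range for a function over a parameter box) is the same.
struct Interval { double lo, hi; };

static Interval add(Interval a, Interval b) { return {a.lo + b.lo, a.hi + b.hi}; }
static Interval mul(Interval a, Interval b) {
    double c[4] = {a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi};
    return {*std::min_element(c, c + 4), *std::max_element(c, c + 4)};
}
static Interval expI(Interval a) { return {std::exp(a.lo), std::exp(a.hi)}; }  // exp is monotone
static Interval neg(Interval a) { return {-a.hi, -a.lo}; }

// Hypothetical displacement shader: a decaying bump, d(u, v) = exp(-(u^2 + v^2)).
// The interval version bounds the displacement over a whole (u, v) parameter box,
// which is what lets a ray tracer build a bounding box without tessellating first.
static Interval displacementBound(Interval u, Interval v) {
    return expI(neg(add(mul(u, u), mul(v, v))));
}

int main() {
    // Bound the displacement over the parameter box [0.25, 0.5] x [0.0, 0.25].
    Interval d = displacementBound({0.25, 0.5}, {0.0, 0.25});
    std::printf("displacement lies within [%f, %f] over this patch\n", d.lo, d.hi);
    return 0;
}
```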
Getting Started (Pdf)
GETTING STARTED: photo-realistic renders of your 3D models. Available for Ver. 1.01, © Kerkythea 2008 Echo. Date: April 24th, 2008. Written by: The KT Team.
Preface: Kerkythea is a standalone render engine, using physically accurate materials and lights, aiming for the best quality rendering in the most efficient timeframe. The target of Kerkythea is to simplify the task of quality rendering by providing the necessary tools to automate scene setup, such as staging using the GL real-time viewer, material editor, general/render settings, editors, etc., under a common interface. Reaching now the 4th year of development and gaining popularity, I want to believe that KT can now be considered among the top freeware/open source render engines and can be used for both academic and commercial purposes. In the beginning of 2008, we have a strong and rapidly growing community and a website that is more "alive" than ever! KT 2008 Echo is a very powerful release with a lot of improvements. Kerkythea has grown constantly over the last year, growing into a standard rendering application among architectural studios and extensively used within educational institutes. Of course there are a lot of things that can be added and improved. But we are really proud of reaching a high quality and stable application that is more than usable for commercial purposes with an amazing zero cost! Ioannis Pantazopoulos, January 2008.
As the heading says, this is a Getting Started step-by-step guide designed to get you started using Kerkythea 2008 Echo.
Ray Tracing (Shading)
CS4620/5620: Lecture 35, Ray Tracing (Shading). Cornell CS4620/5620 Fall 2012, Kavita Bala (with previous instructors James/Marschner).
Announcements: 4621 class today; turn in HW3; PPA3 is going to be out today; PA3A is out.
Shading: compute light reflected toward the camera. Inputs: eye direction, light direction (for each of many lights), surface normal, surface parameters (color, shininess, …).
Light: a local light has a position; a directional light (e.g., the sun) has a direction but no position.
Lambertian shading is independent of the view direction: Ld = kd I max(0, n · l), where kd is the diffuse coefficient, I is the illumination from the source, and Ld is the diffusely reflected light.
Image so far:
    Scene.trace(Ray ray, tMin, tMax) {
        surface, t = hit(ray, tMin, tMax);
        if surface is not null {
            point = ray.evaluate(t);
            normal = surface.getNormal(point);
            return surface.shade(ray, point, normal, light);
        } else return backgroundColor;
    }
    …
    Surface.shade(ray, point, normal, light) {
        v = -normalize(ray.direction);
        l = normalize(light.pos - point);
        // compute shading
    }
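A small, self-contained C++ sketch of the Lambertian term above, filling in the "// compute shading" step of the pseudocode. The vector helpers and the PointLight structure are illustrative assumptions, not part of the course framework:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}
static Vec3 scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }
static Vec3 modulate(Vec3 a, Vec3 b) { return {a.x * b.x, a.y * b.y, a.z * b.z}; }

struct PointLight { Vec3 position; Vec3 intensity; };

// Lambertian shading: Ld = kd * I * max(0, n . l)
Vec3 shadeLambert(Vec3 point, Vec3 normal, Vec3 kd, const PointLight& light) {
    Vec3 l = normalize(sub(light.position, point));   // direction from surface point to light
    float ndotl = std::max(0.0f, dot(normal, l));     // clamp: no light from below the surface
    return modulate(kd, scale(light.intensity, ndotl));
}

int main() {
    PointLight light = {{2.0f, 4.0f, 1.0f}, {1.0f, 1.0f, 1.0f}};
    Vec3 point = {0.0f, 0.0f, 0.0f};
    Vec3 normal = {0.0f, 1.0f, 0.0f};
    Vec3 kd = {0.8f, 0.3f, 0.3f};                      // reddish diffuse coefficient
    Vec3 c = shadeLambert(point, normal, kd, light);
    std::printf("shaded color: (%.3f, %.3f, %.3f)\n", c.x, c.y, c.z);
    return 0;
}
```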