Graphics shaders
Mike Hergaarden, January 2011, VU Amsterdam
[Source: http://unity3d.com/support/documentation/Manual/Materials.html]

Abstract
Shaders are the key to finalizing any image rendering, be it for computer games, images or movies. Shaders have greatly improved the output of computer-generated media, from photo-realistic computer-generated humans to more vivid cartoon movies. Furthermore, shaders paved the way to general-purpose computation on the GPU (GPGPU). This paper introduces shaders by discussing both theory and practice.

Introduction
A shader is a piece of code that is executed on the Graphics Processing Unit (GPU), usually found on a graphics card, to manipulate an image before it is drawn to the screen. Shaders allow for various kinds of rendering effects, ranging from adding an X-ray view to adding cartoony outlines to the rendering output.

The history of shaders starts at LucasFilm in the early 1980s, when LucasFilm hired graphics programmers to computerize the special-effects industry [1]. This proved a success for the film and rendering industry, especially at the launch of Pixar's Toy Story in 1995. RenderMan introduced the notion of shaders: “The Renderman Shading Language allows material definitions of surfaces to be described in not only a simple manner, but also highly complex and custom manner using a C like language. Using this method as opposed to a pre-defined set of materials allows for complex procedural textures, new shading models and programmable lighting. Another thing that sets the renderers based on the RISpec apart from many other renderers, is the ability to output arbitrary variables as an image—surface normals, separate lighting passes and pretty much anything else can be output from the renderer in one pass.” [1]

The term shader was at first only used to refer to “pixel shaders”, but soon enough new uses of shaders such as vertex and geometry shaders were introduced, making the term more general. Shaders have also driven some important developments in video-card hardware, which will be examined further in the following sections.

From the fixed-function pipeline to the programmable pipeline
Before exploring the latest shader types and possibilities it is important to provide some more background on shaders. To this end we look at the graphics pipeline to put shaders into perspective. In the 1980s developing computer graphics was a pain: every piece of hardware needed its own custom software. To address this, OpenGL and DirectX were introduced in 1992 and 1995 respectively. OpenGL and DirectX provide an easy-to-use graphics API that offers (hardware) abstraction. In 1995 the first consumer video card was introduced. The graphics APIs in combination with the new hardware finally made real-time (3D) graphics and games possible. There was no support for shaders yet: at this time the graphics APIs and video hardware only offered the fixed-function pipeline (FFP) as the one and only graphics processing pipeline. The FFP allowed for various customizations of the rendering process, but the possibilities were predefined and therefore limited; the FFP did not allow for custom algorithms. Shader-like effects are possible using the FFP, but only within the predefined constraints, for example setting the fog color, lighting parameters and so on. For specific image adjustments, such as we want to do using shaders, custom access to the rendering process is required.
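To make the contrast concrete: the FFP only exposes fog as a few fixed parameters (color, mode, density), while a programmable pipeline lets the programmer compute any fog formula per pixel. The following fragment shader is a minimal GLSL sketch of such a custom fog effect; the uniform names and the exponential blend formula are illustrative assumptions, not code from this paper.

    // Minimal GLSL fragment shader sketch (desktop GLSL, ~2011-era syntax):
    // custom per-pixel fog, something the FFP only offers as predefined state.
    uniform vec3 fogColor;        // assumed uniform, set by the application
    uniform float fogDensity;     // assumed uniform, controls the falloff
    uniform sampler2D baseTexture;
    varying vec2 texCoord;        // interpolated from the vertex shader
    varying float eyeDistance;    // distance to the camera, computed per vertex

    void main()
    {
        vec4 baseColor = texture2D(baseTexture, texCoord);
        // Exponential fog factor; any custom formula could be used here,
        // which is exactly what the fixed-function pipeline does not allow.
        float fogFactor = clamp(exp(-fogDensity * eyeDistance), 0.0, 1.0);
        gl_FragColor = vec4(mix(fogColor, baseColor.rgb, fogFactor), baseColor.a);
    }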
Figure 1: FFP in OpenGL ES 2.0 (http://www.khronos.org/opengles/2_X/)

In 2001 the GeForce 3/Radeon 8500 introduced the first limited programmability in the graphics pipeline, and the Radeon 9700/GeForce FX brought full programmability in 2002 [2]. This is called the programmable pipeline (PP), see figure 2. The orange boxes in figures 1 and 2 clearly show the difference between the FFP and the PP: the previously fixed functions are replaced by general stages in which you can modify the data however you want. The PP allows you to do whatever you can program; you are in total control of every vertex and pixel on the screen, and by now also of geometry. However, with more power comes more responsibility, and the PP requires a lot more work as well. Shader languages, which we discuss in a later section, help to address this problem.

Figure 2: PP in OpenGL ES 2.0 (http://www.khronos.org/opengles/2_X/)

The evolution of the GPU: GPGPU
The video card has evolved to its current state thanks to 3D graphics. By making use of the video card, 3D and 2D graphics are hardware accelerated. The GPU has reached a point where it is so powerful that it would be a pity to use its power just for graphics; nowadays the GPU is the most powerful computational hardware in a computer. Initiatives to use the GPU for other computations have risen quickly. This use of the GPU is called general-purpose computation on the GPU (GPGPU) [15]. The key features of GPGPU are its high flexibility, programmability, parallel nature, high performance and high data throughput. High-level programming languages such as C provide interfaces to the GPU, allowing any application to make use of it. The performance gain from using the GPU is often of a larger order of magnitude than what CPU optimizations can achieve. An interesting GPGPU application is the Folding@Home project [16]. This project allows anyone to donate the idle time of their computer to the project, which studies medical subjects. PlayStation 3 owners can easily donate CPU and GPU computation, as the program is built into the PlayStation's main menu and supported by Sony.

Types of shaders
Originally shaders only performed pixel operations: an operation is applied to every pixel. This is what we now call pixel shaders. Since the introduction of other shader types, the term shader has become a more general term describing any of the following three shader types. Examples of shader effects are shown in the shader techniques section; a minimal vertex and fragment shader sketch is also given at the end of this section.

Pixel shaders (DirectX) / fragment shaders (OpenGL)
Calculate the color of individual pixels. The image on the left has various effects applied, such as rim lighting and normal mapping [6]. Note that DirectX calls these pixel shaders while OpenGL calls them fragment shaders.

Vertex shaders
Modify a vertex's position, color and texture coordinates. To the left an example is shown where the vertices are moved along their normals [6]. Note that vertices can be moved or removed, but new vertices cannot be added; that would fall under the geometry shader.

Geometry shaders
The geometry shader is the newest of the three shader types. It can create and modify vertices, i.e. procedurally generate geometry. An example is shown on the left, where the bunny has polygonal fur generated by the geometry shader [7]. Since this shader type has only been supported for a few years, its possible uses are the most interesting. Even more than the other two shader types it allows for procedural/dynamic content generation. One use that springs to mind immediately is dynamic grass/hair/fur.
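To illustrate the first two shader types, below is a minimal GLSL sketch: a vertex shader that pushes each vertex along its normal (as in the displacement example mentioned above) and a fragment shader that computes a simple per-pixel intensity. The uniform name and the lighting formula are illustrative assumptions only, not taken from the examples cited in [6].

    // Vertex shader (desktop GLSL, ~2011-era syntax):
    // displace each vertex along its normal before projection.
    uniform float displacement;   // assumed uniform: how far to push along the normal
    varying vec3 eyeNormal;       // normal in eye space, passed to the fragment shader

    void main()
    {
        eyeNormal = gl_NormalMatrix * gl_Normal;
        vec4 displaced = gl_Vertex + vec4(gl_Normal * displacement, 0.0);
        gl_Position = gl_ModelViewProjectionMatrix * displaced;
    }

    // Fragment shader: a very simple per-pixel shading calculation.
    varying vec3 eyeNormal;

    void main()
    {
        // Fixed light direction, purely illustrative.
        vec3 lightDir = normalize(vec3(0.0, 0.0, 1.0));
        float intensity = max(dot(normalize(eyeNormal), lightDir), 0.0);
        gl_FragColor = vec4(vec3(intensity), 1.0);
    }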
Unified shader model
While the previously mentioned shader types are separate entities in OpenGL and DirectX, a relatively recent (2006) change by nVidia introduced the unified shader model, merging all shader types into one [8][9]. This unified shader model consists of two parts: a unified shading architecture and a unified shader model. In DirectX the unified shader model is called “Shader Model 4.0”; in OpenGL it is simply the “unified shader model”. The instructions for all three types of shaders are now merged into one instruction set for all shaders. To this end, the hardware had to undergo some changes as well. The hardware used to have separate units for vertex processing, pixel shading and so on. To accommodate the unified shader model, a single type of shader core was developed, with many identical processors; any of the independent processors can handle any type of shading operation [14]. This hardware change allows for more flexible workload distribution, as it is perfectly fine to dedicate all processors to pixel shading when no vertex shading is required.

Shader languages
Originally shaders were written in assembly, but as in any other programming field it became clear that a higher-level language was required. When the programmable pipeline was introduced one could finally program shaders, but only in assembly, and such shaders were hard to write and read. Assembly shader code:

    vs_2_0
    dcl_position v0
    dcl_texcoord v1
    dp4 oPos.x, v0, c0

To make shaders easier to understand and debug, higher-level programming languages were developed. These shader languages compile to assembly shader code; a high-level counterpart of the snippet above is sketched at the end of this section for comparison. The three major shader languages that have been developed are:

HLSL (DirectX):
● Outputs a DirectX shader program.
● Latest version as of January 2011: Shader Model 5 (http://msdn.microsoft.com/directx)
● Syntax similar to Cg

GLSL (OpenGL):
● Outputs an OpenGL shader program.
● Latest version as of January 2011: OpenGL 4.1 (http://www.opengl.org/registry/#apispecs)

Cg (nVidia):
● Outputs a DirectX or OpenGL shader program
● Latest version as of January 2011: Cg 3.0 (http://developer.nvidia.com/page/cg_main.html)
● Syntax similar to HLSL

The Cg and HLSL syntax is very similar since nVidia and Microsoft co-developed these languages. The main difference between Cg and HLSL is that Cg has extra functionality to also target OpenGL. None of these languages is ultimately the best; it depends on the targeted platform(s) and the specific features and performance you need. Cg seems to be the most appealing language since it supports both major platforms. Cg is the shader language used in the Unity game engine; see Appendix 2 and 3 for more details on this subject. For a simple demonstration of a shader we look at a Cg shader, or actually, at a CgFX “effect”. This needs some explanation first. Cg shaders are not perfectly transferable between applications, as application-specific details are necessarily embedded in Cg shaders.
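Before moving on to the CgFX example, here is the high-level counterpart of the assembly snippet announced above: a minimal GLSL vertex shader that declares a position and a texture coordinate input and transforms the position by a matrix. It is only an illustration of why high-level shader languages are easier to read and write, not code from this paper.

    // GLSL vertex shader (desktop GLSL, ~2011-era syntax) doing roughly what
    // the vs_2_0 assembly above does. Note the assembly computes only the x
    // component of the output position with one dp4; the full transform below
    // corresponds to four such instructions.
    varying vec2 texCoord;

    void main()
    {
        texCoord = gl_MultiTexCoord0.xy;                        // cf. dcl_texcoord v1
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; // cf. dp4 oPos, v0, c0..c3
    }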