<<

1dv438 Teori

Chapter 1

Math

1.1 Basis (linear algebra)

"Basis vector" redirects here. For basis vector in the context of crystals, see crystal structure. For a more general concept in physics, see frame of reference.

A set of vectors in a vector space V is called a basis, or a set of basis vectors, if the vectors are linearly independent and every other vector in the vector space is linearly dependent on these vectors.[1] In more general terms, a basis is a linearly independent spanning set. Given a basis of a vector space V, every element of V can be expressed uniquely as a linear combination of basis vectors, whose coefficients are referred to as vector coordinates or components. A vector space can have several distinct sets of basis vectors; however each such set has the same number of elements, with this number being the dimension of the vector space.

1.1.1 Definition

Figure: The standard basis in R2. The blue and orange vectors are the elements of the basis; the green vector can be given in terms of the basis vectors, and so is linearly dependent upon them.

A basis B of a vector space V over a field F is a linearly independent subset of V that spans V.

Figure: The same vector can be represented in two different bases (purple and red arrows).

In more detail, suppose that B = { v1, …, vn } is a finite subset of a vector space V over a field F (such as the real or complex numbers R or C). Then B is a basis if it satisfies the following conditions:

• the linear independence property: for all a1, …, an ∈ F, if a1v1 + … + anvn = 0, then necessarily a1 = … = an = 0; and

• the spanning property: for every x in V it is possible to choose a1, …, an ∈ F such that x = a1v1 + … + anvn.


The numbers ai are called the coordinates of the vector x with respect to the basis B, and by the first property they are uniquely determined.
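To make these coordinates concrete, here is a small numerical sketch (not part of the original article; it assumes Python with NumPy and uses made-up values): the coordinates of x with respect to a finite basis of R^n are found by solving a linear system whose coefficient columns are the basis vectors.

```python
import numpy as np

# Basis vectors of R^2, stored as the columns of a matrix.
B = np.array([[1.0, -1.0],
              [1.0,  2.0]])   # the basis {(1, 1), (-1, 2)}

x = np.array([5.0, 4.0])      # an arbitrary vector in R^2

# Coordinates a of x with respect to the basis satisfy B @ a = x.
a = np.linalg.solve(B, x)
print(a)                      # -> [ 4.66666667 -0.33333333]

# Reconstruct x as the linear combination a1*v1 + a2*v2.
print(B @ a)                  # -> [5. 4.]
```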

A vector space that has a finite basis is called finite-dimensional. To deal with infinite-dimensional spaces, we must generalize the above definition to include infinite basis sets. We therefore say that a set (finite or infinite) B ⊂ V is a basis, if

• every finite subset B0 ⊆ B obeys the independence property shown above; and

• for every x in V it is possible to choose a1, …, an ∈ F and v1, …, vn ∈ B such that x = a1v1 + … + anvn.

The sums in the above definition are all finite because without additional structure the axioms of a vector space do not permit us to meaningfully speak about an infinite sum of vectors. Settings that permit infinite linear combinations allow alternative definitions of the basis concept: see Related notions below.

It is often convenient to list the basis vectors in a specific order, for example, when considering the transformation matrix of a linear map with respect to a basis. We then speak of an ordered basis, which we define to be a sequence (rather than a set) of linearly independent vectors that span V: see Ordered bases and coordinates below.

Figure: Basis defined by Euler angles – the xyz (fixed) system is shown in blue, the XYZ (rotated) system is shown in red. The line of nodes, labeled N, is shown in green.

1.1.2 Expression of a basis

There are several ways to describe a basis for the space. Some are made ad hoc for a specific dimension. For example, there are several ways to give a basis in dim 3, like Euler angles.

The general case is to give a matrix with the components of the new basis vectors in columns. This is also the more general method because it can express any possible set of vectors, even if it is not a basis. This matrix can be seen as three things:

Basis matrix: a matrix that represents the basis, because its columns are the components of the vectors of the basis. This matrix represents any vector of the new basis as a linear combination of the current basis.

Rotation operator: when orthonormal bases are used, any other orthonormal basis can be defined by a rotation matrix. This matrix represents the rotation operator that rotates the vectors of the basis to the new one. It is exactly the same matrix as before, because the rotation matrix multiplied by the identity matrix I has to be the new basis matrix.

Change of basis matrix: this matrix can be used to change different objects of the space to the new basis. Therefore it is called the "change of basis" matrix. It is important to note that some objects change their components with this matrix, and some others, like vectors, with its inverse.
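The sketch below illustrates the rule in the last paragraph — coordinate columns of vectors transform with the inverse of the matrix whose columns are the new basis vectors. It is an illustration only, not taken from the article; Python with NumPy is assumed and the two bases B_old and B_new are made up.

```python
import numpy as np

# Two bases of R^2, given by their components in the standard basis
# (one basis vector per column).
B_old = np.array([[1.0, 0.0],
                  [1.0, 1.0]])
B_new = np.array([[1.0, -1.0],
                  [1.0,  2.0]])

# Coordinates of vectors are re-expressed by going through the standard
# basis: multiply by B_old, then by the inverse of B_new.
change_of_basis = np.linalg.inv(B_new) @ B_old

v_old = np.array([2.0, 3.0])        # coordinates relative to B_old
v_new = change_of_basis @ v_old     # coordinates relative to B_new

# Both coordinate columns describe the same geometric vector:
assert np.allclose(B_old @ v_old, B_new @ v_new)
```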

1.1.3 Properties

Again, B denotes a subset of a vector space V. Then, B is a basis if and only if any of the following equivalent conditions are met:

• B is a minimal generating set of V, i.e., it is a generating set and no proper subset of B is also a generating set.

• B is a maximal set of linearly independent vectors, i.e., it is a linearly independent set but no other linearly independent set contains it as a proper subset.

• Every vector in V can be expressed as a linear combination of vectors in B in a unique way. If the basis is ordered (see Ordered bases and coordinates below) then the coefficients in this linear combination provide coordinates of the vector relative to the basis.

Every vector space has a basis. The proof of this requires the axiom of choice. All bases of a vector space have the same cardinality (number of elements), called the dimension of the vector space. This result is known as the

dimension theorem, and requires the ultrafilter lemma, a 1.1.6 Example of alternative proofs strictly weaker form of the axiom of choice. Often, a mathematical result can be proven in more than Also many vector sets can be attributed a standard basis one way. Here, using three different proofs, we show that which comprises both spanning and linearly independent the vectors (1,1) and (−1,2) form a basis for R2. vectors. Standard bases for example: From the definition of basis In Rn {E1,...,En} where En is the n-th column of the iden- tity matrix which consists of all ones in the main diagonal We have to prove that these two vectors are linearly inde- and zeros everywhere else. This is because the columns pendent and that they generate R2. of the identity matrix are linearly independent can always span a vector set by expressing it as a linear combination. Part I: If two vectors v,w are linearly independent, then av + bw = 0 (a and b scalars) implies a = 0, b = 0. In P2 where P2 is the set of all polynomials of degree at most 2 {1,x,x2} is the standard basis. To prove that they are linearly independent, suppose that there are numbers a,b such that: In M22 {M₁,₁,M₁,₂,M₂,₁,M₂,₂} where M22 is the set of all 2×2 matrices. and M, is the 2×2 matrix with a 1 in the m,n position and zeros everywhere else. This again is a a(1, 1) + b(−1, 2) = (0, 0) standard basis since it is linearly independent and span- ning. (i.e., they are linearly dependent). Then:

1.1.4 Examples (a − b, a + 2b) = (0, 0) • Consider R2, the vector space of all coordinates (a, and b) where both a and b are real numbers. Then a a − b = 0 very natural and simple basis is simply the vectors and a + 2b = 0. e1 = (1,0) and e2 = (0,1): suppose that v = (a, b) is a vector in R2, then v = a (1,0) + b (0,1). But any two linearly independent vectors, like (1,1) and (−1,2), will also form a basis of R2. Subtracting the first equation from the second, we obtain:

• More generally, the vectors e1, e2, ..., en are linearly independent and generate Rn. Therefore, they form 3b = 0 a basis for Rn and the dimension of Rn is n. This so basis is called the standard basis. b = 0.

• Let V be the real vector space generated by the func- tions et and e2t. These two functions are linearly in- Adding this equation to the first equation then: dependent, so they form a basis for V.

• Let R[x] denote the vector space of real polynomials; then (1, x, x2, ...) is a basis of a = 0. R[x]. The dimension of R[x] is therefore equal to aleph-0. Hence we have linear independence. Part II: To prove that these two vectors generate R2, we have to let (a,b) be an arbitrary element of R2, and show 1.1.5 Extending to a basis that there exist numbers r,s ∈ R such that:

Let S be a subset of a vector space V. To extend S to a basis means to find a basis B that contains S as a subset. This can be done if and only if S is linearly independent. r(1, 1) + s(−1, 2) = (a, b). Almost always, there is more than one such B, except in rather special circumstances (i.e. S is already a basis, or Then we have to solve the equations: S is empty and V has two elements). A similar question is when does a subset S contain a basis. This occurs if and only if S spans V. In this case, S will usually contain several different bases. r − s = a 4 CHAPTER 1. MATH

r + 2s = b. where {ei} is the standard basis for Fn. Subtracting the first equation from the second, we get: Conversely, given an ordered basis, consider the map de- fined by

3s = b − a, φ(x) = x1v1 + x2v2 + ... + xnvn, and then

s = (b − a)/3, n where x = x1e1 + x2e2 + ... + xnen is an element of F . and finally It is not hard to check that φ is a linear isomorphism. r = s + a = ((b − a)/3) + a = (b + 2a)/3. These two constructions are clearly inverse to each other. Thus ordered bases for V are in 1-1 correspondence with By the dimension theorem linear isomorphisms Fn → V. The inverse of the linear isomorphism φ determined by Since (−1,2) is clearly not a multiple of (1,1) and since an ordered basis {vi} equips V with coordinates: if, for a (1,1) is not the vector, these two vectors are linearly vector v ∈ V, φ−1(v) = (a , a ,...,an) ∈ Fn, then the com- 2 1 2 independent. Since the dimension of R is 2, the two ponents aj = aj(v) are the coordinates of v in the sense vectors already form a basis of R2 without needing any that v = a1(v) v1 + a2(v) v2 + ... + an(v) vn. extension. The maps sending a vector v to the components aj(v) are linear maps from V to F, because of φ−1 is linear. Hence By the invertible matrix theorem they are linear functionals. They form a basis for the dual space of V, called the dual basis. Simply compute the determinant

[ ] 1.1.8 Related notions 1 −1 det = 3 ≠ 0. 1 2 Analysis Since the above matrix has a nonzero determinant, its columns form a basis of R2. See: invertible matrix. In the context of infinite-dimensional vector spaces over the real or complex numbers, the term Hamel basis (named after Georg Hamel) or algebraic basis can be 1.1.7 Ordered bases and coordinates used to refer to a basis as defined in this article. This is to make a distinction with other notions of “basis” that ex- A basis is just a linearly independent set of vectors with or ist when infinite-dimensional vector spaces are endowed without a given ordering. For many purposes it is conve- with extra structure. The most important alternatives are nient to work with an ordered basis. For example, when orthogonal bases on Hilbert spaces, Schauder bases and working with a coordinate representation of a vector it is Markushevich bases on normed linear spaces. The term customary to speak of the “first” or “second” coordinate, Hamel basis is also commonly used to mean a basis for which makes sense only if an ordering is specified for the the real numbers R as a vector space over the field of basis. For finite-dimensional vector spaces one typically rational numbers. (In this case, the dimension of R over indexes a basis {vi} by the first n integers. An ordered Q is uncountable, specifically the continuum, the cardi- basis is also called a frame. nal number 2ℵ₀.) Suppose V is an n-dimensional vector space over a field The common feature of the other notions is that they per- F. A choice of an ordered basis for V is equivalent to mit the taking of infinite linear combinations of the basic a choice of a linear isomorphism φ from the coordinate vectors in order to generate the space. This, of course, re- space Fn to V. quires that infinite sums are meaningfully defined on these spaces, as is the case for topological vector spaces – a Proof. The proof makes use of the fact that the standard large class of vector spaces including e.g. Hilbert spaces, basis of Fn is an ordered basis. Banach spaces or Fréchet spaces. Suppose first that The preference of other types of bases for infinite- dimensional spaces is justified by the fact that the Hamel n φ : F → V basis becomes “too big” in Banach spaces: If X is an infinite-dimensional normed vector space which is is a linear isomorphism. Define an ordered basis {vi} for complete (i.e. X is a Banach space), then any Hamel basis V by of X is necessarily uncountable. This is a consequence of the Baire category theorem. The completeness as well as vi = φ(ei) for 1 ≤ i ≤ n infinite dimension are crucial assumptions in the previous 1.1. BASIS (LINEAR ALGEBRA) 5

claim. Indeed, finite-dimensional spaces have by defini- Note that if V = {0}, then the empty set is a basis for V. tion finite bases and there are infinite-dimensional (non- Now we consider the case where V contains at least one complete) normed spaces which have countable Hamel nonzero element, say v. c x = bases. Consider 00 , the space of the sequences Define the set X as all linear independent subsets of V. (x ) n of real numbers which have only finitely many non- Note that since V contains the nonzero element v, the zero elements, with the norm ∥x∥ = sup |x |. Its n n singleton subset L = {v} of V is necessarily linearly inde- standard basis, consisting of the sequences having only pendent. one non-zero element, which is equal to 1, is a countable Hamel basis. Hence the set X contains at least the subset L = {v}, and so X is nonempty.

Example In the study of Fourier series, one learns that We let X be partially ordered by inclusion: If L1 and L2 the functions {1} ∪ { sin(nx), cos(nx): n = 1, 2, 3, ... } belong to X, we say that L1 ≤ L2 when L1 ⊂ L2. It is easy are an “orthogonal basis” of the (real or complex) vector to check that (X, ≤) satisfies the definition of a partially space of all (real or complex valued) functions on the in- ordered set. terval [0, 2π] that are square-integrable on this interval, We now note that if Y is a subset of X that is totally or- i.e., functions f satisfying dered by ≤, then the union LY of all the elements of Y (which are themselves certain subsets of V) is an upper ∫ bound for Y. To show this, it is necessary to verify both 2π |f(x)|2 dx < ∞. that a) LY belongs to X, and that b) every element L of 0 Y satisfies L ≤ LY. Both a) and b) are easy to check. The functions {1} ∪ { sin(nx), cos(nx): n = 1, 2, 3, ... Now we apply Zorn’s lemma, which asserts that because } are linearly independent, and every function f that is X is nonempty, and every totally ordered subset of the square-integrable on [0, 2π] is an “infinite linear combi- partially ordered set (X, ≤) has an upper bound, it follows nation” of them, in the sense that that X has a maximal element. (In other words, there exists some element Lₐₓ of X satisfying the condition ∫ that whenever Lₐₓ ≤ L for some element L of X, then L 2π ∑n ( ) 2 = Lₐₓ.) lim a0+ ak cos(kx)+bk sin(kx) −f(x) dx = 0 n→∞ 0 k=1 Finally we claim that Lₐₓ is a basis for V. Since Lₐₓ belongs to X, we already know that Lₐₓ is a linearly in- for suitable (real or complex) coefficients ak, bk. But dependent subset of V. most square-integrable functions cannot be represented as finite linear combinations of these basis functions, Now suppose Lₐₓ does not span V. Then there exists which therefore do not comprise a Hamel basis. Every some vector w of V that cannot be expressed as a linearly Hamel basis of this space is much bigger than this merely combination of elements of Lₐₓ (with coefficients in the countably infinite set of functions. Hamel bases of spaces field F). Note that such a vector w cannot be an element of this kind are typically not useful, whereas orthonormal of Lₐₓ. bases of these spaces are essential in Fourier analysis. Now consider the subset L of V defined by L = Lₐₓ ∪ {w}. It is easy to see that a)Lₐₓ ≤ L (since Lₐₓ is a Affine geometry subset of L), and that b)Lₐₓ ≠ L (because L contains the vector w that is not contained in Lₐₓ). The related notions of an affine space, projective space, But the combination of a) and b) above contradict the fact convex set, and cone have related notions of affine ba- that Lₐₓ is a maximal element of X, which we have al- sis[2] (a basis for an n-dimensional affine space is n + 1 ready proved. This contradiction shows that the assump- points in general linear position), projective basis (es- tion that Lₐₓ does not span V was not true. sentially the same as an affine basis, this is n + 1 points Hence Lₐₓ does span V. Since we also know that Lₐₓ in general linear position, here in projective space), con- is linearly independent over the field F, this verifies that vex basis (the vertices of a polytope), and cone basis[3] Lₐₓ is a basis for V. Which proves that the arbitrary vec- (points on the edges of a polygonal cone); see also a tor space V has a basis. Hilbert basis (linear programming). Note: This proof relies on Zorn’s lemma, which is logi- cally equivalent to the Axiom of Choice. 
It turns out that, 1.1.9 Proof that every vector space has a conversely, the assumption that every vector space has a basis basis can be used to prove the Axiom of Choice. Thus the two assertions are logically equivalent. Let V be any vector space over some field F. Every vector space must contain at least one element: the zero vector 0. 6 CHAPTER 1. MATH

1.1.10 See also • Grassmann, Hermann (1844), Die Lineale Aus- dehnungslehre - Ein neuer Zweig der Mathematik (in • Change of basis German), reprint: Hermann Grassmann. Translated by Lloyd C. Kannenberg. (2000), Extension The- • Frame of a vector space ory, Kannenberg, L.C., Providence, R.I.: American • Spherical basis Mathematical Society, ISBN 978-0-8218-2031-5

• Hamilton, William Rowan (1853), Lectures on 1.1.11 Notes Quaternions, Royal Irish Academy

[1] Halmos, Paul Richard (1987) Finite-dimensional vector • Möbius, August Ferdinand (1827), Der Barycen- spaces (4th edition) Springer-Verlag, New York, page 10, trische Calcul : ein neues Hülfsmittel zur analytischen ISBN 0-387-90093-4 Behandlung der Geometrie (Barycentric calculus: a [2] Notes on geometry, by Elmer G. Rees, p. 7 new utility for an analytic treatment of geometry) (in German) [3] Some remarks about additive functions on cones, Marek Kuczma • Moore, Gregory H. (1995), “The axiomatization of linear algebra: 1875–1940”, Historia Mathematica 22 (3): 262–303, doi:10.1006/hmat.1995.1025 1.1.12 References • Peano, Giuseppe (1888), Calcolo Geometrico sec- General references ondo l'Ausdehnungslehre di H. Grassmann preceduto dalle Operazioni della Logica Deduttiva (in Italian), • Blass, Andreas (1984), “Existence of bases implies Turin the axiom of choice”, Axiomatic set theory, Contem- porary Mathematics volume 31, Providence, R.I.: American Mathematical Society, pp. 31–33, ISBN 1.1.13 External links 0-8218-5026-1, MR 763890 • Brown, William A. (1991), Matrices and vector • Instructional from Khan Academy spaces, New York: M. Dekker, ISBN 978-0-8247- • 8419-5 Introduction to bases of subspaces • Proof that any subspace basis has same num- • Lang, Serge (1987), Linear algebra, Berlin, New ber of elements York: Springer-Verlag, ISBN 978-0-387-96412-6 • Hazewinkel, Michiel, ed. (2001), “Basis”, Historical references Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4 • Banach, Stefan (1922), “Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales (On operations in abstract sets and their 1.2 Multiplication of vectors application to integral equations)", Fundamenta Mathematicae (in French) 3, ISSN 0016-2736 In mathematics, Vector multiplication refers to one of • Bolzano, Bernard (1804), Betrachtungen über einige several techniques for the multiplication of two (or more) Gegenstände der Elementargeometrie (Considera- vectors with themselves. It may concern any of the fol- tions of some aspects of elementary geometry) (in lowing articles: German) • Bourbaki, Nicolas (1969), Éléments d'histoire des • Dot product — also known as the “scalar product”, mathématiques (Elements of history of mathematics) an operation that takes two vectors and returns a (in French), Paris: Hermann scalar quantity. The dot product of two vectors can be defined as the product of the magnitudes of the • Dorier, Jean-Luc (1995), “A general outline of the two vectors and the cosine of the angle between the genesis of vector space theory”, Historia Mathemat- two vectors. Alternatively, it is defined as the prod- ica 22 (3): 227–261, doi:10.1006/hmat.1995.1024, uct of the projection of the first vector onto the sec- MR 1347828 ond vector and the magnitude of the second vector. Thus, • Fourier, Jean Baptiste Joseph (1822), Théorie ana- lytique de la chaleur (in French), Chez Firmin Didot, père et fils A B = ||A|| ||B|| cos θ. 1.4. TRANSLATION (GEOMETRY) 7

• Cross product — also known as the “vector prod- Where n×n matrices are used to represent linear transfor- uct”, a binary operation on two vectors that results mations from an n-dimensional vector space to itself, In in another vector. The cross product of two vectors represents the identity function, regardless of the basis. in 3-space is defined as the vector perpendicular to The ith column of an identity matrix is the unit vector ei. the plane determined by the two vectors whose mag- It follows that the determinant of the identity matrix is 1 nitude is the product of the magnitudes of the two and the trace is n. vectors and the sine of the angle between the two vectors. So, if n is the unit vector perpendicular to Using the notation that is sometimes used to concisely the plane determined by vectors A and B, describe diagonal matrices, we can write:

A × B = ||A|| ||B|| sin θ n. In = diag(1, 1, ..., 1). • Triple products — products involving three vectors. It can also be written using the Kronecker delta notation: • Multiple cross products — products involving more than three vectors. (In)ij = δij.

1.2.1 See also The identity matrix also has the property that, when it is the product of two square matrices, the matrices can be • Scalar multiplication said to be the inverse of one another. • Matrix multiplication The identity matrix of a given size is the only idempotent matrix of that size having full rank. That is, it is the only • Vector addition matrix such that (a) when multiplied by itself the result is itself, and (b) all of its rows, and all of its columns, are linearly independent. 1.3 Identity matrix The principal square root of an identity matrix is itself, and this is its only positive definite square root. However, In linear algebra, the identity matrix or unit matrix of every identity matrix with at least two rows and columns size n is the n × n square matrix with ones on the main di- has an infinitude of symmetric square roots.[3] agonal and zeros elsewhere. It is denoted by In, or simply by I if the size is immaterial or can be trivially determined by the context. (In some fields, such as quantum me- 1.3.1 See also chanics, the identity matrix is denoted by a boldface one, 1; otherwise it is identical to I.) Less frequently, some • Binary matrix mathematics books use U or E to represent the identity • matrix, meaning “unit matrix”[1] and the German word Zero matrix “Einheitsmatrix”,[2] respectively. • Unitary matrix

 • Matrix of ones 1 0 0 ··· 0   [ ] 0 1 0 ··· 0 [ ] 1 0 0   1 0   0 0 1 ··· 0 I1 = 1 ,I2 = , = 0 1 0 , ··· ,In =  1.3.2 Notes  0 1 . . . . . 0 0 1 ......  0[1] 0Pipes, 0 ··· Louis1 Albert (1963), Matrix Methods for Engineer- ing, Prentice-Hall International Series in Applied Mathe- When A is m×n, it is a property of matrix multiplication matics, Prentice-Hall, p. 91. that [2] “Identity Matrix” on MathWorld;

[3] Mitchell, Douglas W. “Using Pythagorean triples to gen- ImA = AIn = A. erate square roots of I2". The Mathematical Gazette 87, November 2003, 499-500. In particular, the identity matrix serves as the unit of the ring of all n×n matrices, and as the identity element of the general linear group GL(n) consisting of all invertible n×n 1.3.3 External links matrices. (The identity matrix itself is invertible, being its own inverse.) • Identity matrix at PlanetMath.org. 8 CHAPTER 1. MATH

1.4 Translation (geometry)

In Euclidean geometry, a translation is a function that moves every point a constant distance in a specified direction. (Also in Euclidean geometry a transformation is a one-to-one correspondence between two sets of points or a mapping from one plane to another; see Master Math: Geometry, Debra Anne Ross.)[1] A translation can be described as a rigid motion: other rigid motions include rotations and reflections. A translation can also be interpreted as the addition of a constant vector to every point, or as shifting the origin of the coordinate system. A translation operator is an operator Tδ such that Tδf(v) = f(v + δ).

Figure: A translation moves every point of a figure or a space by the same amount in a given direction.

Figure: A reflection against an axis followed by a reflection against a second axis parallel to the first one results in a total motion which is a translation.

If v is a fixed vector, then the translation Tv will work as Tv(p) = p + v.

If T is a translation, then the image of a subset A under the function T is the translate of A by T. The translate of A by Tv is often written A + v.

In a Euclidean space, any translation is an isometry. The set of all translations forms the translation group T, which is isomorphic to the space itself, and a normal subgroup of the Euclidean group E(n). The quotient group of E(n) by T is isomorphic to the orthogonal group O(n):

E(n)/T ≅ O(n).

1.4.1 Matrix representation

A translation is an affine transformation with no fixed points. Matrix multiplications always have the origin as a fixed point. Nevertheless, there is a common workaround using homogeneous coordinates to represent a translation of a vector space with matrix multiplication: write the 3-dimensional vector w = (wx, wy, wz) using 4 homogeneous coordinates as w = (wx, wy, wz, 1).[2]

To translate an object by a vector v, each homogeneous vector p (written in homogeneous coordinates) can be multiplied by this translation matrix:

T_v = \begin{bmatrix} 1 & 0 & 0 & v_x \\ 0 & 1 & 0 & v_y \\ 0 & 0 & 1 & v_z \\ 0 & 0 & 0 & 1 \end{bmatrix}

As shown below, the multiplication will give the expected result:

T_v \mathbf{p} = \begin{bmatrix} 1 & 0 & 0 & v_x \\ 0 & 1 & 0 & v_y \\ 0 & 0 & 1 & v_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} p_x \\ p_y \\ p_z \\ 1 \end{bmatrix} = \begin{bmatrix} p_x + v_x \\ p_y + v_y \\ p_z + v_z \\ 1 \end{bmatrix} = \mathbf{p} + \mathbf{v}

The inverse of a translation matrix can be obtained by reversing the direction of the vector:

Tv−1 = T−v.

Similarly, the product of translation matrices is given by adding the vectors:

TuTv = Tu+v.

Because addition of vectors is commutative, multiplication of translation matrices is therefore also commutative (unlike multiplication of arbitrary matrices).
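A minimal sketch of this matrix representation, assuming Python with NumPy (the helper name translation_matrix is illustrative, not a standard API):

```python
import numpy as np

def translation_matrix(v):
    """4x4 homogeneous translation matrix T_v for a 3D vector v."""
    T = np.eye(4)
    T[:3, 3] = v
    return T

Tv = translation_matrix([2.0, -1.0, 5.0])
p  = np.array([1.0, 1.0, 1.0, 1.0])      # a point in homogeneous coordinates

print(Tv @ p)                             # -> [3. 0. 6. 1.], i.e. p + v

# The inverse translation is the translation by -v ...
assert np.allclose(np.linalg.inv(Tv), translation_matrix([-2.0, 1.0, -5.0]))

# ... and composing translations adds their vectors, in either order.
Tu = translation_matrix([0.0, 4.0, 0.0])
assert np.allclose(Tu @ Tv, translation_matrix([2.0, 3.0, 5.0]))
assert np.allclose(Tu @ Tv, Tv @ Tu)
```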

1.4.2 Translations in physics

In physics, translation (translational motion) is movement that changes the position of an object, as opposed to rotation. For example, according to Whittaker:[3]

    If a body is moved from one position to another, and if the lines joining the initial and final points of each of the points of the body are a set of parallel straight lines of length ℓ, so that the orientation of the body in space is unaltered, the displacement is called a translation parallel to the direction of the lines, through a distance ℓ.
    — E.T. Whittaker: A Treatise on the Analytical Dynamics of Particles and Rigid Bodies, p. 1

A translation is the operation changing the positions of all points (x, y, z) of an object according to the formula

(x, y, z) → (x + Δx, y + Δy, z + Δz)

where (Δx, Δy, Δz) is the same vector for each point of the object. The translation vector (Δx, Δy, Δz) common to all points of the object describes a particular type of displacement of the object, usually called a linear displacement to distinguish it from displacements involving rotation, called angular displacements.

When considering spacetime, a change of time coordinate is considered to be a translation. For example, the Galilean group and the Poincaré group include translations with respect to time.

1.4.3 See also

• Translational symmetry
• Transformation matrix
• Rotation matrix
• Scaling (geometry)
• Advection
• Vertical translation

1.4.4 External links

• Translation Transform at cut-the-knot
• Geometric Translation (Interactive) at Math Is Fun
• Understanding 2D Translation and Understanding 3D Translation by Roger Germundsson, The Wolfram Demonstrations Project.

1.4.5 References

[1] Osgood, William F. & Graustein, William C. (1921). Plane and solid analytic geometry. The Macmillan Company. p. 330.

[2] Richard Paul, 1981, Robot manipulators: mathematics, programming, and control: the computer control of robot manipulators, MIT Press, Cambridge, MA.

[3] Edmund Taylor Whittaker (1988). A Treatise on the Analytical Dynamics of Particles and Rigid Bodies (Reprint of fourth edition of 1936 with foreword by William McCrea ed.). Cambridge University Press. p. 1. ISBN 0-521-35883-3.

1.5 Normal (geometry)

"Normal vector" redirects here. For a normalized vector, or vector of length one, see unit vector.
This article is about the normal to 3D surfaces. For the normal to 3D curves, see Frenet–Serret formulas.

Figure: A polygon and two of its normal vectors.

In geometry, a normal is an object such as a line or vector that is perpendicular to a given object. For example, in the two-dimensional case, the normal line to a curve at a given point is the line perpendicular to the tangent line to the curve at the point.

In the three-dimensional case a surface normal, or simply normal, to a surface at a point P is a vector that is perpendicular to the tangent plane to that surface at P. The word "normal" is also used as an adjective: a line normal to a plane, the normal component of a force, the normal vector, etc. The concept of normality generalizes to orthogonality.

The concept has been generalized to differentiable manifolds of arbitrary dimension embedded in a Euclidean space. The normal vector space or normal space of a manifold at a point P is the set of the vectors which are orthogonal to the tangent space at P. In the case of differential curves, the curvature vector is a normal vector of special interest.

The normal is often used in computer graphics to determine a surface's orientation toward a light for flat shading, or the orientation of each of the corners (vertices) to mimic a curved surface with Phong shading.

1.5.1 Normal to surfaces in 3D space

Calculating a surface normal

For a convex polygon (such as a triangle), a surface normal can be calculated as the vector cross product of two (non-parallel) edges of the polygon.

For a plane given by the equation ax + by + cz + d = 0, the vector (a, b, c) is a normal.

For a plane given by the equation

r(α, β) = a + αb + βc,

where a is a point on the plane and b and c are (non-parallel) vectors lying on the plane, the normal to the plane is a vector normal to both b and c, which can be found as the cross product b × c.
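Before the construction is generalized to hyperplanes below, here is a small numerical sketch of the cross-product rule for a triangle (not from the article; Python with NumPy assumed, and the helper and example points are made up). Swapping two vertices flips the winding order and therefore the sign of the resulting normal.

```python
import numpy as np

def triangle_normal(p0, p1, p2):
    """Unit surface normal of a triangle, from the cross product of two edges."""
    n = np.cross(p1 - p0, p2 - p0)
    return n / np.linalg.norm(n)

# A triangle lying in the z = 0 plane, with counter-clockwise winding.
p0 = np.array([0.0, 0.0, 0.0])
p1 = np.array([1.0, 0.0, 0.0])
p2 = np.array([0.0, 1.0, 0.0])

print(triangle_normal(p0, p1, p2))   # -> [0. 0. 1.]
```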

For a hyperplane in n+1 dimensions, given by the equa- tion

r = a0 + α1a1 + ··· + αnan

where a0 is a point on the hyperplane and ai for i = 1, ..., n are non-parallel vectors lying on the hyperplane, a normal to the hyperplane is any vector in the null space of A where A is given by

A = [a1 ... an].

That is, any vector orthogonal to all in-plane vectors is by definition a surface normal.

Figure: A normal to a surface at a point is the same as a normal to the tangent plane to that surface at that point.

If a (possibly non-flat) surface S is parameterized by a system of curvilinear coordinates x(s, t), with s and t real variables, then a normal is given by the cross product of the partial derivatives

\frac{\partial \mathbf{x}}{\partial s} \times \frac{\partial \mathbf{x}}{\partial t}.

If a surface S is given implicitly as the set of points (x, y, z) satisfying F(x, y, z) = 0, then a normal at a point (x, y, z) on the surface is given by the gradient

∇F(x, y, z),

since the gradient at any point is perpendicular to the level set, and F(x, y, z) = 0 (the surface) is a level set of F.

For a surface S given explicitly as a function f(x, y) of the independent variables x, y (e.g., f(x, y) = a00 + a01y + a10x + a11xy), its normal can be found in at least two equivalent ways. The first one is obtaining its implicit form F(x, y, z) = z − f(x, y) = 0, from which the normal follows readily as the gradient

∇F(x, y, z).

(Notice that the implicit form could be defined alternatively as

F(x, y, z) = f(x, y) − z;

these two forms correspond to the interpretation of the surface being oriented upwards or downwards, respectively, as a consequence of the difference in the sign of the partial derivative ∂F/∂z.) The second way of obtaining the normal follows directly from the gradient of the explicit form,

∇f(x, y),

by inspection,

∇F (x, y, z) = kˆ − ∇f(x, y) , where kˆ is the upward unit vector. ⇐⇒ (W n) · (Mt) = 0

∇ ˆ − ∂f(x,y)ˆ− ⇐⇒ (W n)T (Mt) = 0 Note that this is equal to F (x, y, z) = k ∂x i ∂f(x,y)ˆ ˆ ˆ T T ∂y j , where i and j are the x and y unit vectors. ⇐⇒ (n W )(Mt) = 0 If a surface does not have a tangent plane at a point, it ⇐⇒ nT (W T M)t = 0 does not have a normal at that point either. For example, − T a cone does not have a normal at its tip nor does it have a Clearly choosing W s.t. W T M = I , or W = M 1 normal along the edge of its base. However, the normal will satisfy the above equation, giving a W n perpendicu- to the cone is defined almost everywhere. In general, it lar to Mt , or an n′ perpendicular to t′, as required. is possible to define a normal almost everywhere for a So use the inverse transpose of the linear transformation surface that is Lipschitz continuous. when transforming surface normals. Also note that the inverse transpose is equal to the original matrix if the ma- trix is orthonormal, i.e. purely rotational with no scaling Uniqueness of the normal or shearing.

1.5.2 Hypersurfaces in n-dimensional space

The definition of a normal to a surface in three-dimensional space can be extended to (n − 1)-dimensional hypersurfaces in an n-dimensional space. A hypersurface may be locally defined implicitly as

the set of points (x1,x2,...,xn) satisfying an equation

F (x1,x2,...,xn)=0 , where F is a given scalar function. If A vector field of normals to a surface F is continuously differentiable then the hypersurface is a differentiable manifold in the neighbourhood of the A normal to a surface does not have a unique direction; points where the gradient is not null. At these points the the vector pointing in the opposite direction of a surface normal vector space has dimension one and is generated normal is also a surface normal. For a surface which is by the gradient the topological boundary of a set in three dimensions, one can distinguish between the inward-pointing nor- ( ) ∂F ∂F ∂F ∇F (x1, x2, . . . , xn) = , ,..., . mal and outer-pointing normal, which can help define ∂x1 ∂x2 ∂xn the normal in a unique way. For an oriented surface, the surface normal is usually determined by the right-hand The normal line at a point of the hypersurface is de- rule. If the normal is constructed as the cross product fined only if the gradient is not null. It is the line passing of tangent vectors (as described in the text above), it is a through the point and having the gradient as direction. pseudovector. 1.5.3 Varieties defined by implicit equa- Transforming normals tions in n-dimensional space

(NOTE: in this section we only use the upper 3x3 matrix, A differential variety defined by implicit equations in as translation is irrelevant to the calculation) the n-dimensional space is the set of the common zeros of a finite set of differential functions in n variables When applying a transform to a surface it is often useful to derive normals for the resulting surface from the original normals. f1(x1, . . . , xn), . . . , fk(x1, . . . , xn). Specifically, given a 3x3 transformation matrix M, we can determine the matrix W that transforms a vector n The Jacobian matrix of the variety is the k×n matrix perpendicular to the tangent plane t into a vector n′ per- whose i-th row is the gradient of fi. By implicit function pendicular to the transformed tangent plane M t, by the theorem, the variety is a manifold in the neighborhood of following logic: a point of it where the Jacobian matrix has rank k. At such a point P, the normal vector space is the vector Write n′ as W n. We must find W. space generated by the values at P of the gradient vectors W n perpendicular to M t of the fi. 12 CHAPTER 1. MATH

In other words, a variety is defined as the intersection of mirror k hypersurfaces, and the normal vector space at a point is the vector space generated by the normal vectors of the hypersurfaces at the point. P The normal (affine) space at a point P of the variety is the affine subspace passing through P and generated by the normal vector space at P. These definitions may be extended verbatim to the points where the variety is not a manifold. normal O Example

Let V be the variety defined in the 3-dimensional space by the equations

xy = 0, z = 0.

This variety is the union of the x-axis and the y-axis.

At a point (a, 0, 0) where a≠0, the rows of the Jacobian Diagram of specular reflection matrix are (0, 0, 1) and (0, a, 0). Thus the normal affine space is the plane of equation x=a. Similarly, if b≠0, the normal plane at (0, b, 0) is the plane of equation y=b. 1.5.6 See also

At the point (0, 0, 0) the rows of the Jacobian matrix are • Pseudovector (0, 0, 1) and (0,0,0). Thus the normal vector space and the normal affine space have dimension 1 and the normal • Dual space affine space is the z-axis. • Vertex normal

1.5.4 Uses 1.5.7 References

• Surface normals are essential in defining surface in- [1] “The Law of Reflection”. The Physics Classroom Tutorial. tegrals of vector fields. Retrieved 2008-03-31.

• Surface normals are commonly used in for lighting calculations; see Lambert’s co- 1.5.8 External links sine law. • An explanation of normal vectors from ’s • Surface normals are often adjusted in 3D computer MSDN graphics by normal mapping. • Clear pseudocode for calculating a surface normal from either a triangle or polygon. • Render layers containing surface normal informa- tion may be used in Digital to change the apparent lighting of rendered elements.

1.5.5 Normal in geometric optics

Main article: Specular reflection

The normal is the line perpendicular to the surface of an optical medium at a given point.[1] In reflection of light, the angle of incidence and the angle of reflection are respectively the angle between the normal and the incident ray (on the plane of incidence) and the angle between the normal and the reflected ray.

Chapter 2

Materials

2.1 Cube mapping

Figure: The lower left image shows a scene with a viewpoint marked with a black dot. The upper image shows the net of the cube mapping as seen from that viewpoint, and the lower right image shows the cube superimposed on the original scene.

In computer graphics, cube mapping is a method of environment mapping that uses the six faces of a cube as the map shape. The environment is projected onto the sides of a cube and stored as six square textures, or unfolded into six regions of a single texture. The cube map is generated by first rendering the scene six times from a viewpoint, with the views defined by a 90 degree view frustum representing each cube face.[1]

In the majority of cases, cube mapping is preferred over the older method of sphere mapping because it eliminates many of the problems that are inherent in sphere mapping such as image distortion, viewpoint dependency, and computational inefficiency. Also, cube mapping provides a much larger capacity to support real-time rendering of reflections relative to sphere mapping because the combination of inefficiency and viewpoint dependency severely limit the ability of sphere mapping to be applied when there is a consistently changing viewpoint.

2.1.1 History

Cube mapping was first proposed in 1986 by Ned Greene in his paper "Environment Mapping and Other Applications of World Projections",[2] ten years after environment mapping was first put forward by Jim Blinn and Martin Newell. However, hardware limitations on the ability to access six texture images simultaneously made it infeasible to implement cube mapping without further technological developments. This problem was remedied in 1999 with the release of the Nvidia GeForce 256. Nvidia touted cube mapping in hardware as "a breakthrough image quality feature of GeForce 256 that ... will allow developers to create accurate, real-time reflections. Accelerated in hardware, cube environment mapping will free up the creativity of developers to use reflections and specular lighting effects to create interesting, immersive environments."[3] Today, cube mapping is still used in a variety of graphical applications as a favored method of environment mapping.

2.1.2 Advantages

Cube mapping is preferred over other methods of environment mapping because of its relative simplicity. Also, cube mapping produces results that are similar to those obtained by ray tracing, but is much more computationally efficient – the moderate reduction in quality is compensated for by large gains in efficiency.

Predating cube mapping, sphere mapping has many inherent flaws that made it impractical for most applications. Sphere mapping is view dependent, meaning that a different texture is necessary for each viewpoint. Therefore, in applications where the viewpoint is mobile, it would be necessary to dynamically generate a new sphere mapping for each new viewpoint (or, to pre-generate a mapping for every viewpoint). Also, a texture mapped onto a sphere's surface must be stretched and compressed, and warping and distortion (particularly along the edge of the sphere) are a direct consequence of this. Although these image flaws can be reduced using certain tricks and

13 14 CHAPTER 2. MATERIALS techniques like “pre-stretching”, this just adds another dering. However, this approach is limited in that the light layer of complexity to sphere mapping. sources must be either distant or infinite lights, although Paraboloid mapping provides some improvement on the fortunately this is usually the case in CAD programs. limitations of sphere mapping, however it requires two rendering passes in addition to special image warping op- Skyboxes erations and more involved computation. Conversely, cube mapping requires only a single render pass, and due to its simple nature, is very easy for devel- opers to comprehend and generate. Also, cube mapping uses the entire resolution of the texture image, compared to sphere and paraboloid mappings, which also allows it to use lower resolution images to achieve the same qual- ity. Although handling the seams of the cube map is a problem, algorithms have been developed to handle seam behavior and result in a seamless reflection.

2.1.3 Disadvantages

If a new object or new lighting is introduced into scene or if some object that is reflected in it is moving or changing in some manner, then the reflection changes and the cube Example of a texture that can be mapped to the faces of a cubic map must be re-rendered. When the cube map is affixed skybox, with faces labelled to an object that moves through the scene then the cube map must also be re-rendered from that new position. Perhaps the most trivial application of cube mapping is to create pre-rendered panoramic sky images which are then rendered by the graphical engine as faces of a cube 2.1.4 Applications at practically infinite distance with the view point located in the center of the cube. The perspective projection of the cube faces done by the graphics engine undoes the Stable Specular Highlights effects of projecting the environment to create the cube map, so that the observer experiences an illusion of be- Computer-aided design (CAD) programs use specular ing surrounded by the scene which was used to generate highlights as visual cues to convey a sense of surface the skybox. This technique has found a widespread use curvature when rendering 3D objects. However, many in video games since it allows designers to add complex CAD programs exhibit problems in sampling specular (albeit not explorable) environments to a game at almost highlights because the specular lighting computations are no performance cost. only performed at the vertices of the mesh used to rep- resent the object, and interpolation is used to estimate lighting across the surface of the object. Problems occur Skylight Illumination when the mesh vertices are not dense enough, resulting in insufficient sampling of the specular lighting. This in Cube maps can be useful for modelling outdoor illumi- turn results in highlights with brightness proportionate to nation accurately. Simply modelling sunlight as a single the distance from mesh vertices, ultimately compromis- infinite light oversimplifies outdoor illumination and re- ing the visual cues that indicate curvature. Unfortunately, sults in unrealistic lighting. Although plenty of light does this problem cannot be solved simply by creating a denser come from the sun, the scattering of rays in the atmo- mesh, as this can greatly reduce the efficiency of object sphere causes the whole sky to act as a light source (of- rendering. ten referred to as skylight illumination). However, by us- Cube maps provide a fairly straightforward and efficient ing a cube map the diffuse contribution from skylight il- solution to rendering stable specular highlights. Multiple lumination can be captured. Unlike environment maps specular highlights can be encoded into a cube map tex- where the reflection vector is used, this method accesses ture, which can then be accessed by interpolating across the cube map based on the surface normal vector to pro- the surface’s reflection vector to supply coordinates. Rel- vide a fast approximation of the diffuse illumination from ative to computing lighting at individual vertices, this the skylight. The one downside to this method is that method provides cleaner results that more accurately rep- computing cube maps to properly represent a skylight is resent curvature. 
Another advantage to this method is very complex; one recent process is computing the spher- that it scales well, as additional specular highlights can be ical harmonic basis that best represents the low frequency encoded into the texture at no increase in the cost of ren- diffuse illumination from the cube map. However, a con- 2.2. TEXTURE MAPPING 15 siderable amount of research has been done to effectively 2.1.5 Related model skylight illumination. A large set of free cube maps for experimentation: http: //www.humus.name/index.php?page=Textures Dynamic Reflection Mark VandeWettering took M. C. Escher’s famous self- portrait Hand with Reflecting Sphere and reversed the mapping to obtain these cube map images: left, right, up, down, back, front. Here is a three.js demo using these images (best viewed in wide browser window, and may need to refresh page to view demo): http://threejs.org/ examples/webgl_materials_cubemap_escher.html
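As a rough CPU-side sketch of the lookup that cube mapping performs (not from the article; Python is assumed, and the face numbering and (u, v) orientation are illustrative conventions only — real graphics APIs define their own), a reflected view direction is reduced to its dominant axis to pick one of the six faces, and the remaining two components give the texture coordinates within that face:

```python
import numpy as np

def reflect(d, n):
    """Reflect an incoming direction d about the unit surface normal n."""
    return d - 2.0 * np.dot(d, n) * n

def cube_face_and_uv(r):
    """Pick the cube-map face hit by direction r, plus (u, v) within that face.

    Face order (+X, -X, +Y, -Y, +Z, -Z) and the (u, v) orientation are
    illustrative conventions only.
    """
    x, y, z = r
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:                       # dominant X axis
        face, sc, tc, ma = (0 if x > 0 else 1), (-z if x > 0 else z), -y, ax
    elif ay >= az:                                  # dominant Y axis
        face, sc, tc, ma = (2 if y > 0 else 3), x, (z if y > 0 else -z), ay
    else:                                           # dominant Z axis
        face, sc, tc, ma = (4 if z > 0 else 5), (x if z > 0 else -x), -y, az
    return face, 0.5 * (sc / ma + 1.0), 0.5 * (tc / ma + 1.0)

view   = np.array([0.0, -1.0, -1.0])   # direction from the eye toward the surface
normal = np.array([0.0,  1.0,  0.0])   # upward-facing surface normal
r = reflect(view, normal)               # -> [0, 1, -1], the reflected view ray
print(cube_face_and_uv(r))              # -> (2, 0.5, 0.0): the +Y face
```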

2.1.6 References

[1] Fernando, R. & Kilgard M. J. (2003). The CG Tutorial: The Definitive Guide to Programmable Real-Time Graph- ics. (1st ed.). Addison-Wesley Longman Publishing Co., Cube-mapped reflections in action Inc. Boston, MA, USA. Chapter 7: Environment Map- ping Techniques Basic environment mapping uses a static cube map - al- though the object can be moved and distorted, the re- [2] Greene, N. 1986. Environment mapping and other ap- plications of world projections. IEEE Comput. Graph. flected environment stays consistent. However, a cube Appl. 6, 11 (Nov. 1986), 21-29. http://dx.doi.org/10. map texture can be consistently updated to represent a 1109/MCG.1986.276658 dynamically changing environment (for example, trees swaying in the wind). A simple yet costly way to gener- [3] Nvidia, Jan 2000. Technical Brief: Perfect Reflections ate dynamic reflections, involves building the cube maps and Specular Lighting Effects With Cube Environment at runtime for every frame. Although this is far less effi- Mapping cient than static mapping because of additional rendering steps, it can still be performed at interactive rates. 2.1.7 See also Unfortunately, this technique does not scale well when multiple reflective objects are present. A unique dynamic • Sphere mapping environment map is usually required for each reflective object. Also, further complications are added if reflective • Reflection mapping objects can reflect each other - dynamic cube maps can be recursively generated approximating the effects normally generated using raytracing. 2.2 Texture mapping

“Texture maps” redirects here. For the 2003 ambient al- bum, see Texture Maps: The Lost Pieces Vol. 3. Texture mapping[1][2][3] is a method for adding detail, An algorithm for global illumination computation at in- teractive rates using a cube-map data structure, was pre- sented at ICCVG 2002.

Projection textures

Another application which found widespread use in video games, projective texture mapping relies on cube maps to project images of an environment onto the surround- ing scene; for example, a point light source is tied to a cube map which is a panoramic image shot from inside a lantern cage or a window frame through which the light is filtering. This enables a game developers to achieve realistic lighting without having to complicate the scene geometry or resort to expensive real-time shadow volume 1 = 3D model without textures computations. 2 = 3D model with textures 16 CHAPTER 2. MATERIALS surface texture (a or raster image), or color to a clamped or wrapped. computer-generated graphic or 3D model. Its application to 3D graphics was pioneered by in 1974. Originally a method that simply wrapped and mapped 2.2.1 Perspective correctness from a texture to a 3D surface - now more techni- cally called diffuse mapping to distinguish it from more complex mappings - in recent decades the advent of multi-pass rendering and complex mapping such as height mapping, , normal mapping, displacement mapping, reflection mapping, mipmaps, occlusion map- ping, and many other complex variations on the technique have made it possible to simulate near-photorealism in real time, by vastly reducing the number of polygons and lighting calculations needed to a realistic and Because affine texture mapping does not take into account the functional 3D scene. depth information about a polygon’s vertices, where the polygon is not perpendicular to the viewer it produces a noticeable defect.
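To make the idea of plain diffuse mapping concrete, here is a minimal GLSL fragment-shader sketch that simply looks up the surface color from a 2D texture at an interpolated UV coordinate. The names used (vUV, diffuseMap) are illustrative assumptions, not taken from any particular engine.

```glsl
#version 330 core

// Interpolated texture coordinate produced by the vertex shader.
in vec2 vUV;

// The diffuse texture ("texture map") applied to the surface.
uniform sampler2D diffuseMap;

out vec4 fragColor;

void main()
{
    // Plain diffuse mapping: the surface color is read directly
    // from the texture at the interpolated UV coordinate.
    fragColor = texture(diffuseMap, vUV);
}
```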

Examples of multitexturing (click for larger image); 1: Untextured sphere, 2: Texture and bump maps, 3: Texture map only, 4: Opacity and texture maps.

A texture map[4][5] is applied (mapped) to the surface of a shape or polygon.[6] This process is akin to applying patterned paper to a plain white box. Every vertex in a polygon is assigned a texture coordinate (which in the 2d case is also known as a UV coordinate) either via explicit assignment or by procedural definition. Image sampling locations are then interpolated across the face of a polygon to produce a visual result that seems to have more richness than could otherwise be achieved with a limited number of polygons.

Multitexturing is the use of more than one texture at a time on a polygon.[7] For instance, a light map texture may be used to light a surface as an alternative to recalculating that lighting every time the surface is rendered. Another multitexture technique is bump mapping, which allows a texture to directly control the facing direction of a surface for the purposes of its lighting calculations; it can give a very good appearance of a complex surface, such as tree bark or rough concrete, that takes on lighting detail in addition to the usual detailed coloring. Bump mapping has become popular in recent video games as graphics hardware has become powerful enough to accommodate it in real-time.

The way the resulting pixels on the screen are calculated from the texels (texture pixels) is governed by texture filtering. The fastest method is to use the nearest-neighbour interpolation, but bilinear interpolation or trilinear interpolation between mipmaps are two commonly used alternatives which reduce aliasing or jaggies. In the event of a texture coordinate being outside the texture, it is either clamped or wrapped.

2.2.1 Perspective correctness

Because affine texture mapping does not take into account the depth information about a polygon's vertices, where the polygon is not perpendicular to the viewer it produces a noticeable defect.

Texture coordinates are specified at each vertex of a given triangle, and these coordinates are interpolated using an extended Bresenham's line algorithm. If these texture coordinates are linearly interpolated across the screen, the result is affine texture mapping. This is a fast calculation, but there can be a noticeable discontinuity between adjacent triangles when these triangles are at an angle to the plane of the screen (see figure at right – textures (the checker boxes) appear bent).

Perspective correct texturing accounts for the vertices' positions in 3D space, rather than simply interpolating a 2D triangle. This achieves the correct visual effect, but it is slower to calculate. Instead of interpolating the texture coordinates directly, the coordinates are divided by their depth (relative to the viewer), and the reciprocal of the depth value is also interpolated and used to recover the perspective-correct coordinate. This correction makes it so that in parts of the polygon that are closer to the viewer the difference from pixel to pixel between texture coordinates is smaller (stretching the texture wider), and in parts that are farther away this difference is larger (compressing the texture).

Affine texture mapping directly interpolates a texture coordinate $u_\alpha$ between two endpoints $u_0$ and $u_1$:

$u_\alpha = (1 - \alpha)\,u_0 + \alpha\,u_1$, where $0 \le \alpha \le 1$.

Perspective correct mapping interpolates after dividing by depth $z$, then uses its interpolated reciprocal to recover the correct coordinate:

$u_\alpha = \dfrac{(1 - \alpha)\,\frac{u_0}{z_0} + \alpha\,\frac{u_1}{z_1}}{(1 - \alpha)\,\frac{1}{z_0} + \alpha\,\frac{1}{z_1}}$

All modern 3D graphics hardware implements perspective correct texturing.
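One way to see the affine defect on modern hardware is GLSL's noperspective interpolation qualifier. The following vertex-shader sketch passes the same texture coordinate twice, once with default (perspective-correct) interpolation and once screen-linearly; sampling a checkerboard with the affine coordinate in the fragment shader reproduces the bent-texture artifact shown in the figure. The names (mvp, inUV, and so on) are illustrative assumptions.

```glsl
#version 330 core

layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec2 inUV;

uniform mat4 mvp;

// Same coordinate, two interpolation modes.
noperspective out vec2 uvAffine;   // linear in screen space (affine mapping)
out vec2 uvPerspective;            // default: divided by w and recovered per pixel

void main()
{
    gl_Position   = mvp * vec4(inPosition, 1.0);
    uvAffine      = inUV;
    uvPerspective = inUV;
}
```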

Doom renders vertical spans (walls) with affine texture mapping.

Screen space sub division techniques. Top left: Quake-like, top right: bilinear, bottom left: const-z

2.2.2 Development

Classic texture mappers generally did only simple mapping with at most one lighting effect, and the perspective correctness was about 16 times more expensive. To achieve two goals - faster arithmetic results, and keeping the arithmetic mill busy at all times - every triangle is further subdivided into groups of about 16 pixels. For perspective texture mapping without hardware support, a triangle is broken down into smaller triangles for rendering, which improves details in non-architectural applications.

Software renderers generally preferred screen subdivision because it has less overhead. Additionally they try to do linear interpolation along a line of pixels to simplify the set-up (compared to 2d affine interpolation) and thus again the overhead (also affine texture-mapping does not fit into the low number of registers of the x86 CPU; the 68000 or any RISC is much more suited). For instance, Doom restricted the world to vertical walls and horizontal floors/ceilings. This meant the walls would be a constant distance along a vertical line and the floors/ceilings would be a constant distance along a horizontal line. A fast affine mapping could be used along those lines because it would be correct. A different approach was taken for Quake, which would calculate perspective correct coordinates only once every 16 pixels of a scanline and linearly interpolate between them, effectively running at the speed of linear interpolation because the perspective correct calculation runs in parallel on the co-processor.[8] The polygons are rendered independently, hence it may be possible to switch between spans and columns or diagonal directions depending on the orientation of the polygon normal to achieve a more constant z, but the effort seems not to be worth it.

Another technique was subdividing the polygons into smaller polygons, like triangles in 3d-space or squares in screen space, and using an affine mapping on them. The distortion of affine mapping becomes much less noticeable on smaller polygons. Yet another technique was approximating the perspective with a faster calculation, such as a polynomial. Still another technique uses the 1/z value of the last two drawn pixels to linearly extrapolate the next value. The division is then done starting from those values so that only a small remainder has to be divided,[9] but the amount of bookkeeping makes this method too slow on most systems. Finally, some programmers extended the constant distance trick used for Doom by finding the line of constant distance for arbitrary polygons and rendering along it.

2.2.3 See also

• 2.5D
• 3D computer graphics
• Cube mapping
• Mipmap
• Displacement mapping
• Environment mapping
• Image analogy
• List of arcade system boards
• Materials system
• System 22
• Normal mapping
• Parametrization
• Relief mapping (computer graphics)

• Sprite (computer graphics)
• Texture synthesis
• Texture atlas
• Texture artist
• Texture splatting – a technique for combining textures
• UV Mapping
• UVW Mapping
• Virtual globe

2.2.4 References

[1] http://web.cse.ohio-state.edu/~{}whmin/courses/cse5542-2013-spring/15-texture.

[2] http://www.inf.pucrs.br/flash/tcg/aulas/texture/texmap.pdf

[3] http://www.cs.uregina.ca/Links/class-info/405/WWW/Lab5/#References

[4] http://www.microsoft.com/msj/0199/direct3d/direct3d.aspx

[5] http://homepages.gac.edu/~{}hvidsten/courses/MC394/projects/project5/texture_map_guide.html

[6] Jon Radoff, Anatomy of an MMORPG, http://radoff.com/blog/2008/08/22/anatomy-of-an-mmorpg/

[7] Blythe, David. Advanced Graphics Programming Techniques Using OpenGL. Siggraph 1999. (see: Multitexture)

[8] Abrash, Michael. Michael Abrash's Graphics Programming Black Book Special Edition. The Coriolis Group, Scottsdale Arizona, 1997. ISBN 1-57610-174-6 (PDF) (Chapter 70, pg. 1282)

[9] US 5739818, "Apparatus and method for performing perspectively correct interpolation in computer graphics", issued 1998-04-14

2.2.5 External links

• Introduction into texture mapping using C and SDL
• Programming a textured terrain using XNA/DirectX, from www.riemers.net
• Perspective correct texturing
• Time Texturing Texture mapping with bezier lines
• Polynomial Texture Mapping Interactive Relighting for Photos
• 3 Métodos de interpolación a partir de puntos (in Spanish) Methods that can be used to interpolate a texture knowing the texture coords at the vertices of a polygon

2.3 Reflection mapping

An example of reflection mapping.

In computer graphics, environment mapping, or reflection mapping,[1][2][3] is an efficient image-based lighting technique for approximating the appearance of a reflective surface by means of a precomputed texture image. The texture is used to store the image of the distant environment surrounding the rendered object.

Several ways of storing the surrounding environment are employed. The first technique was sphere mapping, in which a single texture contains the image of the surroundings as reflected on a mirror ball. It has been almost entirely surpassed by cube mapping, in which the environment is projected onto the six faces of a cube and stored as six square textures or unfolded into six square regions of a single texture. Other projections that have some superior mathematical or computational properties include the paraboloid mapping, the pyramid mapping, the octahedron mapping, and the HEALPix mapping.

The reflection mapping approach is more efficient than the classical ray tracing approach of computing the exact reflection by tracing a ray and following its optical path. The reflection color used in the shading computation at a pixel is determined by calculating the reflection vector at the point on the object and mapping it to the texel in the environment map. This technique often produces results that are superficially similar to those generated by raytracing, but is less computationally expensive since the radiance value of the reflection comes from calculating the angles of incidence and reflection, followed by a texture lookup, rather than followed by tracing a ray against the scene geometry and computing the radiance of the ray, simplifying the GPU workload.

However in most circumstances a mapped reflection is only an approximation of the real reflection. Environment mapping relies on two assumptions that are seldom satisfied:

1) All radiance incident upon the object being shaded comes from an infinite distance. When this is not the case the reflection of nearby geometry appears in the wrong place on the reflected object. When this is the case, no parallax is seen in the reflection.

2) The object being shaded is convex, such that it contains no self-interreflections. When this is not the case the object does not appear in the reflection; only the environment does.

Reflection mapping is also a traditional image-based lighting technique for creating reflections of real-world backgrounds on synthetic objects.

Environment mapping is generally the fastest method of rendering a reflective surface. To further increase the speed of rendering, the renderer may calculate the position of the reflected ray at each vertex. Then, the position is interpolated across polygons to which the vertex is attached. This eliminates the need for recalculating every pixel's reflection direction.

If normal mapping is used, each polygon has many face normals (the direction a given point on a polygon is facing), which can be used in tandem with an environment map to produce a more realistic reflection. In this case, the angle of reflection at a given point on a polygon will take the normal map into consideration. This technique is used to make an otherwise flat surface appear textured, for example corrugated metal, or brushed aluminium.

A diagram depicting an apparent reflection being provided by cube mapped reflection. The map is actually projected onto the surface from the point of view of the observer. Highlights which in raytracing would be provided by tracing the ray and determining the angle made with the normal, can be 'fudged', if they are manually painted into the texture field (or if they already appear there depending on how the texture map was obtained), from where they will be projected onto the mapped object along with the rest of the texture detail.

2.3.1 Types

Sphere mapping

Sphere mapping represents the sphere of incident illumination as though it were seen in the reflection of a reflective sphere through an orthographic camera. The texture image can be created by approximating this ideal setup, or using a fisheye lens or via prerendering a scene with a spherical mapping.

The spherical mapping suffers from limitations that detract from the realism of resulting renderings. Because spherical maps are stored as azimuthal projections of the environments they represent, an abrupt point of singularity (a "black hole" effect) is visible in the reflection on the object where texel colors at or near the edge of the map are distorted due to inadequate resolution to represent the points accurately. The spherical mapping also wastes pixels that are in the square but not in the sphere. The artifacts of the spherical mapping are so severe that it is effective only for viewpoints near that of the virtual orthographic camera.

Cube mapping

Cube mapping and other polyhedron mappings address the severe distortion of sphere maps. If cube maps are made and filtered correctly, they have no visible seams, and can be used independent of the viewpoint of the often-virtual camera acquiring the map. Cube and other polyhedron maps have since superseded sphere maps in most computer graphics applications, with the exception of acquiring image-based lighting. Image-based lighting can be done with parallax-corrected cube maps.[4]

Generally, cube mapping uses the same skybox that is used in outdoor renderings. Cube mapped reflection is done by determining the vector that the object is being viewed at. This camera ray is reflected about the surface normal of where the camera vector intersects the object. This results in the reflected ray which is then passed to the cube map to get the texel which provides the radiance value used in the lighting calculation. This creates the effect that the object is reflective.

HEALPix mapping

HEALPix environment mapping is similar to the other polyhedron mappings, but can be hierarchical, thus providing a unified framework for generating polyhedra that better approximate the sphere. This allows lower distortion at the cost of increased computation.[5]
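A minimal GLSL fragment-shader sketch of the cube-mapped reflection lookup described under "Cube mapping" above: the view ray is reflected about the surface normal and used to fetch a texel from the cube map. The world-space inputs and uniform names (cameraPos, environment) are assumptions for illustration only.

```glsl
#version 330 core

in vec3 vWorldPos;      // surface position in world space
in vec3 vWorldNormal;   // surface normal in world space

uniform vec3 cameraPos;            // viewer position in world space
uniform samplerCube environment;   // six-face environment map

out vec4 fragColor;

void main()
{
    // Camera ray from the viewer to the shaded point...
    vec3 viewDir = normalize(vWorldPos - cameraPos);
    // ...reflected about the surface normal...
    vec3 reflDir = reflect(viewDir, normalize(vWorldNormal));
    // ...and used to look up the radiance stored in the cube map.
    fragColor = texture(environment, reflDir);
}
```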

Example of a three-dimensional model using cube mapped reflection

2.3.2 History

Precursor work in texture mapping had been established by Edwin Catmull, with refinements for curved surfaces by James Blinn, in 1974. Blinn went on to further refine his work, developing environment mapping by 1976. Gene Miller experimented with spherical environment mapping in 1982 at MAGI Synthavision. Wolfgang Heidrich introduced Paraboloid Mapping in 1998.[6] Emil Praun introduced Octahedron Mapping in 2003.[7] Mauro Steigleder introduced Pyramid Mapping in 2005.[8] Tien-Tsin Wong, et al. introduced the existing HEALPix mapping for rendering in 2006.[5]

2.3.3 See also

• Skybox (video games)
• Cube mapping
• Sphere mapping

2.3.4 References

[1] http://www.pearsonhighered.com/samplechapter/0321194969.pdf

[2] http://web.cse.ohio-state.edu/~{}whmin/courses/cse5542-2013-spring/17-env.pdf

[3] http://www.ics.uci.edu/~{}majumder/VC/classes/BEmap.pdf

[4] http://seblagarde.wordpress.com/2012/09/29/image-based-lighting-approaches-and-parallax-corrected-cubemap/

[5] Tien-Tsin Wong, Liang Wan, Chi-Sing Leung, and Ping-Man Lam. Real-time Environment Mapping with Equal Solid-Angle Spherical Quad-Map, Shader X4: Lighting & Rendering, Charles River Media, 2006

[6] Heidrich, W., and H.-P. Seidel. "View-Independent Environment Maps." Eurographics Workshop on Graphics Hardware 1998, pp. 39–45.

[7] Emil Praun and Hugues Hoppe. "Spherical parametrization and remeshing." ACM Transactions on Graphics, 22(3):340–349, 2003.

[8] Mauro Steigleder. "Pencil Light Transport." A thesis presented to the University of Waterloo, 2005.

2.3.5 External links

• The Story of Reflection mapping by Paul Debevec
• NVIDIA's paper about sphere & cube env. mapping
• Approximation of reflective and transparent objects with environmental maps

2.4 Normal mapping

Normal mapping used to re-detail simplified meshes.

In 3D computer graphics, normal mapping, or "Dot3 bump mapping", is a technique used for faking the lighting of bumps and dents – an implementation of bump mapping. It is used to add details without using more polygons. A common use of this technique is to greatly enhance the appearance and details of a low polygon model by generating a normal map from a high polygon model or height map.

Normal maps are commonly stored as regular RGB images where the RGB components correspond to the X, Y, and Z coordinates, respectively, of the surface normal.
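Given the RGB storage convention just described, a shader recovers the normal by undoing the packing of each component from [-1, 1] into [0, 1]. A hedged GLSL sketch follows; the texture and variable names (normalMap, vUV) are illustrative assumptions.

```glsl
#version 330 core

in vec2 vUV;
uniform sampler2D normalMap;
out vec4 fragColor;

void main()
{
    // The map stores each component remapped from [-1, 1] to [0, 1]
    // (i.e. stored value = normal / 2 + 0.5).  Undo that here.
    vec3 n = normalize(texture(normalMap, vUV).rgb * 2.0 - 1.0);

    // Visualize the decoded normal (remapped back for display).
    fragColor = vec4(n * 0.5 + 0.5, 1.0);
}
```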

2.4.1 History

The idea of taking geometric details from a high polygon model was introduced in "Fitting Smooth Surfaces to Dense Polygon Meshes" by Krishnamurthy and Levoy, Proc. SIGGRAPH 1996,[1] where this approach was used for creating displacement maps over nurbs. In 1998, two papers were presented with key ideas for transferring details with normal maps from high to low polygon meshes: "Appearance Preserving Simplification", by Cohen et al. SIGGRAPH 1998,[2] and "A general method for preserving attribute values on simplified meshes" by Cignoni et al. IEEE Visualization '98.[3] The former introduced the idea of storing surface normals directly in a texture, rather than displacements, though it required the low-detail model to be generated by a particular constrained simplification algorithm. The latter presented a simpler approach that decouples the high and low polygonal mesh and allows the recreation of any attributes of the high-detail model (color, texture coordinates, displacements, etc.) in a way that is not dependent on how the low-detail model was created. The combination of storing normals in a texture, with the more general creation process, is still used by most currently available tools.

2.4.2 How it works

Example of a normal map (center) with the scene it was calculated from (left) and the result when applied to a flat surface (right).

To calculate the Lambertian (diffuse) lighting of a surface, the unit vector from the shading point to the light source is dotted with the unit vector normal to that surface, and the result is the intensity of the light on that surface. Imagine a polygonal model of a sphere - you can only approximate the shape of the surface. By using a 3-channel bitmap textured across the model, more detailed normal vector information can be encoded. Each channel in the bitmap corresponds to a spatial dimension (X, Y and Z). These spatial dimensions are relative to a constant coordinate system for object-space normal maps, or to a smoothly varying coordinate system (based on the derivatives of position with respect to texture coordinates) in the case of tangent-space normal maps. This adds much more detail to the surface of a model, especially in conjunction with advanced lighting techniques.

Since a normal will be used in the dot product calculation for the diffuse lighting computation, we can see that the {0, 0, –1} would be remapped to the {128, 128, 0} values, giving that kind of sky blue color seen in normal maps (blue (z) coordinate is perspective (deepness) coordinate and RG-xy flat coordinates on screen). {0.3, 0.4, –0.866} would be remapped to the ({0.3, 0.4, –0.866}/2 + {0.5, 0.5, 0.5})*255 = {0.15+0.5, 0.2+0.5, −0.433+0.5}*255 = {0.65, 0.7, 0.067}*255 = {166, 179, 17} values ($0.3^2 + 0.4^2 + (−0.866)^2 = 1$). The sign of the z-coordinate (blue channel) must be flipped to match the normal map's normal vector with that of the eye (the viewpoint or camera) or the light vector. Since negative z values mean that the vertex is in front of the camera (rather than behind the camera) this convention guarantees that the surface shines with maximum strength precisely when the light vector and normal vector are coincident.

2.4.3 Calculating tangent space

In order to find the perturbation in the normal, the tangent space must be correctly calculated.[4] Most often the normal is perturbed in a fragment shader after applying the model and view matrices. Typically the geometry provides a normal and tangent. The tangent is part of the tangent plane and can be transformed simply with the linear part of the matrix (the upper 3x3). However, the normal needs to be transformed by the inverse transpose. Most applications will want the cotangent to match the transformed geometry (and associated uv's). So instead of enforcing the cotangent to be perpendicular to the tangent, it is generally preferable to transform the cotangent just like the tangent. Let $t$ be tangent, $b$ be cotangent, $n$ be normal, $M_{3\times3}$ be the linear part of the model matrix, and $V_{3\times3}$ be the linear part of the view matrix.

$t' = t \times M_{3\times3} \times V_{3\times3}$

$b' = b \times M_{3\times3} \times V_{3\times3}$

$n' = n \times (M_{3\times3} \times V_{3\times3})^{-1T} = n \times M_{3\times3}^{-1T} \times V_{3\times3}^{-1T}$

2.4.4 Normal mapping in video games

Interactive normal map rendering was originally only possible on PixelFlow, a parallel rendering machine built at the University of North Carolina at Chapel Hill. It was later possible to perform normal mapping on high-end SGI workstations using multi-pass rendering and framebuffer operations[5] or on low end PC hardware with some tricks using paletted textures. However, with the advent of shaders in personal computers and game consoles, normal mapping became widely used in commercial video games starting in late 2003. Normal mapping's popularity for real-time rendering is due to its good quality to processing requirements ratio versus other methods of producing similar effects. Much of this efficiency is made possible by distance-indexed detail scaling, a technique which selectively decreases the detail of the normal map of a given texture (cf. mipmapping), meaning that more distant surfaces require less complex lighting simulation.

Basic normal mapping can be implemented in any hardware that supports palettized textures. The first game console to have specialized normal mapping hardware was the Sega Dreamcast. However, Microsoft's Xbox was the first console to widely use the effect in retail games. Out of the sixth generation consoles, only the PlayStation 2's GPU lacks built-in normal mapping support. Games for the Xbox 360 and the PlayStation 3 rely heavily on normal mapping and are beginning to implement parallax mapping. The Nintendo 3DS has been shown to support normal mapping, as demonstrated by Resident Evil Revelations and Metal Gear Solid: Snake Eater.
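Combining the storage convention from "How it works" with the tangent-space discussion in "Calculating tangent space", the following GLSL fragment-shader sketch applies a tangent-space normal map to Lambertian lighting. It rebuilds the tangent basis per fragment rather than transforming t, b and n exactly as in the matrix formulas above, and all input and uniform names are assumptions for illustration.

```glsl
#version 330 core

in vec2 vUV;
in vec3 vNormal;     // interpolated surface normal (view space)
in vec3 vTangent;    // interpolated tangent (view space)
in vec3 vLightDir;   // direction from the shading point to the light (view space)

uniform sampler2D normalMap;
uniform vec3 baseColor;

out vec4 fragColor;

void main()
{
    // Rebuild an orthonormal tangent basis per fragment.
    vec3 n = normalize(vNormal);
    vec3 t = normalize(vTangent - n * dot(vTangent, n));
    vec3 b = cross(n, t);                 // cotangent / bitangent
    mat3 tbn = mat3(t, b, n);

    // Decode the tangent-space normal and bring it into view space.
    vec3 mapped = texture(normalMap, vUV).rgb * 2.0 - 1.0;
    vec3 shadingNormal = normalize(tbn * mapped);

    // Lambertian (diffuse) term: dot product of unit light and normal vectors.
    float diffuse = max(dot(shadingNormal, normalize(vLightDir)), 0.0);
    fragColor = vec4(baseColor * diffuse, 1.0);
}
```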

2.4.5 See also

• Texture mapping

• Bump mapping

• Parallax mapping

• Displacement mapping

• Reflection (physics)

• Depth map

2.4.6 References

[1] Krishnamurthy and Levoy, Fitting Smooth Surfaces to Dense Polygon Meshes, SIGGRAPH 1996

[2] Cohen et al., Appearance-Preserving Simplification, SIG- GRAPH 1998 (PDF)

[3] Cignoni et al., A general method for preserving attribute values on simplified meshes, IEEE Visualization 1998 (PDF)

[4] Mikkelsen, Simulation of Wrinkled Surfaces Revisited, 2008 (PDF)

[5] Heidrich and Seidel, Realistic, Hardware-accelerated Shading and Lighting, SIGGRAPH 1999 (PDF)

2.4.7 External links

• Normal Map Tutorial Per-pixel logic behind Dot3 Normal Mapping
• NormalMap-Online Free Generator inside Browser
• Normal Mapping on sunandblackcat.com
• Introduction to Normal Mapping
• Normal Mapping
• Normal Mapping with paletted textures using old OpenGL extensions.
• Normal Map Photography Creating normal maps manually by layering digital photographs
• Normal Mapping Explained
• Simple Normal Mapper Open Source normal map generator

2.5 Displacement mapping

Displacement mapping

Displacement mapping is an alternative computer graphics technique, in contrast to bump mapping, normal mapping, and parallax mapping, using a (procedural-) texture- or height map to cause an effect where the actual geometric position of points over the textured surface are displaced, often along the local surface normal, according to the value the texture function evaluates to at each point on the surface. It gives surfaces a great sense of depth and detail, permitting in particular self-occlusion, self-shadowing and silhouettes; on the other hand, it is the most costly of this class of techniques owing to the large amount of additional geometry.
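A hedged GLSL vertex-shader sketch of the basic idea described above: each vertex is displaced along its local surface normal by the value read from a height map. Adaptive tessellation is omitted, so the result is only as fine as the incoming mesh, and the names (heightMap, displacementScale, mvp) are illustrative assumptions.

```glsl
#version 330 core

layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec3 inNormal;
layout(location = 2) in vec2 inUV;

uniform sampler2D heightMap;       // displacement (height) texture
uniform float displacementScale;   // world-space scale of the displacement
uniform mat4 mvp;

out vec2 vUV;

void main()
{
    // Sample the height map at this vertex and push the vertex
    // outward along its surface normal by that amount.
    float height = texture(heightMap, inUV).r;
    vec3 displaced = inPosition + inNormal * height * displacementScale;

    vUV = inUV;
    gl_Position = mvp * vec4(displaced, 1.0);
}
```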

For years, displacement mapping was a peculiarity of high-end rendering systems like PhotoRealistic RenderMan, while realtime APIs, like OpenGL and DirectX, were only starting to use this feature. One of the reasons for this is that the original implementation of displacement mapping required an adaptive tessellation of the surface in order to obtain enough micropolygons whose size matched the size of a pixel on the screen.

2.5.1 Meaning of the term in different contexts

Displacement mapping includes the term mapping which refers to a texture map being used to modulate the displacement strength. The displacement direction is usually the local surface normal. Today, many renderers allow programmable shading which can create high quality (multidimensional) procedural textures and patterns at arbitrarily high frequencies. The use of the term mapping becomes arguable then, as no texture map is involved anymore. Therefore, the broader term displacement is often used today to refer to a super concept that also includes displacement based on a texture map.

Renderers using the REYES algorithm, or similar approaches based on micropolygons, have allowed displacement mapping at arbitrarily high frequencies since they became available almost 20 years ago.

The first commercially available renderer to implement a micropolygon displacement mapping approach through REYES was Pixar's PhotoRealistic RenderMan. Micropolygon renderers commonly tessellate geometry themselves at a granularity suitable for the image being rendered. That is: the modeling application delivers high-level primitives to the renderer. Examples include true NURBS- or subdivision surfaces. The renderer then tessellates this geometry into micropolygons at render time using view-based constraints derived from the image being rendered.

Other renderers that require the modeling application to deliver objects pre-tessellated into arbitrary polygons or even triangles have defined the term displacement mapping as moving the vertices of these polygons. Often the displacement direction is also limited to the surface normal at the vertex. While conceptually similar, those polygons are usually a lot larger than micropolygons. The quality achieved from this approach is thus limited by the geometry's tessellation density a long time before the renderer gets access to it.

This difference between displacement mapping in micropolygon renderers vs. displacement mapping in non-tessellating (macro)polygon renderers can often lead to confusion in conversations between people whose exposure to each technology or implementation is limited. Even more so, as in recent years, many non-micropolygon renderers have added the ability to do displacement mapping of a quality similar to that which a micropolygon renderer is able to deliver naturally. To distinguish between the crude pre-tessellation-based displacement these renderers did before, the term sub-pixel displacement was introduced to describe this feature.

Sub-pixel displacement commonly refers to finer re-tessellation of geometry that was already tessellated into polygons. This re-tessellation results in micropolygons or often microtriangles. The vertices of these then get moved along their normals to achieve the displacement mapping.

True micropolygon renderers have always been able to do what sub-pixel displacement achieved only recently, but at a higher quality and in arbitrary displacement directions.

Recent developments seem to indicate that some of the renderers that use sub-pixel displacement move towards supporting higher level geometry too. As the vendors of these renderers are likely to keep using the term sub-pixel displacement, this will probably lead to more obfuscation of what displacement mapping really stands for, in 3D computer graphics.

In reference to Microsoft's proprietary High Level Shader Language, displacement mapping can be interpreted as a kind of "vertex-texture mapping" where the values of the texture map do not alter pixel colors (as is much more common), but instead change the position of vertices. Unlike bump, normal and parallax mapping, all of which can be said to "fake" the behavior of displacement mapping, in this way a genuinely rough surface can be produced from a texture. It has to be used in conjunction with adaptive tessellation techniques (which increase the number of rendered polygons according to current viewing settings) to produce highly detailed meshes.

2.5.2 See also

• Texture mapping
• Bump mapping
• Normal mapping
• Parallax mapping
• Relief mapping (computer graphics)
• Heightmap
• Sculpted prim

2.5.3 Further reading

• Blender Displacement Mapping
• Relief Texture Mapping website
• Parallax Occlusion Mapping in GLSL on sunandblackcat.com

• Real-Time Relief Mapping on Arbitrary Polygonal Surfaces paper
• Relief Mapping of Non-Height-Field Surface Details paper
• Steep Parallax Mapping website
• State of the art of displacement mapping on the gpu paper

2.6 Sphere mapping

In computer graphics, sphere mapping (or spherical environment mapping) is a type of reflection mapping that approximates reflective surfaces by considering the environment to be an infinitely far-away spherical wall. This environment is stored as a texture depicting what a mirrored sphere would look like if it were placed into the environment, using an orthographic projection (as opposed to one with perspective). This texture contains reflective data for the entire environment, except for the spot directly behind the sphere. (For one example of such an object, see Escher's drawing Hand with Reflecting Sphere.)

To use this data, the surface normal of the object, view direction from the object to the camera, and/or reflected direction from the object to the environment is used to calculate a texture coordinate to look up in the aforementioned texture map. The result appears like the environment is reflected in the surface of the object that is being rendered.

2.6.1 Usage example

In the simplest case for generating texture coordinates, suppose:

• The map has been created as above, looking at the sphere along the z-axis.
• The texture coordinate of the center of the map is (0,0), and the sphere's image has radius 1.
• We are rendering an image in the same exact situation as the sphere, but the sphere has been replaced with a reflective object.
• The image being created is orthographic, or the viewer is infinitely far away, so that the view direction does not change as one moves across the image.

At texture coordinate $(x, y)$, note that the depicted location on the sphere is $(x, y, z)$ (where $z$ is $\sqrt{1 - x^2 - y^2}$), and the normal at that location is also $\langle x, y, z \rangle$. However, we are given the reverse task (a normal for which we need to produce a texture map coordinate). So the texture coordinate corresponding to normal $\langle x, y, z \rangle$ is $(x, y)$.

2.6.2 See also

• Cube mapping
• Skybox (video games)
• Reflection mapping
• HEALPix, mapping with little distortion, arbitrary precision, and equal-sized fragments

2.7 UV mapping

The application of a texture in the UV space related to the effect in 3D.

A checkered sphere, without (left) and with (right) UV mapping (3D checkered or 2D checkered).

UV mapping is the process of making a 2D image representation of a 3D model's surface.

2.7.1 UV mapping

This process projects a texture map onto a 3D object. The letters "U" and "V" denote the axes of the 2D texture[note 1] because "X", "Y" and "Z" are already used to denote the axes of the 3D object in model space.

UV texturing permits polygons that make up a 3D object to be painted with color from an image. The image is called a UV texture map,[1] but it's just an ordinary image. The UV mapping process involves assigning pixels in the image to surface mappings on the polygon, usually done by "programmatically" copying a triangle shaped piece of the image map and pasting it onto a triangle on the object.[2] UV is the alternative to XY; it only maps into a texture space rather than into the geometric space of the object. But the rendering computation uses the UV texture coordinates to determine how to paint the three-dimensional surface.

In the example to the right, a sphere is given a checkered texture, first without and then with UV mapping. Without UV mapping, the checkers tile XYZ space and the texture is carved out of the sphere. With UV mapping, the checkers tile UV space and points on the sphere map to this space according to their latitude and longitude.

When a model is created as a polygon mesh using a 3D modeler, UV coordinates can be generated for each vertex in the mesh. One way is for the 3D modeler to unfold the triangle mesh at the seams, automatically laying out the triangles on a flat page. If the mesh is a UV sphere, for example, the modeler might transform it into an equirectangular projection. Once the model is unwrapped, the artist can paint a texture on each triangle individually, using the unwrapped mesh as a template. When the scene is rendered, each triangle will map to the appropriate texture from the "decal sheet".

A UV map can either be generated automatically by the software application, made manually by the artist, or some combination of both. Often a UV map will be generated, and then the artist will adjust and optimize it to minimize seams and overlaps. If the model is symmetric, the artist might overlap opposite triangles to allow painting both sides simultaneously.

UV coordinates are applied per face,[2] not per vertex. This means a shared vertex can have different UV coordinates in each of its triangles, so adjacent triangles can be cut apart and positioned on different areas of the texture map.

The UV mapping process at its simplest requires three steps: unwrapping the mesh, creating the texture, and applying the texture.[1]

A representation of the UV mapping of a cube. The flattened cube net may then be textured to texture the cube.

2.7.2 Finding UV on a sphere

For any point $P$ on the sphere, calculate $\hat{d}$, that being the unit vector from $P$ to the sphere's origin.

Assuming that the sphere's poles are aligned with the Y axis, UV coordinates in the range $[0, 1]$ can then be calculated as follows:

$u = 0.5 + \dfrac{\arctan2(d_z, d_x)}{2\pi}$

$v = 0.5 - \dfrac{\arcsin(d_y)}{\pi}$

2.7.3 See also

• Cartographic projection
• Least squares conformal map
• Mesh parameterization
• NURBS
• Polygon mesh
• Sculpted prim
• Texture mapping
• UVW mapping

2.7.4 Notes

[1] when using quaternions (which is standard), "W" is also used; cf. UVW mapping

2.7.5 References

[1] Mullen, T (2009). Mastering Blender. 1st ed. Indianapolis, Indiana: Wiley Publishing, Inc. ISBN 9780470496848

[2] Murdock, K.L. (2008). 3ds Max 2009 Bible. 1st ed. Indianapolis, Indiana: Wiley Publishing, Inc. ISBN 9780470417584

2.7.6 External links

• LSCM Mapping image with Blender
• Blender UV Mapping Tutorial with Blender
• Rare practical example of UV mapping from a blog (not related to a specific product such as Maya or Blender).
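As a code counterpart to the formula in "Finding UV on a sphere" above, the following GLSL sketch converts a unit direction vector into (u, v) coordinates and samples a 2D texture with them. The variable and uniform names (vDirection, equirectMap) are assumptions for illustration.

```glsl
#version 330 core

const float PI = 3.14159265358979;

// Spherical UV lookup matching the formula above: d is the unit
// vector from the surface point towards the sphere's origin, with
// the sphere's poles aligned with the Y axis.
vec2 sphereUV(vec3 d)
{
    float u = 0.5 + atan(d.z, d.x) / (2.0 * PI);
    float v = 0.5 - asin(d.y) / PI;
    return vec2(u, v);
}

in vec3 vDirection;              // unit vector toward the sphere's origin
uniform sampler2D equirectMap;   // 2D texture addressed by (u, v)
out vec4 fragColor;

void main()
{
    fragColor = texture(equirectMap, sphereUV(normalize(vDirection)));
}
```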

2.8 Mipmap

An example of mipmap image storage: the principal image on the left is accompanied by filtered copies of reduced size.

In 3D computer graphics, mipmaps (also MIP maps)[1][2][3] are pre-calculated, optimized sequences of textures that accompany a main texture, each of which is a progressively lower resolution representation of the same image. The height and width of each image, or level, in the mipmap is a power of two smaller than the previous level. Mipmaps do not have to be square. They are intended to increase rendering speed and reduce aliasing artifacts. A high-resolution mipmap image is used for objects that are close to the user. Lower-resolution images are used as the object appears farther away. This is a more efficient way of simulating perspective for textures. Rather than render a single texture at many resolutions, it is faster to use multiple textures at varying resolutions. They are widely used in 3D computer games, flight simulators and other 3D imaging systems for texture filtering. Their use is known as mipmapping. The letters "MIP" in the name are an acronym of the Latin phrase multum in parvo, meaning "much in little".[4] Since mipmaps, by definition, are pre-allocated, additional storage space is required to take advantage of them. They also form the basis of wavelet compression. Mipmap textures are used in 3D scenes to decrease the time required to render a scene. They also improve the scene's realism. However, they can require large amounts of memory.

2.8.1 Basic use

Mipmaps are used for:

• Level of Detail (LOD)[5][6]
• speeding up rendering times (smaller textures equate to less memory usage);
• improving the image quality. Rendering large textures where only small subsets of points are used can easily produce moiré patterns;
• reducing stress on the GPU.

2.8.2 Origin

Mipmapping was invented by Lance Williams in 1983 and is described in his paper Pyramidal parametrics.[4] From the abstract: "This paper advances a 'pyramidal parametric' prefiltering and sampling geometry which minimizes aliasing effects and assures continuity within and between target images." The "pyramid" can be imagined as the set of mipmaps stacked on top of each other. The origin of the term, mipmap, is an initialism of Latin Multum In Parvo (much in a small space), and map, modelled on bitmap.[7]

2.8.3 How it works

Each bitmap image of the mipmap set is a downsized duplicate of the main texture, but at a certain reduced level of detail. Although the main texture would still be used when the view is sufficient to render it in full detail, the renderer will switch to a suitable mipmap image (or in fact, interpolate between the two nearest, if trilinear filtering is activated) when the texture is viewed from a distance or at a small size. Rendering speed increases since the number of texture pixels ("texels") being processed can be much lower with the simple textures. Artifacts are reduced since the mipmap images are effectively already anti-aliased, taking some of the burden off the real-time renderer. Scaling down and up is made more efficient with mipmaps as well.

If the texture has a basic size of 256 by 256 pixels, then the associated mipmap set may contain a series of 8 images, each one-fourth the total area of the previous one: 128×128 pixels, 64×64, 32×32, 16×16, 8×8, 4×4, 2×2, 1×1 (a single pixel). If, for example, a scene is rendering this texture in a space of 40×40 pixels, then either a scaled up version of the 32×32 (without trilinear interpolation) or an interpolation of the 64×64 and the 32×32 mipmaps (with trilinear interpolation) would be used. The simplest way to generate these textures is by successive averaging; however, more sophisticated algorithms (perhaps based on signal processing and Fourier transforms) can also be used.

The increase in storage space required for all of these mipmaps is a third of the original texture, because the sum of the areas $1/4 + 1/16 + 1/64 + 1/256 + \cdots$ converges to $1/3$. In the case of an RGB image with three channels stored as separate planes, the total mipmap can be visualized as fitting neatly into a square area twice as large as the dimensions of the original image on each side (twice as large on each side is four times the original area - one plane of the original size for each of red, green and blue makes three times the original area, and then since the smaller textures take 1/3 of the original, 1/3 of three is one, so they will take the same total space as just one of the original red, green, or blue planes). This is the inspiration for the tag "multum in parvo".

The original RGB image

In the case of an RGB image with three channels stored as separate planes, the total mipmap can be visualized as fitting neatly into a square area twice as large as the dimensions of the original image on each side. It also shows visually how using mipmaps requires 33% more memory.

2.8.4 Anisotropic filtering

Main article: Anisotropic filtering

When a texture is seen at a steep angle, the filtering should not be uniform in each direction (it should be anisotropic rather than isotropic), and a compromise resolution is used. If a higher resolution is used, the cache coherence goes down, and the aliasing is increased in one direction, but the image tends to be clearer. If a lower resolution is used, the cache coherence is improved, but the image is overly blurry. Nonuniform mipmaps (also known as rip-maps)[8] can solve this problem, although they have no direct support on modern graphics hardware. With an 8×8 base texture map, the rip-map resolutions are 8×8, 8×4, 8×2, 8×1; 4×8, 4×4, 4×2, 4×1; 2×8, 2×4, 2×2, 2×1; 1×8, 1×4, 1×2 and 1×1. In general, for a $2^n \times 2^n$ base texture map, the rip-map resolutions are $2^i \times 2^j$ for $i$ and $j$ from 0 to $n$.

2.8.5 Summed-area tables

Summed-area tables can conserve memory and provide more resolutions. However, they again hurt cache coherence, and need wider types to store the partial sums than the base texture's word size. Thus, modern graphics hardware does not support them either.

2.8.6 References

[1] http://msdn.microsoft.com/en-us/library/windows/desktop/bb206251(v=vs.85).aspx

[2] http://msdn.microsoft.com/en-us/library/aa921432.aspx

[3] http://graphics.ethz.ch/teaching/former/vc_master_06/Downloads/Mipmaps_1.pdf

[4] http://staff.cs.psu.ac.th/iew/cs344-481/p1-williams.pdf

[5] http://people.cs.clemson.edu/~{}dhouse/courses/405/notes/OpenGL-mipmaps.pdf

[6] http://msdn.microsoft.com/en-us/library/windows/desktop/ff476207(v=vs.85).aspx

[7] http://en.wiktionary.org/wiki/mipmap

[8] http://www.cis.pku.edu.cn/vision/Visual&Robot/people/pei%20yuru/acg09/ACG02.pdf

2.8.7 See also

• Spatial anti-aliasing
• Anisotropic filtering
• Hierarchical modulation – similar technique in broadcasting
• Scale space

2.9 Skybox (video games)

A skybox is a method of creating backgrounds to make a computer and video game level look bigger than it really is. When a skybox is used, the level is enclosed in a cuboid. The sky, distant mountains, distant buildings, and other unreachable objects are projected onto the cube's faces (using a technique called cube mapping), thus creating the illusion of distant three-dimensional surroundings. A skydome employs the same concept but uses either a sphere or a hemisphere instead of a cube.

Processing of 3D graphics is computationally expensive, especially in real-time games, and poses multiple limits. Levels have to be processed at tremendous speeds, making it difficult to render vast skyscapes in real time. Additionally, realtime graphics generally have depth buffers with limited bit-depth, which puts a limit on the amount of details that can be rendered at a distance.

To compensate for these problems, games often employ skyboxes. Traditionally, these are simple cubes with up to 6 different textures placed on the faces. By careful alignment, a viewer in the exact middle of the skybox will perceive the illusion of a real 3D world around it, made up of those 6 faces.

Example of a texture that can be mapped to the faces of a cubic skybox, with faces labelled

As a viewer moves through a 3D scene, it is common for the skybox to remain stationary with respect to the viewer. This technique gives the skybox the illusion of being very far away, since other objects in the scene appear to move, while the skybox does not. This imitates real life, where distant objects such as clouds, stars and even mountains appear to be stationary when the viewpoint is displaced by relatively small distances. Effectively, everything in a skybox will always appear to be infinitely distant from the viewer. This consequence of skyboxes dictates that designers should be careful not to carelessly include images of discrete objects in the textures of a skybox, since the viewer may be able to perceive the inconsistencies of those objects' sizes as the scene is traversed.

The source of a skybox can be any form of texture, including photographs, hand-drawn images, or pre-rendered 3D geometry. Usually, these textures are created and aligned in 6 directions, with viewing angles of 90 degrees (which covers up the 6 faces of the cube).

2.9.1 Advanced skyboxes

Example of a texture for a hemispherical skydome

As technology progressed, it became clear that the default skybox had severe disadvantages. It could not be animated, and all objects in it appeared to be infinitely distant, even if they were close-by. Starting in the late 1990s, some game designers built small amounts of 3D geometry to appear in the skybox to create a better illusion of depth, in addition to a traditional skybox for objects very far away. This constructed skybox was placed in an unreachable location, typically outside the bounds of the playable portion of the level, to prevent players from touching the skybox.

In older versions of this technology, such as the ones presented in the game Unreal, this was limited to movements in the sky, such as the movements of clouds. Elements could be changed from level to level, such as the positions of stellar objects, or the color of the sky, giving the illusion of the gradual change from day to night. The skybox in this game would still appear to be infinitely far away, as the skybox, although containing 3D geometry, did not move the viewing point along with the player movement through the level.

Newer engines, such as the Source engine, continue on this idea, allowing the skybox to move along with the player, although at a different speed. Because depth is perceived on the compared movement of objects, making the skybox move slower than the level causes the skybox to appear far away, but not infinitely so. It is also possible, but not required, to include 3D geometry which will surround the accessible playing environment, such as unreachable buildings or mountains. They are designed and modeled at a smaller scale, typically 1/16th, then rendered by the engine to appear much larger. This results in less CPU requirements than if they were rendered in full size. The effect is referred to as a "3D skybox".
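A hedged GLSL vertex-shader sketch of the traditional cube-mapped skybox described above: the cube stays centered on the viewer by stripping the translation from the view matrix, and its depth is pinned to the far plane so the level geometry always draws in front of it. The uniform names (projection, view) are assumptions; the matching fragment shader simply samples a samplerCube with the interpolated direction.

```glsl
#version 330 core

layout(location = 0) in vec3 inPosition;   // unit cube centered on the origin

uniform mat4 projection;
uniform mat4 view;                         // full camera view matrix

out vec3 vDirection;

void main()
{
    // Keep only the rotation of the camera so the skybox never translates.
    mat4 rotationOnly = mat4(mat3(view));
    vec4 pos = projection * rotationOnly * vec4(inPosition, 1.0);

    // Force depth to the far plane (z = w) so the box never occludes the level.
    gl_Position = pos.xyww;

    // The cube vertex itself is the direction used to sample the cube map.
    vDirection = inPosition;
}
```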

In the game Half-Life 2, this effect was extensively used in showing The Citadel, a huge structure in the center of City 17. In the closing chapters of the game, the player travels through the city towards the Citadel, the skybox effect making it grow larger and larger progressively with the player movement, completely appearing to be a part of the level. As the player reaches the base of the Citadel, it is broken into two pieces. A small lower section is a part of the main map, while the upper section is in the 3D skybox. The two sections are seamlessly blended together to appear as a single structure.

2.9.2 See also

• Cube mapping
• Parallax

2.9.3 External links

• http://developer.valvesoftware.com/wiki/Skybox_%282D%29
• Making a skybox in OpenGL 3.3

2.10 Materials system

For other uses of "Materials system", see Materials science.

Materials system is an advanced type of texture mapping that allows for objects in video games to simulate different types of materials in real life. This makes it so that the texture not only contains graphical data, but references for sound data and physics data (such as density). For example, if a texture makes an object look like wood, it will sound like wood (if something hits it or it's scraped along a surface), break like wood, and even float like wood. If it was made of metal, it will sound like metal, dent like metal, and sink like metal. This allows more flexibility when making objects in games.

A materials system allows a designer to think about objects in a different way. Instead of the object just being a model with a texture applied to it, the object, or part of the object, is made up of a material.

Currently there are these major materials: wood, concrete (or stone), metal, glass, dirt, water, and cloth (such as carpeting).

2.10.1 References

• Valve's developer Wiki on Materials system

2.11 Shader

Shaders are most commonly used to produce lighting and shadow in 3D modeling. This image illustrates Phong shading, one of the first computer shading models ever developed.

Shaders can also be used for special effects. An example of a digital photograph from a webcam unshaded on the left, and the same image with a special effects shader applied on the right, which replaces all light areas of the image with white and the dark areas with a brightly colored texture.

In the field of computer graphics, a shader is a computer program that is used to do shading: the production of appropriate levels of color within an image, or, in the modern era, also to produce special effects or do video post-processing. A definition in layman's terms might be given as "a program that tells a computer how to draw something in a specific and unique way".

Shaders calculate rendering effects on graphics hardware with a high degree of flexibility. Most shaders are coded for a graphics processing unit (GPU), though this is not a strict requirement. Shading languages are usually used to program the programmable GPU rendering pipeline, which has mostly superseded the fixed-function pipeline that allowed only common geometry transformation and pixel-shading functions; with shaders, customized effects can be used. The position, hue, saturation, brightness, and contrast of all pixels, vertices, or textures used to construct a final image can be altered on the fly, using algorithms defined in the shader, and can be modified by external variables or textures introduced by the program calling the shader.

Shaders are used widely in cinema postprocessing, computer-generated imagery, and video games to produce a seemingly infinite range of effects. Beyond just simple lighting models, more complex uses include altering the hue, saturation, brightness and/or contrast of an image, producing blur, light bloom, volumetric lighting, normal mapping for depth effects, bokeh, cel shading, posterization, bump mapping, distortion, chroma keying (so-called "bluescreen/greenscreen" effects), edge detection and motion detection, psychedelic effects, and a wide range of others.
(so-called "bluescreen/ greenscreen" effects), edge detec- • The depth test is performed, fragments that pass will tion and motion detection, psychedelic effects, and a wide get written to the screen and might get blended into range of others. the frame buffer.

2.11.1 History The graphic pipeline uses these steps in order to trans- form three-dimensional (and/or two-dimensional) data The modern use of “shader” was introduced to the public into useful two-dimensional data for displaying. In gen- by Pixar with their “RenderMan Interface Specification, eral, this is a large pixel matrix or "frame buffer". Version 3.0” originally published in May, 1988. As graphics processing units evolved, major graphics software libraries such as OpenGL and Direct3D be- gan to support shaders. The first shader-capable GPUs 2.11.3 Types only supported pixel shading, but vertex shaders were quickly introduced once developers realized the power of There are three types of shaders in common use, with shaders. Geometry shaders were recently introduced with one more recently added. While older graphics cards uti- Direct3D 10 and OpenGL 3.2. lize separate processing units for each shader type, newer cards feature unified shaders which are capable of exe- cuting any type of shader. This allows graphics cards to 2.11.2 Design make more efficient use of processing power.

Shaders are simple programs that describe the traits of ei- ther a vertex or a pixel. Vertex shaders describe the traits (position, texture coordinates, colors, etc.) of a vertex, 2D Shaders while pixel shaders describe the traits (color, z-depth and alpha value) of a pixel. A vertex shader is called for each 2D shaders act on digital images, also called textures vertex in a primitive (possibly after tessellation); thus one in computer graphics work. They modify attributes of vertex in, one (updated) vertex out. Each vertex is then pixels. Currently the only 2D shader types are pixel rendered as a series of pixels onto a surface (block of shaders. memory) that will eventually be sent to the screen. Shaders replace a section of video hardware typically called the Fixed Function Pipeline (FFP), so-called be- Pixel shaders Pixel shaders, also known as fragment cause it performs lighting and texture mapping in a hard- shaders, compute color and other attributes of each “frag- coded manner. Shaders provide a programmable alterna- ment” - a technical term usually meaning a single pixel. [1] tive to this hard-coded approach. The simplest kinds of pixel shaders output one screen The basic graphics pipeline is as follows: pixel as a color value; more complex shaders with mul- tiple inputs/outputs are also possible. Pixel shaders range • The CPU sends instructions (compiled shading lan- from always outputting the same color, to applying a light- guage programs) and geometry data to the graphics ing value, to doing bump mapping, shadows, specular processing unit, located on the graphics card. highlights, translucency and other phenomena. They can alter the depth of the fragment (for Z-buffering), • Within the vertex shader, the geometry is trans- or output more than one color if multiple render tar- formed. gets are active. In 3D graphics, a pixel shader alone cannot produce very complex effects, because it oper- • If a geometry shader is in the graphic processing unit ates only on a single fragment, without knowledge of and active, some changes of the geometries in the a scene’s geometry. However, pixel shaders do have scene are performed. knowledge of the screen coordinate being drawn, and • If a tessellation shader is in the graphic processing can sample the screen and nearby pixels if the con- unit and active, the geometries in the scene can be tents of the entire screen are passed as a texture to subdivided. the shader. This technique can enable a wide variety of two-dimensional postprocessing effects, such as blur, • The calculated geometry is triangulated (subdivided or edge detection/enhancement for cartoon/cel shaders. into triangles). Pixel shaders may also be applied in intermediate stages • Triangles are broken down into fragment quads (one to any two-dimensional images—sprites or textures—in fragment quad is a 2 × 2 fragment primitive). the pipeline, whereas vertex shaders always require a 3D scene. For instance, a pixel shader is the only kind of • Fragment quads are modified according to the frag- shader that can act as a postprocessor or filter for a video ment shader. stream after it has been rasterized. 2.11. SHADER 31

3D Shaders

3D shaders act on 3D models or other geometry but may also access the colors and textures used to draw the model or mesh. Vertex shaders are the oldest type of 3D shader, generally modifying on a per-vertex basis. Geometry shaders can generate new vertices from within the shader. Tessellation shaders are newer 3D shaders that act on batches of vertexes all at once to add detail - such as subdividing a model into smaller groups of triangles or other primitives at runtime, to improve things like curves and bumps, or change other attributes.

Vertex shaders

Vertex shaders are the most established and common kind of 3D shader and are run once for each vertex given to the graphics processor. The purpose is to transform each vertex's 3D position in virtual space to the 2D coordinate at which it appears on the screen (as well as a depth value for the Z-buffer). Vertex shaders can manipulate properties such as position, color and texture coordinate, but cannot create new vertices. The output of the vertex shader goes to the next stage in the pipeline, which is either a geometry shader if present, or the rasterizer. Vertex shaders can enable powerful control over the details of position, movement, lighting, and color in any scene involving 3D models.

Geometry shaders

Geometry shaders are a relatively new type of shader, introduced in Direct3D 10 and OpenGL 3.2; formerly available in OpenGL 2.0+ with the use of extensions.[2] This type of shader can generate new graphics primitives, such as points, lines, and triangles, from those primitives that were sent to the beginning of the graphics pipeline.[3]

Geometry shader programs are executed after vertex shaders. They take as input a whole primitive, possibly with adjacency information. For example, when operating on triangles, the three vertices are the geometry shader's input. The shader can then emit zero or more primitives, which are rasterized and their fragments ultimately passed to a pixel shader.

Typical uses of a geometry shader include point sprite generation, geometry tessellation, shadow volume extrusion, and single pass rendering to a cube map. A typical real world example of the benefits of geometry shaders would be automatic mesh complexity modification. A series of line strips representing control points for a curve are passed to the geometry shader and depending on the complexity required the shader can automatically generate extra lines each of which provides a better approximation of a curve.

Tessellation shaders

As of OpenGL 4.0 and Direct3D 11, a new shader class called a Tessellation Shader has been added. It adds two new shader stages to the traditional model, Tessellation Control Shaders (also known as Hull Shaders) and Tessellation Evaluation Shaders (also known as Domain Shaders), which together allow for simpler meshes to be subdivided into finer meshes at run-time according to a mathematical function. The function can be related to a variety of variables, most notably the distance from the viewing camera to allow active level-of-detail scaling. This allows objects close to the camera to have fine detail, while further away ones can have more coarse meshes, yet seem comparable in quality. It also can drastically reduce mesh bandwidth by allowing meshes to be refined once inside the shader units instead of downsampling very complex ones from memory. Some algorithms can upsample any arbitrary mesh, while others allow for "hinting" in meshes to dictate the most characteristic vertices and edges.

2.11.4 Parallel processing

Shaders are written to apply transformations to a large set of elements at a time, for example, to each pixel in an area of the screen, or for every vertex of a model. This is well suited to parallel processing, and most modern GPUs have multiple shader pipelines to facilitate this, vastly improving computation throughput.

2.11.5 Programming

The language in which shaders are programmed depends on the target environment. The official OpenGL and OpenGL ES shading language is OpenGL Shading Language, also known as GLSL, and the official Direct3D shading language is High Level Shader Language, also known as HLSL. However, Cg is a third-party shading language developed by Nvidia that outputs both OpenGL and Direct3D shaders. Apple released its own shading language called Metal Shading Language as part of the Metal framework.

2.11.6 See also

• GLSL
• GPGPU
• HLSL
• List of common shading algorithms
• Shading language

2.11.7 References

[1] http://www.directx.com/shader/index.htm
[2] Geometry Shader - OpenGL. Retrieved on 2011-12-21.
[3] msdn: Pipeline Stages (Direct3D 10)

2.11.8 Further reading

• Upstill, Steve. The RenderMan Companion: A Pro- grammer’s Guide to Realistic Computer Graphics. Addison-Wesley. ISBN 0-201-50868-0.

• Ebert, David S; Musgrave, F. Kenton; Peachey, Darwyn; Perlin, Ken; Worley, Steven. Texturing and modeling: a procedural approach. AP Professional. ISBN 0-12-228730-4.

• Fernando, Randima; Kilgard, Mark. The Cg Tutorial: The Definitive Guide to Programmable Real-Time Graphics. Addison-Wesley Professional. ISBN 0-321-19496-9.

• Rost, Randi J. OpenGL Shading Language. Addison-Wesley Professional. ISBN 0-321-19789- 5.

2.11.9 External links

• OpenGL geometry shader extension

• Riemer’s DirectX & HLSL Tutorial: HLSL Tutorial using DirectX with lots of sample code

• Pipeline Stages (Direct3D 10)

2.12 Shading

For other uses, see Shade (disambiguation).

Shading refers to depicting depth perception in 3D models or illustrations by varying levels of darkness.

Example of flat shading vs. Phong shading interpolation. Phong shading is a more realistic shading technique, developed by Bui Tuong Phong in 1973.

2.12.1 Drawing

Example of shading.

Shading is used in drawing for depicting levels of darkness on paper by applying media more densely or with a darker shade for darker areas, and less densely or with a lighter shade for lighter areas. There are various techniques of shading including cross hatching where perpendicular lines of varying closeness are drawn in a grid pattern to shade an area. The closer the lines are together, the darker the area appears. Likewise, the farther apart the lines are, the lighter the area appears. Light patterns, such as objects having light and shaded areas, help when creating the illusion of depth on paper.[1][2]

Powder shading is a sketching shading method. In this style, the stumping powder and paper stumps are used to draw a picture. This can be in color. The stumping powder is smooth and doesn't have any shiny particles. The poster created with powder shading looks more beautiful than the original. The paper to be used should have small grains on it so that the powder remains on the paper.

2.12.2 Computer graphics

Gouraud shading, developed by Henri Gouraud in 1971, was one of the first shading techniques developed in computer graphics.

In computer graphics, shading refers to the process of altering the color of an object/surface/polygon in the 3D scene, based on its angle to lights and its distance from lights to create a photorealistic effect. Shading is performed during the rendering process by a program called a shader.

Angle to light source

Shading alters the colors of faces in a 3D model based on the angle of the surface to a light source or light sources.

The first image below has the faces of the box rendered, but all in the same color. Edge lines have been rendered here as well which makes the image easier to see.

The second image is the same model rendered without edge lines. It is difficult to tell where one face of the box ends and the next begins.

The third image has shading enabled, which makes the image more realistic and makes it easier to see which face is which.

Lighting

Shading is also dependent on the lighting used. Usually, upon rendering a scene a number of different lighting techniques will be used to make the rendering look more realistic. Different types of light sources are used to give different effects.

Ambient lighting

An ambient light source represents a fixed-intensity and fixed-color light source that affects all objects in the scene equally. Upon rendering, all objects in the scene are brightened with the specified intensity and color. This type of light source is mainly used to provide the scene with a basic view of the different objects in it. This is the simplest type of lighting to implement and models how light can be scattered or reflected many times producing a uniform effect.

Ambient lighting can be combined with ambient occlusion to represent how exposed each point of the scene is, affecting the amount of ambient light it can reflect. This produces diffuse, non-directional lighting throughout the scene, casting no clear shadows, but with enclosed and sheltered areas darkened. The result is usually visually similar to an overcast day.

Directional lighting

A directional light source illuminates all objects equally from a given direction, like an area light of infinite size and infinite distance from the scene; there is shading, but cannot be any distance falloff.

Point lighting

Light originates from a single point, and spreads outward in all directions.

Spotlight lighting

Models a Spotlight. Light originates from a single point, and spreads outward in a cone.

Area lighting

Light originates from a small area on a single plane. A more accurate model than a point light source.

Volumetric lighting

Light originating from a small volume, an enclosed space lighting objects within that space.

Shading is interpolated based on how the angle of these light sources reach the objects within a scene. Of course, these light sources can be and often are combined in a scene. The renderer then interpolates how these lights must be combined, and produces a 2d image to be displayed on the screen accordingly.
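The fragment below is an illustration added to this compendium rather than text from the article: it contrasts the diffuse term of a directional light (one fixed direction for every fragment) with that of a point light (direction recomputed per fragment). All uniform and varying names are assumptions.

#version 120
uniform vec3 uLightDir;      // assumed: normalized direction of a directional light
uniform vec3 uLightPos;      // assumed: position of a point light
varying vec3 vNormal;        // assumed: interpolated surface normal
varying vec3 vWorldPos;      // assumed: fragment position in world space

void main(void)
{
    vec3 n = normalize(vNormal);
    // Directional light: the same direction everywhere in the scene.
    float directional = max(dot(n, -uLightDir), 0.0);
    // Point light: the direction depends on where the fragment is.
    vec3 toLight = normalize(uLightPos - vWorldPos);
    float point = max(dot(n, toLight), 0.0);
    gl_FragColor = vec4(vec3(0.5 * (directional + point)), 1.0);
}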

Distance falloff

Shading effects from floodlight.

Theoretically, two surfaces which are parallel are illuminated the same amount from a distant light source, such as the sun. Even though one surface is further away, your eye sees more of it in the same space, so the illumination appears the same.

Notice in the first image that the color on the front faces of the two boxes is exactly the same. It appears that there is a slight difference where the two faces meet, but this is an optical illusion because of the vertical edge below where the two faces meet.

Notice in the second image that the surfaces on the boxes are bright on the front box and darker on the back box.

Also the floor goes from light to dark as it gets farther away.

This distance falloff effect produces images which appear more realistic without having to add additional lights to achieve the same effect.

Distance falloff can be calculated in a number of ways:

• None - The light intensity received is the same regardless of the distance between the point and the light source.
• Linear - For a given point at a distance x from the light source, the light intensity received is proportional to 1/x.
• Quadratic - This is how light intensity decreases in reality if the light has a free path (i.e. no fog or any other thing in the air that can absorb or scatter the light). For a given point at a distance x from the light source, the light intensity received is proportional to 1/x².
• Factor of n - For a given point at a distance x from the light source, the light intensity received is proportional to 1/xⁿ.
• Any number of other mathematical functions may also be used.
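The falloff modes above translate directly into a per-fragment attenuation factor. The sketch below is illustrative only; the mode selector and the variable names are assumptions, not part of the article.

#version 120
uniform int uFalloffMode;    // assumed: 0 = none, 1 = linear, 2 = quadratic
uniform vec3 uLightPos;      // assumed: position of the light source
varying vec3 vWorldPos;      // assumed: fragment position in world space

void main(void)
{
    float x = distance(uLightPos, vWorldPos);            // distance from the light source
    float attenuation = 1.0;                              // "None": same intensity at any distance
    if (uFalloffMode == 1) attenuation = 1.0 / x;         // "Linear": proportional to 1/x
    if (uFalloffMode == 2) attenuation = 1.0 / (x * x);   // "Quadratic": proportional to 1/x^2
    gl_FragColor = vec4(vec3(attenuation), 1.0);
}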

2.12.3 Flat shading

Flat shading is a lighting technique used in 3D computer graphics to shade each polygon of an object based on the angle between the polygon's surface normal and the direction of the light source, their respective colors and the intensity of the light source. It is usually used for high speed rendering where more advanced shading techniques are too computationally expensive. As a result of flat shading all of the polygon's vertices are colored with one color, allowing differentiation between adjacent polygons. Specular highlights are rendered poorly with flat shading: if there happens to be a large specular component at the representative vertex, that brightness is drawn uniformly over the entire face. If a specular highlight doesn't fall on the representative point, it is missed entirely. Consequently, the specular reflection component is usually not included in flat shading computation.

2.12.4 Smooth shading

In contrast to flat shading, with smooth shading the color changes from pixel to pixel. It assumes that the surfaces are curved and uses interpolation techniques to calculate the values of pixels between the vertices of the polygons.

Types of smooth shading include:

• Gouraud shading [3]
• Phong shading [4]

Gouraud shading

1. Determine the normal at each polygon vertex
2. Apply an illumination model to each vertex to calculate the vertex intensity
3. Interpolate the vertex intensities using bilinear interpolation over the surface polygon

Data structures

• Sometimes vertex normals can be computed directly (e.g. height field with uniform mesh)
• More generally, need data structure for mesh
• Key: which polygons meet at each vertex

Advantages

• Polygons, more complex than triangles, can also have different colors specified for each vertex. In these instances, the underlying logic for shading can become more intricate.

Problems

• Even the smoothness introduced by Gouraud shading may not prevent the appearance of the shading differences between adjacent polygons.
• Gouraud shading is more CPU intensive and can become a problem when rendering real time environments with many polygons.
• T-Junctions with adjoining polygons can sometimes result in visual anomalies. In general, T-Junctions should be avoided.

Phong shading

Phong shading is similar to Gouraud shading except that the normals are interpolated. Thus, the specular highlights are computed much more precisely than in the Gouraud shading model:

1. Compute a normal N for each vertex of the polygon.
2. From bilinear interpolation compute a normal, Ni, for each pixel. (This must be renormalized each time.)
3. From Ni compute an intensity Ii for each pixel of the polygon.
4. Paint pixel to shade corresponding to Ii.
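The four steps above map naturally onto a vertex/fragment shader pair: the normal is set up per vertex, interpolated, and re-evaluated per pixel. The sketch below is an illustration added here under assumed names (uModelViewProjection, uNormalMatrix, uLightDir and so on); it only computes a diffuse intensity, standing in for whichever reflection model is used.

// --- vertex shader: step 1, one normal per vertex ---
#version 120
uniform mat4 uModelViewProjection;   // assumed name
uniform mat3 uNormalMatrix;          // assumed name
attribute vec3 aPosition;
attribute vec3 aNormal;
varying vec3 vNormal;                // interpolated across the polygon by the rasterizer

void main(void)
{
    vNormal = uNormalMatrix * aNormal;
    gl_Position = uModelViewProjection * vec4(aPosition, 1.0);
}

// --- fragment shader: steps 2-4, per-pixel normal and intensity ---
#version 120
uniform vec3 uLightDir;              // assumed: unit vector from surface towards the light
varying vec3 vNormal;

void main(void)
{
    vec3 n = normalize(vNormal);                      // step 2: renormalize each pixel
    float intensity = max(dot(n, uLightDir), 0.0);    // step 3: intensity Ii for this pixel
    gl_FragColor = vec4(vec3(intensity), 1.0);        // step 4: paint the pixel
}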

Other Approaches

Both Gouraud shading and Phong shading can be implemented using bilinear interpolation. Bishop and Weimer[5] proposed to use a Taylor series expansion of the resulting expression from applying an illumination model and bilinear interpolation of the normals. Hence, second degree polynomial interpolation was used. This type of biquadratic interpolation was further elaborated by Barrera et al.,[6] where one second order polynomial was used to interpolate the diffuse light of the Phong reflection model and another second order polynomial was used for the specular light.

Spherical Linear Interpolation (Slerp) was used by Kuijk and Blake[7] for computing both the normal over the polygon as well as the vector in the direction to the light source. A similar approach was proposed by Hast,[8] which uses quaternion interpolation of the normals with the advantage that the normal will always have unit length and the computationally heavy normalization is avoided.

2.12.5 Flat vs. smooth shading

2.12.6 See also

• 3D computer graphics
• Shader
• List of common shading algorithms
• Zebra striping (computer graphics)

2.12.7 References

[1] "Drawing Techniques". Drawing With Confidence. Retrieved 19 September 2012.
[2] "Shading Tutorial, How to Shade in Drawing". Dueysdrawings.com. 2007-06-21. Retrieved 2012-02-11.
[3] Gouraud, Henri (1971). "Continuous shading of curved surfaces". IEEE Transactions on Computers C–20 (6): 623–629. doi:10.1109/T-C.1971.223313.
[4] B. T. Phong, Illumination for computer generated pictures, Communications of ACM 18 (1975), no. 6, 311–317.
[5] Gary Bishop and David M. Weimer. 1986. Fast Phong shading. SIGGRAPH Comput. Graph. 20, 4 (August 1986), 103–106.
[6] T. Barrera, A. Hast, E. Bengtsson. Fast Near Phong-Quality Software Shading. WSCG'06, pp. 109–116. 2006.
[7] Kuijk, A. A. M. and E. H. Blake, Faster Phong shading via angular interpolation. Computer Graphics Forum 8(4):315–324. 1989.
[8] A. Hast. Shading by Quaternion Interpolation. WSCG'05. pp. 53–56. 2005.

2.13 Shading language

A shading language is a graphics programming language adapted to programming shader effects (characterizing surfaces, volumes, and objects). Such language forms usually consist of special data types, like "color" and "normal". Due to the variety of target markets for 3D computer graphics, different shading languages have been developed.

2.13.1 Offline rendering

Shading languages used in offline rendering produce maximum image quality. Material properties are totally abstracted, little programming skill and no hardware knowledge is required. These kinds of shaders are often developed by artists to get the right "look", just as texture mapping, lighting and other facets of their work.

Processing such shaders is time-consuming. The computational power required can be expensive because of their ability to produce photorealistic results. Most of the time, production rendering is run on large computer clusters.

RenderMan Shading Language

The RenderMan Shading Language (often referenced as RSL or SL, for short), which is defined in the RenderMan Interface Specification,[1] is the most common shading language for production-quality rendering. It is also one of the first shading languages ever implemented.

The language defines six major shader types:

• Light source shaders compute the color of the light emitted from a point on the light source towards a point on the target surface.
• Surface shaders model the optical properties of an illuminated object. They output the final color and position of the point by considering the incoming light and the object's physical properties.
• Displacement shaders manipulate surface geometry independent of color.
• Deformation shaders transform the entire space of a geometry. Only one RenderMan implementation, the AIR renderer, implemented this shader type, supporting only a single linear transformation applied to the space (this was more like a Transformation shader, if such a type existed).
• Volume shaders manipulate the color of a light as it passes through a volume. They create effects such as fog.
• Imager shaders describe a color transformation to final pixel values. This is much like an image filter, however the imager shader operates on data prior to quantization. Such data has a greater dynamic range and color resolution than can be displayed on a typical output device.

Historically, only few such languages were successful in both establishing themselves and maintaining strong market position; a short description of those languages follows below.

Houdini VEX Shading Language ARB assembly language

Houdini VEX (Vector Expressions) shading language The OpenGL Architecture Review Board established the (often abbreviated to “VEX”) is closely modeled after ARB assembly language in 2002 as a standard low-level RenderMan. However, its integration into a complete instruction set for programmable graphics processors. 3D package means that the shader writer can access the High-level OpenGL shading languages often compile to information inside the shader, a feature that is not usu- ARB assembly for loading and execution. Unlike high- ally available in a rendering context. The language dif- level shading languages, ARB assembly does not support ferences between RSL and VEX are mainly syntactic, in control flow or branching. However, it continues to be addition to differences regarding the names of several used when cross-GPU portability is required. shadeop names.[2]

OpenGL shading language Shading Language Also known as GLSL or glslang, this standardized[4] Gelato's[3] shading language, like Houdini’s VEX, is shading language is meant to be used with OpenGL. closely modeled after RenderMan. The differences be- The language unifies vertex and fragment processing in tween Gelato Shading Language and RSL are mainly syn- a single instruction set, allowing conditional loops and tactical — Gelato uses semicolons instead of commas (more generally) branches. Historically, GLSL was pre- to separate arguments in function definitions and a few ceded by the ARB assembly language. shadeops have different names and parameters.

Cg programming language Open Shading Language The programming language Cg, developed by Open Shading Language (OSL) was developed by Sony NVIDIA,[5] was designed for easy and efficient Pictures Imageworks for use in its Renderer. It is production pipeline integration. The language features also used by Blender's Cycles render engine. OSL’s sur- API independence and comes with a large variety of free face and volume shaders define how surfaces or volumes tools to improve asset management. Development of Cg scatter light in a way that allows for importance sampling; was stopped in 2012 and the language is now deprecated. thus, it is well suited for physically-based renderers that support ray tracing and global illumination. DirectX High-Level Shader Language

2.13.2 Real-time rendering The High-Level Shading Language (also called HLSL for short) is a C-style shader language for DirectX 9, 10, 11, Shading languages for real-time rendering are now Xbox, Xbox 360 and . It is similar to Nvidia’s widespread. They provide both higher hardware abstrac- Cg but is only supported by DirectX and Xbox game con- tion and a more flexible programming model than pre- soles. vious paradigms which hardcoded transformation and shading equations. This gives the programmer greater control over the rendering process and delivers richer Adobe Pixel Bender and Adobe Graphics Assembly content at lower overhead. Language

Quite surprisingly, shaders that are designed to be ex- Adobe Systems added Pixel Bender as part of the Adobe ecuted directly on the GPU at the proper point in the Flash 10 API. Pixel Bender could only process pixel pipeline for maximum performance, also scored suc- but not 3D-vertex data. Flash 11 introduced an entirely cesses in general processing because of their stream pro- new 3D API called , which uses its own shad- gramming model. ing language called Adobe Graphics Assembly Language This kind of shading language is usually bound to a graph- (AGAL), which offers full 3D acceleration support.[6][7] ics API, although some applications provide shading sub- GPU acceleration for Pixel Bender was removed in Flash languages. 11.8.[8][9] 2.14. OPENGL SHADING LANGUAGE 37

AGAL is a low-level but platform-independent shading 2. ^ For fragment shading nvparse is possibly language, which can be compiled, for example, to the the first shading language featuring high-level ARB assembly language or GLSL. abstraction based on NV_register_combiners, NV_register_combiners2 for pixel math and NV_texture_shader, NV_texture_shader2 and PlayStation Shader Language NV_texture_shader3 for texture lookups. ATI_fragment_shader did not even provide a Sony announced PSSL (PlayStation Shader Language) as “string oriented” parsing facility (although it has a platform-specific shading language similar to HLSL for been later added by ATI_text_fragment_shader). the PlayStation 4. ARB_fragment_program, has been very successful. NV_fragment_program and iOS Metal Shading Language NV_fragment_program2 are actually similar although the latter provides much more advanced For iOS 8, Apple announced a new low-level graphics functionality in respect to others. API called Metal. Metal introduces its own shading lan- 3. ^ Fx composer from NVIDIA home page, guage called Metal Shading Language, which is concep- http://developer.nvidia.com/object/fx_composer_ tually based on C++ 11 and implemented using clang and home.html LLVM.[10] 4. Rudy Cortes and Saty Raghavachary: The Render- Man Shading Language Guide, Course Technology 2.13.3 References PTR, 1 edition (December 27, 2007), ISBN 1- 59863-286-8 [1] Staff (1986–2012). “The RISpec”. Pixar. Pixar. Re- trieved 9 June 2012.

[2] Staff. "Houdini". Side FX. Side Effects Software Inc. Archived from the original on 22 July 2008. Retrieved 9 June 2012.
[3] NVIDIA Corporation (2003–2008). "Home". NVIDIA Gelato Zone. NVIDIA Corporation. Archived from the original on 22 August 2008. Retrieved 9 June 2012.
[4] Staff (1997–2012). "OpenGL Shading Language". OpenGL. The Khronos Group. Retrieved 9 June 2012.
[5] Staff (2012). "Cg Toolkit". NVIDIA Developer Zone. NVIDIA Corporation. Retrieved 9 June 2012.
[6] Joseph Labrecque (2011). What's New in Adobe AIR 3. O'Reilly Media, Inc. pp. 17–26. ISBN 978-1-4493-1108-7.
[7] Remi Arnaud (2011). "3D in a Web Browser". In Eric Lengyel. Game Engine Gems 2. CRC Press. pp. 207–212. ISBN 978-1-56881-437-7.
[8] "Stage3D". scratch.mit.edu. Retrieved 2014-08-05.
[9] Adobe Flash Player 11.8 - Bug 3591185: Pixel Bender shader performance drastically degraded in FP11.8. Closed as "NeverFix".
[10] Metal Shading Language Guide.

2.13.4 Notes

1. ^ Previous vertex shading languages (in no particular order) for OpenGL include EXT_vertex_shader, NV_vertex_program, the aforementioned ARB_vertex_program, NV_vertex_program2 and NV_vertex_program3.

2.14 OpenGL Shading Language

Video games outsource rendering calculations to the GPU over OpenGL in real-time. Shaders are written in OpenGL Shading Language and compiled. The compiled programs are executed on the GPU.

OpenGL Shading Language (abbreviated: GLSL or GLslang), is a high-level shading language based on the syntax of the C programming language. It was created by the OpenGL ARB (OpenGL Architecture Review Board) to give developers more direct control of the graphics pipeline without having to use ARB assembly language or hardware-specific languages.

2.14.1 Background

With advances in graphics cards, new features have been added to allow for increased flexibility in the rendering pipeline at the vertex and fragment level. Programmability at this level is achieved with the use of fragment and vertex shaders.

Originally, this functionality was achieved by writing shaders in ARB assembly language – a complex and unintuitive task. The OpenGL ARB created the OpenGL Shading Language to provide a more intuitive method for programming the graphics processing unit while maintaining the open standards advantage that has driven OpenGL throughout its history.

Originally introduced as an extension to OpenGL 1.4, GLSL was formally included into the OpenGL 2.0 core by the OpenGL ARB. It was the first major revision to OpenGL since the creation of OpenGL 1.0 in 1992.

Some benefits of using GLSL are:

• Cross-platform compatibility on multiple operating systems, including GNU/Linux, Mac OS X and Windows.
• The ability to write shaders that can be used on any hardware vendor's graphics card that supports the OpenGL Shading Language.
• Each hardware vendor includes the GLSL compiler in their driver, thus allowing each vendor to create code optimized for their particular graphics card's architecture.

2.14.2 Versions

GLSL versions have evolved alongside specific versions of the OpenGL API. It is only with OpenGL versions 3.3 and above that the GLSL and OpenGL major and minor version numbers match. These versions for GLSL and OpenGL are related in the following table:

2.14.3 Operators

The OpenGL Shading Language provides many operators familiar to those with a background in using the C programming language. This gives shader developers flexibility when writing shaders. GLSL contains the operators in C and C++, with the exception of pointers. Bitwise operators were added in version 1.30.

2.14.4 Functions and control structures

Similar to the C programming language, GLSL supports loops and branching, including: if-else, for, do-while, break, continue, etc. Recursion is forbidden, however. User-defined functions are supported, and a wide variety of commonly used functions are provided built-in as well. This allows the graphics card manufacturer the ability to optimize these built-in functions at the hardware level if they are inclined to do so. Many of these functions are similar to those found in the math library of the C programming language, such as exp() and abs(), while others are specific to graphics programming, such as smoothstep() and texture().

2.14.5 Compilation and execution

GLSL shaders are not stand-alone applications; they require an application that utilizes the OpenGL API, which is available on many different platforms (e.g., GNU/Linux, Mac OS X, Windows). There are language bindings for C, C++, C#, Delphi, Java and many more.

GLSL shaders themselves are simply a set of strings that are passed to the hardware vendor's driver for compilation from within an application using the OpenGL API's entry points. Shaders can be created on the fly from within an application, or read-in as text files, but must be sent to the driver in the form of a string.

The set of APIs used to compile, link, and pass parameters to GLSL programs are specified in three OpenGL extensions, and became part of core OpenGL as of OpenGL Version 2.0. The API was expanded with geometry shaders in OpenGL 3.2, tessellation shaders in OpenGL 4.0 and compute shaders in OpenGL 4.3. These OpenGL APIs are found in the extensions:

• ARB vertex shader
• ARB fragment shader
• ARB shader objects
• ARB geometry shader 4
• ARB tessellation shader
• ARB compute shader

2.14.6 Examples

A sample trivial GLSL vertex shader

This transforms the input vertex the same way the fixed-function pipeline would.

void main(void)
{
    gl_Position = ftransform();
}

Note that ftransform() is no longer available since GLSL 1.40 and GLSL ES 1.0. Instead, the programmer has to manage the projection and modelview matrices explicitly in order to comply with the new OpenGL 3.1 standard.

#version 140
uniform Transformation {
    mat4 projection_matrix;
    mat4 modelview_matrix;
};
in vec3 vertex;
void main(void)
{
    gl_Position = projection_matrix * modelview_matrix * vec4(vertex, 1.0);
}

A sample trivial GLSL tessellation shader

This is a simple pass-through Tessellation Control Shader for the position.

#version 400
layout(vertices = 3) out;
void main(void)
{
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
    gl_TessLevelOuter[0] = 1.0;
    gl_TessLevelOuter[1] = 1.0;
    gl_TessLevelOuter[2] = 1.0;
    gl_TessLevelInner[0] = 1.0;
    gl_TessLevelInner[1] = 1.0;
}

This is a simple pass-through Tessellation Evaluation Shader for the position.

#version 400
layout(triangles, equal_spacing) in;
void main(void)
{
    vec4 p0 = gl_in[0].gl_Position;
    vec4 p1 = gl_in[1].gl_Position;
    vec4 p2 = gl_in[2].gl_Position;
    vec3 p = gl_TessCoord.xyz;
    gl_Position = p0*p.x + p1*p.y + p2*p.z;
}

A sample trivial GLSL geometry shader

This is a simple pass-through shader for the color and position.

#version 120
#extension GL_EXT_geometry_shader4 : enable
void main(void)
{
    for (int i = 0; i < gl_VerticesIn; ++i) {
        gl_FrontColor = gl_FrontColorIn[i];
        gl_Position = gl_PositionIn[i];
        EmitVertex();
    }
}

Since OpenGL 3.2 with GLSL 1.50 geometry shaders were adopted into core functionality which means there is no need to use extensions. However, the syntax is a bit different. This is a simple version 1.50 pass-through shader for vertex positions (of triangle primitives):

#version 150
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
void main(void)
{
    for (int i = 0; i < gl_in.length(); ++i) {
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}

A sample trivial GLSL fragment shader

This produces a red fragment.

#version 120
void main(void)
{
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}

In GLSL 1.30 and later you can do

glBindFragDataLocation(Program, 0, "MyFragColor");

where:

• Program - your shader program's handle;
• 0 - color buffer number, associated with the variable; if you are not using multiple render targets, you must write zero;
• "MyFragColor" - name of the output variable in the shader program, which is associated with the given buffer.

#version 150
out vec4 MyFragColor;
void main(void)
{
    MyFragColor = vec4(1.0, 0.0, 0.0, 1.0);
}

2.14.7 See also

• 3D computer graphics
• Khronos Group
• WebGL, an OpenGL-ES dialect for web browsers, which uses GLSL for shaders

Other shading languages

• ARB assembly language, a low-level shading language
• Cg, a high-level shading language for programming vertex and pixel shaders
• HLSL, a high-level shading language for use with Direct3D
• TGSI, a low-level intermediate language introduced by Gallium3D
• AMDIL, a low-level intermediate language used internally at AMD

2.14.8 References

Notes

[1] "GLSL Language Specification, Version 1.10.59".
[2] "GLSL Language Specification, Version 1.20.8".
[3] "GLSL Language Specification, Version 1.30.10".
[4] "GLSL Language Specification, Version 1.40.08".
[5] "GLSL Language Specification, Version 1.50.11".
[6] "GLSL Language Specification, Version 3.30.6".
[7] "GLSL Language Specification, Version 4.00.9".
[8] "GLSL Language Specification, Version 4.10.6".
[9] "GLSL Language Specification, Version 4.20.11".
[10] "GLSL Language Specification, Version 4.30.8".
[11] "GLSL Language Specification, Version 4.40".
[12] "GLSL Language Specification, Version 4.50".

2.14.9 Books 2.15.1 History

• Rost, Randi J.: OpenGL Shading Language (3rd Edi- Phong shading and the Phong reflection model were de- tion), Addison-Wesley, July 30, 2009, ISBN 978-0- veloped at the University of Utah by Bui Tuong Phong, 321-63763-5 who published them in his 1973 Ph.D. dissertation.[3][4] Phong’s methods were considered radical at the time of • Kessenich, John, & Baldwin, David, & Rost, Randi. their introduction, but have since become the de facto The OpenGL Shading Language. Version 1.10.59. baseline shading method for many rendering applications. , Inc. Ltd. Phong’s methods have proven popular due to their gener- ally efficient use of computation time per rendered pixel.

2.14.10 External links 2.15.2 Phong interpolation • OpenGL Fragment Shader Specification

• OpenGL Vertex Shader Specification

• OpenGL Shader Objects Specification

• The official OpenGL website

IDE Phong shading interpolation example • RenderMonkey Phong shading improves upon Gouraud shading and pro- vides a better approximation of the shading of a smooth • OpenGL Shader Designer surface. Phong shading assumes a smoothly varying sur- face normal vector. The Phong interpolation method • Mac OpenGL Shader Builder User Guide works better than Gouraud shading when applied to a re- flection model that has small specular highlights such as • Shader Maker (GPL) the Phong reflection model. • Polydraw The most serious problem with Gouraud shading occurs when specular highlights are found in the middle of a • QuickShader large polygon. Since these specular highlights are absent from the polygon’s vertices and Gouraud shading inter- polates based on the vertex colors, the Debuggers will be missing from the polygon’s interior. This problem is fixed by Phong shading. • GLSL Debugger Unlike Gouraud shading, which interpolates colors across polygons, in Phong shading a normal vector is linearly interpolated across the surface of the polygon from the polygon’s vertex normals. The surface normal is interpo- 2.15 Phong shading lated and normalized at each pixel and then used in a re- flection model, e.g. the Phong reflection model, to obtain This article is about Phong’s normal-vector interpolation the final pixel color. Phong shading is more computation- technique for surface shading. For Phong’s illumination ally expensive than Gouraud shading since the reflection model, see Phong reflection model. model must be computed at each pixel instead of at each vertex. Phong shading refers to an interpolation technique for In modern graphics hardware, variants of this algorithm surface shading in 3D computer graphics. It is also are implemented using pixel or fragment shaders. called Phong interpolation[1] or normal-vector interpola- tion shading.[2] Specifically, it interpolates surface nor- mals across rasterized polygons and computes pixel colors 2.15.3 Phong reflection model based on the interpolated normals and a reflection model. Phong shading may also refer to the specific combination Main article: Phong reflection model of Phong interpolation and the Phong reflection model. 2.16. GOURAUD SHADING 41

Phong shading may also refer to the specific combination [4] University of Utah School of Computing, http://www.cs. of Phong interpolation and the Phong reflection model, utah.edu/school/history/#phong-ref which is an empirical model of local illumination. It describes the way a surface reflects light as a combina- tion of the diffuse reflection of rough surfaces with the 2.16 Gouraud shading specular reflection of shiny surfaces. It is based on Bui Tuong Phong's informal observation that shiny surfaces have small intense specular highlights, while dull surfaces have large highlights that fall off more gradually. The re- flection model also includes an ambient term to account for the small amount of light that is scattered about the entire scene.
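The three terms described above (ambient, diffuse and specular) combine into a single per-fragment expression. The GLSL sketch below is an illustration added to this compendium under assumed names (uLightDir, uViewDir, the material colors and uShininess); it is not the article's own formulation of the model.

#version 120
uniform vec3 uLightDir;      // assumed: unit vector from the surface towards the light
uniform vec3 uViewDir;       // assumed: unit vector from the surface towards the viewer
uniform vec3 uAmbient;       // ambient color term
uniform vec3 uDiffuse;       // diffuse color term
uniform vec3 uSpecular;      // specular color term
uniform float uShininess;    // specular exponent: higher means a smaller, sharper highlight
varying vec3 vNormal;        // assumed: interpolated surface normal

void main(void)
{
    vec3 n = normalize(vNormal);
    float nl = max(dot(n, uLightDir), 0.0);
    vec3 r = reflect(-uLightDir, n);                 // mirror direction of the incoming light
    float rv = max(dot(r, uViewDir), 0.0);
    vec3 color = uAmbient                            // light scattered about the whole scene
               + uDiffuse * nl                       // rough-surface (diffuse) reflection
               + uSpecular * pow(rv, uShininess);    // shiny-surface (specular) highlight
    gl_FragColor = vec4(color, 1.0);
}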

Visual illustration of the Phong equation: here the light is white, Gouraud-shaded triangle mesh using the Phong reflection model the ambient and diffuse colors are both blue, and the specular color is white, reflecting a small part of the light hitting the sur- Gouraud shading, named after Henri Gouraud, is an face, but only in very narrow highlights. The intensity of the interpolation method used in computer graphics to pro- diffuse component varies with the direction of the surface, and duce continuous shading of surfaces represented by poly- the ambient component is uniform (independent of direction). gon meshes. In practice, Gouraud shading is most often used to achieve continuous lighting on triangle surfaces by computing the lighting at the corners of each triangle 2.15.4 See also and linearly interpolating the resulting colours for each pixel covered by the triangle. Gouraud first published the • List of common shading algorithms technique in 1971.[1][2][3] • Blinn–Phong shading model – Phong reflection model modified to trade precision with computing 2.16.1 Description efficiency Gouraud shading works as follows: An estimate to the • Flat shading – shading of polygons with a single surface normal of each vertex in a polygonal 3D model color is either specified for each vertex or found by averaging • Gouraud shading – shading of polygons by interpo- the surface normals of the polygons that meet at each ver- lating colors that are computed at vertices tex. Using these estimates, lighting computations based on a reflection model, e.g. the Phong reflection model, are • Phong reflection model – reflection model often used then performed to produce colour intensities at the ver- with Phong shading tices. For each screen pixel that is covered by the polygo- • Specular highlight – other specular lighting equa- nal mesh, colour intensities can then be interpolated from tions the colour values calculated at the vertices.

2.15.5 References 2.16.2 Comparison with other shading techniques [1] Watt, Alan H.; Watt, Mark (1992). Advanced Anima- tion and Rendering Techniques: Theory and Practice. Gouraud shading is considered superior to flat shading Addison-Wesley Professional. pp. 21–26. ISBN 978-0- and requires significantly less processing than Phong 201-54412-1. shading, but usually results in a faceted look. [2] Foley, James D.; van Dam, Andries; Feiner, Steven K.; In comparison to Phong shading, Gouraud shading’s Hughes, John F. (1996). Computer Graphics: Principles strength and weakness lies in its interpolation. If a mesh and Practice. (2nd ed. in C). Addison-Wesley Publishing covers more pixels in screen space than it has vertices, in- Company. pp. 738–739. ISBN 0-201-84840-6. terpolating colour values from samples of expensive light- [3] B. T. Phong, Illumination for computer generated pic- ing calculations at vertices is less processor intensive than tures, Communications of ACM 18 (1975), no. 6, 311– performing the lighting calculation for each pixel as in 317. Phong shading. However, highly localized lighting effects 42 CHAPTER 2. MATERIALS

[3] Gouraud, Henri (1998). “Continuous shading of curved surfaces”. In Rosalee Wolfe (ed.). Seminal Graphics: Pi- oneering efforts that shaped the field. ACM Press. ISBN 1-58113-052-X.

Comparison of flat shading and Gouraud shading.

(such as specular highlights, e.g. the glint of reflected light on the surface of an apple) will not be rendered cor- rectly, and if a highlight lies in the middle of a polygon, but does not spread to the polygon’s vertex, it will not be apparent in a Gouraud rendering; conversely, if a high- light occurs at the vertex of a polygon, it will be rendered correctly at this vertex (as this is where the lighting model is applied), but will be spread unnaturally across all neigh- boring polygons via the interpolation method. The problem is easily spotted in a rendering which ought to have a specular highlight moving smoothly across the surface of a model as it rotates. Gouraud shading will in- stead produce a highlight continuously fading in and out across neighboring portions of the model, peaking in in- tensity when the intended specular highlight passes over a vertex of the model. For clarity, note that the problem just described can be improved by increasing the density of vertices in the object (or perhaps increasing them just near the problem area), but of course, this solution applies to any shading paradigm whatsoever - indeed, with an “in- credibly large” number of vertices there would never be any need at all for shading concepts.

• Gouraud-shaded sphere - note the poor behaviour of the specular highlight.

• The same sphere rendered with a very high polygon count.
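In shader terms, the difference from per-pixel (Phong) shading is only where the lighting computation happens. The sketch below is an illustration added here, not from the article; it evaluates a simple diffuse model once per vertex and lets the rasterizer interpolate the resulting colour, which is exactly why a highlight falling inside a polygon can be missed. All names (uModelViewProjection, uNormalMatrix, uLightDir) are assumptions.

// --- vertex shader: lighting evaluated once per vertex (Gouraud) ---
#version 120
uniform mat4 uModelViewProjection;   // assumed name
uniform mat3 uNormalMatrix;          // assumed name
uniform vec3 uLightDir;              // assumed: unit vector towards the light
attribute vec3 aPosition;
attribute vec3 aNormal;
varying vec3 vColor;                 // per-vertex result, interpolated by the rasterizer

void main(void)
{
    vec3 n = normalize(uNormalMatrix * aNormal);
    vColor = vec3(max(dot(n, uLightDir), 0.0));   // reflection model applied at the corner only
    gl_Position = uModelViewProjection * vec4(aPosition, 1.0);
}

// --- fragment shader: only interpolated colours remain ---
#version 120
varying vec3 vColor;

void main(void)
{
    gl_FragColor = vec4(vColor, 1.0);
}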

2.16.3 See also

• List of common shading algorithms

• Blinn–Phong shading model

• Phong shading

2.16.4 References

[1] Gouraud, Henri (1971). Computer Display of Curved Sur- faces, Doctoral Thesis. University of Utah.

[2] Gouraud, Henri (1971). "Continuous shading of curved surfaces". IEEE Transactions on Computers C–20 (6): 623–629. doi:10.1109/T-C.1971.223313.

Chapter 3

Other

3.1 WebGL of the working group is Ken Russell. Early applications of WebGL include Zygote Body.[9][10] WebGL (Web ) is a JavaScript API More recently, Autodesk ported most of their applica- for rendering interactive 3D computer graphics and tions to the cloud running on local WebGL clients. These 2D graphics within any compatible web browser with- applications included Fusion 360 and AutoCAD 360.[11] out the use of plug-ins.[2] WebGL is integrated com- Development of the WebGL 2 specification started in pletely into all the web standards of the browser allow- 2013.[12] This specification is based on OpenGL ES 3.0. ing GPU accelerated usage of physics and image pro- cessing and effects as part of the web page canvas. We- bGL elements can be mixed with other HTML elements 3.1.3 Support and composited with other parts of the page or page [3] background. WebGL programs consist of control code WebGL is widely supported in modern browsers. How- written in JavaScript and shader code that is executed on ever its availability is dependent on other factors like the a computer’s Graphics Processing Unit (GPU). WebGL GPU supporting it. The official WebGL website offers a is designed and maintained by the non-profit Khronos simple test page.[13] More detailed information (like what [4] Group. renderer the browser uses, and what extensions are avail- able) is provided at third-party websites.[14][15] 3.1.1 Design Desktop browsers WebGL is based on OpenGL ES 2.0 and provides an API for 3D graphics.[5] It uses the HTML5 canvas element • Chrome – WebGL has been enabled on and is accessed using Document Object Model interfaces. all platforms that have a capable graphics card Automatic memory management is provided as part of with updated drivers since version 9, released the JavaScript language.[4] in February 2011.[16][17] By default on Windows Chrome uses the ANGLE (Almost Native Graph- Like OpenGL ES 2.0, WebGL does not have the fixed- ics Layer Engine) renderer to translate OpenGL ES function APIs introduced in OpenGL 1.0 and deprecated to Direct X 9.0c or 11.0, which have better driver in OpenGL 3.0. This functionality can instead be pro- support.[18] On Linux and Mac OS X the default vided by the user in the JavaScript code space. renderer is OpenGL however.[19] It is also possible Shaders in WebGL are expressed directly in GLSL. to force OpenGL as the renderer on Windows.[18] Since September 2013, Chrome also has a newer Direct3D 11 renderer, which however requires a 3.1.2 History newer graphics card.[20][21] • WebGL evolved out of the Canvas 3D experiments Mozilla – WebGL has been enabled on all started by Vladimir Vukićević at Mozilla. Vukićević first platforms that have a capable graphics card with up- [22] demonstrated a Canvas 3D prototype in 2006. By the end dated drivers since version 4.0. Since 2013 Fire- of 2007, both Mozilla[6] and Opera[7] had made their own fox also uses ANGLE on the Windows platform via [18] separate implementations. DirectX. In early 2009, the non-profit technology consortium • Safari – Safari 6.0 and newer versions installed on Khronos Group started the WebGL Working Group, with OS X Mountain Lion, Mac OS X Lion and Safari 5.1 initial participation from Apple, Google, Mozilla, Opera, on Mac OS X Snow Leopard implemented support and others.[4][8] Version 1.0 of the WebGL specification for WebGL, which was disabled by default before was released March 2011.[1] As of March 2012, the chair Safari 8.0.[23][24][25][26][27]

43 44 CHAPTER 3. OTHER

• Opera – WebGL has been implemented in Opera 11 the popular industry formats is also not directly pro- and 12, although disabled by default.[28][29] vided for. JavaScript libraries have been built (or some- times ported to WebGL) to provide the additional func- • Internet Explorer – WebGL is partially supported in tionality. A non-exhaustive list of libraries that pro- Internet Explorer 11.[30][31][32][33] It initially failed vide many high-level features includes three.js, O3D, the majority of official WebGL conformance tests, OSG.JS, and GLGE. There also has been a rapid emer- but Microsoft later released several updates. The gence of game engines for WebGL,[42] including Unreal latest 0.94 WebGL engine currently passes ~97% of Engine 4 and 5.[43] The Stage3D/Flash-based Khronos tests. WebGL support can also be manu- high-level library also has a port to WebGL ally added to earlier versions of Internet Explorer via TypeScript.[20][44] A more light-weight utility library using third-party plugins such as IEWebGL.[34] that provides just the vector and matrix math utilities for shaders is sylvester.js.[45][46] It is sometimes used in conjunction with a WebGL specific extension called Mobile browsers glUtils.js.[45][47] • Android Browser - Basically unsupported, but the There are also some 2D libraries built on top of We- Sony Ericsson Xperia range of Android smart- bGL like -x or Pixi.js, which were implemented phones have had WebGL capabilities following a this way for performance reasons, in a move that par- firmware upgrade.[35] Samsung also allels what happened with the over have WebGL enabled (verified on Galaxy SII (4.1.2) Stage3D in the Flash world. The WebGL-based 2D li- and Galaxy Note 8.0 (4.2)). Supported in Google braries fall back to HTML5 canvas when WebGL is not [48] Chrome that replaced Android browser in many available. phones (but is not a new standard Android Browser). Removing the rendering bottleneck by giving almost direct access to the GPU also exposed performance • Internet Explorer - WebGL is available on Windows limitations in the JavaScript implementations. Some Phone 8.1 were addressed by asm.js. (Similarly, the introduc- • BlackBerry PlayBook – WebGL is available via tion of Stage3D exposed performance problems within WebWorks and browser in PlayBook OS 2.00[36] ActionScript, which were addressed by projects like CrossBridge.)[48] • Firefox for mobile – WebGL is available for An- Creating content for WebGL scenes often means using a [37] droid devices since Firefox 4. regular 3D content creation tool and exporting the scene to a format that is readable by the viewer or helper li- • Firefox OS brary. Desktop 3D authoring software such as Blender or Autodesk Maya can be used for this purpose, but there are • - WebGL is available for Android also some WebGL-specific software such as CopperCube devices since Google Chrome 25 and enabled by de- and the online WebGL-based editor Clara.io. Online fault since version 30.[38] platforms such as Sketchfab and Clara.io allow users to • Maemo - In Nokia N900, WebGL is available in the directly upload their 3D models and display them using a stock microB browser from the PR1.2 firmware up- hosted WebGL viewer. date onwards.[39] Additionally, Mozilla Firefox implemented built-in We- bGL tools starting with version 27 that allow editing ver- • - Opera Mobile 12 supports WebGL [49] [40] tex and fragment shaders. A number of other debug- (on Android only). 
ging and profiling tools have also emerged.[50] • Tizen X3D also made a project called X3DOM to make X3D and VRML content running on WebGL. The 3D model • Touch will in XML tag in HTML5 and interactive script will use JavaScript and DOM. BS Content Studio and In- • WebOS stantReality X3D exporter can exported X3D in HTML and running by WebGL. • iOS - Mobile Safari supports WebGL in iOS 8.[41]

3.1.4 Content creation and ecosystem 3.1.5 Security

The WebGL API may be too tedious to use directly 3.1.6 Similar technologies for 3D in a without some utility libraries, which for example set browser up typical view transformation shaders (e.g. for view frustum). Loading scene graphs and 3D objects in Java OpenGL is fairly similar layer to WebGL in the Java 3.1. WEBGL 45 world, whereas Stage3D is the equivalent layer in Adobe [22] “Mozilla Firefox 4 Release Notes”. Mozilla.com. 2011- Flash Player 11 and later. also sup- 03-22. Retrieved 2012-03-20. ports OpenGL ES 2.0.[51] [23] “New in OS X Lion: Safari 5.1 brings WebGL, Do Not Track and more”. Fairerplatform.com. 2011-05-03. Re- 3.1.7 References trieved 2012-03-20. [24] “Enable WebGL in Safari”. Ikriz.nl. 2011-08-23. Re- [1] “Khronos Releases Final WebGL 1.0 Specification”. Re- trieved 2012-03-20. trieved 2015-05-18. [25] “Getting a WebGL Implementation”. Khronos.org. 2012- [2] Gregg Tavares (2012-02-09). “WebGL Fundamentals”. 01-13. Retrieved 2012-03-20. HTML5 Rocks. [26] “Implementations/WebKit”. Khronos.org. 2011-09-03. [3] Tony Parisi (2012-08-15). “WebGL: Up and Running”. Retrieved 2012-03-20. O'Reilly Media, Incorporated. [27] “WebGL Now Available in WebKit Nightlies”. We- [4] “WebGL – OpenGL ES 2.0 for the Web”. Khronos.org. bkit.org. Retrieved 2012-03-20. Retrieved 2011-05-14. [28] “WebGL and ”. My.opera.com. [5] “WebGL Specification”. Khronos.org. Retrieved 2011- 2011-02-28. Archived from the original on 2011-03-03. 05-14. Retrieved 2012-03-20.

[6] “Canvas 3D: GL power, web-style”. Blog.vlad1.com. Re- [29] “Introducing Opera 12 alpha”. My.opera.com. 2011-10- trieved 2011-05-14. 13. Archived from the original on 2011-10-15. Retrieved 2012-03-20. [7] “Taking the canvas to another dimension”. My.opera.com. 2007-11-26. Archived from the [30] http://msdn.microsoft.com/en-US/library/ie/ original on 2007-11-17. Retrieved 2011-05-14. bg182648%28v=vs.85%29

[8] “Khronos Details WebGL Initiative to Bring Hardware- [31] “Internet Explorer 11 Preview guide for developers”. Mi- Accelerated 3D Graphics to the Internet”. Khronos.org. crosoft. 2013-07-17. Retrieved 2013-07-24. 2009-08-04. Retrieved 2011-05-14. [32] “WebGL”. Microsoft. 2013-07-17. Retrieved 2013-07- [9] “Google Body – Google Labs”. Body- 24. browser.googlelabs.com. Retrieved 2011-05-14. [33] “Internet Explorer 11 to support WebGL and MPEG [10] Bhanoo, Sindya N. (2010-12-23). “New From Google: Dash”. Engadget. 2013-06-26. Retrieved 2013-06-26. The Body Browser”. Well.blogs.nytimes.com. Retrieved 2011-05-14. [34] “IEWebGL”. Iewebgl. Retrieved 2014-08-14.

[11] “AUTODESK FUSION 360: THE FUTURE OF CAD, [35] “Xperia™ phones first to support WebGL™ – Developer PT. 1”. 3dcadworld.com. Retrieved 2013-08-21. World”. blogs.sonyericsson.com. The Sony Ericsson De- veloper Program. 2011-11-29. Retrieved 2011-12-05. [12] “WebGL 2 Specification”. khronos.org. 2013-09-26. Re- trieved 2013-10-28. [36] Halevy, Ronen. “PlayBook OS 2.0 Developer Beta In- cludes WebGL, Flash 11, & AIR 3.0”. BerryReview. Re- [13] WebGL test page trieved 2011-11-15. [14] http://webglreport.com/ [37] iclkevin (2011-11-12). “WebGL on Mobile Devices”. [15] http://www.browserleaks.com/webgl iChemLabs. Retrieved 2011-11-25.

[16] Paul Mah (February 8, 2011). “Google releases Chrome [38] Kersey, Jason. “Chrome Beta for Android Update”. 9; comes with Google Instant, WebGL – FierceCIO: Chrome Releases Blog. Google. Retrieved 2013-08-23. TechWatch". FierceCIO. Retrieved 2012-03-20. [39] suihkulokki (2010-06-07). “WebGL on N900”. Suihku- [17] http://learningwebgl.com/blog/?p=3103 lokki.blogspot.com. Retrieved 2011-05-14.

[18] http://www.geeks3d.com/20130611/ [40] “Opera Mobile 12”. Opera Software. Archived from the -how-to-enable-native--in-your-browser-windows/ original on 1 March 2012. Retrieved 27 February 2012.

[19] http://blog.chromium.org/2010/03/ [41] Cunningham, Andrew. “iOS 8, Thoroughly Reviewed”. introducing-angle-project.html Retrieved 2014-09-19.

[20] http://learningwebgl.com/blog/?p=5956 [42] Tony Parisi (13 February 2014). Programming 3D Ap- plications with HTML5 and WebGL: 3D Animation and [21] http://blog.tojicode.com/2013/09/ Visualization for Web . “O'Reilly Media, Inc.”. pp. at-last-chrome-d3d11-day-has-come.html 364–366. ISBN 978-1-4493-6395-6. 46 CHAPTER 3. OTHER

[43] http://www.anandtech.com/show/8354/ 3.2.1 Overview -k1-lands-in-acers-newest-chromebook Three.js allows the creation of GPU-accelerated 3D ani- [44] http://away3d.com/comments/away3d_typescript_4.1_ mations using the JavaScript language as part of a website alpha without relying on proprietary browser plugins.[3][4] This is possible thanks to the advent of WebGL.[5] [45] Alexey Boreskov; Evgeniy Shikin (2014). Computer High-level libraries such as Three.js or GLGE, SceneJS, Graphics: From Pixels to Programmable Graphics Hard- ware. CRC Press. p. 370. ISBN 978-1-4398-6730-3. PhiloGL or a number of other libraries make it possible to author complex 3D computer that display [46] Andreas Anyuru (2012). Professional WebGL Program- in the browser without the effort required for a traditional [6] ming: Developing 3D Graphics for the Web. John Wiley standalone application or a plugin: & Sons. p. 140. ISBN 978-1-119-94058-6. 3.2.2 History [47] Steve Fulton; Jeff Fulton (2013). HTML5 Canvas (2nd ed.). “O'Reilly Media, Inc.”. p. 624. ISBN 978-1-4493- 3588-5. Three.js was first released by Ricardo Cabello to GitHub in April 2010.[2] The origins of the library can be traced [48] http://typedarray.org/the-webgl-potential/ back to his involvement with the demoscene in the early . The code was first developed in ActionScript, then [49] https://hacks.mozilla.org/2013/11/ in 2009 ported to JavaScript. In Cabello’s mind, the two live-editing-webgl-shaders-with-firefox-developer-tools/ strong points for the transfer to JavaScript were not hav- ing to compile the code before each run and platform in- [50] http://www.realtimerendering.com/blog/ dependence. With the advent of WebGL, Paul Brunt was webgl-debugging-and-profiling-tools/ able to add the renderer for this quite easily as Three.js was designed with the rendering code as a module rather [7] [51] Remi Arnaud (2011). “3D in a Web Browser”. In Eric than in the core itself. Cabello’s contributions include Lengyel. Game Engine Gems 2. CRC Press. pp. 199– API design, CanvasRenderer, SVGRenderer and being 228. ISBN 978-1-56881-437-7. responsible for merging the commits by the various con- tributors into the project. The second contributor in terms of commits, Branislav 3.1.8 External links Ulicny started with Three.js in 2010 after having posted a number of WebGL demos on his own site. He wanted • Official website WebGL renderer capabilities in Three.js to exceed those of CanvasRenderer or SVGRenderer.[7] His major con- • WebGL /Canvas 3D Preview in WebKit r48331 tributions generally involve materials, shaders and post- processing. • WebGL Demo from Google Chromium (depre- Soon after the introduction of WebGL on Firefox 4 in cated) March 2011, Joshua Koo came on board. He built his first Three.js demo for 3D text in September 2011.[7] His • Mozilla Developer Network contributions frequently relate to geometry generation. There are over 390 contributors in total.[7] • Unofficial WebGL Games Community

• Matti Anttonen & Arto Salminen Building 3D We- 3.2.3 Features bGL Applications Three.js includes the following features:[8] • http://www.theregister.co.uk/2014/08/11/hell_ freezes_over_microsoft_joins_khronos/ • Effects: Anaglyph, cross-eyed and . • Scenes: add and remove objects at run-time; fog • Cameras: perspective and orthographic; controllers: 3.2 Three.js trackball, FPS, path and more • Animation: armatures, forward kinematics, inverse Three.js is a lightweight cross-browser JavaScript li- kinematics, morph and keyframe brary/API used to create and display animated 3D com- puter graphics on a Web browser. Three.js uses WebGL. • Lights: ambient, direction, point and spot lights; The source code is hosted in a repository on GitHub. shadows: cast and receive 3.2. THREE.JS 47

• Materials: Lambert, Phong, smooth shading, tex- requestAnimationFrame shim requestAnimationFrame( tures and more animate ); render(); } function render() { mesh.rotation.x += 0.01; mesh.rotation.y += 0.02; renderer.render( • Shaders: access to full OpenGL Shading Language scene, camera ); } (GLSL) capabilities: lens flare, depth pass and ex- tensive post-processing library • Objects: meshes, particles, sprites, lines, ribbons, 3.2.5 Selected Uses and Works bones and more - all with Level of detail • Geometry: plane, cube, sphere, torus, 3D text and The Three.js library is being used for a wide variety of more; modifiers: lathe, extrude and tube applications and purposes. The following lists identify selected uses and works. • Data loaders: binary, image, JSON and scene

• Utilities: full set of time and 3D math functions in- Mixed Media cluding frustum, matrix, quaternion, UVs and more • The Little Black Jacket, 2012, CHANEL's classic re- • Export and import: utilities to create Three.js- visited by Karl Lagerfeld and Carine Roitfeld. An compatible JSON files from within: Blender, online exhibition displaying 113 pictures of celebri- openCTM, FBX, Max, and OBJ ties photographed by Karl Lagerfeld.[12] • Support: API documentation is under construction, • Daftunes, 2012, an interactive sound visualizing public forum and wiki in full operation project.[13][14] • Examples: Over 150 files of coding examples plus • PlayPit, 2012[15] fonts, models, textures, sounds and other support files • Rome the album | 3 Dreams in Black the film, 2011, produced by Chris Milk. "'3 Dreams of Black' is • Debugging: Stats.js,[9] WebGL Inspector,[10] the trippiest WebGL interactive music video you've Three.js Inspector[11] seen all day”[16][17][18]

Three.js runs in all browsers supported by WebGL. • One Millionth Tower, 2011 - “It exists in a 3-D set- Three.js is made available under the MIT license.[1] ting made possible by a JavaScript library called three.js, which lets viewers walk around the high- rise neighborhood.” -[19] 3.2.4 Usage • Ellie Goulding's Lights, 12 October 2011, “an in- teractive & colorful music video experience using The Three.js library is a single JavaScript file. It can be webgl”[20][21][22] included within a web page by linking to a local or remote copy. • Hello Racer, 2011 - Awarded the FWA Site Of The [23][24] Day for today, June 5, 2011 • WebGL Reader, 2011[25] The following code creates a scene, adds a camera and a • cube to the scene, creates a renderer and adds The Wilderness Downtown, 2010 its viewport in the document.body element. Once loaded, the cube rotates about its X- and Y-axis. Model Visualization and Scene Creation Applica-

Web Analytics