
TESSELLATION SHADER DETAIL IMPLEMENTATION FOR SIMPLE SURFACES USING

OPENGL

A Project

Presented to the faculty of the Department of Computer Science

California State University, Sacramento

Submitted in partial satisfaction of the requirements for the degree of

MASTER OF SCIENCE

in

Computer Science

by

Matthew Thomas Anderson

SPRING 2020

© 2020

Matthew Thomas Anderson

ALL RIGHTS RESERVED


TESSELLATION SHADER DETAIL IMPLEMENTATION FOR SIMPLE SURFACES USING

OPENGL

A Project

by

Matthew Thomas Anderson

Approved by:

______, Committee Chair
Dr. V. Scott Gordon

______, Second Reader
Dr. Pinar Muyan-Ozcelik

______ Date


Student: Matthew Thomas Anderson

I certify that this student has met the requirements for format contained in the University format manual, and this project is suitable for electronic submission to the library and credit is to be awarded for the project.

______, Graduate Coordinator ______ Date
Dr. Jinsong Ouyang

Department of Computer Science


Abstract

of

TESSELLATION SHADER DETAIL IMPLEMENTATION FOR SIMPLE SURFACES USING

OPENGL

by

Matthew Thomas Anderson

Shader-based graphics programming can be utilized to create a standard, re-usable method of applying surface detail to a simple shape using tessellation. There is a need for instructional resources outlining how this is accomplished for arbitrary graphics models. The tessellation evaluation shader handles tasks such as vertex position modifications and perspective matrix processing. This implementation also demonstrates how to calculate texture coordinates for the vertices that are generated by the tessellator. The models used are a pyramid with distinct flat surfaces, and a sphere with a continuous curved surface. The benefit of additional surface detail is demonstrated by applying a height map to the models during the tessellation stage. With comparable polygon counts, the RAM usage and GPU usage are both lower with tessellation compared to without tessellation for dense models. The strength of this implementation is that it may be applied universally to pyramids and spheres to add additional surface detail through tessellation. This implementation can also be expanded by applying more performance enhancing tessellation techniques or realistic effects such as noise functions.

______, Committee Chair
Dr. V. Scott Gordon

______ Date

ACKNOWLEDGEMENTS

I would like to thank Dr. Scott Gordon and Dr. Pinar Muyan-Ozcelik for their time and support for this project. When I returned to CSU, Sacramento for my graduate career, Dr. Muyan-Ozcelik suggested I ask Dr. Gordon to be my faculty advisor for my project. I am very thankful to Dr. Gordon for agreeing to this role and for giving me the idea for this project. The book he authored on graphics programming has been an invaluable resource in completing this project.


TABLE OF CONTENTS Page

Acknowledgements ...... vi

List of Tables ...... ix

List of Figures ...... x

Chapters

1. INTRODUCTION ...... 1

2. BACKGROUND ...... 3

2.1 Graphics ...... 3

2.2 OpenGL...... 3

2.3 Models ...... 4

2.4 Texture mapping ...... 4

2.5 Height mapping ...... 5

2.6 Tessellation ...... 6

3. IMPLEMENTATION ...... 8

3.1 Graphics program overview ...... 8

3.2 Defining vertices, texture coordinates, and vectors ...... 9

3.3 Tessellated vertex positioning and texture mapping ...... 11

3.4 Tessellated height mapping ...... 14

3.5 Differences between pyramid and sphere implementations ...... 15

4. RESULTS ...... 17

4.1 Rendered results ...... 17

4.2 System performance ...... 26

5. CONCLUSIONS ...... 29

6. FUTURE WORK ...... 31

Appendix A. Source Code ...... 33

src/code/Code.java – JOGL program ...... 33

src/code/GLSLOptions.java – Holds runtime variables ...... 40

src/code/Camera.java – Simple move/rotate camera implementation ...... 42

src/code/Mouse.java – Simple click and drag mouse implementation ...... 43

src/code/Utils.java – Helper class for JOGL calls ...... 44

src/code/shaders/generic_vertShader.glsl – Vertex shader for pyramid and sphere ..... 51

src/code/shaders/pyramid_tessCShader.glsl – Pyramid tessellation control shader ..... 52

src/code/shaders/pyramid_tessEShader.glsl – Pyramid tessellation evaluation shader 53

src/code/shaders/sphere_tessCShader.glsl – Sphere tessellation control shader ...... 54

src/code/shaders/sphere_tessEShader.glsl – Sphere tessellation evaluation shader ..... 55

src/code/shaders/generic_fragShader.glsl – Fragment shader for pyramid and sphere 56

src/models/Pyramid.java – Pyramid model ...... 57

src/models/Sphere.java – Sphere model ...... 59

src/eventcommands/cmdMoveCamera.java – Move the camera ...... 62

src/eventcommands/cmdRotateCamera.java – Rotate the camera ...... 63

src/eventcommands/cmdCloseWindow.java – Close the window ...... 64

src/eventcommands/cmdSwitchModel.java – Toggle pyramid or sphere model ...... 65

src/eventcommands/cmdCycleTexture.java – Cycle through texture list ...... 66

src/eventcommands/cmdToggleDrawMode.java – Toggle polygon mode ...... 67

Appendix B. System Specifications ...... 68

References ...... 69


LIST OF TABLES

Tables Page

1. Performance results with wireframe rendering and no animation ...... 27

2. Performance results with wireframe rendering and constant Y-axis rotation ...... 27

3. Performance results with painted rendering and no animation ...... 28

4. Performance results with painted rendering and constant Y-axis rotation ...... 28


LIST OF FIGURES

Figures Page

1. Pyramid vertices definition ...... 9

2. Pyramid normal vectors calculations ...... 10

3. Sphere vertices definition ...... 10

4. Tessellation level definition ...... 11

5. Calculate tessellated vertex position ...... 12

6. Calculate tessellated texture coordinate ...... 12

7. Renders of: Pyramid without tessellation (top-left), Pyramid with tessellation (top-right), Sphere without tessellation (bottom-left), Sphere with tessellation (bottom-right) ...... 13

8. Calculate tessellated normal vectors ...... 14

9. Exploded pyramid ...... 15

10. Pyramid using brick texture and height map ...... 18

11. Pyramid using brick texture and ripple height map ...... 19

12. Pyramid using checkers texture and height map ...... 20

13. Pyramid using earth texture and height map ...... 21

14. Sphere using brick texture and height map ...... 22

15. Sphere using brick texture and ripple height map...... 23

16. Sphere using checkers texture and height map ...... 24

17. Sphere using earth texture and height map ...... 25

18. Calculations for getting the count of rendered triangles ...... 26



1. INTRODUCTION

The goal of this project is to create a standard, re-usable way of applying surface detail to a simple shape using the tessellation shader in OpenGL. The product of this project is complete documentation of how to accomplish this task. The main benefit of this product is that graphics programmers may utilize the documentation to implement the solution from start to finish, or to focus on a section that may solve an issue they were previously encountering. A secondary benefit is that the information in this document may be used as a resource for academics: a tool to help graphics programmers implement the solution and to help them understand one of the uses of the tessellation shader.

The motivation for this project comes from the lack of complete OpenGL tessellation detail solutions in printed and online resources. Using the tessellation shader to add surface detail to a simple model has been accomplished before; however, a graphics programmer trying to implement this usage of tessellation will have difficulty following a complete guide, because such guides are scarce. It is possible to find solutions to issues encountered by other programmers, such as through posts on StackOverflow.com; however, those posts often do not include enough information to reproduce a full solution for one's own purpose [1]. A thorough guide for adding detail to a shape using the tessellation shader is therefore highly desirable.

Adding surface detail to simple shapes using the tessellation shader is a process desired for multiple reasons. The main reason is to be able to dynamically manipulate the object's surface to create protrusions which enhance the object's realism. In many situations, implementing this approach will require fewer system resources during runtime than alternative approaches. In addition, adding surface detail using the tessellation shader can be combined with other tessellation techniques to improve graphics performance. To expand on these points, some background must be given on several areas of graphics programming.


2. BACKGROUND

2.1 Graphics

Computer graphics programming consists of code that is executed by a central processing unit (CPU) and a graphics processing unit (GPU), resulting in an image rendered on a display. Most 3D computer graphics are a formation of thousands, or millions, of triangles rendered by the GPU. Each triangle is constructed of three vertices, each vertex defined at a specific x,y,z coordinate in 3D space. Defining, manipulating, and colorizing these vertices are tasks performed through the graphics pipeline. The graphics pipeline is processed by the GPU and consists of shader stages that each perform a function and process input/output elements. These shader stages consist of required and optional OpenGL Shading Language (GLSL) programs. The required vertex shader is where vertices are defined that construct a primitive shape. Vertices can also be defined within Java code and sent to the vertex shader as an input parameter. The optional tessellation stage consists of the tessellation control shader (TCS), the tessellation primitive generator (TPG), and the tessellation evaluation shader (TES). The TCS determines the subdivision of each primitive, the TPG generates additional vertices in each primitive, and the TES modifies these new vertices. The optional geometry shader can remove or duplicate primitives. The required fragment shader assigns a color value to pixels on the screen. The graphics pipeline performs its steps as defined by the graphics programming code.

2.2 OpenGL

Open Graphics Library (OpenGL) is an API with which various programming languages can interface to render graphics. According to Gordon, OpenGL makes native C calls which have been abstracted into various wrapper libraries for different languages, such as Java OpenGL (JOGL) and the Lightweight Java Game Library, which utilize the Java programming language [2]. These libraries offer a clean way to make OpenGL calls from Java code. This project utilizes Java and JOGL to interact with OpenGL for rendering.

2.3 Models

In computer graphics programming, a model is a collection of vertices created to represent a perceptible object. Models can be constructed by defining vertices by hand, procedurally, or by using a modeling tool such as Blender or Maya. Many flat shapes such as a pyramid can be defined by hand, but round shapes such as a sphere are usually procedurally generated. Complex objects such as a motorcycle are generally built using a modeling tool. Models generally have texture coordinates and normal vectors alongside their vertex coordinates. Texture coordinates define how a 2D texture image is placed on a 3D model; they go through a mapping process called texture mapping for a 2D image to appear painted on the model. A normal vector is a per-vertex vector used for lighting and height mapping in graphics programming. In this implementation, a pyramid and a sphere are used as the simple models for applying additional surface detail.

2.4 Texture mapping

Texture mapping is “the technique of overlaying an image across a rasterized model surface” [2].

Texture mapping is performed by informally linking texture coordinates with vertex coordinates; each vertex coordinate will have a matching texture coordinate. A texture coordinate is a 2D coordinate used to specify a pixel in the texture image. Texture coordinates range from (0,0) in the bottom-left of the image to (1,1) in the top-right of the image. These coordinates can be defined in Java code, loaded into Java code, or defined inside shader programs. According to Gordon, for simple models such as a pyramid, vertex coordinates and texture coordinates can be defined in code and sent through the graphics pipeline [2]. However, for more complex models such as a sphere, the vertex coordinates and texture coordinates can be procedurally generated and sent through the pipeline. A common way of sending these coordinates to the pipeline is to define two buffers in Java code. One buffer is for vertex coordinates and is represented by an array of float values where every three values correspond to the x,y,z coordinates of a vertex. A second buffer is for texture coordinates and is also represented by an array of float values, where every two values correspond to s,t texture coordinates. According to Gordon, it is important to put texture coordinates in a vertex attribute so they are interpolated in the graphics pipeline [2]. In addition to coordinates, an image needs to be loaded into memory to be available throughout the graphics pipeline.
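As a concrete illustration, the two buffers described above can be sketched in Java as plain float arrays. The class and field names here are illustrative, not taken from the project source, and the single triangle is a minimal stand-in:

```java
// Minimal sketch of the vertex and texture coordinate buffers; one triangle.
// Names are hypothetical, not from the project's source code.
public class CoordinateBuffers {
    // every three values correspond to one x,y,z vertex coordinate
    static float[] vertexXyz = {
        -1.0f, -1.0f, 0.0f,
         1.0f, -1.0f, 0.0f,
         0.0f,  1.0f, 0.0f
    };
    // every two values correspond to one s,t texture coordinate
    static float[] textureSt = {
        0.0f, 0.0f,
        1.0f, 0.0f,
        0.5f, 1.0f
    };

    static int vertexCount()   { return vertexXyz.length / 3; }
    static int texCoordCount() { return textureSt.length / 2; }

    public static void main(String[] args) {
        // each vertex coordinate has a matching texture coordinate
        System.out.println(vertexCount() + " vertices, "
            + texCoordCount() + " texture coordinates");
    }
}
```

The invariant to maintain is simply that both arrays describe the same number of vertices, since each vertex coordinate must have a matching texture coordinate.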

JOGL provides a TextureIO.newTexture method for loading an image file into memory and making that texture available throughout the pipeline [3]. OpenGL provides a texture function for sampling pixels of a texture image, called texels, at a given texture coordinate [4]. Once these coordinates are received by the fragment shader, the texture coordinates for each vertex are interpolated along the texture image to paint each pixel appropriately according to the image.

2.5 Height mapping

Height mapping is a technique which utilizes a texture to displace a vertex along its normal by a desired magnitude. This is accomplished by inspecting the magnitude of the color value of a pixel in a texture image and using that amount to displace a vertex's position along its normal. One would apply a height map to a model to dynamically change the geometry of the model. Height mapping modifies a model's vertices, which can bring benefits over other techniques such as bump mapping, which only makes the model's geometry appear to have been modified. According to Gordon, moving a model's vertices allows the edge of the model to show its features and allows accurate shadows to be shown when applying shadowing techniques [2].

Height mapping can be performed in the vertex shader by applying a mathematical function to each vertex's position. After passing the vertex position, a texture coordinate, the normal vector, and an image to the vertex shader, one could alter the position of the vertex by a magnitude of one's choosing. Programming this approach in the vertex shader may be effective for models that contain a high number of vertices, such as a sphere with high precision. However, the downside is that all the vertices must be processed by the CPU and stored in memory before being processed through the pipeline by the GPU. The intermediary step of storing in memory can be a bottleneck in some systems [5]. Depending on the model and where the camera is in the scene, a model containing a high number of vertices might have little positive effect, such as when the camera is too far away to perceive the detail of the model. Height mapping can also be performed during the tessellation stage of the pipeline, which allows for system performance enhancements such as dynamic reduction of the number of vertices, as well as utilizing the superior graphical processing power of the GPU over the CPU.
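The core of the height-mapping idea, reading a pixel's red channel and scaling it into a displacement magnitude, can be sketched CPU-side in plain Java with an in-memory image. This is a hypothetical illustration of the arithmetic, not the shader code used in the project:

```java
import java.awt.image.BufferedImage;

public class HeightSample {
    // Normalized red channel of the pixel at (x, y); in a height map this
    // value serves as the displacement magnitude along the vertex normal.
    static float redAt(BufferedImage img, int x, int y) {
        int rgb = img.getRGB(x, y);
        return ((rgb >> 16) & 0xFF) / 255.0f; // extract red, normalize to [0, 1]
    }

    public static void main(String[] args) {
        BufferedImage heightMap = new BufferedImage(2, 1, BufferedImage.TYPE_INT_RGB);
        heightMap.setRGB(0, 0, 0xFF0000); // full red: maximum displacement
        heightMap.setRGB(1, 0, 0x000000); // black: no displacement
        float displacementFactor = 0.5f;  // user-chosen magnitude control
        // a vertex at height 1.0 displaced along a straight-up normal
        float displaced = 1.0f + redAt(heightMap, 0, 0) * displacementFactor;
        System.out.println(displaced); // prints 1.5
    }
}
```

In the shader this same lookup is performed with the GLSL texture function; the point of the sketch is only the red-channel-to-magnitude mapping.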

2.6 Tessellation

Tessellation is defined as an arrangement of shapes closely fitted together, especially of polygons in a repeated pattern without gaps or overlapping [6]. Tessellation in computer graphics programming “refers to the generation and manipulation of large numbers of triangles for rendering complex shapes and surfaces, preferably in hardware” [2]. In this case, the best hardware for the task is the GPU, as it is designed to perform simple mathematical calculations in parallel.

There are a few fundamental differences between graphics programs that implement tessellation and those that do not. One major difference is that the vertices sent from the Java program to the vertex shader are not rendered; they are treated as control points for what is called a patch.

Another difference is that the work of the vertex shader is shifted to the TES for tasks such as vertex position modifications and perspective matrix processing. This is because the final location of vertices is normally handled by the TES when implementing tessellation. Thus, most calculations for texture mapping and height mapping are also performed in the TES so they may be applied to the entire model after new vertices are created.

Tessellation is performed by the GPU by executing a TCS program which defines the level of primitive subdivision, subdividing the primitives within the TPG, and executing the TES program which can modify the newly subdivided vertices. When implementing additional surface detail on a model using tessellation, the general steps are to first subdivide the patches and then apply a height map to the newly generated vertices. This allows a low-polygon model to be passed to the pipeline, tasks the TCS and TPG with creating more vertices in each patch, and then displaces the newly generated vertices along their normals according to a height map. The benefit of implementing a graphics program this way is that much of the processing is shifted off the CPU and onto the GPU, which is better suited for handling parallel mathematical operations. A secondary benefit is that the memory bottleneck from passing large amounts of vertices is avoided.


3. IMPLEMENTATION

Implementing additional surface detail using the tessellation shader is essentially a graphics program that applies a height map during the tessellation stage. The implementation of this technique is mostly universal for any type of model; however, only pyramid and sphere models will be detailed here. To visualize the implementation and the effect of tessellation, it is useful to render using the GL_LINE polygon mode of OpenGL, which displays the wireframes of the models and only colors the primitive edges. There is a performance impact in using this over GL_FILL, but it is important to be able to see the primitive subdivision resulting from tessellation. Typically, GL_FILL would be used as the polygon mode in a final product so the entire polygon is painted according to the texture.

3.1 Graphics program overview

In this implementation, Java/JOGL code is used to send data to the graphics pipeline. This data includes the patch vertices, patch texture coordinates, patch normal vectors, and textures used for texture mapping and height mapping. In addition, OpenGL values for depth rendering calculations, color rendering, primitives culling, and the draw mode are handled through the Java code.

Once the data is sent to the graphics pipeline, the vertex shader simply passes the vertices, texture coordinates, and normal vectors to the TCS. The TCS defines the subdivision and passes the coordinate values to the TPG. After tessellation occurs, the TES calculates the vertex position and texture coordinates for the newly created vertices through interpolation, height mapping, and texture mapping. Finally, the texture coordinates are sent to the fragment shader for colorization.


The steps for implementing surface detail using tessellation are included in the implementation of the texture mapping and height mapping.

3.2 Defining vertices, texture coordinates, and normal vectors

The Pyramid and Sphere classes handle the creation of their vertices, texture coordinates, and normal vectors. They also implement methods to pass arrays of the defined vertex, texture, and normal coordinates which are sent to the graphics pipeline.

The Pyramid class constructor accepts a length argument to determine the length of the sides. The constructor then calls the initialize method which defines the vertices, texture coordinates, and normal vectors for the Pyramid model. The method defines 18 vertices to construct six triangles: four triangles for the pyramid's top faces and two triangles to create the pyramid's square base, as shown in Figure 1.

vertexXyzValues = new float[] {
    -len, -len,  len,   len, -len,  len,  0.0f, len, 0.0f, // front face
     len, -len,  len,   len, -len, -len,  0.0f, len, 0.0f, // right
     len, -len, -len,  -len, -len, -len,  0.0f, len, 0.0f, // back
    -len, -len, -len,  -len, -len,  len,  0.0f, len, 0.0f, // left
    -len, -len, -len,   len, -len,  len,  -len, -len, len, // bottom-left
     len, -len,  len,  -len, -len, -len,   len, -len, -len // bottom-right
};

Figure 1: Pyramid vertices definition

The texture coordinates are then defined to repeat the same subsection of the texture on the four top faces and span the entire texture over the bottom face. The normal vectors are calculated by using the surface normal calculation of a triangle as shown in Figure 2 [7, 8].


normalXyzValues = new float[vertexXyzValues.length];
for (int i = 0; i < normalXyzValues.length; i = i + 9) { // for each face of the pyramid
    Vector3f a = new Vector3f(vertexXyzValues[i], vertexXyzValues[i+1], vertexXyzValues[i+2])
        .sub(vertexXyzValues[i+3], vertexXyzValues[i+4], vertexXyzValues[i+5]);
    Vector3f b = new Vector3f(vertexXyzValues[i], vertexXyzValues[i+1], vertexXyzValues[i+2])
        .sub(vertexXyzValues[i+6], vertexXyzValues[i+7], vertexXyzValues[i+8]);
    Vector3f normal = a.cross(b);
    for (int j = 0; j < 3; j++) {
        normalXyzValues[(j*3) + i + 0] = normal.x();
        normalXyzValues[(j*3) + i + 1] = normal.y();
        normalXyzValues[(j*3) + i + 2] = normal.z();
    }
}

Figure 2: Pyramid normal vectors calculations

The Sphere class expands on Gordon's Sphere class [2]. The Sphere class constructor accepts a precision argument to determine the number of vertices in the sphere. Precision, in this case, specifically controls “the number of circular horizontal slices through the sphere” [2]. Since we are adding vertices during the tessellation stage, we can use a relatively low precision here, such as 12. Using a low precision means fewer vertices, texture coordinates, and normal vectors are stored in memory and sent to the graphics pipeline. We will instead use the GPU to create additional vertices and calculate these values, usually resulting in better performance, especially when the model is instanced or is animated [9]. As shown in Figure 3, the constructor then calls the initialize method which procedurally generates the triangle vertices, texture coordinates, and normal vectors for the Sphere model.

for (int i = 0; i <= prec; i++) {
    for (int j = 0; j <= prec; j++) {
        float y = (float) cos(toRadians(180 - i * 180 / prec));
        float x = -(float) cos(toRadians(j * 360 / (float) prec)) * (float) abs(cos(asin(y)));
        float z =  (float) sin(toRadians(j * 360 / (float) prec)) * (float) abs(cos(asin(y)));
        vertices[i * (prec + 1) + j].set(x, y, z);
        texCoords[i * (prec + 1) + j].set((float) j / prec, (float) i / prec);
        normals[i * (prec + 1) + j].set(x, y, z);
    }
}

Figure 3: Sphere vertices definition


Since a sphere model has many triangles that share vertices with neighboring triangles, vertices are defined only once, and indices are utilized. Indices are used to identify a vertex when sending the vertices to the graphics pipeline. This is unlike the Pyramid class, where indices are not utilized because its flat-shaped nature means that the different instances of a particular vertex may each have different texture coordinates; thus, the pyramid is unable to benefit from indices. Texture coordinates and normal vectors for the sphere are also procedurally defined based on the precision value. Normal vectors are calculated by constructing a vector between the origin of the sphere and the vertex [2].
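To make the index idea concrete, the following Java sketch builds triangle indices for the (prec + 1) × (prec + 1) vertex grid generated in Figure 3, two triangles per grid cell. The method name and exact winding order are illustrative assumptions, not the project's source:

```java
public class SphereIndices {
    // Build triangle indices for a (prec+1) x (prec+1) vertex grid; each
    // grid cell becomes two triangles, and each vertex is defined only once.
    static int[] buildIndices(int prec) {
        int[] indices = new int[prec * prec * 6];
        for (int i = 0; i < prec; i++) {
            for (int j = 0; j < prec; j++) {
                int k = 6 * (i * prec + j);
                indices[k + 0] = i * (prec + 1) + j;           // lower-left
                indices[k + 1] = i * (prec + 1) + j + 1;       // lower-right
                indices[k + 2] = (i + 1) * (prec + 1) + j;     // upper-left
                indices[k + 3] = i * (prec + 1) + j + 1;       // lower-right
                indices[k + 4] = (i + 1) * (prec + 1) + j + 1; // upper-right
                indices[k + 5] = (i + 1) * (prec + 1) + j;     // upper-left
            }
        }
        return indices;
    }

    public static void main(String[] args) {
        int prec = 12; // the low precision used in this implementation
        System.out.println(buildIndices(prec).length / 3 + " triangles"); // 288 triangles
    }
}
```

The payoff is memory savings: shared vertices are stored once and referenced by index, rather than duplicated per triangle as in the pyramid's arrays.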

3.3 Tessellated vertex positioning and texture mapping

Since tessellation performs subdivision which creates additional vertices in a patch, the new vertices must have their positions calculated in an intelligent way. The level of subdivision is defined in the first stage of tessellation in the TCS. Subdivision is defined with a simple assignment for the outer and inner levels of tessellation. Since we are using triangles as the patch primitive, we must only define the first three outer and the first inner tessellation levels [10]. We only need to define the tessellation level once, so we make sure this code executes only on the first invocation of the TCS, as shown in Figure 4. The TCS does not need to alter any of the vertex positions, texture coordinates, or normal vectors, so these values are passed through the pipeline.

if (gl_InvocationID == 0) {
    const int LEVEL = 256;
    gl_TessLevelOuter[0] = LEVEL;
    gl_TessLevelOuter[1] = LEVEL;
    gl_TessLevelOuter[2] = LEVEL;
    gl_TessLevelInner[0] = LEVEL;
}

Figure 4: Tessellation level definition


The TES is the shader program which calculates the final values for each vertex’s position and texture coordinate. In this implementation, we use the original patch vertices and the barycentric coordinates of the current generated vertex to determine the vertex’s new position and texture coordinate. To determine the vertex’s position, we take the positions of the patch vertices and interpolate them with the position of the current generated vertex using the interpolate3D function, as shown in Figure 5 [11].

vec3 interpolate3D(vec3 v0, vec3 v1, vec3 v2) {
    return vec3(gl_TessCoord.x) * v0
         + vec3(gl_TessCoord.y) * v1
         + vec3(gl_TessCoord.z) * v2;
}

vec3 a = gl_in[0].gl_Position.xyz;
vec3 b = gl_in[1].gl_Position.xyz;
vec3 c = gl_in[2].gl_Position.xyz;
vec3 position = interpolate3D(a, b, c);

Figure 5: Calculate tessellated vertex position

Similarly, to determine the vertex’s texture coordinate, we take the texture coordinates of the patch vertices and interpolate them with the position of the current generated vertex using the interpolate2D function, as shown in Figure 6 [11].

in vec2 tcs_out[];

vec2 interpolate2D(vec2 v0, vec2 v1, vec2 v2) {
    return vec2(gl_TessCoord.x) * v0
         + vec2(gl_TessCoord.y) * v1
         + vec2(gl_TessCoord.z) * v2;
}

tes_out = interpolate2D(tcs_out[0], tcs_out[1], tcs_out[2]);

Figure 6: Calculate tessellated texture coordinate
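The interpolation in Figures 5 and 6 can be mirrored on the CPU for clarity: gl_TessCoord is simply a barycentric weight triple (u, v, w) with u + v + w = 1, and the result is the weighted sum of the three patch vertices. This Java sketch uses illustrative names, not project code:

```java
public class BarycentricInterp {
    // Java analogue of the TES interpolate3D function, with gl_TessCoord
    // passed explicitly as barycentric weights (u, v, w), u + v + w == 1.
    static float[] interpolate3D(float u, float v, float w,
                                 float[] v0, float[] v1, float[] v2) {
        return new float[] {
            u * v0[0] + v * v1[0] + w * v2[0],
            u * v0[1] + v * v1[1] + w * v2[1],
            u * v0[2] + v * v1[2] + w * v2[2]
        };
    }

    public static void main(String[] args) {
        float[] a = {0f, 0f, 0f}, b = {3f, 0f, 0f}, c = {0f, 3f, 0f};
        // at a corner of the patch the result is that patch vertex exactly
        float[] corner = interpolate3D(1f, 0f, 0f, a, b, c);
        // at weights (1/3, 1/3, 1/3) the result is the triangle's centroid
        float[] mid = interpolate3D(1f/3, 1f/3, 1f/3, a, b, c);
        System.out.println(corner[0] + " / " + mid[0] + ", " + mid[1]);
    }
}
```

The same weighted sum applied to 2D texture coordinates gives the interpolate2D behavior of Figure 6.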

At this point, we can assign the value of the calculated vertex position to the gl_Position output variable, and the pyramid and sphere models would be displayed correctly with additional surface detail due to the newly generated vertices. However, the additional detail would only be noticeable when rendering wireframes, and the resulting surface would be the same as the original model: a smooth surface. Figure 7 shows the geometry of a pyramid and sphere before and after tessellation. To complete the implementation, we must apply a height map to the newly generated vertices to protrude the surface.

Figure 7: Renders of: Pyramid without tessellation (top-left), Pyramid with tessellation (top-right), Sphere without tessellation (bottom-left), Sphere with tessellation (bottom-right)


3.4 Tessellated height mapping

Applying a height map to newly generated vertices is the final step of this implementation of additional surface detail. To apply the height map, we use the OpenGL texture function to get the red color value of a texture at the specified texture coordinate. The return value of this function will represent the magnitude that the current vertex will be displaced along its normal. In this implementation, we are only using the red value of the RGB value, but any weighting of the color values will work. We also define a displacementFactor variable to manually control the magnitude of displacement. Repeating the process we followed for calculating the newly generated vertex’s position, we must calculate its normal. To determine the vertex’s normal, we take the normal vectors of the patch vertices and interpolate them with the position of the current generated vertex using the interpolate3D function and normalize the result. Figure 8 shows these steps in the TES.

in vec3 norm_es_in[];

vec3 norm = interpolate3D(norm_es_in[0], norm_es_in[1], norm_es_in[2]);
norm = normalize(norm);

Figure 8: Calculate tessellated normal vectors

After calculating the vertex's normal and a displacement value, we can displace the vertex along its normal by performing vector addition to modify the vertex position [11, 12]. The displacementFactor variable further controls the magnitude of the vertex displacement, as follows:

position += norm * displacement * displacementFactor;
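The displacement statement above can be expanded into an equivalent CPU-side sketch (Java, with illustrative names) to show the arithmetic explicitly:

```java
public class DisplaceVertex {
    // Java sketch of: position += norm * displacement * displacementFactor;
    // displaces a vertex along its (unit) normal by the scaled height value.
    static float[] displace(float[] position, float[] norm,
                            float displacement, float displacementFactor) {
        float s = displacement * displacementFactor;
        return new float[] {
            position[0] + norm[0] * s,
            position[1] + norm[1] * s,
            position[2] + norm[2] * s
        };
    }

    public static void main(String[] args) {
        float[] p = {0f, 1f, 0f};                 // vertex at the top of a model
        float[] n = {0f, 1f, 0f};                 // its unit normal
        float[] out = displace(p, n, 0.75f, 0.5f); // height sample 0.75, factor 0.5
        System.out.println(out[1]); // prints 1.375
    }
}
```

Because the normal is normalized in the TES, the product displacement * displacementFactor is exactly the distance the vertex moves.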


Finally, we multiply the height-mapped vertex position by the model-view-projection matrix and assign the value to the gl_Position output variable. The result is that the pyramid and sphere models are rendered with additional surface detail, as shown:

gl_Position = mvp * vec4(position, 1);

3.5 Differences between pyramid and sphere implementations

Both the pyramid and sphere models use the same vertex shader and fragment shader in this implementation. However, there is one small difference between the implementations of these two models. The pyramid model must ensure that the edges between the faces of the pyramid remain intact. If this precaution is not taken, a graphical issue may occur where the pyramid appears to be “exploded” outwards due to the height mapping, as is evident in Figure 9.

Figure 9: Exploded pyramid

This behavior is observed in the pyramid and not the sphere because the pyramid is a flat shape and the sphere is a round shape. To correct this behavior, each pyramid vertex must be displaced a negative amount along its normal to re-join its edges. To accomplish this, we add a final step to the pyramid TCS implementation to calculate the magnitude of displacement from height mapping for this patch before texture coordinate interpolation is performed in the TES. This is performed through the statements:

out float displacementOffset[];

displacementOffset[0] = texture(heightTexture, tcs_out[0].xy).r;

This displacement offset value is passed to the TES and used in the equation to displace the vertex along its normal, as follows:

in float displacementOffset[];

float displacement = textureDisplacement - displacementOffset[0];

This value must be stored in an array because the TCS and TES interfaces only allow arrays to be passed between them.
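The effect of this correction can be illustrated with a small Java sketch (hypothetical names mirroring the GLSL variables above): wherever the interpolated height sample equals the sample taken at the patch's control point, the net displacement becomes zero, so vertices on matching face edges are pulled back together:

```java
public class EdgeSeam {
    // Mirrors: float displacement = textureDisplacement - displacementOffset[0];
    static float correctedDisplacement(float textureDisplacement,
                                       float displacementOffset) {
        return textureDisplacement - displacementOffset;
    }

    public static void main(String[] args) {
        float cornerSample = 0.5f; // red value sampled at the patch control point
        // at the patch corner the interpolated sample equals the corner sample,
        // so the corrected displacement is zero and the edge stays joined
        System.out.println(correctedDisplacement(cornerSample, cornerSample)); // prints 0.0
        // interior vertices are still displaced, relative to the corner
        System.out.println(correctedDisplacement(0.75f, cornerSample)); // prints 0.25
    }
}
```

In effect, each patch's displacement is measured relative to its corner rather than in absolute terms, which is what keeps adjacent pyramid faces from separating.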


4. RESULTS

The results of this implementation are shown through rendered pictures and runtime system performance data. The results for this implementation can be reproduced and re-tested by running the code found in Appendix A with a similar system as detailed in Appendix B.

4.1 Rendered results

The results for this implementation were gathered using four different color textures and height maps for the pyramid and sphere models. Some textures are better suited to be rendered on a pyramid instead of a sphere, and vice versa. These results are rendered without lighting in the scene; lighting would improve the quality of the painted renders, making the generated surface detail as visible as it is in the wireframe renders. The pyramid renders were captured using a pyramid with 18 vertices and a tessellation level of 256. Each collection of pyramid renders consists of a zoomed-in render in wireframe mode, a zoomed-out render in painted mode, the texture used for texture mapping, and the texture used for height mapping.

The sphere renders were captured using a sphere generated with a precision value of 12 and a tessellation level of 32. Each collection of sphere renders consists of a render in wireframe mode, a render in painted mode, the texture used for texture mapping, and the texture used for height mapping.


Figure 10: Pyramid using brick texture and height map

Figure 10 shows renders of a brick texture and height map on the pyramid. This texture makes it easy to see the additional surface detail in wireframe mode, but difficult to see in painted mode. The brick texture is a repeating texture, so it applies well to the pyramid model.


Figure 11: Pyramid using brick texture and ripple height map

Figure 11 shows renders of a brick texture and a ripple height map on the pyramid. The ripple texture makes it easy to see the additional surface detail in wireframe mode, but difficult to see in painted mode. The ripple texture is not a repeating texture, so it does not apply very well to the pyramid model, as evidenced by the separation of top face edges. These renders prove the implementation works with different textures for painting and height mapping for the pyramid.


Figure 12: Pyramid using checkers texture and height map

Figure 12 shows renders of a checkered texture and height map on the pyramid. The checkered texture makes it easy to see the additional surface detail in wireframe mode, though it is hard to tell what the render depicts; this is clearer in painted mode. The checkered texture is repeating, but because white squares sit next to black squares, the pyramid edges become mismatched after applying the height map.


Figure 13: Pyramid using earth texture and height map

Figure 13 shows renders of an earth texture and height map on the pyramid. The earth texture makes it easy to see the additional surface detail in both wireframe mode and painted mode. The earth texture is not repeating, but still works well on the pyramid model.


Figure 14: Sphere using brick texture and height map

Figure 14 shows renders of a brick texture and height map on the sphere. This texture makes the additional surface detail adequately noticeable in wireframe mode and painted mode. The brick texture is a repeating texture, so it applies well to the sphere model.


Figure 15: Sphere using brick texture and ripple height map

Figure 15 shows renders of a brick texture and a ripple height map on the sphere. The ripple texture makes it easy to see the additional surface detail in wireframe mode, but difficult to see in painted mode. The ripple texture is not a repeating texture, but it applies well to the sphere model.

These renders prove the implementation works with different textures for painting and height mapping for the sphere.


Figure 16: Sphere using checkers texture and height map

Figure 16 shows renders of a checkered texture and height map on the sphere. The checkered texture makes it easy to see the additional surface detail in wireframe mode and in painted mode.

The checkered texture is repeating and applies well to the sphere model.


Figure 17: Sphere using earth texture and height map

Figure 17 shows renders of an earth texture and height map on the sphere. The earth texture makes it easy to see the additional surface detail in both wireframe mode and painted mode. The earth texture is not repeating but works very well on the sphere model. This was the best application of adding surface detail using tessellation, most likely because the render represents a real-life globe object.


4.2 System performance

To analyze the system performance of this implementation, we can inspect the system hardware usage of the Java runtime process. The system performance results were collected when rendering a sphere because we can use the Sphere class’s precision parameter and the TCS’s tessellation levels to easily swap between no tessellation and high tessellation. The triangle count is included to show the similarity in rendered triangle counts using varying levels of precision and tessellation. The formulas used to determine the total number of rendered triangles are shown in Figure 18 [13]. The bolded values in the tables bring attention to the highest value in each column. These system performance results were gathered using the system configuration detailed in Appendix B.

private int calculateNumberOfTriangles(int n) {
    if (n < 0) return 1;
    if (n == 0) return 0;
    return ((2 * n - 2) * 3) + calculateNumberOfTriangles(n - 2);
}

Sphere sphere = new Sphere(256);
int tessLevel = 1;
int numTessellatedTriangles = calculateNumberOfTriangles(tessLevel);
int numSphereTriangles = sphere.getNumVertices() / 3;
int totalTriangles = numTessellatedTriangles * numSphereTriangles;

Figure 18: Calculations for getting the count of rendered triangles
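Under the assumption that the Sphere class produces precision × precision × 6 vertices (one vertex per index, as in a drawArrays-based sphere), the Triangle Count column of the tables can be reproduced from the Figure 18 formulas:

```java
// Reproduces the Triangle Count column of Tables 1-4 from the Figure 18 formulas.
// Assumption: a sphere of precision p yields p * p * 6 vertices (one per index),
// so it contributes (p * p * 6) / 3 base triangles before tessellation.
public class TriangleCounts {
    // Recursive count of triangles produced per patch at a given tessellation level.
    static int tessellatedTriangles(int n) {
        if (n < 0) return 1;
        if (n == 0) return 0;
        return ((2 * n - 2) * 3) + tessellatedTriangles(n - 2);
    }

    static int totalTriangles(int precision, int tessLevel) {
        int sphereTriangles = (precision * precision * 6) / 3;
        return tessellatedTriangles(tessLevel) * sphereTriangles;
    }

    public static void main(String[] args) {
        System.out.println(totalTriangles(256, 1));  // 131072
        System.out.println(totalTriangles(12, 18));  // 139968
        System.out.println(totalTriangles(12, 32));  // 442368
    }
}
```

For example, precision 12 gives 288 base triangles, and tessellation level 18 produces 486 triangles per patch, so 288 × 486 = 139,968, matching the second row of each table.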

Table 1 shows the performance results for an earth texture and height map, wireframe polygon mode (GL_LINE), with no model animation. Table 2 shows the performance results for an earth texture and height map, wireframe polygon mode, with the model in constant rotation about its Y-axis. Comparing these two tables, there are no noticeable differences in performance when applying a rotation before every call to glDrawArrays. However, the results in each table show similarities and differences between comparable triangle counts for varying levels of precision and tessellation. With comparable triangle counts, the CPU usage is equal for the precision and tessellation runs. The difference appears in RAM usage and GPU usage: both are lower in all cases for the tessellation runs compared to the precision runs.

Table 1: Performance results with wireframe rendering and no animation

Precision   Tessellation Level   Triangle Count   CPU Usage (%)   RAM Usage (MB)   GPU Usage (%)
256         1                    131,072          0.5             270              8.2
12          18                   139,968          0.5             257              7.8
512         1                    524,288          0.5             314              19.1
12          32                   442,368          0.5             257              12.0
1024        1                    2,097,152        0.5             768              58.6
12          64                   1,769,472        0.5             257              26.4
12          256                  28,311,552       0.5             257              27.1

Table 2: Performance results with wireframe rendering and constant Y-axis rotation

Precision   Tessellation Level   Triangle Count   CPU Usage (%)   RAM Usage (MB)   GPU Usage (%)
256         1                    131,072          0.5             270              8.4
12          18                   139,968          0.5             257              7.8
512         1                    524,288          0.5             314              19.1
12          32                   442,368          0.5             257              12.3
1024        1                    2,097,152        0.5             768              58.9
12          64                   1,769,472        0.5             257              26.6
12          256                  28,311,552       0.5             257              27.0

Table 3 shows the performance results for an earth texture and height map, painted polygon mode (GL_FILL), with no model animation. Table 4 shows the performance results for an earth texture and height map, painted polygon mode, with the model in constant rotation about its Y-axis. Comparing these two tables, there are no noticeable differences in performance when applying a rotation before every call to glDrawArrays. However, the results in each table show similarities and differences between comparable triangle counts for varying levels of precision and tessellation. With comparable triangle counts, the CPU usage is equal for the precision and tessellation runs. RAM usage and GPU usage are both higher in all cases for the precision runs over the tessellation runs. When comparing these two tables to Table 1 and Table 2, we can see the GPU performance improvement when rendering painted polygons instead of wireframe polygons.

Table 3: Performance results with painted rendering and no animation

Precision   Tessellation Level   Triangle Count   CPU Usage (%)   RAM Usage (MB)   GPU Usage (%)
256         1                    131,072          0.5             270              8.2
12          18                   139,968          0.5             257              4.0
512         1                    524,288          0.5             314              19.1
12          32                   442,368          0.5             257              5.4
1024        1                    2,097,152        0.5             768              58.6
12          64                   1,769,472        0.5             257              10.5
12          256                  28,311,552       0.5             257              10.7

Table 4: Performance results with painted rendering and constant Y-axis rotation

Precision   Tessellation Level   Triangle Count   CPU Usage (%)   RAM Usage (MB)   GPU Usage (%)
256         1                    131,072          0.5             270              8.2
12          18                   139,968          0.5             257              4.0
512         1                    524,288          0.5             314              19.1
12          32                   442,368          0.5             257              5.5
1024        1                    2,097,152        0.5             768              58.6
12          64                   1,769,472        0.5             257              10.5
12          256                  28,311,552       0.5             257              10.8


5. CONCLUSIONS

This project describes how to use the tessellation shader to implement additional surface detail on simple surfaces. To accomplish this, a Java/JOGL program was written with a graphics pipeline consisting of a vertex shader, a TCS, a TES, and a fragment shader for a pyramid and a sphere.

The strengths of this implementation are that it may be applied universally to pyramids and spheres to add additional surface detail through tessellation. The fact that the pyramid model was constructed with only 18 vertices and can be tessellated to display a detailed height map shows the success of the implementation for a pyramid. The same strategy, excluding the explosion-effect fix needed when tessellating the pyramid, can be applied to the sphere, which shows the success of the implementation for a sphere as well. It is also important to note that the success is noticeable without lighting in the scene, mainly because of rendering in wireframe mode. Rendering in wireframe mode allows us to see the exact geometry of the model and the additional detail added according to the textures used. In addition to the visual success of this implementation, the system performance results show less RAM usage and less GPU usage when dynamically tessellating additional detail compared to preemptively defining the vertices for a highly detailed model.

The main weakness of this implementation is that after configuring the graphics project as detailed, the texture coordinates and tessellation shader programs may still need to be modified for the best result. This weakness is illustrated by the disjointed edges of the pyramid model after applying the ripple height map. This could potentially be fixed by modifying the texture coordinates of the pyramid to preserve the original model’s edges after applying the height map.

In addition, certain implementations might want different values for the inner and outer tessellation levels in the TCS to achieve a different result. Since texture coordinates and tessellation levels are required to be defined, modifying the pyramid’s texture coordinates or the sphere’s procedural construction of texture coordinates may be necessary to achieve the desired rendered result.


6. FUTURE WORK

This topic can be further explored by applying different graphics programming techniques. The most obvious way to prove the efficacy of this implementation would be to add lighting and shadows to the graphics program. Adding lighting and casting shadows would show that the models’ modified geometry is reflected in their shadows, particularly along the displaced edges. Applying lighting and shadows would also allow us to forego rendering in wireframe mode and display the painted model, as most finished graphics products do. In addition to lighting and shadows, the implementation can be proven with other simple shapes and models, such as a cube or an imported model developed in Blender or Maya.

This implementation has an issue of edges potentially becoming disjointed on the pyramid model, as evidenced in Figure 9 and Figure 11. An elegant solution to preserve edges might be to have the TES identify whether a vertex is an original model vertex or lies on a direct line between two original model vertices, and, if so, skip height mapping for that vertex [14]. This could solve the disjointed-edges issue but might introduce an undesired result depending on the height map. This issue could be tested with other flat shapes such as cuboids to explore whether a universal solution exists.
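The proposed check can be sketched against the TES input gl_TessCoord, which holds the barycentric coordinates of each generated vertex within its patch. The epsilon value below is an assumption:

```java
// Sketch of the proposed edge-preservation test. In the TES, gl_TessCoord holds the
// barycentric coordinates (u, v, w) of the generated vertex within its patch: an
// original corner has one coordinate equal to 1, and any vertex on a direct line
// between two corners has one coordinate equal to 0. EPS is an assumed tolerance
// guarding against floating-point error.
public class EdgeTest {
    static final float EPS = 1e-5f;

    // True if the vertex is a patch corner or lies on a line between two corners.
    static boolean onPatchEdge(float u, float v, float w) {
        return u < EPS || v < EPS || w < EPS;
    }

    // Apply height mapping only to interior vertices, leaving model edges intact.
    static float displacementScale(float u, float v, float w) {
        return onPatchEdge(u, v, w) ? 0f : 1f;
    }
}
```

Because the pyramid's six patches coincide with its faces, suppressing displacement on patch edges would also preserve the model's silhouette edges; a smooth falloff near the edge, rather than the hard cutoff shown here, might avoid visible creases.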

This implementation can also be expanded by applying more performance-enhancing techniques. Tessellation shaders can dynamically increase or decrease the level of detail (LOD) by calculating a value for the tessellation levels instead of defining a static value. The LOD is usually calculated based on the distance between the camera and the model. The closer the camera is to the object, the higher the tessellation level should be set to display additional surface detail. In contrast, the farther the camera is from the object, the lower the tessellation level should be, as additional detail is hard to perceive at greater distances. Dynamically lowering tessellation levels in this implementation keeps the height map applied to the model and results in better system performance.
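The distance-based LOD idea can be sketched as follows; all constants (near/far distances, level bounds) are illustrative assumptions rather than values from this project:

```java
// Sketch of distance-based level of detail for the TCS tessellation levels.
// NEAR/FAR and MIN_LEVEL/MAX_LEVEL are illustrative assumptions.
public class TessellationLod {
    static final float NEAR = 2f, FAR = 50f;  // camera distances bounding the LOD ramp
    static final int MIN_LEVEL = 1, MAX_LEVEL = 64;

    // Linearly interpolate the tessellation level: highest when the camera is close,
    // lowest when it is far away, clamped to [MIN_LEVEL, MAX_LEVEL].
    static int levelForDistance(float distance) {
        float t = (distance - NEAR) / (FAR - NEAR);  // 0 at NEAR, 1 at FAR
        t = Math.max(0f, Math.min(1f, t));
        return Math.round(MAX_LEVEL + t * (MIN_LEVEL - MAX_LEVEL));
    }
}
```

In the shader, the equivalent computation would run in the TCS using the camera-to-model distance, writing the result into gl_TessLevelInner and gl_TessLevelOuter instead of a static value.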

Lastly, this tessellated height map implementation can be altered or expanded to apply different visual effects. For example, instead of applying a static height map based on a texture, a height map that has a time component and is passed through a trigonometric equation can produce aquatic waves [15]. Also, when applying noise and a time component, more realistic-looking graphics can be rendered, such as a continuously morphing fiery explosion [16]. Future work to enhance this implementation with tessellation performance boosts such as LOD, along with more creative applications, can produce endless results.
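A minimal sketch of such a time-varying height function follows; the amplitude, frequency, and speed constants are assumptions for illustration:

```java
// Sketch of a time-varying height map: instead of sampling a static texture, the TES
// could evaluate a trigonometric wave at each generated vertex's texture coordinate.
// AMPLITUDE, FREQUENCY, and SPEED are illustrative assumptions.
public class WaveHeight {
    static final float AMPLITUDE = 0.05f, FREQUENCY = 20f, SPEED = 2f;

    // Height at texture coordinate (s, t) at the given time, producing moving ripples.
    static float height(float s, float t, float time) {
        return AMPLITUDE * (float) Math.sin(FREQUENCY * (s + t) + SPEED * time);
    }
}
```

Passing the elapsed time as a uniform and substituting this function for the height-texture sample would animate the displaced surface without changing any geometry on the CPU.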


Appendix A. Source Code src/code/Code.java – JOGL program package code; import static com.jogamp.opengl.GL.GL_ARRAY_BUFFER; import static com.jogamp.opengl.GL.GL_CCW; import static com.jogamp.opengl.GL.GL_COLOR_BUFFER_BIT; import static com.jogamp.opengl.GL.GL_CULL_FACE; import static com.jogamp.opengl.GL.GL_DEPTH_BUFFER_BIT; import static com.jogamp.opengl.GL.GL_DEPTH_TEST; import static com.jogamp.opengl.GL.GL_FLOAT; import static com.jogamp.opengl.GL.GL_FRONT_AND_BACK; import static com.jogamp.opengl.GL.GL_LEQUAL; import static com.jogamp.opengl.GL.GL_STATIC_DRAW; import static com.jogamp.opengl.GL.GL_TEXTURE0; import static com.jogamp.opengl.GL.GL_TEXTURE1; import static com.jogamp.opengl.GL.GL_TEXTURE_2D; import static com.jogamp.opengl.GL2GL3.GL_FILL; import static com.jogamp.opengl.GL2GL3.GL_LINE; import static com.jogamp.opengl.GL3ES3.GL_PATCHES; import static com.jogamp.opengl.GL3ES3.GL_PATCH_VERTICES; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import java.awt.event.MouseEvent; import java.awt.event.MouseListener; import java.awt.event.MouseMotionListener; import java.nio.FloatBuffer; import javax.swing.ActionMap; import javax.swing.InputMap; import javax.swing.JComponent; import javax.swing.JFrame; import javax.swing.KeyStroke; import javax.swing.WindowConstants; import org.joml.Matrix4f; import org.joml.Vector3f; import com.jogamp.common.nio.Buffers; import com.jogamp.opengl.GL4; import com.jogamp.opengl.GLAutoDrawable; import com.jogamp.opengl.GLContext; import com.jogamp.opengl.GLEventListener; import com.jogamp.opengl.awt.GLCanvas; import com.jogamp.opengl.util.Animator; import eventcommands.cmdCloseWindow; import eventcommands.cmdCycleTexture; import eventcommands.cmdMoveCamera; import eventcommands.cmdRotateCamera; import eventcommands.cmdSwitchModel; import eventcommands.cmdToggleDrawMode; import models.Pyramid; import models.Sphere;


@SuppressWarnings("serial") public class Code extends JFrame implements GLEventListener, ActionListener, MouseListener, MouseMotionListener {

// OpenGL canvas private GLCanvas myCanvas;

// Custom objects private GLSLOptions options = new GLSLOptions(); private Camera camera = new Camera(); private Mouse mouse = new Mouse();

// Display variables private int vbo[] = new int[6]; private FloatBuffer mvpBuffer = Buffers.newDirectFloatBuffer(16); private Vector3f modelPosition; private Matrix4f pMat = new Matrix4f(); // projection matrix private Matrix4f vMat = new Matrix4f(); // view matrix private Matrix4f mMat = new Matrix4f(); // model matrix private Matrix4f mvpMat = new Matrix4f(); // model-view-projection matrix private int mvpLoc; private float aspect;

// Models private Pyramid pyramid; private Sphere sphere;

public static void main(String[] args) { new Code(); }

public Code() { initializeFrame(); }

// Initialize window and event bindings private void initializeFrame() {

// Set window attributes this.setTitle("Tessellation - Final"); this.setSize(800, 800); myCanvas = new GLCanvas(); myCanvas.addGLEventListener(this); this.add(myCanvas); this.setVisible(true); this.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);

// Set keybindings /* * W - move camera forward * S - move camera backward * A - move camera left * D - move camera right * 1 - Toggle between pyramid and sphere * 2 - Cycle through textures (see init method) * 3 - Toggle between OpenGL lines or fill draw mode * ctrl+W - exit */ KeyStroke wKey = KeyStroke.getKeyStroke('w'); KeyStroke sKey = KeyStroke.getKeyStroke('s'); KeyStroke aKey = KeyStroke.getKeyStroke('a'); KeyStroke dKey = KeyStroke.getKeyStroke('d'); KeyStroke ctrlWKey = KeyStroke.getKeyStroke("control W");


KeyStroke oneKey = KeyStroke.getKeyStroke("1"); KeyStroke twoKey = KeyStroke.getKeyStroke("2"); KeyStroke threeKey = KeyStroke.getKeyStroke("3");

int inputMapType = JComponent.WHEN_IN_FOCUSED_WINDOW; InputMap iMap = this.getRootPane().getInputMap(inputMapType); iMap.put(wKey, "cmdMoveCameraForward"); iMap.put(sKey, "cmdMoveCameraBackward"); iMap.put(aKey, "cmdMoveCameraLeft"); iMap.put(dKey, "cmdMoveCameraRight"); iMap.put(ctrlWKey, "cmdCloseWindow"); iMap.put(oneKey, "cmdSwitchModel"); iMap.put(twoKey, "cmdCycleTexture"); iMap.put(threeKey, "cmdToggleDrawMode"); ActionMap aMap = this.getRootPane().getActionMap(); aMap.put("cmdMoveCameraForward", new cmdMoveCamera( camera, "N", -1 )); aMap.put("cmdMoveCameraBackward", new cmdMoveCamera( camera, "N", 1 )); aMap.put("cmdMoveCameraLeft", new cmdMoveCamera( camera, "U", -1 )); aMap.put("cmdMoveCameraRight", new cmdMoveCamera( camera, "U", 1 )); aMap.put("cmdCloseWindow", new cmdCloseWindow(this)); aMap.put("cmdSwitchModel", new cmdSwitchModel(options)); aMap.put("cmdCycleTexture", new cmdCycleTexture(options)); aMap.put("cmdToggleDrawMode", new cmdToggleDrawMode(options));

// Set mouse and event listeners mouse = new Mouse(); myCanvas.addMouseListener(this); myCanvas.addMouseMotionListener(this);

// Set animator final Animator animator = new Animator(myCanvas); animator.start(); }

// Define shaders, textures, setup projection matrix, call setupModels public void init(GLAutoDrawable drawable) { // Create rendering programs options.setPyramidRenderingProgram(Utils.createShaderProgram( "src/code/shaders/generic_vertShader.glsl", "src/code/shaders/pyramid_tessCShader.glsl", "src/code/shaders/pyramid_tessEShader.glsl", "src/code/shaders/generic_fragShader.glsl")); options.setSphereRenderingProgram(Utils.createShaderProgram( "src/code/shaders/generic_vertShader.glsl", "src/code/shaders/sphere_tessCShader.glsl", "src/code/shaders/sphere_tessEShader.glsl", "src/code/shaders/generic_fragShader.glsl"));

// Create color and height map textures. Arrays should contain equal number of elements. options.setColorsTextureArray(new int[] { Utils.loadTexture("assets/earth.jpg"), Utils.loadTexture("assets/brick.jpg"), Utils.loadTexture("assets/checker.png"), Utils.loadTexture("assets/dog.jpg"), Utils.loadTexture("assets/colors.jpg"), Utils.loadTexture("assets/face_sphere_lowres.jpg") }); options.setHeightsTextureArray(new int[] { Utils.loadTexture("assets/earth_displacement.png"), Utils.loadTexture("assets/brick_displacement.png"), Utils.loadTexture("assets/checker.png"), Utils.loadTexture("assets/dog_displacement.png"), Utils.loadTexture("assets/halos_displacement.png"), Utils.loadTexture("assets/face_sphere_lowres.jpg") });

// Set up projection matrix setupProjectionMatrix();


// Instantiate models setupModels(); }

// Set up projection matrix private void setupProjectionMatrix() { aspect = (float) myCanvas.getWidth() / (float) myCanvas.getHeight(); pMat.identity().setPerspective((float) Math.toRadians(60.0f), aspect, 0.1f, 1000.0f); }

// Set up pyramid and sphere models private void setupModels() { GL4 gl = (GL4) GLContext.getCurrentGL(); gl.glGenBuffers(vbo.length, vbo, 0);

// Model position modelPosition = new Vector3f(0.0f, 0.0f, 0.0f);

// Pyramid model pyramid = new Pyramid(1f);

gl.glBindBuffer(GL_ARRAY_BUFFER, vbo[0]); FloatBuffer pyrBuf = Buffers.newDirectFloatBuffer(pyramid.getVertexXyzValues()); gl.glBufferData(GL_ARRAY_BUFFER, pyrBuf.limit()*4, pyrBuf, GL_STATIC_DRAW);

gl.glBindBuffer(GL_ARRAY_BUFFER, vbo[1]); FloatBuffer pyrNormBuf = Buffers.newDirectFloatBuffer(pyramid.getNormalXyzValues()); gl.glBufferData(GL_ARRAY_BUFFER, pyrNormBuf.limit()*4, pyrNormBuf, GL_STATIC_DRAW);

gl.glBindBuffer(GL_ARRAY_BUFFER, vbo[2]); FloatBuffer texBuf = Buffers.newDirectFloatBuffer( pyramid.getTextureCoordinatesStValues()); gl.glBufferData(GL_ARRAY_BUFFER, texBuf.limit()*4, texBuf, GL_STATIC_DRAW);

// Sphere model sphere = new Sphere(12);

gl.glBindBuffer(GL_ARRAY_BUFFER, vbo[3]); FloatBuffer sphBuf = Buffers.newDirectFloatBuffer(sphere.getVertexXyzValues()); gl.glBufferData(GL_ARRAY_BUFFER, sphBuf.limit()*4, sphBuf, GL_STATIC_DRAW);

gl.glBindBuffer(GL_ARRAY_BUFFER, vbo[4]); FloatBuffer sphNormBuf = Buffers.newDirectFloatBuffer(sphere.getNormalXyzValues()); gl.glBufferData(GL_ARRAY_BUFFER, sphNormBuf.limit()*4, sphNormBuf, GL_STATIC_DRAW);

gl.glBindBuffer(GL_ARRAY_BUFFER, vbo[5]); FloatBuffer texBuf2 = Buffers.newDirectFloatBuffer(sphere.getTextureCoordinateStValues()); gl.glBufferData(GL_ARRAY_BUFFER, texBuf2.limit()*4, texBuf2, GL_STATIC_DRAW); }

// Canvas display method public void display(GLAutoDrawable drawable) { GL4 gl = (GL4) GLContext.getCurrentGL(); gl.glUseProgram(options.getRenderingProgram());

// GLSL uniforms mvpLoc = gl.glGetUniformLocation(options.getRenderingProgram(), "mvp");


// View matrix vMat.identity(); vMat.setTranslation(-camera.getPosition().x(), -camera.getPosition().y(), -camera.getPosition().z()); vMat.setRotationXYZ(camera.getRotationX(), camera.getRotationY(), camera.getRotationZ());

// Model matrix mMat.identity().setTranslation(modelPosition.x(), modelPosition.y(), modelPosition.z());

// Model-View-Projection matrix mvpMat.identity(); mvpMat.mul(pMat); mvpMat.mul(vMat); mvpMat.mul(mMat);

// Uniform location gl.glUniformMatrix4fv(mvpLoc, 1, false, mvpMat.get(mvpBuffer));

// Buffer location #0 - Vertices parameter gl.glBindBuffer(GL_ARRAY_BUFFER, (options.getShapeMode() == GLSLOptions.SHAPE_PYRAMID ? vbo[0] : vbo[3])); gl.glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0); gl.glEnableVertexAttribArray(0);

// Buffer location #1 – Normal vectors parameter gl.glBindBuffer(GL_ARRAY_BUFFER, (options.getShapeMode() == GLSLOptions.SHAPE_PYRAMID ? vbo[1] : vbo[4])); gl.glVertexAttribPointer(1, 3, GL_FLOAT, false, 0, 0); gl.glEnableVertexAttribArray(1);

// Buffer location #2 - Texture coordinates parameter gl.glBindBuffer(GL_ARRAY_BUFFER, (options.getShapeMode() == GLSLOptions.SHAPE_PYRAMID ? vbo[2] : vbo[5])); gl.glVertexAttribPointer(2, 2, GL_FLOAT, false, 0, 0); gl.glEnableVertexAttribArray(2);

// Texture - color gl.glActiveTexture(GL_TEXTURE0); gl.glBindTexture(GL_TEXTURE_2D, options.getColorTexture());

// Texture - height map gl.glActiveTexture(GL_TEXTURE1); gl.glBindTexture(GL_TEXTURE_2D, options.getHeightTexture());

// Depth & Color gl.glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear color and depth buffers gl.glEnable(GL_DEPTH_TEST); gl.glDepthFunc(GL_LEQUAL);

// Primitive faces gl.glEnable(GL_CULL_FACE); gl.glFrontFace(GL_CCW);

// Draw mode gl.glPatchParameteri(GL_PATCH_VERTICES, 3); gl.glPolygonMode(GL_FRONT_AND_BACK, (options.getDrawMode() == GLSLOptions.DRAW_LINES ? GL_LINE : GL_FILL)); gl.glDrawArrays(GL_PATCHES, 0, (options.getShapeMode() == GLSLOptions.SHAPE_PYRAMID ? pyramid.getNumVertices() : sphere.getNumVertices())); }

public void reshape(GLAutoDrawable drawable, int x, int y, int width, int height) { setupProjectionMatrix(); }

@Override public void mouseDragged(MouseEvent e) {

if( mouse.isPressing() == true ) {

float dragThreshold = Camera.ROTATETHREHOLD;

float currentX = (float) e.getPoint().getX(); float currentY = (float) e.getPoint().getY(); float deltaX = currentX - mouse.getX(); float deltaY = currentY - mouse.getY();

// rotate by deltaX if( deltaX < -dragThreshold ) { cmdRotateCamera rotateLeftCommand = new cmdRotateCamera( camera, "V", -1 ); // rotate left rotateLeftCommand.actionPerformed( new ActionEvent(this, ActionEvent.ACTION_PERFORMED, "rotateCameraLeft")); } else if( deltaX > dragThreshold ) { cmdRotateCamera rotateRightCommand = new cmdRotateCamera( camera, "V", 1 ); // rotate right rotateRightCommand.actionPerformed( new ActionEvent(this, ActionEvent.ACTION_PERFORMED, "rotateCameraRight")); }

// rotate by deltaY if( deltaY < -dragThreshold ) { cmdRotateCamera rotateUpCommand = new cmdRotateCamera( camera, "U", -1 ); // rotate up rotateUpCommand.actionPerformed( new ActionEvent(this, ActionEvent.ACTION_PERFORMED, "rotateCameraUp")); } else if( deltaY > dragThreshold ) { cmdRotateCamera rotateDownCommand = new cmdRotateCamera( camera, "U", 1 ); // rotate down rotateDownCommand.actionPerformed( new ActionEvent(this, ActionEvent.ACTION_PERFORMED, "rotateCameraDown")); }

// Set new mouse x,y mouse.setX(currentX); mouse.setY(currentY); } }

@Override public void mousePressed(MouseEvent e) {

float currentX = (float) e.getPoint().getX(); float currentY = (float) e.getPoint().getY();

mouse.setX(currentX); mouse.setY(currentY); mouse.setPressing(true); }


@Override public void mouseReleased(MouseEvent e) {

mouse.setPressing(false); }

@Override public void mouseMoved(MouseEvent e) { }

@Override public void mouseClicked(MouseEvent e) { }

@Override public void mouseEntered(MouseEvent e) { }

@Override public void mouseExited(MouseEvent e) { }

@Override public void actionPerformed(ActionEvent e) { }

public void dispose(GLAutoDrawable drawable) {}

}

src/code/GLSLOptions.java – Holds runtime variables package code;

/** * Holds values of variables that may change during runtime. * @author Matthew Anderson * */ public class GLSLOptions {

public static final int DRAW_LINES = 0; public static final int DRAW_FILL = 1;

public static final int SHAPE_PYRAMID = 0; public static final int SHAPE_SPHERE = 1;

private int drawMode; private int shapeMode;

private int pyramidRenderingProgram; private int sphereRenderingProgram;

private int[] colorsTextureArray; private int[] heightsTextureArray; private int textureIndex;

public GLSLOptions() {

shapeMode = SHAPE_PYRAMID; textureIndex = 0; drawMode = DRAW_LINES; }

public int getDrawMode() { return drawMode; }

public void setDrawMode(int drawMode) { this.drawMode = drawMode; }

public int getShapeMode() { return shapeMode; }

public void setShapeMode(int shapeMode) { this.shapeMode = shapeMode; }


public int getRenderingProgram() { return shapeMode == SHAPE_PYRAMID ? pyramidRenderingProgram : sphereRenderingProgram; }

public void setPyramidRenderingProgram(int pyramidRenderingProgram) { this.pyramidRenderingProgram = pyramidRenderingProgram; }

public void setSphereRenderingProgram(int sphereRenderingProgram) { this.sphereRenderingProgram = sphereRenderingProgram; }

public int getTextureCount() { return colorsTextureArray.length; }

public int getColorTexture() { return colorsTextureArray[textureIndex]; }

public void setColorsTextureArray(int[] colorsTextureArray) { this.colorsTextureArray = colorsTextureArray; }

public int getHeightTexture() { return heightsTextureArray[textureIndex]; }

public void setHeightsTextureArray(int[] heightsTextureArray) { this.heightsTextureArray = heightsTextureArray; }

public int getTextureIndex() { return textureIndex; }

public void setTextureIndex(int textureIndex) { this.textureIndex = textureIndex; } }

src/code/Camera.java – Simple move/rotate camera implementation package code; import org.joml.Vector3f; public class Camera {

public static float MOVEDELTA = 0.075f; public static float ROTATEDELTA = .045f; public static float ROTATETHREHOLD = 1f;

private Vector3f position; private float rotationX; private float rotationY; private float rotationZ;

public Camera() { position = new Vector3f(0.0f, 0.0f, 3.5f); rotationX = 0.0f; rotationY = 0.0f; rotationZ = 0.0f; }

public Vector3f getPosition() { return position; }

public void setPosition(Vector3f position) { this.position = position; }

public float getRotationX() { return rotationX; }

public void setRotationX(float rotationX) { this.rotationX = rotationX; }

public float getRotationY() { return rotationY; }

public void setRotationY(float rotationY) { this.rotationY = rotationY; }

public float getRotationZ() { return rotationZ; }

public void setRotationZ(float rotationZ) { this.rotationZ = rotationZ; }

}

src/code/Mouse.java – Simple click and drag mouse implementation package code; public class Mouse {

private boolean pressing; private float x; private float y;

public Mouse() { pressing = false; x = 0f; y = 0f; }

public boolean isPressing() { return pressing; }

public void setPressing(boolean pressing) { this.pressing = pressing; }

public float getX() { return x; }

public void setX(float x) { this.x = x; }

public float getY() { return y; }

public void setY(float y) { this.y = y; }

}

src/code/Utils.java – Helper class for JOGL calls

Class authored by V. Scott Gordon [2]. package code; import static com.jogamp.opengl.GL.GL_CLAMP_TO_EDGE; import static com.jogamp.opengl.GL.GL_LINEAR_MIPMAP_LINEAR; import static com.jogamp.opengl.GL.GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT; import static com.jogamp.opengl.GL.GL_NO_ERROR; import static com.jogamp.opengl.GL.GL_RGBA; import static com.jogamp.opengl.GL.GL_RGBA8; import static com.jogamp.opengl.GL.GL_TEXTURE_2D; import static com.jogamp.opengl.GL.GL_TEXTURE_CUBE_MAP; import static com.jogamp.opengl.GL.GL_TEXTURE_CUBE_MAP_NEGATIVE_X; import static com.jogamp.opengl.GL.GL_TEXTURE_CUBE_MAP_NEGATIVE_Y; import static com.jogamp.opengl.GL.GL_TEXTURE_CUBE_MAP_NEGATIVE_Z; import static com.jogamp.opengl.GL.GL_TEXTURE_CUBE_MAP_POSITIVE_X; import static com.jogamp.opengl.GL.GL_TEXTURE_CUBE_MAP_POSITIVE_Y; import static com.jogamp.opengl.GL.GL_TEXTURE_CUBE_MAP_POSITIVE_Z; import static com.jogamp.opengl.GL.GL_TEXTURE_MAX_ANISOTROPY_EXT; import static com.jogamp.opengl.GL.GL_TEXTURE_MIN_FILTER; import static com.jogamp.opengl.GL.GL_TEXTURE_WRAP_S; import static com.jogamp.opengl.GL.GL_TEXTURE_WRAP_T; import static com.jogamp.opengl.GL.GL_UNSIGNED_BYTE; import static com.jogamp.opengl.GL2ES2.GL_COMPILE_STATUS; import static com.jogamp.opengl.GL2ES2.GL_FRAGMENT_SHADER; import static com.jogamp.opengl.GL2ES2.GL_INFO_LOG_LENGTH; import static com.jogamp.opengl.GL2ES2.GL_LINK_STATUS; import static com.jogamp.opengl.GL2ES2.GL_TEXTURE_WRAP_R; import static com.jogamp.opengl.GL2ES2.GL_VERTEX_SHADER; import static com.jogamp.opengl.GL3ES3.GL_GEOMETRY_SHADER; import static com.jogamp.opengl.GL3ES3.GL_TESS_CONTROL_SHADER; import static com.jogamp.opengl.GL3ES3.GL_TESS_EVALUATION_SHADER; import java.awt.Graphics2D; import java.awt.color.ColorSpace; import java.awt.geom.AffineTransform; import java.awt.image.BufferedImage; import java.awt.image.ComponentColorModel; import java.awt.image.DataBuffer; import java.awt.image.DataBufferByte; import java.awt.image.Raster; import java.awt.image.WritableRaster; import java.io.File; import java.io.IOException; import java.nio.ByteBuffer; import java.util.Scanner; import java.util.Vector; import javax.imageio.ImageIO; import com.jogamp.common.nio.Buffers; import com.jogamp.opengl.GL; import com.jogamp.opengl.GL4; import com.jogamp.opengl.GLContext; import com.jogamp.opengl.glu.GLU; import com.jogamp.opengl.util.texture.Texture; import com.jogamp.opengl.util.texture.TextureIO;

public class Utils { public Utils() {}

public static int createShaderProgram(String vS, String tCS, String tES, String gS, String fS) { GL4 gl = (GL4) GLContext.getCurrentGL(); int vShader = prepareShader(GL_VERTEX_SHADER, vS); int tcShader = prepareShader(GL_TESS_CONTROL_SHADER, tCS); int teShader = prepareShader(GL_TESS_EVALUATION_SHADER, tES); int gShader = prepareShader(GL_GEOMETRY_SHADER, gS); int fShader = prepareShader(GL_FRAGMENT_SHADER, fS); int vtgfprogram = gl.glCreateProgram(); gl.glAttachShader(vtgfprogram, vShader); gl.glAttachShader(vtgfprogram, tcShader); gl.glAttachShader(vtgfprogram, teShader); gl.glAttachShader(vtgfprogram, gShader); gl.glAttachShader(vtgfprogram, fShader); finalizeProgram(vtgfprogram); return vtgfprogram; }

public static int createShaderProgram(String vS, String tCS, String tES, String fS) { GL4 gl = (GL4) GLContext.getCurrentGL(); int vShader = prepareShader(GL_VERTEX_SHADER, vS); int tcShader = prepareShader(GL_TESS_CONTROL_SHADER, tCS); int teShader = prepareShader(GL_TESS_EVALUATION_SHADER, tES); int fShader = prepareShader(GL_FRAGMENT_SHADER, fS); int vtfprogram = gl.glCreateProgram(); gl.glAttachShader(vtfprogram, vShader); gl.glAttachShader(vtfprogram, tcShader); gl.glAttachShader(vtfprogram, teShader); gl.glAttachShader(vtfprogram, fShader); finalizeProgram(vtfprogram); return vtfprogram; }

public static int createShaderProgram(String vS, String gS, String fS) { GL4 gl = (GL4) GLContext.getCurrentGL(); int vShader = prepareShader(GL_VERTEX_SHADER, vS); int gShader = prepareShader(GL_GEOMETRY_SHADER, gS); int fShader = prepareShader(GL_FRAGMENT_SHADER, fS); int vgfprogram = gl.glCreateProgram(); gl.glAttachShader(vgfprogram, vShader); gl.glAttachShader(vgfprogram, gShader); gl.glAttachShader(vgfprogram, fShader); finalizeProgram(vgfprogram); return vgfprogram; }

public static int createShaderProgram(String vS, String fS) { GL4 gl = (GL4) GLContext.getCurrentGL(); int vShader = prepareShader(GL_VERTEX_SHADER, vS); int fShader = prepareShader(GL_FRAGMENT_SHADER, fS); int vfprogram = gl.glCreateProgram(); gl.glAttachShader(vfprogram, vShader); gl.glAttachShader(vfprogram, fShader); finalizeProgram(vfprogram); return vfprogram; }

    public static int finalizeProgram(int sprogram) {
        GL4 gl = (GL4) GLContext.getCurrentGL();
        int[] linked = new int[1];
        gl.glLinkProgram(sprogram);
        checkOpenGLError();
        gl.glGetProgramiv(sprogram, GL_LINK_STATUS, linked, 0);
        if (linked[0] != 1) {
            System.out.println("linking failed");
            printProgramLog(sprogram);
        }
        return sprogram;
    }

    private static int prepareShader(int shaderTYPE, String shader) {
        GL4 gl = (GL4) GLContext.getCurrentGL();
        int[] shaderCompiled = new int[1];
        String shaderSource[] = readShaderSource(shader);
        int shaderRef = gl.glCreateShader(shaderTYPE);
        gl.glShaderSource(shaderRef, shaderSource.length, shaderSource, null, 0);
        gl.glCompileShader(shaderRef);
        checkOpenGLError();
        gl.glGetShaderiv(shaderRef, GL_COMPILE_STATUS, shaderCompiled, 0);
        if (shaderCompiled[0] != 1) {
            if (shaderTYPE == GL_VERTEX_SHADER) System.out.print("Vertex ");
            if (shaderTYPE == GL_TESS_CONTROL_SHADER) System.out.print("Tess Control ");
            if (shaderTYPE == GL_TESS_EVALUATION_SHADER) System.out.print("Tess Eval ");
            if (shaderTYPE == GL_GEOMETRY_SHADER) System.out.print("Geometry ");
            if (shaderTYPE == GL_FRAGMENT_SHADER) System.out.print("Fragment ");
            System.out.println("shader compilation error.");
            printShaderLog(shaderRef);
        }
        return shaderRef;
    }

    private static String[] readShaderSource(String filename) {
        Vector lines = new Vector();
        Scanner sc = null;
        String[] program;
        try {
            sc = new Scanner(new File(filename));
            while (sc.hasNext()) {
                lines.addElement(sc.nextLine());
            }
            program = new String[lines.size()];
            for (int i = 0; i < lines.size(); i++) {
                program[i] = (String) lines.elementAt(i) + "\n";
            }
        } catch (IOException e) {
            System.err.println("IOException reading file: " + e);
            return null;
        } finally {
            if (sc != null) {
                sc.close();
            }
        }
        return program;
    }

    private static void printShaderLog(int shader) {
        GL4 gl = (GL4) GLContext.getCurrentGL();
        int[] len = new int[1];
        int[] chWrittn = new int[1];
        byte[] log = null;

        // determine the length of the shader compilation log
        gl.glGetShaderiv(shader, GL_INFO_LOG_LENGTH, len, 0);
        if (len[0] > 0) {
            log = new byte[len[0]];
            gl.glGetShaderInfoLog(shader, len[0], chWrittn, 0, log, 0);
            System.out.println("Shader Info Log: ");
            for (int i = 0; i < log.length; i++) {
                System.out.print((char) log[i]);
            }
        }
    }

    public static void printProgramLog(int prog) {
        GL4 gl = (GL4) GLContext.getCurrentGL();
        int[] len = new int[1];
        int[] chWrittn = new int[1];
        byte[] log = null;

        // determine the length of the program linking log
        gl.glGetProgramiv(prog, GL_INFO_LOG_LENGTH, len, 0);
        if (len[0] > 0) {
            log = new byte[len[0]];
            gl.glGetProgramInfoLog(prog, len[0], chWrittn, 0, log, 0);
            System.out.println("Program Info Log: ");
            for (int i = 0; i < log.length; i++) {
                System.out.print((char) log[i]);
            }
        }
    }

    public static boolean checkOpenGLError() {
        GL4 gl = (GL4) GLContext.getCurrentGL();
        boolean foundError = false;
        GLU glu = new GLU();
        int glErr = gl.glGetError();
        while (glErr != GL_NO_ERROR) {
            System.err.println("glError: " + glu.gluErrorString(glErr));
            foundError = true;
            glErr = gl.glGetError();
        }
        return foundError;
    }

    public static int loadTexture(String textureFileName) {
        GL4 gl = (GL4) GLContext.getCurrentGL();
        int finalTextureRef;
        Texture tex = null;
        try {
            tex = TextureIO.newTexture(new File(textureFileName), false);
        } catch (Exception e) {
            e.printStackTrace();
        }
        finalTextureRef = tex.getTextureObject();

        // build a mipmap and use anisotropic filtering if available
        gl.glBindTexture(GL_TEXTURE_2D, finalTextureRef);
        gl.glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        gl.glGenerateMipmap(GL.GL_TEXTURE_2D);
        if (gl.isExtensionAvailable("GL_EXT_texture_filter_anisotropic")) {
            float anisoset[] = new float[1];
            gl.glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, anisoset, 0);
            gl.glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, anisoset[0]);
        }
        return finalTextureRef;
    }

    public static int loadTextureAWT(String textureFileName) {
        GL4 gl = (GL4) GLContext.getCurrentGL();
        BufferedImage textureImage = getBufferedImage(textureFileName);
        byte[] imgRGBA = getRGBAPixelData(textureImage, true);
        ByteBuffer rgbaBuffer = Buffers.newDirectByteBuffer(imgRGBA);

        int[] textureIDs = new int[1];              // array to hold generated texture IDs
        gl.glGenTextures(1, textureIDs, 0);
        int textureID = textureIDs[0];              // ID for the 0th texture object
        gl.glBindTexture(GL_TEXTURE_2D, textureID);

        // MIPMAP level, number of color components, image size,
        // border (ignored), pixel format, data type, buffer holding texture data
        gl.glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
                textureImage.getWidth(), textureImage.getHeight(), 0,
                GL_RGBA, GL_UNSIGNED_BYTE, rgbaBuffer);

        // build a mipmap and use anisotropic filtering if available
        gl.glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        gl.glGenerateMipmap(GL.GL_TEXTURE_2D);

        if (gl.isExtensionAvailable("GL_EXT_texture_filter_anisotropic")) {
            float anisoset[] = new float[1];
            gl.glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, anisoset, 0);
            gl.glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, anisoset[0]);
        }
        return textureID;
    }

    public static int loadCubeMap(String dirName) {
        GL4 gl = (GL4) GLContext.getCurrentGL();

        String topFile = dirName + File.separator + "yp.jpg";
        String leftFile = dirName + File.separator + "xn.jpg";
        String backFile = dirName + File.separator + "zn.jpg";
        String rightFile = dirName + File.separator + "xp.jpg";
        String frontFile = dirName + File.separator + "zp.jpg";
        String bottomFile = dirName + File.separator + "yn.jpg";

        BufferedImage topImage = getBufferedImage(topFile);
        BufferedImage leftImage = getBufferedImage(leftFile);
        BufferedImage frontImage = getBufferedImage(frontFile);
        BufferedImage rightImage = getBufferedImage(rightFile);
        BufferedImage backImage = getBufferedImage(backFile);
        BufferedImage bottomImage = getBufferedImage(bottomFile);

        byte[] topRGBA = getRGBAPixelData(topImage, false);
        byte[] leftRGBA = getRGBAPixelData(leftImage, false);
        byte[] frontRGBA = getRGBAPixelData(frontImage, false);
        byte[] rightRGBA = getRGBAPixelData(rightImage, false);
        byte[] backRGBA = getRGBAPixelData(backImage, false);
        byte[] bottomRGBA = getRGBAPixelData(bottomImage, false);

        ByteBuffer topWrappedRGBA = ByteBuffer.wrap(topRGBA);
        ByteBuffer leftWrappedRGBA = ByteBuffer.wrap(leftRGBA);
        ByteBuffer frontWrappedRGBA = ByteBuffer.wrap(frontRGBA);
        ByteBuffer rightWrappedRGBA = ByteBuffer.wrap(rightRGBA);
        ByteBuffer backWrappedRGBA = ByteBuffer.wrap(backRGBA);
        ByteBuffer bottomWrappedRGBA = ByteBuffer.wrap(bottomRGBA);

        int[] textureIDs = new int[1];
        gl.glGenTextures(1, textureIDs, 0);
        int textureID = textureIDs[0];

        checkOpenGLError();

        gl.glBindTexture(GL_TEXTURE_CUBE_MAP, textureID);
        gl.glTexStorage2D(GL_TEXTURE_CUBE_MAP, 1, GL_RGBA8, 1024, 1024);

        // attach the image texture to each face of the currently active OpenGL texture ID
        gl.glTexSubImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X, 0, 0, 0, 1024, 1024, GL_RGBA, GL.GL_UNSIGNED_BYTE, rightWrappedRGBA);
        gl.glTexSubImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_X, 0, 0, 0, 1024, 1024, GL_RGBA, GL.GL_UNSIGNED_BYTE, leftWrappedRGBA);
        gl.glTexSubImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, 0, 0, 0, 1024, 1024, GL_RGBA, GL.GL_UNSIGNED_BYTE, bottomWrappedRGBA);
        gl.glTexSubImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_Y, 0, 0, 0, 1024, 1024, GL_RGBA, GL.GL_UNSIGNED_BYTE, topWrappedRGBA);
        gl.glTexSubImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_Z, 0, 0, 0, 1024, 1024, GL_RGBA, GL.GL_UNSIGNED_BYTE, frontWrappedRGBA);
        gl.glTexSubImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_Z, 0, 0, 0, 1024, 1024, GL_RGBA, GL.GL_UNSIGNED_BYTE, backWrappedRGBA);

        gl.glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        gl.glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        gl.glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

        checkOpenGLError();
        return textureID;
    }

    private static BufferedImage getBufferedImage(String fileName) {
        BufferedImage img;
        try {
            img = ImageIO.read(new File(fileName));     // assumes GIF, JPG, PNG, BMP
        } catch (IOException e) {
            System.err.println("Error reading '" + fileName + "'");
            throw new RuntimeException(e);
        }
        return img;
    }

    private static byte[] getRGBAPixelData(BufferedImage img, boolean flip) {
        int height = img.getHeight(null);
        int width = img.getWidth(null);

        // create an (empty) BufferedImage with a suitable Raster and ColorModel
        WritableRaster raster = Raster.createInterleavedRaster(DataBuffer.TYPE_BYTE, width, height, 4, null);

        // convert to a color model that OpenGL understands
        ComponentColorModel colorModel = new ComponentColorModel(
                ColorSpace.getInstance(ColorSpace.CS_sRGB),
                new int[] { 8, 8, 8, 8 },   // bits
                true,                       // hasAlpha
                false,                      // isAlphaPreMultiplied
                ComponentColorModel.TRANSLUCENT,
                DataBuffer.TYPE_BYTE);

        BufferedImage newImage = new BufferedImage(colorModel, raster, false, null);
        Graphics2D g = newImage.createGraphics();

        if (flip) {     // flip the image vertically
            AffineTransform gt = new AffineTransform();
            gt.translate(0, height);
            gt.scale(1, -1d);
            g.transform(gt);
        }
        g.drawImage(img, null, null);   // draw the original image into the new image
        g.dispose();

        // now retrieve the underlying byte array from the raster data buffer
        DataBufferByte dataBuf = (DataBufferByte) raster.getDataBuffer();
        return dataBuf.getData();
    }


    // GOLD material - ambient, diffuse, specular, and shininess
    public static float[] goldAmbient() { return (new float[] { 0.2473f, 0.1995f, 0.0745f, 1 }); }
    public static float[] goldDiffuse() { return (new float[] { 0.7516f, 0.6065f, 0.2265f, 1 }); }
    public static float[] goldSpecular() { return (new float[] { 0.6283f, 0.5559f, 0.3661f, 1 }); }
    public static float goldShininess() { return 51.2f; }

    // SILVER material - ambient, diffuse, specular, and shininess
    public static float[] silverAmbient() { return (new float[] { 0.1923f, 0.1923f, 0.1923f, 1 }); }
    public static float[] silverDiffuse() { return (new float[] { 0.5075f, 0.5075f, 0.5075f, 1 }); }
    public static float[] silverSpecular() { return (new float[] { 0.5083f, 0.5083f, 0.5083f, 1 }); }
    public static float silverShininess() { return 51.2f; }

    // BRONZE material - ambient, diffuse, specular, and shininess
    public static float[] bronzeAmbient() { return (new float[] { 0.2125f, 0.1275f, 0.0540f, 1 }); }
    public static float[] bronzeDiffuse() { return (new float[] { 0.7140f, 0.4284f, 0.1814f, 1 }); }
    public static float[] bronzeSpecular() { return (new float[] { 0.3936f, 0.2719f, 0.1667f, 1 }); }
    public static float bronzeShininess() { return 25.6f; }
}

src/code/shaders/generic_vertShader.glsl – Vertex shader for pyramid and sphere

#version 430

layout (location = 0) in vec3 vertPos;
layout (location = 1) in vec3 vertNormal;
layout (location = 2) in vec2 texCoord;

out vec2 tc;
out vec3 norm_cs_in;

uniform mat4 mvp;
layout (binding = 0) uniform sampler2D colorTexture;
layout (binding = 1) uniform sampler2D heightTexture;

void main(void)
{
    gl_Position = vec4(vertPos, 1.0f);  // vertex position from Java
    norm_cs_in = vertNormal;            // normal vector for height map
    tc = texCoord;                      // texture coordinate from Java
}

src/code/shaders/pyramid_tessCShader.glsl – Pyramid tessellation control shader

#version 430

in vec2 tc[];
out vec2 tcs_out[];
in vec3 norm_cs_in[];
out vec3 norm_es_in[];
out float displacementOffset[];

uniform mat4 mvp;
layout (vertices = 3) out;
layout (binding = 0) uniform sampler2D colorTexture;
layout (binding = 1) uniform sampler2D heightTexture;

void main(void)
{
    if (gl_InvocationID == 0)
    {
        const int LEVEL = 256;

        // Triangles only use these inner/outer tessellation levels
        gl_TessLevelOuter[0] = LEVEL;
        gl_TessLevelOuter[1] = LEVEL;
        gl_TessLevelOuter[2] = LEVEL;
        gl_TessLevelInner[0] = LEVEL;
    }

    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;

    norm_es_in[gl_InvocationID] = norm_cs_in[gl_InvocationID];

    tcs_out[gl_InvocationID] = tc[gl_InvocationID];

    // Used by TES to fix the disjointed pyramid seams
    displacementOffset[0] = texture(heightTexture, tcs_out[0].xy).r;
}
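The displacementOffset written by this control shader is subtracted from each sampled height in the evaluation shader, so the patch's first corner receives zero net displacement and adjacent pyramid faces keep meeting at shared vertices. A minimal CPU-side sketch of that arithmetic, where the `height` function is a made-up stand-in for the `texture(heightTexture, ...).r` lookup and the class name is illustrative, not part of the project:

```java
public class SeamOffsetDemo {
    // Hypothetical height map standing in for texture(heightTexture, tc).r
    static float height(float s, float t) { return 0.25f * s + 0.5f * t; }

    // Mirrors the TES math: displace along the normal by (h(tc) - h(tcCorner)) * factor
    static float displacement(float s, float t, float s0, float t0, float factor) {
        return (height(s, t) - height(s0, t0)) * factor;
    }

    public static void main(String[] args) {
        float factor = 0.50f;   // displacementFactor used by the pyramid TES
        // At the reference corner itself the offset cancels the sampled height,
        // so that vertex does not move and adjacent faces stay joined.
        System.out.println(displacement(0.0f, 0.0f, 0.0f, 0.0f, factor)); // prints 0.0
    }
}
```

The same subtraction applied at any other tessellated vertex still varies with the height map, so interior surface detail is preserved while the seams stay closed.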

src/code/shaders/pyramid_tessEShader.glsl – Pyramid tessellation evaluation shader

#version 430

layout (triangles, equal_spacing, ccw) in;

uniform mat4 mvp;
layout (binding = 0) uniform sampler2D colorTexture;
layout (binding = 1) uniform sampler2D heightTexture;

in vec2 tcs_out[];
out vec2 tes_out;
in vec3 norm_es_in[];
in float displacementOffset[];

vec2 interpolate2D(vec2 v0, vec2 v1, vec2 v2);
vec3 interpolate3D(vec3 v0, vec3 v1, vec3 v2);

void main(void)
{
    /*
     * gl_in[]      -> The control points of the patch that the PG is generating vertices in.
     * tcs_out[]    -> The texture coordinates passed from the vertex shader to the TCS to the TES.
     * gl_TessCoord -> The barycentric coordinates of the current generated vertex (from the PG).
     *                 Use it to interpolate all attributes of the new vertex.
     * gl_Position  -> The newly calculated point that will be sent on to the FS.
     */

    // Interpolate 3D vertex positions
    vec3 a = gl_in[0].gl_Position.xyz;
    vec3 b = gl_in[1].gl_Position.xyz;
    vec3 c = gl_in[2].gl_Position.xyz;
    vec3 position = interpolate3D(a, b, c);

    // Interpolate 2D texture coordinates
    tes_out = interpolate2D(tcs_out[0], tcs_out[1], tcs_out[2]);

    // Displace the vertex along the normal
    float displacementFactor = 0.50f;
    float textureDisplacement = texture(heightTexture, tes_out.xy).r;
    float displacement = textureDisplacement - displacementOffset[0];

    vec3 norm = interpolate3D(norm_es_in[0], norm_es_in[1], norm_es_in[2]);
    norm = normalize(norm);

    position += norm * displacement * displacementFactor;

    // Set the new vertex position
    gl_Position = mvp * vec4(position, 1);
}

vec2 interpolate2D(vec2 v0, vec2 v1, vec2 v2)
{
    return vec2(gl_TessCoord.x) * v0 + vec2(gl_TessCoord.y) * v1 + vec2(gl_TessCoord.z) * v2;
}

vec3 interpolate3D(vec3 v0, vec3 v1, vec3 v2)
{
    return vec3(gl_TessCoord.x) * v0 + vec3(gl_TessCoord.y) * v1 + vec3(gl_TessCoord.z) * v2;
}
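The interpolate2D/interpolate3D helpers are plain barycentric weighting: gl_TessCoord supplies weights (u, v, w) with u + v + w = 1, and every attribute of a generated vertex is the weighted sum of the three patch corners. A small stand-alone Java sketch of interpolate3D (class and method names are illustrative only):

```java
public class BarycentricDemo {
    // Weighted sum of three 3D corners; u, v, w are the barycentric
    // gl_TessCoord components and are expected to sum to 1.
    static float[] interpolate3D(float[] v0, float[] v1, float[] v2,
                                 float u, float v, float w) {
        return new float[] {
            u * v0[0] + v * v1[0] + w * v2[0],
            u * v0[1] + v * v1[1] + w * v2[1],
            u * v0[2] + v * v1[2] + w * v2[2]
        };
    }

    public static void main(String[] args) {
        // The tessellation coordinate (1/3, 1/3, 1/3) yields (approximately,
        // up to float rounding) the centroid of the triangle.
        float[] p = interpolate3D(new float[] {0, 0, 0}, new float[] {3, 0, 0},
                                  new float[] {0, 3, 0}, 1f / 3, 1f / 3, 1f / 3);
        System.out.println(p[0] + " " + p[1] + " " + p[2]);
    }
}
```

The corner weights (1, 0, 0), (0, 1, 0), and (0, 0, 1) reproduce the patch corners exactly, which is why the tessellated mesh interpolates the original control points.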

src/code/shaders/sphere_tessCShader.glsl – Sphere tessellation control shader

#version 430

in vec2 tc[];
out vec2 tcs_out[];
in vec3 norm_cs_in[];
out vec3 norm_es_in[];

uniform mat4 mvp;
layout (vertices = 3) out;
layout (binding = 0) uniform sampler2D colorTexture;
layout (binding = 1) uniform sampler2D heightTexture;

void main(void)
{
    if (gl_InvocationID == 0)
    {
        const int LEVEL = 32;

        // Triangles only use these inner/outer tessellation levels
        gl_TessLevelOuter[0] = LEVEL;
        gl_TessLevelOuter[1] = LEVEL;
        gl_TessLevelOuter[2] = LEVEL;
        gl_TessLevelInner[0] = LEVEL;
    }

    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;

    norm_es_in[gl_InvocationID] = norm_cs_in[gl_InvocationID];

    tcs_out[gl_InvocationID] = tc[gl_InvocationID];
}

src/code/shaders/sphere_tessEShader.glsl – Sphere tessellation evaluation shader

#version 430

layout (triangles, equal_spacing, ccw) in;

uniform mat4 mvp;
layout (binding = 0) uniform sampler2D colorTexture;
layout (binding = 1) uniform sampler2D heightTexture;

in vec2 tcs_out[];
out vec2 tes_out;
in vec3 norm_es_in[];

vec2 interpolate2D(vec2 v0, vec2 v1, vec2 v2);
vec3 interpolate3D(vec3 v0, vec3 v1, vec3 v2);

#define PI 3.14159265359

void main(void)
{
    /*
     * gl_in[]      -> The control points of the patch that the PG is generating vertices in.
     * tcs_out[]    -> The texture coordinates passed from the vertex shader to the TCS to the TES.
     * gl_TessCoord -> The barycentric coordinates of the current generated vertex (from the PG).
     *                 Use it to interpolate all attributes of the new vertex.
     * gl_Position  -> The newly calculated point that will be sent on to the FS.
     */

    // Interpolate 3D vertex positions
    vec3 a = gl_in[0].gl_Position.xyz;
    vec3 b = gl_in[1].gl_Position.xyz;
    vec3 c = gl_in[2].gl_Position.xyz;
    vec3 position = interpolate3D(a, b, c);

    // Interpolate 2D texture coordinates
    tes_out = interpolate2D(tcs_out[0], tcs_out[1], tcs_out[2]);

    // Displace the vertex along the normal
    float displacementFactor = 0.70f;
    float displacement = texture(heightTexture, tes_out.xy).r;

    vec3 norm = interpolate3D(norm_es_in[0], norm_es_in[1], norm_es_in[2]);
    norm = normalize(norm);

    position += norm * displacement * displacementFactor;

    // Set the new vertex position
    gl_Position = mvp * vec4(position, 1);
}

vec2 interpolate2D(vec2 v0, vec2 v1, vec2 v2)
{
    return vec2(gl_TessCoord.x) * v0 + vec2(gl_TessCoord.y) * v1 + vec2(gl_TessCoord.z) * v2;
}

vec3 interpolate3D(vec3 v0, vec3 v1, vec3 v2)
{
    return vec3(gl_TessCoord.x) * v0 + vec3(gl_TessCoord.y) * v1 + vec3(gl_TessCoord.z) * v2;
}

src/code/shaders/generic_fragShader.glsl – Fragment shader for pyramid and sphere

#version 430

in vec2 tes_out;
out vec4 color;

uniform mat4 mvp;
layout (binding = 0) uniform sampler2D colorTexture;
layout (binding = 1) uniform sampler2D heightTexture;

void main(void)
{
    color = texture(colorTexture, tes_out);     // color from the texture at this texture coordinate
}

src/models/Pyramid.java – Pyramid model

package models;

import org.joml.Vector3f;

/**
 * Creates and supplies vertices, texture coordinates, and normals to GLSL shaders.
 * @author Matthew Anderson
 */
public class Pyramid {
    private float len;
    private int numVertices;
    private float[] vertexXyzValues;
    private float[] textureCoordinatesStValues;
    private float[] normalXyzValues;

    public Pyramid() {
        this.len = 1f;
        initialize();
    }

    public Pyramid(float length) {
        this.len = length;
        initialize();
    }

    private void initialize() {
        float half = len / 2;   // half length

        // Vertices
        vertexXyzValues = new float[] {
            -len, -len,  len,    len, -len,  len,   0.0f,  len, 0.0f,   // front
             len, -len,  len,    len, -len, -len,   0.0f,  len, 0.0f,   // right
             len, -len, -len,   -len, -len, -len,   0.0f,  len, 0.0f,   // back
            -len, -len, -len,   -len, -len,  len,   0.0f,  len, 0.0f,   // left
            -len, -len, -len,    len, -len,  len,  -len, -len,  len,    // bottom-left
             len, -len,  len,   -len, -len, -len,   len, -len, -len     // bottom-right
        };

        // Number of vertices
        numVertices = vertexXyzValues.length / 3;

        // Texture coordinates
        textureCoordinatesStValues = new float[] {
            0.0f, 0.0f,   len, 0.0f,   half, len,
            0.0f, 0.0f,   len, 0.0f,   half, len,
            0.0f, 0.0f,   len, 0.0f,   half, len,
            0.0f, 0.0f,   len, 0.0f,   half, len,
            0.0f, 0.0f,   len, len,    0.0f, len,
            len,  len,    0.0f, 0.0f,  len,  0.0f
        };

        // Normals
        normalXyzValues = new float[vertexXyzValues.length];

        // for each face of the pyramid
        for (int i = 0; i < normalXyzValues.length; i = i + 9) {
            // a = topvertex - bottomvertex1
            Vector3f a = new Vector3f(vertexXyzValues[i], vertexXyzValues[i+1], vertexXyzValues[i+2])
                    .sub(vertexXyzValues[i+3], vertexXyzValues[i+4], vertexXyzValues[i+5]);
            // b = topvertex - bottomvertex2
            Vector3f b = new Vector3f(vertexXyzValues[i], vertexXyzValues[i+1], vertexXyzValues[i+2])
                    .sub(vertexXyzValues[i+6], vertexXyzValues[i+7], vertexXyzValues[i+8]);
            Vector3f normal = a.cross(b);
            for (int j = 0; j < 3; j++) {
                normalXyzValues[(j*3) + i + 0] = normal.x();
                normalXyzValues[(j*3) + i + 1] = normal.y();
                normalXyzValues[(j*3) + i + 2] = normal.z();
            }
        }
    }

public int getNumVertices() { return numVertices; }

public void setNumVertices(int numVertices) { this.numVertices = numVertices; }

public float getLen() { return len; }

public void setLen(float len) { this.len = len; }

public float[] getVertexXyzValues() { return vertexXyzValues; }

public void setVertexXyzValues(float[] vertexXyzValues) { this.vertexXyzValues = vertexXyzValues; }

public float[] getTextureCoordinatesStValues() { return textureCoordinatesStValues; }

public void setTextureCoordinatesStValues(float[] textureCoordinatesStValues) { this.textureCoordinatesStValues = textureCoordinatesStValues; }

public float[] getNormalXyzValues() { return normalXyzValues; }

public void setNormalXyzValues(float[] normalXyzValues) { this.normalXyzValues = normalXyzValues; }

}
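Pyramid.initialize() derives each face normal from two edge vectors with a cross product (via JOML's Vector3f.cross), following reference [7]. A dependency-free sketch of the same calculation for the front face of a unit pyramid; the helper class here is illustrative, not part of the project:

```java
public class FaceNormalDemo {
    // Cross product a × b, the operation JOML performs in Pyramid.initialize()
    static float[] cross(float[] a, float[] b) {
        return new float[] {
            a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]
        };
    }

    public static void main(String[] args) {
        // Front face (len = 1): apex and two base corners
        float[] top = {0, 1, 0}, base1 = {-1, -1, 1}, base2 = {1, -1, 1};
        float[] a = {top[0] - base1[0], top[1] - base1[1], top[2] - base1[2]};  // top - base1
        float[] b = {top[0] - base2[0], top[1] - base2[1], top[2] - base2[2]};  // top - base2
        float[] n = cross(a, b);
        // Unnormalized normal; its positive z component points out of the front face
        System.out.println(n[0] + " " + n[1] + " " + n[2]);  // prints 0.0 2.0 4.0
    }
}
```

The same unnormalized vector is assigned to all three vertices of the face, which is what gives each flat pyramid face a single, consistent displacement direction.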

src/models/Sphere.java – Sphere model

Class authored by V. Scott Gordon [2].

package models;

import org.joml.*;
import static java.lang.Math.*;

/**
 * Expanded Sphere class from the V. Scott Gordon book Computer Graphics Programming
 * in OpenGL with Java, 2nd ed., Chapter 6-1.
 * Re-structured a bit to match the Pyramid class. Expanded by adding three methods
 * to supply vertices, texture coordinates, and normal vectors to GLSL shaders:
 * getVertexXyzValues, getTextureCoordinateStValues, and getNormalXyzValues.
 */
public class Sphere {
    private int numVertices, numIndices, prec;
    private int[] indices;
    private Vector3f[] vertices;
    private Vector2f[] texCoords;
    private Vector3f[] normals;
    private float[] vertexXyzValues;
    private float[] textureCoordinatesStValues;
    private float[] normalXyzValues;

    public Sphere() {
        prec = 48;
        initialize();
    }

    public Sphere(int p) {
        prec = p;
        initialize();
    }

    private void initialize() {
        numVertices = (prec + 1) * (prec + 1);
        numIndices = prec * prec * 6;
        indices = new int[numIndices];
        vertices = new Vector3f[numVertices];
        texCoords = new Vector2f[numVertices];
        normals = new Vector3f[numVertices];
        vertexXyzValues = new float[indices.length * 3];
        textureCoordinatesStValues = new float[indices.length * 2];
        normalXyzValues = new float[indices.length * 3];

        for (int i = 0; i < numVertices; i++) {
            vertices[i] = new Vector3f();
            texCoords[i] = new Vector2f();
            normals[i] = new Vector3f();
        }


        // calculate triangle vertices, tex coords, and normals
        for (int i = 0; i <= prec; i++) {
            for (int j = 0; j <= prec; j++) {
                float y = (float) cos(toRadians(180 - i * 180 / prec));
                float x = -(float) cos(toRadians(j * 360 / (float) prec)) * (float) abs(cos(asin(y)));
                float z = (float) sin(toRadians(j * 360 / (float) prec)) * (float) abs(cos(asin(y)));
                vertices[i * (prec + 1) + j].set(x, y, z);
                texCoords[i * (prec + 1) + j].set((float) j / prec, (float) i / prec);
                normals[i * (prec + 1) + j].set(x, y, z);
            }
        }

        // calculate triangle indices
        for (int i = 0; i < prec; i++) {
            for (int j = 0; j < prec; j++) {
                indices[6 * (i * prec + j) + 0] = i * (prec + 1) + j;
                indices[6 * (i * prec + j) + 1] = i * (prec + 1) + j + 1;
                indices[6 * (i * prec + j) + 2] = (i + 1) * (prec + 1) + j;
                indices[6 * (i * prec + j) + 3] = i * (prec + 1) + j + 1;
                indices[6 * (i * prec + j) + 4] = (i + 1) * (prec + 1) + j + 1;
                indices[6 * (i * prec + j) + 5] = (i + 1) * (prec + 1) + j;
            }
        }

        // store values to send to GLSL shaders
        // (loop body reconstructed from the array layouts declared above:
        //  expand the indexed data into flat per-vertex arrays)
        for (int i = 0; i < indices.length; i++) {
            vertexXyzValues[i * 3 + 0] = vertices[indices[i]].x();
            vertexXyzValues[i * 3 + 1] = vertices[indices[i]].y();
            vertexXyzValues[i * 3 + 2] = vertices[indices[i]].z();
            textureCoordinatesStValues[i * 2 + 0] = texCoords[indices[i]].x();
            textureCoordinatesStValues[i * 2 + 1] = texCoords[indices[i]].y();
            normalXyzValues[i * 3 + 0] = normals[indices[i]].x();
            normalXyzValues[i * 3 + 1] = normals[indices[i]].y();
            normalXyzValues[i * 3 + 2] = normals[indices[i]].z();
        }
    }

    /**
     * Gets the vertex coordinate values in X,Y,Z,X,Y,Z... format to send to GLSL shaders.
     * @return Array to send to GLSL shaders
     */
    public float[] getVertexXyzValues() { return vertexXyzValues; }

    /**
     * Gets the texture coordinate values in S,T,S,T... format to send to GLSL shaders.
     * @return Array to send to GLSL shaders
     */
    public float[] getTextureCoordinateStValues() { return textureCoordinatesStValues; }

    /**
     * Gets the normal vector values in X,Y,Z,X,Y,Z... format to send to GLSL shaders.
     * @return Array to send to GLSL shaders
     */
    public float[] getNormalXyzValues() { return normalXyzValues; }


public int getNumIndices() { return numIndices; }

public int getNumVertices() { return numIndices; }

public int[] getIndices() { return indices; }

public Vector3f[] getVertices() { return vertices; }

public Vector2f[] getTexCoords() { return texCoords; }

public Vector3f[] getNormals() { return normals; } }
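The array sizes set up in Sphere.initialize() follow directly from the precision value: a precision of prec yields a (prec + 1) × (prec + 1) grid of vertices and prec² grid cells of two triangles (six indices) each. A quick stand-alone check of those counts (the class name is illustrative only):

```java
public class SphereCountsDemo {
    // Returns { numVertices, numIndices } for a given sphere precision,
    // mirroring the expressions in Sphere.initialize()
    static int[] counts(int prec) {
        int numVertices = (prec + 1) * (prec + 1);  // one vertex per grid point
        int numIndices = prec * prec * 6;           // two triangles per grid cell
        return new int[] { numVertices, numIndices };
    }

    public static void main(String[] args) {
        int[] c = counts(48);   // default precision used by the Sphere class
        System.out.println(c[0] + " " + c[1]);  // prints 2401 13824
    }
}
```

Because the flat arrays sent to the shaders are expanded per index, their lengths are numIndices × 3 for positions and normals and numIndices × 2 for texture coordinates, which is why indices.length drives the allocations in initialize().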

src/eventcommands/cmdMoveCamera.java – Move the camera

package eventcommands;

import java.awt.event.ActionEvent;
import javax.swing.AbstractAction;
import org.joml.Vector3f;
import code.Camera;

public class cmdMoveCamera extends AbstractAction {

    private static final long serialVersionUID = 1L;
    private Camera eye;
    private String plane;   // N = z-plane, U = x-plane, V = y-plane
    private float value;

    public cmdMoveCamera(Camera camera, String plane, float value) {
        super("Move Camera");
        this.eye = camera;
        this.plane = plane;
        this.value = value;
    }

    @Override
    public void actionPerformed(ActionEvent e) {

        float ex = eye.getPosition().x();
        float ey = eye.getPosition().y();
        float ez = eye.getPosition().z();
        float exr = eye.getRotationX();
        float eyr = eye.getRotationY();
        float ezr = eye.getRotationZ();

        if (plane.equals("N")) {    // z-plane "move forward/backward"
            ez += Math.cos(Math.toRadians(eyr)) * Math.cos(Math.toRadians(-exr)) * (Camera.MOVEDELTA * value);
            ex += Math.sin(Math.toRadians(-eyr)) * Math.cos(Math.toRadians(ezr)) * (Camera.MOVEDELTA * value);
            ey += Math.sin(Math.toRadians(exr)) * Math.cos(Math.toRadians(ezr)) * (Camera.MOVEDELTA * value);
        }

        if (plane.equals("U")) {    // x-plane "strafe left/right"
            ex += Math.cos(Math.toRadians(eyr)) * Math.cos(Math.toRadians(-ezr)) * (Camera.MOVEDELTA * value);
            ez += Math.sin(Math.toRadians(eyr)) * Math.cos(Math.toRadians(exr)) * (Camera.MOVEDELTA * value);
            ey += Math.sin(Math.toRadians(exr)) * Math.sin(Math.toRadians(ezr)) * (Camera.MOVEDELTA * value);
        }

        if (plane.equals("V")) {    // y-plane "rise/lower"
            // not implemented
        }

        Vector3f newPos = new Vector3f(ex, ey, ez);
        eye.setPosition(newPos);
    }

}

src/eventcommands/cmdRotateCamera.java – Rotate the camera

package eventcommands;

import java.awt.event.ActionEvent;
import javax.swing.AbstractAction;
import code.Camera;

public class cmdRotateCamera extends AbstractAction {

    private static final long serialVersionUID = 1L;
    private Camera eye;
    private String plane;   // N = z-plane, U = x-plane, V = y-plane
    private float value;

    public cmdRotateCamera(Camera camera, String plane, float value) {
        super("Rotate Camera");
        this.eye = camera;
        this.plane = plane;
        this.value = value;
    }

    @Override
    public void actionPerformed(ActionEvent e) {

        float rotX = eye.getRotationX();
        float rotY = eye.getRotationY();
        float rotZ = eye.getRotationZ();

        if (plane.equals("N")) {    // z-plane "tilt left/right"
            // not implemented
        }

        if (plane.equals("U")) {    // x-plane "tilt down/up"
            rotX += Math.cos(Math.toRadians(rotY)) * (Camera.ROTATEDELTA * value);
            rotZ += Math.sin(Math.toRadians(rotY)) * (Camera.ROTATEDELTA * value);
        }

        if (plane.equals("V")) {    // y-plane "rotate left/right"
            rotY += Math.cos(Math.toRadians(rotZ)) * (Camera.ROTATEDELTA * value);
            rotX += Math.sin(Math.toRadians(rotZ)) * (Camera.ROTATEDELTA * value);
        }

        eye.setRotationX(rotX);
        eye.setRotationY(rotY);
        eye.setRotationZ(rotZ);
    }

}

src/eventcommands/cmdCloseWindow.java – Close the window

package eventcommands;

import java.awt.event.ActionEvent;
import java.awt.event.WindowEvent;
import javax.swing.AbstractAction;
import javax.swing.JFrame;

public class cmdCloseWindow extends AbstractAction {

    private static final long serialVersionUID = 1L;
    private JFrame frame;

    public cmdCloseWindow(JFrame frame) {
        super("Close Window");
        this.frame = frame;
    }

    @Override
    public void actionPerformed(ActionEvent e) {
        frame.dispatchEvent(new WindowEvent(frame, WindowEvent.WINDOW_CLOSING));
    }

}

src/eventcommands/cmdSwitchModel.java – Toggle pyramid or sphere model

package eventcommands;

import java.awt.event.ActionEvent;
import javax.swing.AbstractAction;
import code.GLSLOptions;

public class cmdSwitchModel extends AbstractAction {

    private static final long serialVersionUID = 1L;
    private GLSLOptions options;

    public cmdSwitchModel(GLSLOptions options) {
        this.options = options;
    }

    @Override
    public void actionPerformed(ActionEvent e) {
        options.setShapeMode(options.getShapeMode() == GLSLOptions.SHAPE_PYRAMID
                ? GLSLOptions.SHAPE_SPHERE
                : GLSLOptions.SHAPE_PYRAMID);
    }

}

src/eventcommands/cmdCycleTexture.java – Cycle through texture list

package eventcommands;

import java.awt.event.ActionEvent;
import javax.swing.AbstractAction;
import code.GLSLOptions;

public class cmdCycleTexture extends AbstractAction {

    private static final long serialVersionUID = 1L;
    private GLSLOptions options;

    public cmdCycleTexture(GLSLOptions options) {
        this.options = options;
    }

    @Override
    public void actionPerformed(ActionEvent e) {

        int index = options.getTextureIndex();
        int count = options.getTextureCount();

        index = (index + 1) % count;    // advance and wrap around the texture list

        options.setTextureIndex(index);
    }

}

src/eventcommands/cmdToggleDrawMode.java – Toggle polygon mode

package eventcommands;

import java.awt.event.ActionEvent;
import javax.swing.AbstractAction;
import code.GLSLOptions;

public class cmdToggleDrawMode extends AbstractAction {

    private static final long serialVersionUID = 1L;
    private GLSLOptions options;

    public cmdToggleDrawMode(GLSLOptions options) {
        this.options = options;
    }

    @Override
    public void actionPerformed(ActionEvent e) {
        options.setDrawMode(options.getDrawMode() == GLSLOptions.DRAW_LINES
                ? GLSLOptions.DRAW_FILL
                : GLSLOptions.DRAW_LINES);
    }

}


Appendix B. System Specifications

The system components and versions used to generate the results for this project are listed here.

• CPU: AMD Ryzen 7 1700 eight-core processor, 3.39 GHz

• RAM: 16.0 GB

• GPU: AMD R9 Fury X

• VRAM: 4.0 GB

• Operating System: Windows 10

• Java version: 1.8.0_231

• JOGL version: 2.3.2

• JOML version: 1.9.11

• OpenGL version: 4.6


References

1. B. Vagabond, "GLSL Tessellation Displacement Mapping", Stack Overflow, 2019. [Online]. Available: https://stackoverflow.com/questions/24166446/glsl-tessellation-displacement-mapping [Accessed: February 10, 2019].

2. V. S. Gordon and J. Clevenger, Computer Graphics Programming in OpenGL with Java, 2nd ed. Dulles, VA: Mercury Learning and Information, 2019, pp. xi-xii, 107, 110, 112, 134, 136, 254, 273, Code 6-1, Code 12-1.

3. "texture - OpenGL 4 Reference Pages". [Online]. Available: https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/texture.xhtml [Accessed: March 11, 2020].

4. "TextureIO (JOGL, NativeWindow and NEWT)", 2014. [Online]. Available: https://jogamp.org/deployment/v2.1.5/javadoc/jogl/javadoc/com/jogamp/opengl/util/texture/TextureIO.html [Accessed: March 11, 2020].

5. M. Nießner, B. Keinert, M. Fisher, M. Stamminger, C. Loop and H. Schäfer, "Real-Time Rendering Techniques with Hardware Tessellation", Computer Graphics Forum, vol. 35, no. 1, pp. 113-137, 2015.

6. "tessellation | Definition of tessellation in English by Oxford Dictionaries", Oxford Dictionaries | English, 2019. [Online]. Available: https://en.oxforddictionaries.com/definition/tessellation [Accessed: January 25, 2019].

7. "How to find surface normal of a triangle", Mathematics Stack Exchange, 2020. [Online]. Available: https://math.stackexchange.com/a/305914 [Accessed: May 17, 2019].

8. "Normal (geometry)", En.wikipedia.org, 2020. [Online]. Available: https://en.wikipedia.org/wiki/Normal_(geometry) [Accessed: May 17, 2019].

9. "How does tessellation increase performance?", Stack Overflow, 2020. [Online]. Available: https://stackoverflow.com/a/30312887 [Accessed: March 13, 2020].

10. "Tessellation - OpenGL Wiki", Khronos.org, 2020. [Online]. Available: https://www.khronos.org/opengl/wiki/Tessellation#Triangles [Accessed: March 13, 2020].

11. "Tutorial 30 - Basic Tessellation", Ogldev.atspace.co.uk, 2020. [Online]. Available: http://ogldev.atspace.co.uk/www/tutorial30/tutorial30.html [Accessed: April 18, 2019].

12. J. Flick, "Surface Displacement", Catlikecoding.com, 2020. [Online]. Available: https://catlikecoding.com/unity/tutorials/advanced-rendering/surface-displacement/ [Accessed: March 12, 2020].

13. "Formula for inner and outer tessellation factors given number of triangles", Stack Overflow, 2020. [Online]. Available: https://stackoverflow.com/q/47521068 [Accessed: March 13, 2020].

14. V. S. Gordon, private communication, March 2020.

15. W. Hinds, "Tessellation Shaders for Wave Effect", YouTube, 2020. [Online]. Available: https://www.youtube.com/watch?v=f9_OAjW37HE [Accessed: March 14, 2020].

16. "Vertex displacement with a noise function using GLSL and three.js - Blog - Clicktorelease", Clicktorelease.com, 2020. [Online]. Available: https://www.clicktorelease.com/blog/vertex-displacement-noise-3d-webgl-glsl-three-js/ [Accessed: March 13, 2020].