
UNIVERSITY OF CINCINNATI

Date:______

I, ______, hereby submit this work as part of the requirements for the degree of: ______ in: ______

It is entitled:

This work and its defense approved by:

Chair: ______

DEFINING THE ART OF INTERACTIVE REAL-TIME 3D

A thesis submitted to the

Division of Research and Advanced Studies of the University of Cincinnati

In partial fulfillment of the requirements for the degree of

MASTER OF DESIGN

in the School of Design of the College of Design, Architecture, Art and Planning

2004

by

Thomas Dunne

B.S.Des, University of Cincinnati, 2000

Committee Chair: Marty Plumbo, M.Des.
Dennis Puhalla, PhD (candidate)
Aaron Rucker, BS Eng.

Abstract

Interactive real-time 3D is a form of digital design that allows users to explore virtual three-dimensional environments and experience content in a deeply immersive fashion. Unlike many other media, interactive real-time 3D requires the user to take an active role in the exploration process, acting and reacting with digital 3D constructs in an immediate, "real-time" fashion. This thesis studies the nature of interactive real-time 3D, particularly as it appears in video games, the foremost genre within the medium. Beginning with a technical analysis, the basic attributes of real-time 3D content are defined, using game content as examples to illustrate the concepts. This is followed by a study of the evolution of video games, from the primitive arcade systems of the 1970s to the most advanced computer gaming systems available today. Particular emphasis is put on the hardware used to run these games, as the technology of each period is largely responsible for determining the limitations of the medium. Once the elements and origins of interactive real-time 3D have been established, it is possible to determine the aesthetic principles which help determine the success or failure of real-time 3D art in an interactive system. As true real-time 3D content is not yet a decade old, this exploration of fundamental aesthetic values with respect to the medium is largely unprecedented. The goal is a better understanding of a new form of design, explained in reference to common aesthetic principles as well as technical definitions. As with the historical section, concepts are illustrated with examples from an assortment of recent video games and technologies. The thesis concludes with a brief look into the future of interactive real-time 3D. Due to the medium's relatively recent origins and rapid pace of development, no explicit forecast is possible, but an analysis of recent trends within the field allows some potential paths to be explored.


Table of Contents

List of Illustrations
Introduction
Interactive Real-Time 3D
    Low-poly Modeling
    Low-poly Texturing
    Low-poly Animation
The History of 3D in Interactive Gaming
    The Arcade Days
    Consoles Return
    Consoles Go 3D
    The Home Computer Catches Up
    The Advent of Real 3D on the PC
    The Hardware That Makes It Happen
Aesthetics of Interactive Real-Time 3D
    Shape + Form
        Silhouettes
        Abstraction of Detail
        Proportion in Construction
        Order and Chaos
        Variety and Unity
    Color + Light
        Balancing Contrast with Consistency
        Painting vs. Photography
        Incorporating New Technology
    Motion + Interaction
        Perspective Dynamics and Composition
        Bringing Low-Poly to Life
        Exaggeration of Movement
        Secondary Motion
The Future of the Art
    Interactive Real-Time Cinema
    Time and Money
    Conclusion
Bibliography


List of Illustrations

1) Use of cinematic 3D in film
2) Film to game comparison
3) Low-polygon modeling
4) Low-polygon texturing
5) Low-polygon animation
6) Night Driver
7) Zaxxon
8) Star Wars
9) I, Robot
10) Virtua Fighter
11) Super Mario 64
12) Tekken Tag Tournament comparison
13) Ultima Underworld
14) Wolfenstein 3D
15) Doom
16) Quake
17) Half-Life
18) Quake III Arena
19) Unreal Tournament 2003
20) Representation of characters through real-time 3D
21) Silhouettes
22) Reduction of detail
23) Edge loops
24) Organic geometry
25) Colored lighting
26) Photo-source textures
27) Procedural shaders
28) Dynamic perspective
29) Secondary motion
30) Problems with deformation
31) Normal mapping
32) High dynamic range imaging


Introduction

Three-dimensional art and design have been a part of human culture since the earliest days of civilization. Initially, this took the form of simple pottery and crude earthen sculpture, generally the only viable forms of creative volumetric expression for a primitive people. As societal advances brought technological advancement, the arts followed. Elaborate, often massive works of sculpture, carved in metal and stone, soon surpassed those early works in illustrating man's achievements in the understanding of space and volume. Architecture became significant as well, as the construction of human spaces gained importance for its artistic contributions in addition to its basic utilitarian value. In recent decades, with the advent of television and film, the concepts of three-dimensional aesthetics have again been translated into new media. Most notable is the migration of these concepts into the virtual space of computers, bringing with it the ability to create digital imagery all but indistinguishable from real life.

Digital 3D has been quickly absorbed into the mainstream of modern art and design, particularly in film. Beginning in the 1980s, rendered 3D graphics became a viable option for cinematic visual effects. In 1984, the film The Last Starfighter from Universal Pictures became the first movie to use entirely rendered 3D sequences cut in with traditional film. While still crude by modern standards, this new form of digital 3D in film was an extraordinary benchmark at the time. Seven years later, the bar would be raised dramatically once again with Terminator 2: Judgment Day. This science fiction masterpiece from famed director James Cameron not only used 3D imagery in the film, but actually created a digital character that appeared to interact with its surroundings.

The technology required to achieve this effect was enormous, but it paved the way for what is commonplace in cinema today. Films like the Lord of the Rings trilogy use entirely digital actors in combination with digital scenes and live-action elements, creating a thoroughly believable synthesis between the real and the impossible. It is the application of incredibly realistic 3D graphics that allows fantastic and imaginary things to appear on screen and blend seamlessly with the real world.

(fig.1) Digitally created scenes from The Last Starfighter, Terminator 2: Judgment Day and The Lord of the Rings: The Return of the King.

All of these different 3D forms cover a wide range of genres and styles, and generally do not share many common traits beyond a relationship to basic spatial aesthetics. A carved wooden figure, a gothic cathedral and a digitally animated 3D film each work within the same bounds of three-dimensional volumes despite being quite different in terms of both construction and intent. However, they often share another attribute that is not generally recognized: they are also non-interactive. While there are some exceptions, most three-dimensional works of art are passive in terms of the experience they offer. A viewer can walk up to Michelangelo's legendary sculpture of David or pace through the exquisitely crafted halls of the Palace at Versailles, yet cannot really interact with them. Similarly, the amazing imagery seen in the latest effects-laden cinematic marvel is a one-way process, as the film may have an impact on the viewer but the viewer cannot have any effect on the film. This point about non-interactivity in three-dimensional art and design seems intuitive enough, as it remains a self-evident truth through virtually all of the various three-dimensional media. That is, it is self-evident with the notable exception of interactive real-time 3D.

Interactive Real-Time 3D

Interactive real-time 3D is a digital medium that allows users to directly interact within virtual three-dimensional environments and instantly see changes based on those interactions. This makes the medium unique among most forms of art and design, as it actually encourages the user to have input, to make changes, to become a part of the experience. The fact that it happens in real time, with changes in the experience occurring as the user initiates them, allows that experience to be potentially far more immersive than the passive communication common in other media. By using input devices like a mouse or keyboard, users might be able to explore a virtual world, learn to fly in a computer-generated aircraft, see how a car engine works by disassembling a fully accurate 3D model, or even share experiences with the digital representations of other users anywhere else in the world. Most commonly, interactive real-time 3D appears in the form of video games, but it is also found in web-based applications and other forms of interactive technology experiences.

Visually, this form of digital 3D content is often quite different from the animation seen in feature films or on television. While both are created using computers, interactive real-time 3D is limited by the technology that allows the user to interface with it. Unlike cinematic 3D, in which complicated and heavily detailed frames may take hours to render out for inclusion in film, real-time 3D requires that scenes be rendered many times per second in order to constantly reflect any changes in the environment. As a digital 3D character walks across a room, that event must be rapidly rendered many times over to show the action as it happens in real time – a rate of sixty frames per second is usually considered ideal for fluid animation and user interaction. As a result, interactive real-time 3D looks significantly different from its less dynamic counterparts. In general, the content looks blockier, less refined and more abstract, and it frequently lacks many of the details that make a cinematic 3D scene so convincing. This is because the graphics hardware required to produce real-time 3D imagery simply is not yet capable of rendering complex objects and effects fast enough to maintain the frame rates necessary for user interaction. Scenes with too much detail cannot be drawn on screen quickly enough, and what should be an immersive interactive experience is reduced to an awkward point-and-click slideshow.

(fig.2) A side-by-side comparison of the film Toy Story and a video game based upon the sequel, Toy Story 2 (1999).


This technical limitation of interactive real-time 3D gives it a very distinctive visual identity, which in turn has given rise to a number of unique aesthetic principles that govern the medium. Whereas cinematic 3D often draws on a wide range of creation techniques, real-time content is essentially restricted to certain fundamental elements best suited to its limitations. The analysis of these elements can be distilled into three main components: models, textures and animation.

Low-poly Modeling

The basic building blocks of all interactive real-time 3D models are called polygons. A polygon is simply a triangular plane that exists within the virtual space of the 3D environment. All objects, be they animals or trees or cars or anything else, are made up of these polygons, with the different planes and angles used to define the object's surface. Because each polygon in a 3D scene takes time to calculate and render, the number of polygons used to create real-time content often cannot go very high, with the actual limit defined by the demands of the project. For example, it would be common in a modern video game to have a human figure created from approximately 2000 polygons. This means that everything about that character that defines its volume needs to be comprised of about 2000 triangular planes. This is just a small fraction of the polygons which would be needed to create a fully realistic character, so this style of 3D content is often referred to as low-poly modeling.

(fig.3) A character model with outlined polygon wire frame, from Unreal Tournament 2003.

It is the low-poly nature of interactive real-time 3D that gives it such a blocky, angular look. Since objects cannot be realized in full detail, compromises have to be made in which less significant elements of an object are overlooked in favor of refining the model's overall form. Low-poly objects rarely look as smooth and natural as they would in real life. Modeling in this way often requires techniques that are unnecessary for highly detailed 3D content. Most notably, a low-poly modeler needs to take great care in optimizing models, so that each polygon is used as efficiently as possible. If a character model has been created using 2500 polygons and the software demands that it cannot exceed 2000, it is up to the modeler to find effective but aesthetically sound ways to remove the excess 500 polygons while retaining the overall form of the model. This can be a particularly challenging task, but it has become somewhat easier in recent years, as advances in graphics technology have increased the number of polygons allowed in a typical real-time 3D scene, somewhat easing the strict requirements of low-poly modeling.

Low-poly Texturing

Just as there are certain technical limitations to constructing the basic volumetric forms for interactive real-time 3D, there are also certain constraints on how those forms are colored or textured. Since the models are composed entirely of triangular planes, it is necessary to create 2D images of the relevant materials and literally map them onto the model. This process is akin to peeling an orange and laying its skin out flat, creating a two-dimensional surface that can be wrapped around a spherical three-dimensional shape. This process, called UV mapping, can be complicated, since it requires that all of the myriad curves of an object be flattened out onto the texture, yet retain some sort of recognizable contiguity after being adapted into the 2D format. This can be particularly challenging with organic or irregular 3D models, as the large number of curves that may need to be carefully mapped can require a long and meticulous process to successfully complete.
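The "orange peel" idea above can be illustrated with the simplest possible case: projecting points on a sphere to 2D texture coordinates by longitude and latitude. This is only the core mathematics, not a production unwrapping tool, which must also manage seams and distortion:

```python
# Sketch: a minimal spherical UV projection -- the "orange peel" flattening
# described in the text, for the special case of a unit sphere.
import math

def sphere_uv(x: float, y: float, z: float) -> tuple[float, float]:
    """Map a point on a unit sphere to (u, v) texture coordinates in [0, 1]."""
    u = 0.5 + math.atan2(z, x) / (2 * math.pi)   # longitude -> u
    v = 0.5 - math.asin(y) / math.pi             # latitude  -> v
    return (u, v)

# A point on the equator maps to the middle of the texture:
assert sphere_uv(1.0, 0.0, 0.0) == (0.5, 0.5)
# The "north pole" maps to the texture's top edge:
assert sphere_uv(0.0, 1.0, 0.0)[1] == 0.0
```

Even in this ideal case the poles stretch badly, which hints at why unwrapping organic, irregular models is the long and meticulous process the text describes.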

(fig.4) The textured character model, from Unreal Tournament 2003.

Additionally, the textures used for real-time 3D content have to help make up for the lack of detail in a low-poly model. It is not enough to simply apply a denim texture to the legs of a character model wearing blue jeans. Highlights, shadows, wrinkles and other small surface details all need to be digitally painted onto the texture and then mapped onto the model. Again, this calls for a unique set of skills, as real-time 3D texture artists need to work in two dimensions but think in three, finding ways to create believably detailed elements that successfully compensate for the low-poly limitations. Artists have to interpret the low-poly 3D form and create accordingly, as well as take into account the inevitable warping and distortions caused by unwrapping the model's surface onto a two-dimensional plane. And while most textures for real-time 3D applications are generated simply as 32-bit colored images, it is also possible to use procedural shaders to generate some effects. Shaders are a form of visual-effects texture controlled mathematically by the real-time 3D application itself. These textural effects can be specifically programmed by the artists to achieve certain dynamic visual results, like smoke or sparks or shimmering liquids. Implementing these effects often requires the artists to be skilled to a certain degree at programming within their real-time environment, in order to generate precisely the effects they need.
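The difference between a painted texture and a procedural shader is that the latter is pure mathematics evaluated at run time. The toy function below (names and formula entirely illustrative; a real engine would run the equivalent per pixel on the GPU) shows how an animated "shimmer" can live in code rather than in stored image data:

```python
# Sketch: a tiny procedural "shader" -- a pattern computed mathematically
# rather than painted. The formula is illustrative, not from the thesis.
import math

def shimmer(u: float, v: float, t: float) -> float:
    """Animated brightness in [0, 1] at texture coordinate (u, v), time t."""
    wave = math.sin(2 * math.pi * (u * 4 + t)) * math.cos(2 * math.pi * v * 4)
    return 0.5 + 0.5 * wave   # remap [-1, 1] -> [0, 1]

# Sampled at two moments, the same texel has different brightness -- the
# "shimmering liquid" effect is animation in the math, not in the image.
a = shimmer(0.1, 0.1, 0.0)
b = shimmer(0.1, 0.1, 0.25)
assert 0.0 <= a <= 1.0 and 0.0 <= b <= 1.0 and a != b
```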

Low-poly Animation

The final basic element of interactive real-time 3D art is the animation. In principle, the animation process works with low-poly content largely as it does with high-end cinematic 3D. Objects are translated, rotated and scaled through space, manipulated in the same fashion by the animator. Even complicated animation processes, like making a human figure move around, are handled in a very similar fashion, using a virtual skeleton to drive the figure.

The challenge with low-poly animation is compensating for the dynamic but often repetitive nature of an interactive program. When animating digital characters for a film, each step a character takes is mapped out in advance, and the animator can ensure that each of those steps is placed exactly as it should be, relative to the character and the environment it is moving through. In a real-time 3D program, where a character might go and what route it might take to get there is very often up to the individual using the program and not the animator who helped create it. As a result, a limited body of animated sequences is created that the program can play back dynamically as required by the user's input. Consider a scenario in which the user makes a real-time character run and jump. The animator would need to create a looping animated sequence of the figure running, one which can seamlessly be repeated over and over, as there is no way to determine in advance just how long the user will make the character run. An animation of the figure jumping also needs to be made, showing the figure leaping and landing on the ground, possibly in several different variations depending on whether the figure is running when made to jump, standing still, or in any number of other possible states. With these basic animation assets of a running figure and a jumping one, the user can run and jump with relative freedom as the program responds to the user's commands and displays the appropriate animated sequence. This generic and interchangeable approach to low-poly animation creates certain difficulties in creating transitions between movements that are both realistic and appropriate to the situation.

Often, interactive real-time 3D animation is somewhat imprecise, as the animations do not perfectly match the environment. An animation for running on flat ground might look peculiar when used for a character running uphill, but the only other solution, creating unique animations for every possible user-initiated scenario, is simply too large a task to contemplate for all but the simplest interactive programs. This is simply recognized as one of the limitations inherent to the medium.

(fig.5) The textured character model posed for animation, from Unreal Tournament 2003, and the virtual skeleton used to animate it.

Animation can also be difficult to create when combined with low-poly models. Since the objects used in real-time 3D are built from a small number of surfaces, they can become obviously distorted when made to animate in certain ways. A human figure made to bend at the knees might have angular, pointed deformations at the knee joints as certain polygons are stretched and others compressed. A well-built model usually takes its animation requirements into account, and high-flexibility areas like knees or shoulders are often created with more polygons to help improve the deformation process. This is largely the responsibility of the modeler, as there is very little an animator can do to improve things once the model has been built. Nonetheless, a good low-poly animator understands this difficulty and ideally can animate the character in a fashion that conceals or minimizes any awkward deformations that result. Again, this is an aspect of the art that is recognized as a limitation of the medium, but it is one that becomes less of a problem as the number of polygons used in creating objects continues to rise.

The History of 3D in Interactive Gaming

The quest for interactive real-time 3D experiences began long before graphics technology was capable of actually providing it. As has long been the case, that quest is most often driven by video games. The steep demands of real-time 3D games continue to push graphics technology to new heights with each new generation of hardware, and every generation opens the door to new and ever more immersive 3D experiences. This symbiotic relationship between software and hardware creates an intimately linked history between interactive real-time 3D games and the technology used to power them.

The Arcade Days

While personal computers and video game consoles are common household devices today, the reality was far different in the early days of 3D graphics. Three decades ago, when entertainment electronics first began to appear, powerful computer systems cost tens of thousands of dollars and were generally limited to governments, corporations and universities with the resources to acquire them. Home game consoles were in their infancy, and systems capable of real-time 3D at the consumer level were still in the distant future. In those days, from the mid-1970s through the 1980s, video game arcades were where the most impressive real-time graphics engines were often found.

The earliest arcade games were still quite rudimentary, with blocky graphics and just a few colors, and the games themselves were still fairly primitive. The steep technology requirements of the time prevented games from becoming too elaborate, as did the limits on the user's access to them; since games had to be enjoyed in arcades and parlors rather than in the user's home, no given game experience could realistically be expected to last more than a few hours, and often much less than that. Nonetheless, while these restrictions limited what game developers were able to create, the desire to create 3D environments and interactions remained.

The very earliest games to attempt real-time 3D weren't three-dimensional at all. Instead, they used two-dimensional graphics and methods of scaling, translation and occlusion to give the user a sense of depth and motion. A good example of this technique is Night Driver, a racing game from Atari released in arcades in 1976. Night Driver was a very simple first-person game in which the user's point of view is roughly behind the steering wheel of the car. There is no environment to speak of, just two-dimensional sprites (plain white rectangles) that move and scale across the screen to define the course and give the illusion of speed and direction. This technology is archaic by modern standards, but was a cutting-edge use of pseudo-3D technology at the time.
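The scaling trick behind Night Driver and its contemporaries is a single division: a sprite's on-screen size shrinks inversely with its simulated distance. The sketch below is illustrative (the constant and function are hypothetical, not from the original hardware):

```python
# Sketch: the scale-with-distance trick behind pseudo-3D games like
# Night Driver. A 2D sprite shrinks inversely with simulated depth;
# no true 3D geometry is involved. Numbers are illustrative.

def sprite_scale(distance: float, focal: float = 1.0) -> float:
    """Apparent scale of a sprite at the given simulated depth."""
    return focal / distance

# Roadside markers farther down the road draw smaller, implying depth:
near, far = sprite_scale(2.0), sprite_scale(8.0)
assert near == 0.5 and far == 0.125 and near > far
```

Combined with sideways translation of the markers, this one division was enough to sell the illusion of speed and direction on 1970s hardware.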

(fig.6) Night Driver (1976) from Atari, one of the earliest arcade games to attempt a 3D perspective.

Fully rendered real-time 3D graphics were still nearly a decade away at the time Night Driver was released, but other techniques designed to emulate 3D graphics were also used during this period. In 1982, Sega released the arcade classic Zaxxon, a forced-perspective 3D shooter. Graphics processors were much more powerful than in Night Driver's day, and the game was far more visually impressive, but it was still limited to a three-quarter overhead view. The user's control was effectively restricted to lateral movement as the game perpetually scrolled forward. The visual tricks used in Zaxxon were more or less common amongst games of that era which wished to emulate a sense of 3D immersion.

(fig.7) Isometric forced perspective was used to simulate 3D in the arcade shooter Zaxxon (1982).

One notable game of that era which used significantly different methods to create a real-time 3D setting was the legendary Star Wars, released in 1983. The actual gameplay was very limited, as the game's sole intent was to recreate the film's climactic space battle between the rebel X-Wing and imperial TIE Fighter spacecraft around the Death Star. While the successful licensing of a popular film property was important in its own right, the most important aspect of the game was its true 3D graphics. Rather than using tricks with two-dimensional images to fake depth, Star Wars used vector graphics to recreate 3D volumes. While the vector-based objects were just single-colored, outlined forms, players had no trouble immersing themselves in the game. Additionally, the game offered a degree of movement uncommon in most pseudo-3D games of the day, allowing the user to control the X-Wing on multiple axes, a significant step in achieving interactive freedom.

(fig.8) The vector graphics of Star Wars showed a different approach to real-time 3D graphics in 1983.

The time between 1983 and 1984 saw an important change in the development of video games. While arcades were still going strong and leading the charge toward interactive real-time 3D, the home console business came crashing down. Atari, which had dominated both the home and arcade gaming fronts, brought the console industry to its knees through a series of bad business decisions, particularly in saturating the market with a glut of truly awful games. As the consoles were no match for the power of arcade machines, the evolution of 3D graphics was perhaps not significantly set back by the crash, and arcades thoroughly dominated the gaming scene for the next several years.

In 1984, polygon graphics were finally featured in an arcade game with Atari's I, Robot. While the gameplay was simple, as the user moved and jumped the character around different parts of a colored platform, buttons could be used to alter the user's perspective. This game was a real landmark in terms of interactive real-time 3D graphics, as games made twenty years later still use solid polygons as the basic building material for 3D content. It still took a number of years for solid polygon artwork to become prevalent in arcade graphics, but I, Robot illustrated that the technology was finally ready.

(fig.9) I, Robot (1984) from Atari became the first arcade game to feature colored polygon models.

By the early 1990s, 3D graphics had become a common sight in arcade games and began the transformation of many genres. In 1993, Sega released the first polygon-based 3D fighting game, Virtua Fighter. Prior to its release, games of this sort were almost invariably 2D side-scrolling affairs, without any actual depth to the interaction. With Virtua Fighter, the user's perspective was no longer fixed in place, nor was the user restricted simply to forward-and-back movement. While 2D fighting games retained their popularity and still appear in arcades today, real-time 3D advanced the genre, becoming the standard format for fighters. Driving games, adventure games, shooting games and sports simulations all underwent a similar transformation, and nearly all modern arcade games incorporate some form of real-time 3D content.

(fig.10) Sega's Virtua Fighter (1993) was the first real-time 3D fighting game in arcades. Today, most games of this genre are developed with real-time 3D graphics.

By the end of that decade, however, the dominance of arcade games had passed. Home systems, both game consoles and personal computers, were able to match arcade machines in graphics and generally surpass them in terms of gameplay and interactivity. While arcade games still remain a popular format for certain game genres, more frequently the games use arcade popularity as a stepping stone to successful console sales.

Video Game Consoles Return

With the console market crash in 1983, the industry had endured only a temporary setback. By 1986, a pair of new home consoles had debuted, the Sega Master System and the Nintendo Entertainment System. While both systems were built with 8-bit processors, Sega's console was actually considered the more technically advanced. However, through clever marketing and a stable of quality game franchises, it was the new gaming system from Nintendo that revitalized the home console market and, in time, altered the course of digital entertainment.

In terms of real-time 3D graphics, the Nintendo Entertainment System was incapable of generating anything like what the contemporary arcade games could. The difficulty for console system engineers was that they needed to create a machine capable of playing a wide variety of games, and do so at a price acceptable to the typical consumer. Arcade hardware developers had advantages in both areas: arcade graphics hardware could be engineered specifically to enhance one type of game or set of visual effects, and that hardware could cost several thousand dollars per unit, as the machines were rented or sold to arcades rather than individual consumers. Despite this, the console craze was reignited in 1986, and home gaming hardware would soon make strides toward real interactive real-time 3D content.

As arcade games had from the late 1970s through the 1980s, the less powerful consoles needed to use visual tricks to give the impression of a three-dimensional environment. These tricks grew more sophisticated as the next generation of hardware began to arrive. In 1991, Nintendo released their second console, the Super Nintendo, which was based upon a more powerful 16-bit processor than its predecessor.

The Super Nintendo was able to easily scroll multiple planes of images at different rates, creating a convincing sense of depth and doing so with far more vivid graphics than the original Nintendo system could provide. The 16-bit era of console gaming was hotly contested, as the Super Nintendo and Sega's Genesis system battled for market share. In the end, Nintendo had sold twice as many consoles as Sega, a thorough victory that would set up Sega's eventual exit from the hardware market at the end of the decade.

Though game consoles were back to stay and had established an irreversible foothold in consumer electronics, their graphics capabilities still lagged behind those of the arcades into the middle of the 1990s. But by 1995, real-time 3D graphics were finally available to home gamers with the release of a new gaming system from Sony.

Consoles Go 3D

A surprising competitor, Sony introduced their first video game console, the PlayStation, into a market thoroughly dominated by Nintendo. Interestingly, the engineering for the PlayStation began as a planned CD-ROM expansion drive for the cartridge-based Super Nintendo. Feeling confident in their existing technology, Nintendo abandoned the project, leaving Sony to complete the system and market it separately as the first home system to specifically support real-time 3D graphics. By modern standards, the PlayStation's hardware is not technically impressive, but at the time of its release its capabilities were largely unprecedented outside of the arcade. Capable of rendering 1.5 million polygons per second [1], the PlayStation could offer low-resolution but thoroughly immersive 3D environments. These capabilities, along with Sony's excellent marketing and licensing efforts, began to turn the tide of the console industry, as Nintendo would eventually slip into second place.

Of course, Nintendo did not relinquish the crown easily. In 1996, Nintendo

followed with their third generation console, the Nintendo 64 (named so for its 64-bit

1 http://en.wikipedia.org/wiki/Playstation 23

processor). As had become expected with the gaming giant, the new system was released

alongside another flagship title from their franchise; this time, Mario made

his 3D debut in Super Mario 64. A revolutionary action/adventure game, Super Mario 64

is generally recognized as the first great real-time 3D title available on the consoles.

Using an analog controller to explore a vivid and fully interactive 3D space, the latest

iteration of the Super Mario franchise established a new high water mark for fantastic

visuals as well as exceptional gameplay. However, while the game itself was an

undisputed success, the Nintendo 64 console would not fare quite as well. Though the console was technologically equal to the PlayStation and in many ways superior, Nintendo made a poor decision in continuing to develop their games on a proprietary cartridge format rather than the generic CD-ROM discs used in Sony’s system. While the cartridges did have some technical benefits, they could not store nearly as much data as a CD-ROM and they were far more expensive to produce. As a result, many of Nintendo’s long-time license partners moved to Sony’s cheaper, more development-friendly platform. Just a few years into the 64-bit era of console gaming, it was clear that Nintendo’s mistake had cost them their role as the industry leader.

(fig.11) Nintendo arrived in the 3D age with the debut of the Nintendo 64 console and Super Mario 64 (1996).

The current generation of video game consoles kicked off in 2000 when Sony released their second home system, the aptly named PlayStation 2. This system was vastly more powerful than its predecessor, capable of rendering 66 million polygons per second2 – dozens of times more than the original PlayStation’s graphics engine could generate, and approximately as many polygons per frame as were used in some scenes from Pixar’s 1995 computer-animated classic Toy Story3. Nintendo again followed Sony with a new machine, the GameCube, and PC software giant Microsoft made its entry into the home gaming market with the Xbox. By this time, interactive real-time 3D gaming on consoles had become the standard, with lush, elaborate 3D graphics

2 http://en.wikipedia.org/wiki/PlayStation_2
3 Kerlow, p.14

dominating many popular games. While arcades had long been the source of cutting-edge technology, even that was no longer necessarily true. Adaptations of arcade titles to the home systems were now a common occurrence, and the home systems were often able to perfectly match the impressive graphics seen in expensive arcade games. Certain genres of popular 3D arcade games, especially fighting games like the Tekken series,

made the transition especially well, and it can be difficult to tell the difference between

the console version and the arcade original. With consoles finally catching up to arcade

systems in graphics capability and having long since surpassed them in terms of

distribution, the home systems now are the undisputed leaders of the 3D entertainment

market.

(fig.12) A 3D fighting game in the arcade (left, 1999) is almost indistinguishable from its near-identical twin on the PlayStation 2 (2000).

The Home Computer Catches Up

The expense of sophisticated hardware kept computers out of most homes when

the arcade craze began, and even home machines were almost never able to

match the graphics power available to the newest arcade systems. When Apple

computers and later the IBM-based PCs finally began to reach a mainstream audience in

the mid-to-late 1980s, they were almost hopelessly incapable of matching even the

contemporary video game consoles, much less the powerhouse visuals common in arcade

games. Personal computer hardware simply lagged behind its more specialized

competitors, and the multi-functional nature of the PC often left it unsuited to performing

many of the tasks needed to achieve graphics fluid enough to effectively simulate

real-time 3D interaction.

Computer games made use of all the familiar tricks from the early arcade and

console days until 1992, when a revolution occurred. Almost simultaneously, two games

changed the face of PC gaming: Ultima Underworld from Origin Systems and

Wolfenstein 3D from id Software. Both of these games exposed the user to a first-person

perspective and created an unprecedented sense of immersion in a 3D

environment. An adventure game, Ultima Underworld placed the user in a system of underground caverns, populated with objects, monsters and other characters to interact

with. Building upon the famous Ultima line of role-playing games, the game offered somewhat typical medieval swords-and-sorcery fare, but presented it in an original fashion. It wasn’t a fully 3D game, as most objects were simply rendered as 2D sprites

instead of actual three-dimensional volumes, but the simulated 3D effect was compelling

for its time.

(fig.13) Origin Systems’ Ultima Underworld (1992) was one of the first PC games to give the user a first-person perspective.

Though both games debuted in 1992, it was the release of Wolfenstein

3D that became the watershed moment for interactive real-time 3D on the personal computer. Not only did it help pioneer 3D rendering technology, it created an entirely new genre of game, the first-person shooter, and established new records for the distribution of games through shareware (a method in which the first portion of a game is distributed freely, with subsequent chapters accessible after a fee is paid). Pitting a lone WWII soldier against a horde of Nazis in a castle stronghold, the game had the user mowing down wave upon wave of foes in addictive fashion, and gamers everywhere couldn’t get enough. The graphics were rendered in just 256 colors and the environment was limited to a series of nearly identical perpendicular rooms and hallways, but the interactive experience was more engrossing than ever. Additionally, Wolfenstein 3D marked the emergence of id Software as the leader in real-time 3D technology, an accolade they still hold without dispute a full twelve years later.

(fig.14) Wolfenstein 3D (1992) not only helped revolutionize PC graphics standards, it also created a new genre of game, the first-person shooter.

Though Wolfenstein 3D was the first game in this new genre, it was id’s following

efforts that made them famous – Doom and Doom II. Released in 1993 and 1994

respectively, the Doom games built upon id’s first-person shooter formula, this time pitting the player against a legion of hellish zombies and demons infesting a futuristic moon base. Again using an arsenal of weapons, the user waded through level after level of

technological horror, but this time with graphics far surpassing those of their predecessor.

The Doom games featured environments made of irregular geometry, no longer limited to

Wolfenstein 3D’s perpendicular layout and uniform ceiling height. Additionally, all

surfaces were texture mapped, greatly diversifying the levels, and varying degrees of

light helped set the mood. Since the first Doom was initially marketed as a shareware

product, it is impossible to know how many copies of the Doom games have been

downloaded and played, but the total is believed to be in excess of 20 million4.

4 http://en.wikipedia.org/wiki/Doom

(fig.15) The PC games Doom (1993) and Doom II (1994) left an indelible mark on the PC gaming industry.

The Advent of Real 3D on the PC

Though the Doom games were an important step on the road to interactive 3D environments on the PC, they were still in effect just a facsimile of a real three-dimensional experience. While the game levels themselves composed a fully realized 3D environment, the characters and objects were created just as they were in the days of

Wolfenstein, painted on two-dimensional sprites. All that changed in June of 1996, with the debut of id Software’s newest blockbuster, the first-person shooter Quake. Released at almost the same time as Super Mario 64, Quake brought true 3D to the PC just as

Nintendo’s newest hit brought the technology to the mainstream for console gaming.

(fig.16) Quake (1996) was the first polygon-based first-person shooter on the PC.

With Quake, the characters and objects were all modeled from polygons along with the environment, finally putting real 3D volumes into an interactive world. The models were exceptionally low in detail, generally under 500 polygons per character, but this represented the pinnacle of real-time 3D technology at the time. Textures remained limited to an 8-bit color palette, but now these textures could be generated on a 256x256 pixel space and directly mapped onto a model. This offered an unprecedented level of realism in PC gaming, and set the standard for texturing techniques still in use today.

Additionally, the characters in Quake were all animated in real-time, with their meshes deforming in accordance with inputs from the player or instructions from the game itself.

Using a vertex animation system, each frame of a given animated sequence was individually created and played back in the proper order to give the illusion of movement, offering a wider range of character actions than had been seen before.
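The playback scheme just described can be illustrated with a short sketch. This is a toy in Python, not Quake’s actual code; all class and function names are hypothetical. Each frame stores a full copy of the model’s vertex positions, and playback cycles through the stored frames, interpolating between neighbors for smoother motion.

```python
# Toy sketch of vertex (per-frame) animation playback.
def lerp(a, b, t):
    """Linearly interpolate between two 3D vertex positions."""
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

class VertexAnimation:
    def __init__(self, frames, fps=10):
        self.frames = frames   # list of frames; each frame is a list of (x, y, z)
        self.fps = fps

    def sample(self, time_sec):
        """Return interpolated vertex positions for a given playback time."""
        frame_pos = (time_sec * self.fps) % len(self.frames)
        i = int(frame_pos)
        j = (i + 1) % len(self.frames)   # wrap around to loop the sequence
        t = frame_pos - i
        a, b = self.frames[i], self.frames[j]
        return [lerp(a[k], b[k], t) for k in range(len(a))]

# A two-frame "animation" of a single vertex moving along the x-axis.
anim = VertexAnimation([[(0.0, 0.0, 0.0)], [(1.0, 0.0, 0.0)]], fps=10)
print(anim.sample(0.05))   # halfway between the two stored frames
```

The key point is that every frame is a complete, static snapshot of the mesh; nothing can be varied at runtime except which snapshots are shown and when.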

Quake is important not only because it was the first PC game to offer a true interactive

real-time 3D experience, but also because it formed the basic building block of many

games and 3D engines to come. In 1997, id Software followed with a sequel, Quake II, which used a more advanced version of the engine and included a number of new visual effects. Additionally, id licensed their graphics engines to a number of other developers for use in building their own games. Some simply built their games directly atop id’s technology, while others made dramatic alterations, opening new doors into the interactive experience.

One of the most significant games of the post-Quake era is the first-person

shooter Half-Life, released by Valve Software in December of 1998. Praised by many as one of the greatest games of all time, Half-Life set new standards in video game design, particularly in the areas of narrative and artificial intelligence. However, while its rendering capabilities weren’t markedly superior to those of the Quake engine it was built upon, a

number of new capabilities were added to the mix. Character and creature models now

approached 1000 polygons, roughly double the standard set by Quake, and textures could

now each use a unique color palette. Whereas everything in Quake and Quake II had to

be painted from the same 256 hues, Half-Life was able to use a much more vivid

collection of textures.

(fig.17) Valve Software’s Half-Life (1998) and its expansions are still popular with gamers today.

Even more important, however, were improvements in animation technique.

The Quake-based system of vertex animation was not dynamic, as each frame of

animation was essentially an altered, static version of the original. In Half-Life, characters and creatures were instead animated using a skeletal system, in which an underlying system of bones is manipulated to deform the model around it, the way real bones move and shape the position of the body. Not only did this technique allow for more fluid and diverse animated sequences, it also opened the door for dynamic combinations of motions. For example, with vertex animation a character would have separate animations for running and for looking up or down, and could only perform one at a time – never both. Half-Life allowed for character skeletons to be blended in real-time, meaning that different elements of different animations could be combined to create a more believable system of movement. A character could be shown to look up or down while running by using the appropriate animated bones from the legs in one sequence and

the torso and neck from another. This system allowed for a much wider and more believable set of character actions than had been attempted previously.
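The per-bone blending idea can be sketched in a few lines. This is an illustrative toy in the spirit of the technique described above, not Valve’s implementation; bone names, angles and the mask are all made up. Each animation supplies an angle for each named bone, and a mask decides which bones come from which clip, so a run cycle can drive the legs while a look-up pose drives the torso and neck.

```python
# Toy sketch of per-bone blending in a skeletal animation system.
def blend_pose(run_pose, look_pose, upper_body_bones):
    """Combine two poses: upper-body bones from look_pose, the rest from run_pose."""
    combined = {}
    for bone, angle in run_pose.items():
        combined[bone] = look_pose[bone] if bone in upper_body_bones else angle
    return combined

run_pose  = {"left_leg": 35.0, "right_leg": -35.0, "torso": 0.0,  "neck": 0.0}
look_pose = {"left_leg": 0.0,  "right_leg": 0.0,   "torso": 15.0, "neck": 30.0}

pose = blend_pose(run_pose, look_pose, upper_body_bones={"torso", "neck"})
print(pose)   # the legs keep running while the torso and neck look upward
```

Because the blend operates on bone parameters rather than finished vertex snapshots, any running clip can be combined with any aiming or looking clip without authoring every combination by hand.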

By the year 2000, interactive real-time 3D graphics had moved on and once again

the benchmark for impressive visuals was set by id Software. Quake III Arena, a multiplayer-focused first-person shooter, broke ground in several new directions and, as with the original Quake engine, would become the platform of choice for a number of other developers to use in building their own games. As is usual with a new-generation engine, the polygon count of Quake III Arena’s models was higher than that of its predecessors, this time pushing nearly 2000 polygons per character. The most important achievements with

this new technology, though, were in the area of textures. Long limited to an 8-bit palette

of 256 colors, id Software’s artists were now able to use full 32-bit color for Quake III

Arena’s graphics, effectively allowing them to create photorealistic textures from a

palette with millions of hues. Additionally, texture images themselves had quadrupled in

detail by this time, going from 256x256 pixel squares to 512x512 sized squares on

average, and sometimes even larger.

(fig.18) Real-time 3D graphics took another huge leap forward with Quake III Arena (1999).

But perhaps most noteworthy of all was the implementation of new textural elements, including alpha mapping, specular mapping and procedural shader effects.

Alpha mapping allows certain parts of a texture to appear transparent, which is useful for representing holes in, or ragged edges around, the polygon that the texture is mapped onto.
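The alpha-mapping idea just described can be illustrated with a toy sketch. Real hardware blends per pixel during rasterization; this simplified version, with made-up names, just masks texels whose alpha value falls below a threshold, which is enough to show how a plain rectangle of polygons can represent a ragged or hole-filled shape.

```python
# Toy sketch of alpha mapping: texels below the alpha threshold are skipped.
def apply_alpha_map(colors, alphas, threshold=0.5):
    """Return the color texels to draw; None marks a transparent (skipped) texel."""
    return [c if a >= threshold else None
            for c, a in zip(colors, alphas)]

# A 4-texel strip: the middle two texels are "cut out" by the alpha map.
colors = ["grey", "grey", "grey", "grey"]
alphas = [1.0, 0.1, 0.0, 0.9]
print(apply_alpha_map(colors, alphas))   # ['grey', None, None, 'grey']
```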

Specular mapping works similarly, but instead of making parts of a texture transparent, a specular map governs how much light a surface reflects, allowing for glossy or matte surfaces. Neither of these techniques was explicitly new with Quake III Arena, but the game was noteworthy for widely implementing these effects across all 3D environments and objects without restriction. More significant still were the procedural shaders which could be applied to models. Rather than relying solely on painted two-dimensional graphics to color a polygon, computer-generated effects could also be used as part of the texture

process. This allowed for an amazing range of dynamic visual effects to appear across a

model’s surface, like the rippling reflective effect of metal or undulating wisps of smoke.

These shaders were programmable by the texture artists themselves, to achieve various

effects as needed. This technology has often required traditional 2D digital artists to

work as technicians as well, a common trend in the industry as games become ever more

sophisticated.
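The procedural idea can be sketched as follows. Instead of sampling a painted bitmap, a texel’s brightness is computed by a small program each frame; here a moving sine ripple stands in for the kind of dynamic surface effect described above. This is a hedged, illustrative toy only, and its function name and parameters are invented, not taken from any engine’s shader language.

```python
# Toy "procedural shader": brightness is computed per texel, per frame.
import math

def ripple_shader(u, time_sec, speed=2.0, frequency=8.0):
    """Brightness in [0, 1] for texture coordinate u at a given time."""
    wave = math.sin(frequency * u + speed * time_sec)
    return 0.5 + 0.5 * wave   # remap sin's [-1, 1] range into [0, 1]

# Sample a row of texels at two moments: the bright band moves over time.
row_t0 = [round(ripple_shader(u / 8.0, 0.0), 2) for u in range(4)]
row_t1 = [round(ripple_shader(u / 8.0, 1.0), 2) for u in range(4)]
print(row_t0)
print(row_t1)
```

Because the pattern is computed rather than painted, it can animate indefinitely without storing a single extra frame of texture data.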

Quake III Arena visually dominated the real-time 3D scene for some time and,

unlike in previous years, no obvious successor as the leader in rendering technology has

emerged. In the fall of 2002, Epic Games released the first-person shooter Unreal

Tournament 2003, a continuation of their Unreal Tournament series begun in 1999. This game’s engine, along with the slightly updated version in its sequel, Unreal Tournament 2004, represents the typical real-time 3D capabilities of software on the market today. In essence, the advancements made with Quake III Arena have been further expanded upon.

The potential polygon count for models has again risen, frequently topping 3,000 per character, and more advanced texture and shader features are common as well. One notable development in games in the present era is the inclusion of real-time physics calculations based on interactions with 3D objects and the environment. One touted feature of the new Unreal Tournament games is a ‘rag doll’ effect, the incorporation of

dynamic physics into character models. By linking certain calculations to the bones of a model’s skeleton, models are able to collapse and collide with objects in a more realistic fashion than scripted animation allows. In particular, when a user’s character dies in the game, the character model may realistically fall down a flight of stairs or react to other impacts the way a pliant cloth doll might. This is the first step towards incorporating real-time physics into all aspects of an interactive 3D environment, a level of detail that will help further the suspension of disbelief when immersed in a virtual world.
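The core of the rag-doll idea can be sketched with a single simulated point. This toy, with made-up parameters, steps one bone end forward under gravity each frame and collides it with the floor, rather than playing a pre-scripted fall; a real rag doll additionally links its points with joint constraints, which are omitted here.

```python
# Minimal sketch of per-frame physics integration behind a "rag doll" effect.
def step(pos_y, vel_y, dt=1.0 / 60.0, gravity=-9.8, floor=0.0):
    """Advance one frame of semi-implicit Euler integration with a floor."""
    vel_y += gravity * dt
    pos_y += vel_y * dt
    if pos_y < floor:              # collision: clamp to the floor, stop moving
        pos_y, vel_y = floor, 0.0
    return pos_y, vel_y

y, v = 2.0, 0.0                    # point dropped from two meters, at rest
for _ in range(600):               # simulate ten seconds of frames
    y, v = step(y, v)
# the point has fallen and come to rest on the floor (y == 0.0)
```

Because the motion emerges from the simulation rather than from an animation clip, the same code produces a different, physically plausible fall for every staircase or impact.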

(fig.19) Unreal Tournament 2003 is near the top of current interactive real-time 3D graphics capabilities.

New software on the horizon for 2004 promises a number of still more impressive features. Some upcoming games have further adapted physics into believable environments and will allow interaction with these dynamic objects between users in a multi-player setting. New shader and rendering techniques have been developed as well.

Most prominently showcased is a textural technique called normal mapping, a process in which highly realistic models, several hundred thousand polygons in complexity, are used to make accurate lighting calculations that are then transferred to a more abstract low polygon game model. This technique allows for blocky game models to appear far smoother and more detailed than they actually are. The most anticipated games for 2004, 37

both of which incorporate real-time physics and normal mapping, are sequels to a pair of titles mentioned above: id Software returns to their classic Doom series with Doom 3, and

Valve Software will deliver their long-awaited second game, Half-Life 2. While these games have endured some development delays, both are anticipated for release in the summer of 2004 and they are expected once again to push the boundaries of interactive real-time 3D beyond anything that users have seen before.
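The normal-mapping technique described above can be illustrated with a toy lighting calculation. Diffuse lighting follows Lambert’s cosine law, brightness = max(0, N · L); if the per-texel normals N are baked from a detailed high-polygon model, a flat low-polygon surface lights up as though the detail were really there. The baked normals below are invented for the example, not taken from any real tool.

```python
# Toy sketch of why normal mapping fakes surface detail via lighting.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lambert(normal, light_dir):
    """Diffuse brightness for a unit surface normal and unit light direction."""
    return max(0.0, dot(normal, light_dir))

light = (0.0, 0.0, 1.0)        # light shining straight at the surface
flat_normal = (0.0, 0.0, 1.0)  # the polygon's own normal: uniform brightness

# Normals "baked" from a hypothetical high-poly model: they tilt across the
# surface, so brightness varies texel by texel and suggests sculpted relief.
baked_normals = [(0.0, 0.0, 1.0), (math.sin(0.5), 0.0, math.cos(0.5))]

per_texel = [lambert(n, light) for n in baked_normals]
flat = lambert(flat_normal, light)   # 1.0 everywhere without the normal map
```

With only the polygon’s own normal, every texel receives the same brightness; sampling the baked normals varies the shading across the flat surface, which is exactly the illusion of detail the technique provides.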

The Hardware That Makes It Happen

While arcade and console systems are developed with specific software in mind, computers are intended to be much more flexible and serve a wider variety of needs.

Because of this, advances in 3D graphics happened in a rather different fashion on the PC platform than with the dedicated gaming systems.

Taking the example of the Nintendo 64, Nintendo partnered with 3D hardware developer Silicon Graphics to create the internal components of their new system. As they were overseeing the development of both the hardware itself and the software it would run, Nintendo was able to customize their system to function as efficiently and as powerfully as possible. For 3D technology on computers, such a close relationship between hardware and software was not possible. Unlike a Nintendo system, computer hardware and software are created by thousands of independent developers. The best-known PC software company, Microsoft, doesn’t create actual computers at all. Computer processors, hard disks, CD-ROM drives and virtually all other home computer components come from numerous manufacturers, and often in direct competition

with each other. As a result, one user’s PC may differ markedly from another, in terms of

actual power and software compatibility. New advances in technology sometimes

struggle to find success in the market, and unlike with consoles, it can take

years for the latest development to become an industry standard.

In 1996 and 1997, the first generation of 3D graphics hardware arrived on the

market for PC users in the form of new video display adapters (or ‘cards’) designed to

accelerate 3D performance. Due to the lack of any industry standards, several different

developers created their own 3D chipsets, each with a specific set of features. The most notable hardware included the ViRGE chipset from S3, the Verite chipset from

Rendition, the Riva chipset from NVIDIA and the Voodoo chipset from 3Dfx. Naturally, each manufacturer’s product had its strengths and weaknesses, but all of them enabled the

PC user to have real-time solid polygon graphics for the first time. This generation of 3D graphics cards roughly approximated what was offered on the Nintendo 64 console, available at the same time.

A year or so after the debut of 3D graphics hardware, it was clear that 3Dfx, with their Voodoo chipset, had become the industry leader. Using a 3D-only solution paired with a separate video card for 2D functions (a format unique to the early Voodoo hardware), the

3Dfx cards were able to devote all of their power to real-time 3D graphics. Generally faster than the competition, the Voodoo hardware was consistently at or near the

cutting edge in terms of performance. Several of the lesser competitors (including S3 and

Rendition) quickly fell by the wayside, and all other hardware developers were

relegated to at best a distant second place.

The Voodoo’s era of dominance would be short-lived, however. A few poor decisions by 3Dfx, especially in choosing not to support high-end 32-bit color, left the door open for other developers to jump back into the race. By 1999, NVIDIA came to control much of the graphics market with their newer TNT and GeForce chipsets. 3Dfx would never recover from this reversal of fortune and would in fact be bought out by

NVIDIA in the year 2000.

The GeForce line of video cards from NVIDIA not only highlighted a shift in market leadership, but it also introduced some new features to real-time 3D hardware.

Significantly, the capability for hardware transform & lighting (T&L) allowed programmers to move many of the complex calculations needed to draw a real-time 3D scene on the screen out of the main computer system (CPU and memory) and perform them on the video card. Again, due to the fragmented nature of computer hardware and software development, it took some time before this feature could be properly exploited, but the end result allowed for faster 3D calculations and subsequently much more lush and immersive interactive environments.
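The "transform" half of T&L amounts to per-vertex arithmetic: every vertex of a model is multiplied by a transformation matrix each frame to position it in the scene, and offloading exactly this work from the CPU to the graphics card is what hardware T&L provided. The pure-Python sketch below is illustrative only; real pipelines use 4x4 matrices to fold in translation and projection as well.

```python
# Toy sketch of the per-vertex transform step that hardware T&L offloads.
def mat_vec(m, v):
    """Multiply a 3x3 matrix (row-major nested lists) by a 3-vector."""
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

# A 90-degree rotation about the z-axis (translation/projection omitted).
rot_z_90 = [[0.0, -1.0, 0.0],
            [1.0,  0.0, 0.0],
            [0.0,  0.0, 1.0]]

model_verts = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
transformed = [mat_vec(rot_z_90, v) for v in model_verts]
print(transformed)   # each vertex rotated a quarter turn about z
```

Multiplied across tens of thousands of vertices sixty times a second, this small calculation dominates a frame’s arithmetic, which is why moving it onto dedicated hardware made such a difference.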

As with 3Dfx before them, NVIDIA’s time as the frontrunner in the 3D graphics race was temporary. In 2002, Canadian developer ATI released their Radeon 9700 line of video accelerators and took over the number one spot in terms of real-time 3D performance. Long a follower in the 3D industry, ATI’s emergence as the technology leader illustrated the rapid changes which can occur in the PC hardware industry, a drastic difference from the five- or six-year development cycles common to the console manufacturers. ATI’s follow-up product, the Radeon 9800, further cemented their place at the top of the real-time 3D hardware developers. NVIDIA did not make any one error

that cost them their position as the industry leader, but instead simply released a

generation of cards that were not nearly as dominant over the competitors’ products as

they had been in the past. Fortunately for NVIDIA, this was just a temporary setback

rather than a career-ending situation like the one that befell 3Dfx.

In June of 2004, both NVIDIA and ATI will release their newest cutting-edge 3D

graphics accelerators. The jury remains out on how these cards will fare in the

marketplace, as performance benchmarks for the new products are simply too close to call.

Both the new GeForce and Radeon cards have exceptional overall performance, and each

card excels in some areas that the other does not. Again, this lack of an industry-wide standard will still likely prevent new technologies and features from immediately being

implemented in the next generation of 3D games, but the neck-and-neck competition

continues to drive the PC graphics industry at a far more rapid pace than that of the

consoles.

Most of the 3D graphics adapters available to consumers today are an order of

magnitude more powerful than even the most expensive high-end digital components

available just a few years ago, as the growth of 3D technology has outstripped virtually

all other aspects of computer hardware in that time and each successive generation is

always capable of greater things than the one that preceded it. The latest hardware

features continue to drive the development of the software and ever more realistic and

demanding games continue to drive the development of the hardware, always keeping

interactive real-time 3D at the forefront of modern digital technology.


Aesthetics of Interactive Real-Time 3D

Discussing the aesthetics of interactive real-time 3D art is notably different from the analysis of most other forms of art and design because it is inevitably a critique of both the artist’s work and the medium itself. When comparing two styles of painting, for example, the specific requirements of the medium can often be put aside. An aesthetically pleasing Renaissance painting is evaluated on significantly different criteria than an Impressionist masterpiece, as the intentions of the artists of those works are also significantly different. However, the materials that the artists used remain largely the same and have little effect in determining the style of the resulting work. Variations in the medium certainly occur, but pigment is still pigment and canvas is still canvas; the use of one consistency of paint doesn’t mandate a realistic style, nor does a particular surface necessarily demand abstract art.

With digitally created 3D work, this is not the case. The intention of the 3D artist is often influenced by the medium in which he works – sometimes the art is enhanced by it, other times restricted as the artist works in and around its limitations. A casual observer can quickly identify the major visual differences between the pre-rendered 3D art assembled for film and that which is created in real-time for interactive multimedia, though they share a similar origin. The artist’s material and tools remain the same, sculpting from polygons and painting with pixels, but the final destination of that artwork determines the style in which the artist must work. Whereas the cinematic 3D artist is relatively unhindered in the visual design he chooses to pursue, the interactive 3D artist is limited by what his technology will allow and must tailor his work accordingly.

Because of this, the primary concern of an interactive 3D artist is working

around and, when possible, overcoming the limitations imposed upon him by the medium

in an effort to maintain fidelity to his source material. In general, this means striving for an ever more natural, realistic interpretation of the content: photo-realistic scenes, accurate digital recreations of people and things, life-like movement and other elements useful in creating an immersive, believable experience for the user. Each of these elements presents unique and often difficult challenges to realize in real time. However,

realism is not always the goal; exaggerated, cartoon-style work can be similarly difficult.

Think of the fluid line work and dynamic, often unpredictable changes in form common to a typical cartoon, and it is easy to anticipate the difficulties inherent in reproducing that with a medium based entirely upon digitally-generated planes and angles. The interactive 3D artist has to accept that a perfect interpretation is not possible and then set about finding the best compromise between what he wants to achieve and what the medium will allow him.

As a result, interactive real-time 3D is innately a representational style of artwork.

This means that the work created is often symbolic or iconic in fashion, with the content reduced to a streamlined, minimalist semblance of its original self. For example, an artist working on a video game adaptation of a James Bond film starring actor Pierce Brosnan

would not set out to precisely interpret the actor’s every physical feature – quite simply,

that’s just not possible. Instead of trying to digitally recreate Pierce Brosnan,

the artist builds a character model that takes on a number of the actor’s attributes.

Special emphasis is put upon particularly distinguishable aspects; perhaps Brosnan’s nose

or chin in this case, along with accurate textural qualities. No one looking at the final product will mistake the 3D model for a photograph of Pierce Brosnan, as would likely be the goal of cinematic 3D. The figure will be too blocky and angular, the color and lighting too flat, and the details too stylized to pass that test. However, if properly designed, the viewer would be able to immediately recognize that the model represents

Pierce Brosnan, much as the other elements of a real-time 3D scene might represent a car, a tree, a cartoon character or whatever else the artist wishes to bring into his interactive environment. A well-designed project recognizes the representational nature of the medium and ensures that all of the different components share in the same balance between fidelity and abstraction.

(fig.20) This image shows how low-polygon art can effectively create recognizable objects or characters, including James Bond actor Pierce Brosnan for the game 007: Everything or Nothing (2004).

This unique visual aesthetic is driven almost entirely by the technology available at the time of creation. While the representational style has long been the standard of real-time 3D, it has never been particularly desirable in itself. The goal is always to achieve a new degree of realism, of accuracy to the artist’s vision. Each generation of computer graphics technology opens new doors to the interactive 3D artist, allowing greater levels of detail, access to superior visual capabilities and, above all, a better sense

of immersion in the final work. With real-time graphics capabilities always improving,

the artist must constantly refine his craft as more sophisticated tools and techniques

become available. This means that the aesthetics of good interactive real-time 3D art are constantly shifting: always united by certain common principles, but always moving towards an ever more complex future.

Interactive real-time 3D content can generally be described by analyzing its three basic elements: in common terminology, low-poly modeling, texturing and animation. When dealing with the question of aesthetics, however, those three

elements are better revealed in their most fundamental state: shape and form, color and

light, motion and interaction. These broad tenets of design theory form the basis for

understanding the principles that guide the art.

Shape + Form

Creating the model is the first step in producing real-time 3D content, and so it is the component most responsible for determining how the finished piece will look. While digital 3D art is often likened to sculpture, in that both are forms crafted from volume rather than upon a flat plane, digital 3D has some unique characteristics that set the two apart. Most notable is that digital 3D art is in actuality an amalgam of both two-dimensional and three-dimensional space; the model is a legitimate 3D volume, but its third dimension exists only virtually, within the machine displaying it. A few rare exceptions aside, this display is made on a single, two-dimensional surface, usually on a computer monitor or as projected video. With a real, tangible sculpture, the brain can take advantage of binocular vision to mentally craft an interpretation of the sculpture’s depth in space as well as its width and height. With most digital 3D content, that advantage is lost. Since the viewer’s information about the model comes from a single two-dimensional source, the viewer is unable to rely on natural vision to properly determine volume. This is made particularly difficult with interactive real-time 3D because of its representational, less-detailed nature. The volumes are already somewhat abstracted, and the absence of accurate lights and shadows typical of real-time interactivity further complicates the process.

Silhouettes

As a result of this translation into 2D presentation, other cues are necessary to visually sort objects into low-poly three-dimensional space. Primarily, the work relies upon occlusion and comparison in scale to account for depth perception. The flat nature of these images makes a real-time 3D object’s silhouette its primary design concern. As the artists are expected to craft their work within a specific and usually limited polygon budget, it is crucial to devote as many of those polygons as necessary to creating a proper perimeter to the form from all potential points of view. With the reduction of effective depth indicators for the 3D volume, the task instead becomes to communicate the model’s nature through a distinctive and easily recognizable silhouette. This concern might appear overstated when the viewer has time to linger on a single rendered frame of

3D content, but with interactive 3D and particularly with fast-paced video games, any given 3D element might not linger on screen for more than a fraction of a second. This makes recognition of the object’s form more important than a uniformly accurate 3D model, particularly without the usual depth cues used to identify a volume.

(fig.21) With the texture removed, it is up to the model’s silhouette to define the look of a character. From Unreal Tournament 2003.

Abstraction of Detail

Since the majority of a real-time 3D polygon budget is generally assigned to refining the model's silhouette, sacrifices must be made in other areas. This leads to what is likely the most easily identified visual aesthetic of low-poly modeling: reduction in detail. As mentioned above, it is this abstraction of the medium that necessitates the familiar, iconographic style. The challenge in designing effective low-poly models is recognizing which details can be reduced and how far those reductions can be taken. In general, any element that is helpful in creating an identifiable contour is retained and developed with as much detail as necessary. Conversely, 3D elements that are noticeable in the silhouette from very few perspectives (buttons on a shirt) or do not significantly differentiate the form (a fingernail curving around the tip of the finger) can frequently be done away with altogether and rendered entirely through texture. Some elements fall in between these extremes; they are reduced in detail but still need to be incorporated into the model. For example, a character that will be required to speak must have teeth modeled into the mouth, even if they are only occasionally visible. However, each tooth does not have to be created individually. Instead, a segment of semi-circular polygons representing the entire row of teeth will suffice, with each individual tooth described through the texture applied to those polygons. A successful low-poly model will find the proper accommodation for these varying levels of detail while creating a memorable silhouette from a variety of visual angles.


(fig.22) Since the character’s right hand is always holding a weapon, it can be modeled as a closed fist instead of individually modeled fingers as with the left, in order to reduce the number of polygons used. From Unreal Tournament 2003.

Proportion in Construction

Beyond that, basic volumetric and proportional issues begin to apply to the model's construction. While two-dimensional rendering may devalue the effect of these aspects in an interactive space, they remain crucial to proper 3D design. In particular, it is important to have natural, organic loops in the edges that define the model's polygons. This means that, when possible, the edges that lay out the structure of a character ought to follow its actual contours as closely as possible. Maintaining a good sense of edge flow throughout a model creates a fluid, more organized model, and it also lends itself more readily to good animation. If a character's muscles are modeled like real muscles, it becomes easier to make them extend and contract like real muscles. Poor edge flow technique can lead to irregular surfaces and particularly poor animation. Edge flow concerns are not limited to organic models; difficulties with texture application and other surface problems can arise with highly geometric shapes as well.

(fig.23) This image of a human head shows the curving edge loops which define the form, particularly around the mouth and eyes, as well as across the skull. From 3Dtotal.com modeling tutorial.

Proportion within a low-poly model is also important – not just in ensuring that arms and legs are the right length or that a particular collection of models is well-composed, but also that the polygons used in building a model are properly distributed.

This concern is sometimes referred to as polygon density. An aesthetically sound model will have polygons of approximately the same size used at the same rate across the entire model. Some areas will naturally need more polygons than others, to accommodate specific detail or animation, but a generally even distribution is important to ensure that each polygon is being used efficiently. If two-thirds of the polygons used in a human figure model are spent on the lower body, the contour of the upper body is almost certainly being inadequately defined.
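The polygon-density check described above can be expressed as a simple calculation. The following Python sketch is purely illustrative – the region names and triangle counts are hypothetical, not drawn from any real model – but it shows how an artist or tool might audit a model's budget distribution.

```python
# Sketch of a "polygon density" sanity check for a low-poly character model.
# Region names and triangle counts below are hypothetical illustration.

def density_report(region_tris):
    """Return each region's share of the total triangle budget."""
    total = sum(region_tris.values())
    return {region: count / total for region, count in region_tris.items()}

character = {
    "head": 420,
    "torso": 610,
    "arms": 380,
    "legs": 390,
}

shares = density_report(character)
for region, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    # Flag any region that consumes more than half the budget.
    flag = "  <- check distribution" if share > 0.5 else ""
    print(f"{region:6s} {share:5.1%}{flag}")
```

A report like this makes the two-thirds-in-the-lower-body failure case easy to spot before any polygons are re-spent by hand.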

Order and Chaos

Another important consideration in low-poly design is camouflaging regularity.

The human mind is particularly good at sorting information and recognizing familiar patterns. That can be useful when taken advantage of properly, allowing the designer to imply certain details without modeling them fully and letting the viewer mentally complete the picture. However, this effect goes both ways. Most natural objects, from people to trees to rocks, are irregularly shaped. Even humans are not entirely symmetrical, as there are always observable differences even between the left and right sides of a person's face. Unfortunately, computers are not particularly good at emulating this sort of irregularity. 3D applications rely on mathematics to define every point, line and shape, and they are much better suited to creating precisely ordered, regular geometry than anything chaotic. Often, 3D models of organic objects begin as familiar geometric volumes like cubes and spheres, and are then manipulated into a semblance of natural irregularity. If the manipulation is not thorough enough and the regularity is not carefully masked, the illusion breaks down and the viewer sees a collection of planes and angles rather than whatever meaning was intended by the artist. What is meant to appear as a tree might instead be revealed as just a collection of cylinders roughly joined together.

(fig.24) This image from the PC game EverQuest (1999) illustrates the need to conceal regular, computer- generated geometry in an organic model.
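The process of masking geometric regularity can be illustrated with a minimal sketch: a perfectly regular ring of vertices – such as the cross-section of a cylinder meant to read as a tree trunk – is perturbed with small random offsets. The function names and jitter magnitude below are hypothetical, not taken from any modeling package.

```python
import math
import random

# Sketch: a regular ring of vertices (a cylinder cross-section that is meant
# to read as a tree trunk) is perturbed with small random offsets so the
# underlying geometric regularity is less apparent. Magnitudes are illustrative.

def regular_ring(sides, radius):
    """Vertices of a mathematically perfect n-sided ring."""
    return [(radius * math.cos(2 * math.pi * i / sides),
             radius * math.sin(2 * math.pi * i / sides))
            for i in range(sides)]

def roughen(ring, jitter, seed=0):
    """Offset each vertex randomly; seeded so the 'handmade' look is repeatable."""
    rng = random.Random(seed)
    return [(x + rng.uniform(-jitter, jitter),
             y + rng.uniform(-jitter, jitter)) for x, y in ring]

trunk = roughen(regular_ring(sides=8, radius=1.0), jitter=0.15)
```

The seed matters in practice: a deterministic perturbation lets the artist keep a pleasing irregular result instead of receiving new geometry on every rebuild.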


Variety and Unity

Finally, like any art form, real-time 3D presents the artist with questions relative to his influence or genre. Each project has individualized concerns, and the value of the resulting work is judged relative to how closely it represents the artist's intent. Nonetheless, fundamental aesthetic principles continue to apply, whether the work is a cartoon-styled video game, a realistic web-based product demo or any other form of interactive real-time 3D. Like any body of design work, a good piece achieves the appropriate balance between unity and variety. Each element must visually relate to the others, especially in terms of proportion and construction technique, but each must also be distinct enough to capture the viewer's interest. An interactive project in which things are too similar can quickly become boring, while a project with wildly divergent content will feel unstructured and difficult to interact with. The degree to which this balance is found will vary from project to project, as some demand more similarity and others encourage a more divergent approach, but this fundamental question of contrast must always be addressed.

Color + Light

While the models establish the base of real-time 3D art, it is the textures applied to those models that make up the difference in overcoming the limitations of the medium. Low-poly model textures are rendered graphics stretched upon flat planes that lack any real depth, and this contributes to the stylized nature of low-poly 3D. While this is a less technologically demanding solution than attempting to render a fully detailed model in real time, it also requires a significant amount of 2D artwork to be successful.

It is at this point that the buttons and fingernails excised from the actual model are reincorporated, and the artist is now tasked with being a digital painter in addition to a sculptor. Unlike the cinematic 3D used in film and television, the lights and shadows implemented in real-time 3D are a poor facsimile of the real thing. As a result, the artist is required not only to create and detail a texture, but also to compensate for the lack of proper light by simulating highlights and shadows on that texture. To create something as seemingly simple as the texture of a t-shirt for a 3D character, it is necessary not only to paint the shirt as it would naturally appear in neutral ambient light, but also to paint in volumetric details like wrinkles and creases, then use appropriate tints and shades to make the texture appear realistically lit. It can be a complex process.
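The idea of painting light directly into a texture – often called "baking" – can be reduced to a per-texel multiplication. The Python sketch below is illustrative only; the texel values and the hand-authored light factors are hypothetical, not from any real asset pipeline.

```python
# Sketch of "baking" shading into a texture: each texel's base color is
# multiplied by a hand-authored light factor (1.0 = fully lit, lower =
# shadowed), so the surface appears shaded even with no real-time lights.
# Values are illustrative.

def bake_shading(base_rgb, light_map):
    """Modulate a row of RGB texels by per-texel light factors."""
    return [tuple(min(255, round(c * k)) for c in texel)
            for texel, k in zip(base_rgb, light_map)]

shirt_row = [(200, 40, 40)] * 4      # flat red t-shirt texels
crease    = [1.0, 0.8, 0.55, 0.9]    # painted wrinkle shadowing

print(bake_shading(shirt_row, crease))
```

The same flat red becomes a wrinkle simply because the artist darkened the light map where the fold would fall – exactly the substitution for real lighting the paragraph above describes.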

Balancing Contrast with Consistency

With so much detail going into these two-dimensional graphics, the designer’s primary aesthetic concern has to be achieving consistency through the body of work while generating enough contrast to help accentuate the models they are applied to.

Specifically, a degree of contrast is necessary to establish a strong figure-ground relationship between the model and its environment. As discussed above, the depth cues available to a real-time 3D artist are limited, and the emphasis on the silhouette of an object makes textural contrast all the more important. The texture is needed not only to add color and detail to the model, but to help it 'pop' off the background and work as a distinct, easily recognizable element. This is particularly important when an object that the artist wishes to emphasize is not prominently featured within the scene. The use of colored lighting, common in video games, can also dilute the hues of low-contrast textures into a muddy, indistinguishable mess.

(fig.25) Quake II (1997) introduced colored lighting to real-time 3D games.

Additionally, high contrast is needed to ensure that textures remain crisp even at significant depths inside the interactive environment. With real objects, colors inevitably begin to blur together or fade at a distance, but this issue is more prominent with digital art. Since no detail can ever be rendered smaller than a single pixel, any details beyond a certain range become averaged together with their neighboring pixels, often into a vague, impressionist interpretation of the actual texture. This happens at a much shorter distance on a computer screen than in the real world, again emphasizing the importance of effective contrast.
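This averaging-at-distance effect can be demonstrated with a minimal sketch. The stripe values below are hypothetical, but the arithmetic mirrors what texture minification does: consecutive texels collapse into single pixels, and low-contrast detail loses almost all of its remaining difference while high-contrast detail survives.

```python
# Sketch: when a texture is viewed at distance, groups of texels collapse
# into single pixels, effectively averaging neighbors together. Values are
# illustrative single-channel intensities (0-255).

def downsample(texels, factor):
    """Average consecutive groups of `factor` texels into one output pixel."""
    return [sum(texels[i:i + factor]) / factor
            for i in range(0, len(texels), factor)]

def remaining_contrast(texels):
    """Range of intensity still visible after minification."""
    return max(texels) - min(texels)

low  = [120, 120, 135, 135] * 2   # subtle stripes
high = [30, 30, 225, 225] * 2     # bold stripes

print(remaining_contrast(downsample(low, 2)))   # small residual difference
print(remaining_contrast(downsample(high, 2)))  # large residual difference
```

After downsampling, the subtle stripes retain only a 15-level difference – effectively invisible on screen – while the bold stripes keep a 195-level difference and stay legible at depth.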

However, while contrast is important, it must be moderated with an appropriate degree of consistency. Bland and blurry colors are not aesthetically desirable, but wildly varied and clashing hues can be just as troublesome. Obtaining a sense of unity within a body of real-time graphic art is perhaps even more important than it is with models, because there are a number of ways in which it can go wrong. Most obviously, it is important to have a thematically focused palette with which to work, particularly one that emphasizes similar value through many of the hues. For instance, a science fiction themed game might be built heavily on blues, greens and grays, with few warm colors. While that is a good beginning, it can go awry if there is too much difference in the intensity of those hues. Dark blue juxtaposed with brilliant green might be acceptable under real circumstances, but will almost certainly appear quite different on a computer display. Monitors and the like operate by using additive light, meaning that each pixel of the screen is specifically illuminated to the necessary color, rather than relying on the more subdued subtractive reflection of light that illuminates objects in the real world. This can lead to very intense variations, as the dark blue hues might disappear into black while the greens almost glow with intensity. The actual experience will even change from display to display, as different degrees of brightness and contrast alter the effect of the color palette.

Strong color combinations need to be used judiciously, particularly brighter hues, in order to avoid overwhelming the viewer.
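The dark-blue-versus-brilliant-green problem can be quantified with a perceived-brightness check. The sketch below uses the widely published Rec. 709 luma coefficients (and ignores gamma for simplicity); the specific RGB values are hypothetical examples of the two hues discussed above.

```python
# Sketch: comparing the perceived brightness of two hues using the standard
# Rec. 709 luma coefficients, ignoring gamma correction for simplicity.
# The RGB values are illustrative.

def luma(r, g, b):
    """Approximate perceived brightness of an RGB color (0-255 channels)."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

dark_blue       = (20, 30, 120)
brilliant_green = (40, 230, 60)

print(round(luma(*dark_blue), 1))        # -> 34.4
print(round(luma(*brilliant_green), 1))  # -> 177.3
```

Although both colors might sit comfortably side by side on a physical palette, the green carries roughly five times the luminous intensity on an additive display – precisely the imbalance a value-conscious palette is meant to prevent.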


Painting vs. Photography

While consistency is important at the basic color level, it is equally important that the techniques used to generate the textures are also consistent. There are two main approaches to creating textures for real-time 3D: hand-painted art and photo-sourced textures. With the former, the entirety of a texture is painted from scratch, as the artist uses his digital tools in the same fashion that a portrait painter handles his brushes and paints. Photo-sourcing, on the other hand, relies almost entirely on using photographs and external images to create the base textures, with manual adjustments made as necessary. Both techniques have their merits. However, even when tonally similar, they have a disparate artistic sense – while they can be combined, one form must have visual primacy over the other to ensure a consistent style. Photo-sourced textures are the less critically appreciated of the two; while they can certainly save time and are more accurate than painted versions, special care must be taken to correct their innate highlights and shadows, bringing them in line with the rest of the work. A photo-sourced image of an object taken under intense, direct light will need considerable adjustment to fit into a body of work that is dark and heavily shadowed. Additionally, and especially with very low-poly models, the photo-realism of this style of texture may be at odds with the very representational style of the 3D form. Photo-sourced art is often useless for cartoon-based visuals.

(fig.26) This image from Max Payne 2 (2003) illustrates the use of photo-source imagery for game textures.

The other approach, hand painting, allows a greater degree of customization with the textures, but it also requires a lot of effort to accurately recreate specific detail. The time requirement alone can make photo-sourced content more desirable. And as with photo-sourced art, there are certain genres in which a painterly style isn't always appropriate; projects that depend upon gritty realism are frequently better realized through appropriate photographed imagery. The final determination on which style or combination of styles to use is always relative to the project, but it is necessary to emphasize one form over the other. The two techniques are generally too dissimilar to work well together, and projects which attempt to do so may look fractured and random.

Incorporating New Technology

Lastly, it is important to make the textures work for the model. Improvements in color and light technology are among the most prominently advertised with each new generation of real-time 3D graphics, and there can be a tendency to let that technology overrule aesthetically sensible design. The advent of colored light technology saw a deluge of computer games that applied the effect excessively, with whole interactive environments bathed in overlapping hues. This was almost always a visual disaster. Other new technologies have been developed since then, from simple transparent textures to complex procedural shaders capable of emulating a wide variety of visual effects, and the temptation to use the latest technique is always there. These techniques are just tools to help the real-time 3D designer communicate with the user and should be treated as such; if the tools themselves come to dominate the design, the meaning of the work may become lost. As the polygon counts available for models continue to rise, advancements in textures and textural effects are becoming the next significant stage in the development of nearly all interactive real-time 3D content. It becomes ever more important for the lighting and coloring of objects to step the medium closer to visual reality, and moderating those steps appropriately is as necessary as being able to take advantage of them.

(fig.27) This model of an in-game power-up for Quake III Arena (1999) uses a procedural shader to create a shimmering, metallic surface effect.

Motion + Interaction

Animation and interactivity form the third major aspect of real-time 3D art, and it is in this component that the form is most unique. Aesthetic principles familiar to both animators and cinematographers are often relevant, as are issues of communication and involvement with the viewer. Much of art and design is static, with painting and sculpture meant to be viewed and appreciated, but not directly interacted with. Even art forms that are dependent upon motion, like film, offer only a passive experience to the viewer. Interactive real-time art is explicitly that – content that the viewer may actively participate in as it happens. The dynamic nature of this form offers an ever-changing visual experience, and subsequently involves a particularly wide array of design concepts.

Perspective Dynamics and Composition

The questions of arrangement and composition are generally handled quite differently in interactive real-time 3D than in many other forms. Since the position and orientation of the viewer's perspective is often dynamic and controllable, issues of scene composition and spatial relationships are sometimes deliberately overlooked. When the viewpoint is subject to rapid and unpredictable changes, there is little reason to expend the effort in perfectly realizing any one static perspective. A well-crafted visual arrangement that works from one angle may be completely lost in another. With an unrestricted interactive space, there is no guarantee that the viewer will ever be in position to even see that particular arrangement, much less linger long enough to actually appreciate it. Instead of working in terms of compositional elements as viewed directly on a two-dimensional monitor, an overall sense of spatial relationship, of flow through an interactive environment, is the more appropriate goal. With first-person video games, the driving force behind much real-time 3D design, it is considered extremely important for the environments to mesh together well, with different routes through the environments establishing different rhythms and offering a highly immersive, interactive experience.

(fig.28) While it lacks any specific sense of visual composition, this game level from Quake II (1997) is considered one of the best ever made because of its ability to consistently draw the player from one point of action to another.

Generally, this design follows the basic unity/variety principle, with many familiar, connected spaces that nonetheless each possess a unique and recognizable identity.

Environments that do not achieve a balance between the two can hinder the interactive process; too much similarity and the experience becomes repetitive and boring, but too little similarity renders the environment disordered and difficult to navigate. A strong sense of connection between the elements of an environment is important in order to always lead the user further into the experience. Even in a realistically designed 3D environment, it is sometimes necessary to violate logical constraints in order to improve the interactive experience. A real building will have doorways that open into dead ends and hallways that lead away from the main areas of interest, but these features would be undesirable within a video game environment, as they do not help move the user into a compelling event or encounter.

Additionally, this unusual component of spatial relationships is something that strengthens the user’s connection with the environment. By constantly redefining the composition through this dynamic perspective, the user is always analyzing new information and finding ways to relate it to what he has already seen. This plays off the user’s innate puzzle-solving psychology, leading to an ever-changing understanding and sense of awareness of virtual space beyond what is just on the screen. Despite being displayed in a two-dimensional format much like a television show, interactive 3D encourages the user to mentally “fill in the blanks” of the content beyond his field of vision and make assumptions as to other objects’ position and movement relative to himself. A passive experience like television is unable to break the borders imposed upon it by the length and width of its screen. Freedom from this limitation and the engagement of the user to think beyond purely visual boundaries is one of the defining aspects of the interactive real-time 3D experience.

Bringing Low-Poly to Life

Principles of motion apply to individual objects as well as the environments they inhabit. Often, these principles closely resemble those used in traditional animation, particularly the ones relative to characters. Concepts like 'squash and stretch', the deformation of an object to suggest action or reaction, are just as appropriate to 3D animation as they are to hand-drawn artwork. In particular, digital media has some concerns that are almost exactly like those faced by cartoonists, particularly when dealing with animated characters.
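Squash and stretch has a simple mathematical core: when an object is stretched along its direction of travel, the perpendicular axes are scaled by the inverse square root so that total volume stays constant. The sketch below illustrates this relationship; the parameter names and the sample stretch factor are hypothetical.

```python
# Sketch of volume-preserving "squash and stretch": scaling an object by
# `stretch` along one axis and by 1/sqrt(stretch) along the other two keeps
# the product of the three scale factors, and hence the volume, at 1.0.
# The sample stretch factor is illustrative.

def squash_stretch(stretch):
    """Return (axial, lateral) scale factors that preserve volume."""
    lateral = 1.0 / (stretch ** 0.5)
    return stretch, lateral

axial, lateral = squash_stretch(1.44)  # e.g. a ball elongating mid-fall
volume_factor = axial * lateral ** 2   # stays at 1.0

print(axial, round(lateral, 4), round(volume_factor, 4))
```

Keeping the volume factor at 1.0 is what makes the deformation read as a lively squash rather than the object simply growing or shrinking.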

The primary consideration of low-poly animation is the need to keep things moving. Because real-time 3D is limited in detail by the medium, things that are not obviously in motion may appear static or non-interactive and may easily be dismissed by the user. With photorealistic media like film or cinematic 3D, an unoccupied figure standing on a street corner can be relatively inert, as small motions and other visual cues can communicate liveliness. A glance, a sigh, or a shift in posture might all indicate an animate character. However, these motions are largely ineffective for communicating in real-time 3D. With figures in a low-poly 3D setting, a computer game for example, those minute actions are generally not perceptible and do little to inform the user. The models have too little detail to make such slight movements apparent. It is necessary to make certain that the figures better illustrate their liveliness – looking around, fidgeting, gesturing in an obvious fashion and so on. Even the obvious rise-and-fall of a character's chest as it appears to breathe can go a long way toward improving the user's connection with the environment. Live things that do not move can be indistinguishable from lifeless ones.

Of course, this consideration does need to be moderated as appropriate to the setting. While small actions are often not perceptible, unusual motion may be even worse, distracting the user from the experience. A character that is constantly in motion with a seemingly endless series of repetitive actions can be misleading; a low-poly figure that incessantly looks around goes from simply being 'alive' to appearing worried or paranoid, which may be contrary to the artist's intent. Interactive real-time 3D animation almost always consists of pre-programmed animation routines that are executed based on user input, and repetition with these animations is inevitable. This circumstance, fairly unique to the interactive medium, again necessitates a careful aesthetic balance between unity and variety to properly engage the user.

Exaggeration of Movement

Similarly, animation in real-time 3D often works best when exaggerated. Again, this principle has its roots in traditional animation techniques, in which cartoon figures often act and react at the extremes of motion. As mentioned above, subtle movements and nuance do not translate well into real-time, due to the lack of available detail. Typically, low-poly animation requires demonstrative action for even small movements, so larger ones work best with similarly heightened motion. Additionally, in a manner similar to the textural need for high contrast at depth, motion rendered beyond a certain range can be hard to see and appreciate. As a result, amplified motions are helpful in improving the user's recognition of a moving object, as an exaggerated form creates a more identifiable silhouette. Basic primary motions, things like running and jumping, are most directly enhanced in this way. The exaggeration of motion need not be comical unless the artist intends it to be so, but simply at the extremes of normal movement. Emphasis on dynamic ranges of motion has long been a key aspect of sequential art like comic books, in which a single image must illustrate an entire action. Interactive real-time 3D's representational style is very much akin to this, and the concept of exaggeration translates well across both media.

Secondary Motion

In addition to the exaggeration of an object's primary animation, careful attention to secondary motion is also important. Secondary motions are the indirect and sometimes unintended consequences of the primary movement. For instance, when a person runs, his legs stride and his arms pump up and down as his entire body moves. These are all primary motions, directly intended to create the movement of running. However, these actions generate reactions. The runner's hair may ripple and wave, his clothing may flow behind him and so on. This sort of movement isn't restricted to figures. The wheels of a car traveling down a road will rotate, causing the entire vehicle to move – the primary motion – but the body of the car will also bounce slightly above the wheels as it travels, particularly on a rough surface. These are secondary motions, consequences of the primary movement, and the inclusion of these effects is essential to the immersion process. If a human figure runs by moving his arms and legs but everything else remains static, the figure may take on a robotic feel and lose the lively, organic sense it is intended to convey. Secondary motion need not involve elements like clothes and hair; even small secondary movements within the body, like the compression of the torso as the lead foot plants on each stride, can better communicate a real sense of action than primary movement alone.

(fig.29) As the main character moves around in Hitman: Contracts (2004), his tie swings along with him.
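One common way to produce secondary motion like the swinging tie is to drive the attached element with a damped spring, so it lags and overshoots the primary movement rather than moving rigidly with it. The one-dimensional Python sketch below is an illustration of the idea only; the stiffness and damping constants are hypothetical.

```python
# Minimal 1-D sketch of secondary motion: an attachment (say, the tip of a
# necktie) follows its anchor point through a damped spring, so it lags and
# overshoots the primary movement instead of teleporting with it.
# Constants are illustrative.

def simulate(anchor_positions, stiffness=30.0, damping=8.0, dt=1 / 60):
    """Semi-implicit Euler integration of a spring toward a moving anchor."""
    pos, vel = anchor_positions[0], 0.0
    trail = []
    for anchor in anchor_positions:
        accel = stiffness * (anchor - pos) - damping * vel
        vel += accel * dt
        pos += vel * dt
        trail.append(pos)
    return trail

# The anchor snaps from 0 to 1 (the character stops or turns suddenly);
# the attachment catches up over several frames.
path = simulate([0.0] * 5 + [1.0] * 55)
```

The lag and gradual settling in `path` is exactly the reaction-to-action quality that makes a tie, ponytail or car body feel physically connected to its primary motion.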

Finally, with regard to models, it is important that low-poly 3D content deform as convincingly as possible during animation. Some angularity, and the clipping of solid planes into one another, is inevitable in some instances – a recognized drawback of the medium. However, if fluid, natural deformation is not refined as thoroughly as possible, the object begins to lose its representational intent and again breaks down into an obvious arrangement of polygon geometry, taking the user out of the experience. There is no single solution to this problem, though artists are able to take advantage of increasing polygon counts to more effectively accommodate these high-deformation areas.

(fig.30) The lack of enough polygons in this model from EverQuest (1999) to accommodate the animations gives it blocky, squared-off knees when kneeling down.

The Future of the Art

Because the industry for digital technology changes so rapidly, predicting anything beyond the immediate future is particularly difficult. Assessments made today about the state of digital media a decade from now are perhaps more likely to be wrong than right. Looking back over three decades of electronic gaming, enormous and sometimes unexpected leaps in graphics capability are evident, as real-time 3D has evolved from little more than two-dimensional tricks and illusions into fully realized virtual worlds. Ten years ago, the industry had yet to see 3D acceleration reach consumers; perhaps ten years from now, games will escape the boundaries of flat screens and instead be played on real-time holographic systems, or take advantage of some other currently impossible technology. The ever-increasing pace of technological discovery makes accurately predicting these developments particularly unlikely.

Nonetheless, certain trends can still be recognized and short-term progress estimated with a fair degree of success. Software built today has to be created with tomorrow’s hardware in mind, and it is common for developers to forecast the likely platform for their applications two or three years in advance. By looking at recent history and the pace of current technology, the direction of the future can sometimes be anticipated. With interactive real-time 3D, the future seems clear – an inexorable march towards cinematic reality.

Interactive Real-Time Cinema

If the recent trends in interactive real-time 3D show anything, it is that the industry is constantly driven to perfect itself in terms of creating digital worlds that are indistinguishable from real life. Looking at even the most sophisticated games and real-time software available today, no one would mistake their 3D environments for images of the real world. All of the limitations of the medium, from abstract models to artificial lighting, still determine the final visual impression. However, compare a current example with a real-time 3D application created only five years ago, and it is easy to see how far the medium has come in a relatively short period of time.

It is believed by some that 3D graphics will be able to near-perfectly simulate a scene in real-time within the next two generations of hardware. Even id Software's legendary programmer, John Carmack, suggests that real-time cinematic graphics are not too far off.5 As the hardware becomes more and more capable, developers are less limited by the traditional restrictions of low-poly content. In fact, the question arises as to whether real-time 3D art will even continue to exist in the future as it does today. Once there is no longer a need to define an artist's vision within the demands of low-poly design, it seems possible that the entire medium as it currently exists will be replaced by a different, more cinematic aesthetic.

In particular, it is some of the advances in lighting and shading that indicate the future of cinematic-quality real-time 3D is not far off. One important technique, mentioned earlier, is normal mapping. Normal-mapped models allow realistic lighting calculations to be made against an extremely detailed model, with the resulting light and shading data interpolated onto the much lower-poly model that is actually rendered. The highly detailed version is never seen, as it exists only to generate the normal map. However, it is only a matter of time before graphics hardware is able to render that highly detailed version in real-time, not only eliminating the need for the intermediate normal-mapping step, but doing away with the low-poly model altogether. Rather than needing artificial effects to simulate high detail on abstract forms, the fully modeled geometry may completely replace its much less detailed predecessor.

5 http://www.gamasutra.com/gdc2004/features/20040325/postcard-sanchez_carmack.htm
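The core of the technique can be shown with Lambertian (N·L) shading: a single flat polygon is lit per-texel using a normal borrowed from a more detailed surface, so the flat face appears to have relief. The vectors in the sketch below are hypothetical illustrations, not values from any real normal map.

```python
# Sketch of the idea behind normal mapping: a flat polygon is lit per-texel
# using a normal captured from a more detailed surface, so the flat face
# appears bumpy under simple Lambertian (N dot L) lighting.
# The vectors are illustrative.

def normalize(v):
    mag = sum(c * c for c in v) ** 0.5
    return tuple(c / mag for c in v)

def lambert(normal, light_dir):
    """Diffuse intensity: clamped dot product of surface normal and light."""
    n, l = normalize(normal), normalize(light_dir)
    return max(0.0, sum(a * b for a, b in zip(n, l)))

flat_normal   = (0.0, 0.0, 1.0)      # what the polygon geometrically is
mapped_normal = (0.4, 0.0, 0.9165)   # what the normal map claims it is
light         = (1.0, 0.0, 1.0)

print(round(lambert(flat_normal, light), 3))    # uniform flat shading
print(round(lambert(mapped_normal, light), 3))  # per-texel "detail" shading
```

Because the two intensities differ even though the geometry is identical, the viewer perceives surface detail that was never actually modeled – the substitution at the heart of normal mapping.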

(fig.31) This screenshot from the upcoming Doom 3 PC game shows how normal mapping can make a low- polygon scene look exceptionally realistic.

Another upcoming technology that will help real-time 3D take the next step towards cinematic-quality interactivity is high dynamic range imaging (often abbreviated as HDRI).6 This technology allows much more precise color calculations to be made, particularly when changes in lighting alter how a hue is perceived. Presently, the color precision in real-time 3D graphics is somewhat limited, as only a certain number of hues can be correctly reproduced. While this still allows for an enormous number of colors, problems arise when trying to blend two hues together or affect them with lights; they sometimes become desaturated and muddled together. HDRI works around this by allowing for vastly more accurate color calculations. This ensures that blended hues remain as accurate as possible, while also creating the potential for new light-based effects. Light sources will be able to realistically imitate glow or halo effects, like those seen around a streetlight at night, or allow intensely saturated colors to overlap other regions, creating a bloom of light around window frames and the like as natural sunlight pours through.

6 http://www.acmqueue.com/modules.php?name=Content&pa=showpage&pid=139&page=3

(fig.32) An image from one of the Half-Life 2 tech demos illustrates the effect of high dynamic range imaging, as the incoming sunlight blooms around the window frame.

Both normal mapping and HDRI effects are available to developers now, and they are on their way to becoming common technologies. Games with normal mapping are already appearing on the market, and those that take advantage of HDRI are not far behind. These are just two examples of a much larger trend: the continued push for a real-time interactive experience that is graphically all but identical to the real thing.

With immersion in an interactive experience the ultimate goal, life-like imagery will always number amongst the driving factors in aiding the suspension of disbelief.

Time and Money

While the rapid pace of graphics technology and the interest of interactive real-time 3D designers in taking advantage of it go unquestioned, other factors may prevent the established style of low-polygon art from falling by the wayside. The expectation that cinematic-quality visuals will come to dominate real-time 3D may be widely held, but market factors and recent trends indicate that might not be the case, at least not without a major paradigm shift in how the industry creates digital content.

Initially, the issue of time becomes a problem. As 3D art becomes more detailed and realistic, the time required to create it increases as well. Tasks that might once have taken days to complete might now require weeks or months to turn into polished content. This necessity for longer development times is evident in the visual effects industry for film and television, where large staffs of technical artists often spend in excess of a year creating, refining and then incorporating the 3D models and animation needed for a two-hour motion picture. As the interactive real-time 3D industry trends towards similar levels of detail, development times have risen accordingly. Between id Software's first two 3D titles, Quake and Quake II, approximately one year of development took place.

When going from Quake II to Quake III Arena, a game based on a more sophisticated graphics engine, two years were required to build the game. id Software's current title in development, Doom 3, is a significant leap in technology over its predecessors, approaching the level of cinematic 3D. Accordingly, the game has been in development for almost four years, due to the much more demanding nature of the content that needs to be created.

In part, the problem of time can be addressed by spending more money. Specifically, this means hiring more developers to create the body of content needed for a real-time 3D project. While one modeler and animator, a couple of texture artists and a few more environment designers were sufficient six or seven years ago, a larger staff is required to generate content for more recent graphics engines, and it seems possible that the game industry is headed towards the need for even more artists and designers, akin to the film industry. Again using id Software as an example, the original Quake required only about ten developers when it was completed in 1996, only six of whom were responsible for creating visual assets. Presently, id's staff has grown to more than twenty employees, and at least fifteen are involved in developing real-time 3D content. By industry standards, id Software has long had a relatively small staff of developers; other studios might have literally dozens of artists contributing to a single project. As with the trend towards longer development times, the need for more artists to create digital content continues to rise as well.

These demands for more development time and more money spent on hiring additional artists might rein in interactive real-time 3D's charge towards cinematic realism before present low-poly techniques are made obsolete. Unlike in the film industry, development time can reasonably be stretched only so far. A film is not explicitly dependent on its technology to be successful, as a well-made movie released ten years ago likely still stands up as a quality production today. Despite being produced with graphics technology that is now more than a decade old, Terminator 2: Judgment Day still looks fantastic, even by modern standards. With games, the rapid improvements in 3D hardware do not allow for the same longevity. When a developer begins work on a new title, it must anticipate the hardware capabilities expected at the time of the game's release. If the developer underestimates the time required to complete the game and is forced to delay it significantly, the final product is likely to look dated and be less attractive in the eyes of consumers in comparison to more visually appealing titles. A game targeted for the summer of 2003 that doesn't see release until the middle of 2004 simply might not look as impressive as other new titles that met their development deadline.

Additionally, unlike most film studios, interactive real-time 3D publishers do not have enormous sums of money to devote to any one title. While movie productions regularly cost more than $100 million to create, successful films have the potential to make back that investment many times over. The third installment of the Lord of the Rings trilogy, The Return of the King, was released in December of 2003 at an estimated production cost of $150 million, but has made over $1.1 billion in box office sales alone. The profit margins for video games are not nearly as high, and the money a developer can risk spending on any given project is limited. It is not uncommon for delayed and over-budget titles to be rushed out the door by the publisher before they are ready, in order to recoup development costs even if doing so results in an unpolished product.

The possibility of investing in a large staff and a development cycle of five or six years to create just one cinematic-quality real-time 3D title may temper the industry's enthusiasm for a photo-realistic interactive experience.

Of course, as with any projections made about the real-time 3D industry, it is nearly impossible to anticipate what changes in technology might alter the development process in coming years. It is entirely possible that new content creation tools and techniques will allow artists to build cinematic 3D scenes in a fraction of the time required today, or that changes in how the industry runs might reduce costs enough to make larger staffs of developers more feasible. When considered with respect to the history of video games and the constant leaps in 3D acceleration hardware, it would not be wise to rule out the possibility that low-poly 3D might one day cease to exist altogether. Realism in the interactive experience remains the goal, and real-time cinematic 3D graphics are key to making that happen.

Conclusion

In the end, the only thing that is certain about the future of interactive real-time 3D is that it is going to change. It is inevitable that the technical boundaries that define the medium today will continue to be pushed back, and the current visual sense of low-poly content may be altogether replaced. This continual influx of change, of perpetual reinvention, is what makes interactive real-time 3D a consistently compelling experience. How the art is perceived today varies greatly from perceptions ten and twenty years ago, and how it will be looked upon decades from now is anyone's guess. Nonetheless, the aesthetic principles which govern interactive real-time 3D are built upon core concepts that are always relevant to visual design. The fundamental elements of form or of color may come to be interpreted differently, but those basic tenets provide the foundation for all digital three-dimensional content.

Building upon those concepts, the intent of the medium stays the same even as its outward appearance continues to change. While the definition of interactive real-time 3D will certainly keep adapting to current trends and technologies, there is no question that the desire to explore new realities and be immersed in new experiences will always remain.


Bibliography

Publications

1. Beardsley, Monroe C. Aesthetics, Problems in the Philosophy of Criticism. Harcourt, Brace & World, Inc., 1958.

2. Behrens, Roy R. Design in the Visual Arts. New Jersey: Prentice Hall, Inc., 1984.

3. Brochmann, Odd (translated by Maurice Michael). Good or Bad Design?. New York: Van Nostrand Reinhold Company, 1970.

4. Csikszentmihalyi, Mihaly and Rick E. Robinson. The Art of Seeing: An Interpretation of the Aesthetic Encounter. California: The J. Paul Getty Trust, 1990.

5. Kerlow, Isaac C. The Art of 3D Computer Animation and Effects. New Jersey: John Wiley & Sons, Inc., 2004.

6. Kushner, David. Masters of Doom. New York: Random House, Inc., 2003.

7. Lipman, Matthew. Contemporary Aesthetics. Massachusetts: Allyn and Bacon, Inc., 1973.

8. Margolin, Victor and Richard Buchanan. The Idea of Design. Massachusetts: The MIT Press, 1995.

9. Parker, Dewitt H. The Principles of Aesthetics. Indypublish.com, 2003.

10. Steed, Paul. Animating Real-Time Game Characters. Massachusetts: Charles River Media, 2003.

Websites

1. 3D Total Tutorials [website]; available from http://www.3dtotal.com/ffa/tutorials/max/joanofarc/joanmenu.asp

2. ACM Queue - Gaming Graphics: Road to Revolution - What will it take to boost computer games to cinematic levels? [website]; available from http://www.acmqueue.com/modules.php?name=Content&pa=showpage&pid=139

3. DP Royal Archives - Video Game Timeline [website]; available from http://www.digitpress.com/archives/timeline.htm

4. Gamasutra [website]; available from http://www.gamasutra.com/

5. Polycount [website]; available from http://www.planetquake.com/polycount/

6. Wikipedia – the Free Encyclopedia [website]; available from http://en.wikipedia.org/wiki/Main_Page

Software

1. 007: Everything or Nothing (PS2 format), Electronic Arts, 2004.

2. Doom (PC format), id Software, 1993.

3. Doom II (PC format), id Software, 1994.

4. Doom 3 (PC format), id Software, ~2004.

5. Half-Life (PC format), Valve Software, 1998.

6. Half-Life 2 (PC format), Valve Software, ~2004.

7. Hitman: Contracts (PC format), IO Interactive, 2004.

8. I, Robot (Arcade format), Atari, 1984.

9. Night Driver (Arcade format), Atari, 1976.

10. Quake (PC format), id Software, 1996.

11. Quake II (PC format), id Software, 1997.

12. Quake III Arena (PC format), id Software, 1999.

13. Star Wars (Arcade format), Atari, 1983.

14. Super Mario 64 (Nintendo64 format), Nintendo, 1996.

15. Tekken Tag Tournament (Arcade format), Namco, 1999.

16. Tekken Tag Tournament (PS2 format), Namco, 2000.

17. Toy Story 2 (PS2 format), Disney Interactive, 1999.

18. Ultima Underworld: The Stygian Abyss (PC format), Origin Systems, 1992.

19. Unreal Tournament 2003 (PC format), Epic Games, 2002.


20. Virtua Fighter (Arcade format), Sega, 1993.

21. Wolfenstein 3D (PC format), id Software, 1992.

22. Zaxxon (Arcade format), Sega, 1982.

Film

1. The Last Starfighter, 101 min., (Lorimar Film Entertainment, 1984).

2. The Lord of the Rings: The Return of the King, 201 min., (WingNut Films, 2003).

3. Terminator 2: Judgment Day, 137 min., (Lightstorm Entertainment, 1991).

4. Toy Story, 81 min., (Pixar Animation Studios, 1995).