SHEFFIELD HALLAM UNIVERSITY

FACULTY OF ARTS, COMPUTING, ENGINEERING AND SCIENCES

“Technologies and Architecture for Networked Multiplayer Game Development”

an investigation by

Luke Salvoni

April 2010

Supervised by: Dr. Pete Collingwood

Page 1 of 43

Abstract

Multiplayer video games have been in existence for over three decades; the first real-time network games were developed on a device originally designed as an electronic learning tool. Since then, explosive growth in computer network communications has led mainstream multiplayer titles to develop Local Area Network (LAN) versions of their games.

Today, network gaming can be conducted using a variety of protocols and on diverse architectures. But how do they differ from one another? Which architecture is the most appropriate? Which methodology should be selected for game development, and what technologies are used? This report explores a range of research and existing texts on these areas, in the hope of contributing to the current body of knowledge and of assisting those interested in the development of networked multiplayer games.


Table of Contents

Abstract ...... 2

Glossary of Terms...... 4

1 – Project Overview ...... 5

1.1 Background ...... 5
1.2 Aims and Objectives ...... 6

2 – Technology and Strategy ...... 8

2.1 Introduction ...... 8
2.2 Game Development Technologies ...... 8
2.3 Project Management ...... 12
2.4 Programming Languages for Game Development ...... 14

3 – Networking and Protocols ...... 19

3.1 Aims and Objectives ...... 19
3.2 Protocol Elaboration ...... 19
3.3 Network Software Architectures ...... 21
3.4 Networking Technical Challenges ...... 26

4 – Conclusions ...... 31

4.1 Results and Recommendations ...... 31
4.2 Future Development ...... 33

5 – Reflection ...... 34

5.1 Aims, Objectives and Successes ...... 34
5.2 Research Undertaken ...... 34
5.3 Problems Encountered ...... 35

6 – References ...... 36

7 – Bibliography ...... 41

8 – Appendices ...... 42

8.1 Appendix A – Original Project Specification...... 42


Glossary of Terms

API - Application Programming Interface. The interface of a software program that allows other applications to interact with it.
FPS - First-Person Shooter, e.g. Half-Life.
IDE - Integrated Development Environment. A piece of computer software used in coding applications.
IP - Internet Protocol. The network protocol within the TCP/IP suite.
LAN - Local Area Network. The name given to a small group of connected computers, in an office for example.
MUD - Multi-User Dungeon. A small virtual game world in which players assume fictional roles.
OS - Operating System, e.g. Windows, Linux.
P2P - Peer-to-peer. A type of network architecture in which computers are connected to each other rather than to a server.
PM - Project Management.
RPG - Role-Playing Game, e.g. Diablo, World of Warcraft.
RTS - Real-Time Strategy game, e.g. Age of Empires, Command and Conquer.
TCP - Transmission Control Protocol. One of the transport protocols in the TCP/IP suite.
TCP/IP - Known as the Internet Protocol suite, responsible for most communication over the Internet.
TOS - Terms of Service. A list of rules that someone would be expected to read and adhere to before installing a piece of software or a game.
UDP - User Datagram Protocol. One of the transport protocols in the TCP/IP suite.
VB - Visual Basic. A programming language typically used in Windows application development.


1 – Project Overview

1.1 Background

The first documented multiplayer game can be traced back to American physicist William Higinbotham in 1958, with his tennis simulation game, aptly named Tennis for Two. Higinbotham worked at the Brookhaven National Laboratory (BNL), New York, which held regular visitor days in the autumn. The game was designed to show off the capabilities of the instruments that Higinbotham worked with, as he knew the visitors would be far more interested in the display if they were able to interact with it. (US Department of Energy, 1981)

Although Tennis for Two was only a short-lived exhibit for BNL visitors between 1958 and 1959, the next multiplayer game of the generation was about to make an appearance. Spacewar was developed for the new PDP-1 (Programmed Data Processor) computer installed at MIT in 1961, and also allowed for two players. (Armitage, Branch and Claypool, 2006) As with Higinbotham's creation, Spacewar was not commercially viable at this stage, as the MIT computer it ran on cost the equivalent of approximately $100,000 in today's markets.

Less than one year before the birth of Spacewar, professors and students at the University of Illinois created PLATO – Programmed Logic for Automated Teaching Operations – a computer-based educational development and delivery system. (PLATO User's Guide, 1981) The first iteration of PLATO, launched internally in 1960, allowed a single computer terminal to connect to a central mainframe to deliver curriculum material. Later versions of the system allowed multiple machines in a room to connect to the mainframe, and by 1972 a new generation of mainframes was in production that would eventually support up to one thousand concurrent users. (Woolley, 1994) Some of PLATO's other early features included email, split-screen instant messaging – and games. Spacewar even appeared on PLATO, as did various simulation titles. These early client-server interactions were performed by the Telnet application, which uses a Transmission Control Protocol (TCP) connection to transmit data. (Postel and Reynolds, 1983)

This foray into producing small networked applications and games paved the way for Multi-User Dungeons (MUDs). The term was coined in 1978 by Computer Science undergraduate Roy Trubshaw; it was the name given to the virtual world he developed using the MACRO-10 assembler on a DECsystem-10 mainframe at Essex University, England. (Bartle, 2003) Some of the ideas for the game came from friends at the University, such as Richard Bartle – who joined forces with Trubshaw to code the remainder of the game while Trubshaw finished his degree. These types of multiplayer games became increasingly popular throughout the 1970s and 1980s, which resulted in a mass of MUD clones and arcade multiplayer games being developed and released.

As time passed, the exponential rate at which technology was growing brought a new era of problems for programmers and developers – network architectures were evolving, the World Wide Web was being drafted, and the power of personal computers continued to match the trajectory laid out in Moore's law, which states that the number of transistors in integrated circuits roughly doubles every two years. (Moore, 1965) These technologies are still advancing, and research into networked multiplayer games and their related components has not kept pace. The aim of this report is to explore these areas further and to contribute to the current collection of ideas and techniques surrounding this technology.

1.2 Aims and Objectives

As mentioned above, initial investigations show there appears to be a lack of research into network software architectures and protocols for multiplayer gaming, and into game development technologies and processes. This is perhaps due to the secretive nature of most games developers, who understandably do not wish to share details of their infrastructure or development processes. Gaming, although now worth more than DVD and music sales combined and over four times the cinema box office takings in the UK, (Chatfield, 2009) is still not considered an established academic route. The games industry as a whole still focuses on the production of games – as expected; the bulk of its time is therefore spent following often rigorous development cycles, with little room for error and no budget for research beyond immediate requirements.

This report begins with an insight into project management (PM) methodologies and game development technology – the latest of which remains closely guarded by developers of any size, and an area about which little is publicly known outside the industry. This area is important so that appropriate observations can be recorded and used to assist with the production of a balanced conclusion. As with the subsequent research areas in this report, these are equally valuable for enhancing the current academic research and supporting personal development motives.

Subsequent research includes an insight into network software architectures and protocols – the building blocks required for the successful production and maintenance of a networked multiplayer game. The final area identified as relevant for this project concerns the challenges of network development. A number of dominant areas within network technology, administration and maintenance have been identified and shall be assessed for their importance to developers working on a multiplayer game.

Once this research is complete, a thorough conclusion will be presented and a list of recommendations drawn up based on the findings in this report. A critical reflection will follow to analyse the success of the project and of the research presented.


2 – Technology and Strategy

2.1 Introduction

This section focuses on game development factors such as game engine design and technologies such as middleware. PM methodologies used in the industry will also be studied, followed by a look at programming languages for game development. The benefits of each will be analysed for use in the development of a networked multiplayer game.

2.2 Game Development Technologies

History

The first recognised 3D game was Battlezone, created by Ed Rotberg and colleagues at Atari in 1980. (Hague, 1997) The game used Atari's recently developed vector display technology, which drew lines, curves and points instead of the traditional raster graphics that had been around since the early 1960s. As game designers, Rotberg and his associates at Atari were eager to use the new technology to create a first-person 3D perspective game. Battlezone was designed as a single-player arcade game; a modified version of the game was created especially for the US Army as a training tool for its infantry gunners, and only two of those cabinets were produced. Some of Rotberg's fellow developers declined to work on the army version because of the restrictions and additional conditions they would likely have had to adhere to.

Elite, by David Braben and Ian Bell, followed closely in 1984 and was first released on the BBC Micro. The game involved trading goods between different star systems in space. (Morris and Rollings, 2004) Two sequels followed in 1993 and 1995 which introduced advanced 3D graphics – this meant that Elite and its sequels could be ported to most games consoles of the time. (Elite, 2010)

Processing power continued to grow, as did the number of polygons used to create objects inside games and the drawing procedures they used. It was around this time that simple shading techniques with a single directional light were employed by developers, soon superseded by Gouraud shading. (Morris and Rollings, 2004) Developing games which exploited these technologies began to require larger teams and specialised staff to bridge the growing gap between game implementation and design. As games started to use the capabilities of the first 3D accelerators, products with massive polygon counts, texture mapping and lighting effects flooded the market in the early 1990s.

John Carmack, a young games programmer at computer company Softdisk, left to found id Software in February 1991, along with fellow programmer John Romero, games designer Tom Hall and artist Adrian Carmack (no relation to John Carmack). (NTB, 2010) It was just one year earlier, whilst at Softdisk developing small PC games, that (John) Carmack realised how limited the PC was at handling the latest flight simulators. Carmack subsequently spent six weeks in an intense research phase, studying and testing code in order to draw large amounts of computer graphics at the fastest speeds. He created a program that would only render walls in an environment, and coupled this with a technique called ray casting. This method only requested the graphics processor to render strips of graphics at a time, dependent on the player's point of view. Carmack discussed his findings with Romero, and they agreed on what kind of game could fit the game engine he had created. In April 1991, Hovertank 3D was published by Softdisk (due to prior contractual agreements with the founders of id Software) and was the first fast-action, first-person shooter (FPS) for the PC. (Kushner, 2003) Towards the end of 1991, Carmack had time to look back at his work with game engines, when Romero proposed they recreate a classic Apple II game from the 1980s – Castle Wolfenstein.
May 1992 saw the release of Wolfenstein 3D as shareware (one episode out of six was available for free). It wasn't long before the team was looking for a new project – this was when Carmack suggested they create a game about 'demons versus technology.' Doom was born, and incorporated new graphical advancements such as binary space partitioning (BSP, which assisted with rendering 3D models on screen), texture mapping and dynamic lighting. (Kushner, 2003)


Game Engine Design

Carmack and his colleagues at id Software had produced the first commercial game engine with the development of Doom – after release, a number of developers wanting to license the engine made contact, and three licensed titles followed in addition to id's own sequels. Engine licensing was something the team had already encountered after the release of Wolfenstein: they had an offer from a company that produced religious-themed games, who wanted to release a first-person game based on Noah's Ark. The bid was successful and the game was released on the Super Nintendo games console. (Kushner, 2003)

By the mid 1990s game engines were on the rise as more developers realised the possibilities of 3D gaming, and the importance of solid game engine design became a popular discussion topic for online forums and newsgroups. Eberly (2000) defines a game engine as a device that 'manages the data in the world and renders it on the computer screen.' The game world is the place that contains content created by artists, under the supervision of the game designers. The programmers are responsible for populating the world with game assets as required and for programming the game engine. Additional functionality and individual processes such as graphical effects or AI would also be written by the programmers as separate scripts and then loaded into the engine.

Early 3D games were often perceived in two ways – the first, as being visually stunning, utilising the latest graphical effects. The second was that they all used the same game engine, and therefore likely had nothing new to offer, particularly if gameplay was lacking or the story non-immersive. This is particularly prevalent in modern 3D gaming, where many developers will license out their engines to other parties to build their own FPS or role-playing games (RPGs), as these are very popular genres.
id Software still publish or produce a major title roughly once a year, and have continued to develop and license their game engines since the original Doom. (Callaham, 2007) The success of a title can rely on a variety of factors – sometimes the name of the game engine can determine success, or the stunning visuals, or a gripping storyline. Regardless, in order to score well in reviews, it is said that the initial


period a reviewer spends with a game before judging it is usually the deciding factor. The technology used can play a large part in winning over the media, as Morris and Rollings (2004) have identified – 'the perception of technology.' Several game development styles are discussed, such as style before content, presentation before gameplay, and gameplay coupled with accessibility or appearance. They state that, when deciding whether or not to employ a game engine, it is important that the engine does not become the focus of the game's production, but remains merely a tool with which to enhance the product and the design of the game.

Middleware

Middleware was first described at a NATO conference on software engineering in October 1968; an example scenario was given in which the file-handling software from a manufacturer was insufficient and had to be rewritten to match a specification. (Naur and Randell, 1969) This principle of creating a custom function to meet a requirement can be applied across the software development industries; Krakowiak (2003) defines middleware as 'the software layer that lies between the operating system and the applications.'

Although middleware was defined so long ago, it wasn't until id's work on the Doom engine, and other developers' engines of the early 1990s, that the term came back into use. Supplemental functions coded to work in tandem with a game engine soon became larger projects at some organisations and games developers; during the mid 1990s two particular companies – Emergent Game Technologies and Criterion Software – both worked on computer graphics engines of their own. Emergent's product was called Gamebryo and Criterion's graphics library was called RenderWare; Criterion battled with Gamebryo to license its technology to game developers, but it was not until the new millennium that middleware really took off.

Sony's PlayStation 2 console, launched late in 2000 in most territories, brought a new host of challenges for developers. The success of Grand Theft Auto III in 2001, and the discovery that the development studio – DMA Design – had used RenderWare as an engine, sparked great interest in the industry as to the future use of middleware. Criterion's CEO stated in 2004 that between a fifth and a quarter of published games used its technology. (Develop Online, 2007)

The use of middleware in game development can also prove a tough decision for any studio, as O'Neill (2008) writes: 'Middleware can be of great help to a game developer... [but] costs can go much beyond the initial price tag.' The emergence of middleware in the PlayStation 2 era heralded a new solution for developers – they could potentially save time and money by simply licensing middleware from a reputable firm. However, one of the major divides encountered by developers was the terms offered by the various providers. Some required an upfront fee plus a percentage of a game's takings, others only royalties, and some only a flat fee per game title. These are very tough decisions for smaller developers to handle, and can be risky. Licensing such technology should still be considered to assist with the development of a game, but serious thought needs to be given to the expected requirements, and the terms fully calculated.
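The impact of the three licensing structures mentioned above can be compared with a small sketch. All figures below are invented purely for illustration – they do not reflect any real middleware vendor's pricing:

```python
# Hypothetical illustration of the three middleware licensing structures
# described above. Every figure here is invented for the sketch.

def upfront_plus_revenue_share(revenue, upfront=50_000, share=0.05):
    """Upfront fee plus a percentage of the game's takings."""
    return upfront + revenue * share

def royalties_only(units_sold, royalty_per_unit=0.75):
    """A royalty paid on every unit sold."""
    return units_sold * royalty_per_unit

def flat_fee_per_title(titles=1, fee=100_000):
    """A single flat fee for each shipped title."""
    return titles * fee

# A small studio projecting 200,000 units sold at 20 pounds each:
units, price = 200_000, 20
revenue = units * price  # 4,000,000

print(upfront_plus_revenue_share(revenue))  # 250000.0
print(royalties_only(units))                # 150000.0
print(flat_fee_per_title())                 # 100000
```

The point of the sketch is that the cheapest structure depends entirely on projected sales, which is why the report notes these terms must be "fully calculated" before committing.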

2.3 Project Management

A typical game development project is broken into the three classic stages of pre-production, production and post-production. Each of these could run from a few months to over a year, although due to the high-risk nature of producing games for today's markets, few games make a return on the publisher's investment and even fewer make significant amounts of money. Other projects may not make it through the pre-production phase due to issues such as personnel problems or poor project planning and management; this could be down to a lack of contingency or unrealistic goals. This report will not go into detail on job roles or the phases already mentioned; instead, several lifecycle models will be analysed which may be used in the production of a multiplayer, networked game.

The original software lifecycle model was the pure waterfall model; as the name suggests, one phase follows on from the previous in a set order. If an issue occurs in one section, the project is halted until a resolution is found and applied. The primary outputs of this model are documents, but it can be applied to games design or more generic software development just as easily.


The phases in this model never overlap – any project which adopts it needs to be air-tight, as mistakes are hard to rectify in later stages. McConnell (1996) states that this model works well for projects that are complex yet well understood, or when working with technically weak staff. The author goes on to describe the disadvantages of the model, which include the primary reason for selecting the waterfall in the first place – the aims need to be specified from the outset, which can be a tough task, particularly in software development projects where flexibility is likely a requirement. It can be a long time before anything worthwhile is produced, so a development studio may incur unexpected costs when using this model, which is bound to have serious consequences.

One of the modified waterfall models in existence is the Sashimi model, which takes its name from a Japanese hardware development model containing overlapping sections. The phases are very similar to those of the pure waterfall model, except that the reliance on documentation is far reduced. Although this saves time, it may lead to confusion in later stages and can make it harder to efficiently track a project's progress.

Bates (2004) states that rapid iterative prototyping is the best development model for most new games. This model is also known as extreme or agile programming – the main objective is to build a stable game prototype in a small amount of time and to play it. From here, changes can be applied and then another build produced. Using this model the design will likely change between builds, particularly in the earlier stages of development. The teams that comprise the bulk of the development – artists and programmers – will always meet between builds to compare the additions in the latest build against the last. Bates (2004) also lists a number of classic mistakes that developers should avoid when utilising the rapid prototyping methodology; these include problem personnel and weak staff ability, misunderstood and pop-up tasks, and time wasted on bug fixes.

An additional model to be studied is the spiral. McConnell (1996) best describes this as a 'risk-orientated model that breaks a software project up into miniprojects. (sic)' Each of these smaller projects is designed to locate and resolve the major risks associated with the development cycle – anything from personnel issues through to poor technology. A developer begins working with the spiral model from the centre – the smallest area, with the least chance of something going wrong. From here, once the objectives have been formulated and the possible risks identified, work can begin on the first iteration of the application prototype. Once the work has been verified, development continues outwards along the spiral to the next iteration, and so forth. The primary reason for selecting this model, states McConnell (1996), 'is that as costs increase, risks decrease.' This holds because, as the project progresses outwards into the spiral's larger segments, the major risks have already been identified and addressed. McConnell (1996) further states that the main disadvantage of this model is that it is complex to execute successfully – 'it requires conscientious, attentive and knowledgeable management.'

2.4 Programming Languages for Game Development

During the 1980s, a typical development environment consisted of a screen full of hexadecimal digits – today this is still the case, but it is hidden behind a user interface. Code was typically developed on the target machine, and these hex codes would be loaded into memory in order to execute the application. Assemblers, first created in the 1950s, became widespread once more, as games of this era were first written in C but later titles in full assembly. The release of Doom further prompted a huge rise in the popularity of C, as many thought it had been entirely developed in assembly. (Morris and Rollings, 2004)

At the time of C's initial popularity, Danish computer scientist Bjarne Stroustrup had started working on adding further functionality to C in 1979, in what was first known as 'C with Classes'. (Stroustrup, 2009b) By 1985, the first commercial implementation of C++ was released, although many still preferred using C in game development. Products such as the Doom III engine, Electronic Arts' game engines and all Microsoft games are cited as having been written in C++, although its popularity as the language of choice in the games industry only took off during the late 1990s. (Stroustrup, 2009a) This was due to the early C++ compilers being very inefficient – converting the code to C and then to assembly. It was hard to relate the code to the assembly language, so many programmers did not trust the compiler. (Morris and Rollings, 2004)


Python was first conceived in late 1989 by Dutch programmer Guido van Rossum, as a solution to system administration issues he had experienced with a different language, ABC. (Python, 2010a) Van Rossum states that a scripting language was required in order to access the system calls in the Operating System (OS) he was working on. The official Python site declares that 'Python is a high-level general-purpose programming language' which can be applied to a variety of Internet protocols, software engineering and OS interfaces. (Python, 2010b) Python is further described by Riley (2004) as a language that 'allows developers to implement ideas quickly and provides flexibility during the development process.' Riley also discusses how Python is used in three main ways in game development – as a full language with which to develop applications, as a scripting language to interface between systems, and as a data language to describe game rules and objects.

Java was first known as 'Oak' when it was invented by a group of five colleagues at Sun Microsystems in 1991, but became known as 'Java' in 1995. The primary motives behind the inception of this language were the desire for 'a platform-independent language' with which software could be 'embedded in various consumer electronic devices.' (Schildt, 2005) The issue the employees had with C and C++ was that a specific compiler was needed for each platform targeted. 'Compilers are expensive and time-consuming to create,' writes Schildt. (2005) The focus of Java would be that one could execute the same code on a large range of processors and devices, and not be limited to particular platforms or systems.

Microsoft were impressed by the success of the C programming language, and later by the possibilities developers had with C++. The problem for Microsoft, however, was that they did not produce the popular C compilers of the time, and when programmers for the Disk Operating System (DOS) came to write applications for the new Windows 3.0 OS, they faced a significant problem – how to program for Windows' event-driven programming model. (Lomax et al, 2006) Microsoft introduced Visual Basic (VB) in 1991 to assist programmers in adapting to the workings of the Windows OS. The first VB environment further helped programmers with its interface, where a blank form was the equivalent of an empty code sheet. From here, elements could be dragged onto the form – this allowed for the creation of programs that looked very similar to those already on the Windows platform. The programmers could also create scripts to respond to user input or events within their applications; all of these features ensured that VB was an instant success.

In order to arrive at a balanced decision regarding which of these programming languages would be the most suitable for the development of a networked multiplayer game, a list was produced for each language weighing up some of their advantages and shortcomings. The lists include other factors found while researching, such as the range of Integrated Development Environments (IDEs) available and the amount of support their respective communities can offer. The authors cited at the end of each list are the sources from which the bulk of the benefits and limitations are drawn.

C++:
- General purpose, with a bias towards systems programming
- Object-orientated, with classes and virtual functions
- Producing graphics-rich applications requires a library
- Vast range of free and commercial IDEs and libraries
- Large online support communities
- Potentially large overheads and required software/hardware
- A vast number of modern PC and console games are coded in C++, or their engine frameworks are
(Stroustrup, 2008)

Python:
- Mostly platform independent; libraries are multilingual
- Modules can be optimised into C/C++
- Excellent range of commercial and free IDEs and libraries
- Standard language tools are supported – debugger, profiler
- Simple distribution and high productivity rate
- Scripting possibilities – low memory overheads, perfect for game functionality and control, or for add-ons
- Large online support community, as well as numerous websites that host open-source and commercial libraries
(Riley, 2004)

Java:
- Inherits its syntax from C and its object model from C++; programmers familiar with either C or C++ can learn Java more easily, and vice versa for Java programmers
- Support for object-orientation and classes
- Security – language and platform designed with security in mind
- Performance – slower than C and C++ when compared with native code
- Powerful Application Programming Interface (API) collection
- Large online support communities
(Flanagan, 2002; Schildt, 2005)

Visual Basic:
- Primary integration with the .NET framework – catering for Windows applications, services and libraries
- Strong Graphical User Interface (GUI) development for Rapid Application Development (RAD) – applications can be visually mocked up with the underlying code missing
- Large online support communities
- APIs for C++ integration
- Object-orientation support
- VBScript support – scripts can be embedded into web pages or used inside an OS to perform a plethora of tasks
(Ford, 2006; Lomax et al, 2006)
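Riley's third role for Python – a data language describing game rules and objects – can be illustrated with a minimal sketch. The weapon names, attributes and rules below are invented for the example, not taken from any real title:

```python
# Hypothetical sketch of Python as a "data language": game objects and
# rules are declared as plain data structures, which an engine (written
# in C++ or Python) can load and interpret at runtime. All names and
# values are invented.

WEAPONS = {
    "blaster": {"damage": 12, "rate_of_fire": 5.0, "ammo": 60},
    "railgun": {"damage": 90, "rate_of_fire": 0.5, "ammo": 8},
}

RULES = {
    "respawn_delay_seconds": 5,
    "friendly_fire": False,
    "score_to_win": 30,
}

def damage_per_second(weapon_name):
    """Derive a balance statistic from the declared data."""
    w = WEAPONS[weapon_name]
    return w["damage"] * w["rate_of_fire"]

print(damage_per_second("blaster"))  # 60.0
print(damage_per_second("railgun"))  # 45.0
```

Because the data lives outside the engine code, designers can rebalance weapons or rules without recompiling – one reason the report suggests pairing a C++ engine with Python scripting.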

After careful research into these languages (C was disregarded as an option because, unlike the others, it is not object-orientated – a quality that would be required for the development of a practical networked game), and taking into consideration the discussion in the Project Management section 2.3, all of them have a great number of benefits, and understandable reasons for their limitations.

On the basis that such a large number of computer games and their engines are developed in C++, it would be natural for a game developer interested in producing a multiplayer game to assume this route also. This does not have to restrict the development to using C++ exclusively; there are additional benefits to using a language such as Python to add further game functionality or add-on support. A language like VB or Python might even be contemplated for early pre-development or the production of a game prototype. The game engine would still likely be coded in C++, but depending on the network topology utilised for the multiplayer facilities, any required server code could be written in Java.

There is also a chance that other factors, such as the resources available to a developer – technology and staff, for example – may further influence the language selection. The PM methodology could also have an impact on this decision, as could the target platform set out by the publisher funding the game's development. It is best to conclude that this choice cannot be made so easily, and is largely dependent on a number of external factors.


3 – Networking and Protocols

3.1 Aims and Objectives

This section will focus on learning more about the TCP/IP protocol suite (also known as the Internet Protocol Suite) and the two transport protocols contained within. The TCP/IP suite has been identified as one of the essential building blocks of the Internet's core technology and all related network communications. (Armitage, Branch and Claypool, 2006) This is important as the intent of this project is to acquire information about multiplayer game development, which would utilise such communications technology in order to function. Network software architectures are also studied, and a number of common networking challenges that both developers and administrators regularly face are surveyed, in order to obtain a better understanding of the work they do and the potential impact a fault could have on such a system.

3.2 Protocol Elaboration

As mentioned in the introduction, the focus in this section will be to examine the TCP/IP suite and learn more about its transport protocols and their relevance to programming a multiplayer game. Figure 1 shows the range of layers within the TCP/IP model.

Fig 1 – Protocols and networks in the initial TCP/IP model. (Tanenbaum, 2003: Ch1 section 1.4, fig 1-22)

TCP and the User Datagram Protocol (UDP) are two different transport protocols that work exclusively with the IP network layer, in addition to the connectivity with the link layer and an application layer, to provide data flow. These transport protocols have individual architectures and both follow different instruction sets – and although TCP and UDP are both contained within the TCP/IP suite, a reference to "TCP/IP" generally denotes all of the protocols in the collection, not just these two. All data that flows through the transport layer will pass through IP at both ends of a system and at every router in an internet. (Stevens, 1994)

TCP

The TCP protocol provides a 'connection-oriented, reliable byte stream service,' says Stevens. (1994) The first term means that two applications using this suite must establish a connection with one another before they can exchange data – a task that can be easily accomplished through socket-level programming in a relevant language. Once this is successful, data transfer begins as a stream of bytes. However, delays from tens of milliseconds to a few seconds may occur, state Armitage, Branch and Claypool. (2006) The recipient of these packets also has to confirm their delivery, so it is easy to detect when data has not been received or when transmission errors occur. The same authors also discuss 'windowed flow control' – a feature of TCP that regulates the speed at which packets are sent. If sending is successful, the window grows; if packet loss is detected, the protocol will resend the data using a smaller window – thus better regulating the network traffic. Latency problems can also occur because of this feature and the resulting data transmission backlog; this causes serious problems for applications that rely on a frequent flow of data for operation. The reliability of TCP is perhaps best described by Stevens, (1994) who lists a number of properties that define it; a few of these are given below:

 Application data is split up into segments – these can vary in size
 When a segment is sent, a timer is started and the sender waits for verification (an acknowledgement) of its delivery. This verification may be sent slightly delayed.


 A checksum is maintained over a segment's header and its data. Upon receipt of the data, the sum can be recomputed and checked – if the data has been altered in transit then TCP will discard the segment and wait for the sender to resend it.
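The connection-orientated behaviour described above can be demonstrated at socket level in a few lines of code. The sketch below is a minimal illustration in Python using only the standard library (the helper name `tcp_round_trip` and the loopback set-up are this author's own, not taken from any source): no data can flow until `accept()` and `connect()` have established the connection, after which the byte stream is delivered intact.

```python
import socket
import threading

def tcp_round_trip(payload):
    """Open a listening TCP socket, connect to it, and echo payload back."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve():
        conn, _addr = srv.accept()      # blocks until the client connects
        conn.sendall(conn.recv(1024))   # echo the received byte stream back
        conn.close()

    t = threading.Thread(target=serve)
    t.start()
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", port))    # the connection must be established first
    cli.sendall(payload)
    echoed = cli.recv(1024)
    cli.close()
    t.join()
    srv.close()
    return echoed

print(tcp_round_trip(b"hello"))         # prints b'hello'
```

The function returns the same bytes that were sent, confirming that the connection was established and the stream arrived unaltered; in a real game, the acknowledgement and retransmission behaviour listed above happens transparently beneath these calls.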

UDP

The UDP protocol, as its name implies, transmits datagrams – independent messages sent over a network whose arrival, arrival time and content are not guaranteed. (Campione and Walrath, 1996) Although this protocol might be viewed as a risky choice, it is perfect for applications that do not require the accuracy of TCP (guaranteed control over datagram delivery). UDP will also transmit data at a faster pace, and as the source and destination ports are contained within the header of a UDP segment, connections are not necessary – the data goes straight from host to recipient. An insight into the things that UDP does not do, and the primary uses of this protocol, is given by Tanenbaum, (2003) listed below:

 No flow control, error control or retransmission of bad segments – the handling of these is down to the user and application
 Smaller segments than TCP; a 'connectionless protocol'
 Ideal for client-server setups – the code is simpler and fewer messages are required during transmission
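The contrast with TCP is visible at code level: a UDP sender needs no connection at all. The Python sketch below (standard library only; the helper name `udp_round_trip` is invented for this illustration) fires a datagram straight at a bound socket. Over the loopback interface delivery is dependable enough for a demonstration, but on a real network the packet could simply vanish, with no retransmission attempted.

```python
import socket

def udp_round_trip(payload):
    """Send a datagram to a local UDP socket with no connection set-up."""
    recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv_sock.bind(("127.0.0.1", 0))            # OS picks a free port
    addr = recv_sock.getsockname()

    send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_sock.sendto(payload, addr)             # no connect(): straight host to recipient
    data, _sender = recv_sock.recvfrom(1024)    # arrival is not guaranteed in general;
                                                # loopback delivery suffices for this demo
    send_sock.close()
    recv_sock.close()
    return data

print(udp_round_trip(b"game state update"))     # prints b'game state update'
```

The absence of connection establishment and acknowledgements is precisely what makes UDP attractive for frequent, loss-tolerant game-state updates.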

3.3 Network Software Architectures

This section will cover the two most predominant network topologies – client/server and peer-to-peer (P2P). Early MUD games used a client/server configuration; this was due to the server storing the game content scripts to which a client would remotely connect in order to play the game. This allowed a vast number of users to be connected simultaneously to the same game and interact with it independently. The idea behind the P2P topology was first realised with the release of Doom, as Armitage, Branch and Claypool (2006) describe: 'all players in the game were independent "peers" running their own copy of the game and communicating directly with the other Doom peers.' They continue to state that every 1/35th of a second, each client sampled its input and sent the information to the other connected players. From here, the game could progress (also known as the game-state update) and the players assumed they were playing in real-time with one another. But what else really differentiates these two topologies, and which is best used today in the development of a multiplayer game?

Client/Server

The client/server topology is the most popular architecture used in commercial online games as well as traditional MUDs, state Armitage, Branch and Claypool. (2006) A number of processes, or perhaps even a single process, could be running on one server at a time; this will be in contact with every client or responsible for the maintenance of an aspect of the game-state update. Gameplay is likely to suffer if the server cannot keep up with the flow of communication – this is known as a bottleneck. A single-server setup is perfect for a smaller online game, supporting up to around 64 players, but will not be sufficient to deal with the requirements of a larger multiplayer game. (Merabti and El Phalibi, 2004) In their thesis, the authors also set forth a solution to the situation previously described – setting up a group of machines with dedicated responsibilities that together form the server component of an online multiplayer game. Architecture of this nature, once initially configured, has great possibilities for replication and scalability – particularly in demanding online games or real-time transaction processing systems. Research by McFarlane (2005) shows there are several possible solutions that achieve this, the first of which involves splitting a game world into zones and then assigning each zone to a server. This way, each server is responsible for a group of objects within the game world, which brings a number of advantages for reducing network traffic. The other solution offered by McFarlane is referred to as the 'zoned world multi-server architecture.' When a client goes to connect to the server, the game world appears as a set of replicated servers – load-balancing software or hardware approaches can be used to further assist this solution. A game client will then connect to one of these servers, but will not be aware which one it connects with – as far as the player knows they have connected to their usual server, but in fact a software or hardware solution has intercepted their connection request and distributed the load to an available server. This method also has a number of benefits for a company running a large online game, to be discussed in full after the P2P topology is outlined.
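The zone-per-server idea can be sketched very simply: partition the world into a grid and map each cell deterministically to a server. The Python fragment below is a hypothetical illustration only – the zone size, server names and hashing scheme are invented for this example and are not taken from McFarlane's work.

```python
# Hypothetical sketch of a zoned game world: each square zone of the world
# map is owned by one server, so a client only talks to its zone's server.
ZONE_SIZE = 100  # world units per square zone (illustrative value)

def zone_for_position(x, y):
    """Map a world coordinate to a zone identifier (grid cell)."""
    return (x // ZONE_SIZE, y // ZONE_SIZE)

def server_for_zone(zone, servers):
    """Deterministically assign a zone to one of the available servers."""
    return servers[hash(zone) % len(servers)]

servers = ["zone-server-a", "zone-server-b", "zone-server-c"]
zone = zone_for_position(250, 40)      # a player at (250, 40) falls in zone (2, 0)
print(zone)                            # prints (2, 0)
print(server_for_zone(zone, servers))
```

Because the mapping is deterministic, every client computes the same zone-to-server assignment, and a server is only responsible for the objects inside its own zones – the traffic-reduction advantage the text describes.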

Peer-to-peer

Peer-to-peer networking can typically be found in a large number of smaller online games – some of the first multiplayer games like Doom implemented this topology successfully, but it became apparent that as the number of peers increased, poor latency among game hosts and other network issues were more likely to be encountered. This architecture is ideal for smaller games or for traffic that is not mission-critical, as Armitage, Branch and Claypool (2006) write: 'communication amongst the client peers is generally for game information that is not essential for achieving consistent views of the game state.' Other technologies can be combined with this approach, such as client-side prediction, which can conceal any potential lag experienced when playing online, making the game appear to run in true real-time. McFarlane (2005) outlines the basic operation of the P2P topology, 'every node [is] an equal peer to the others,' and mentions a host of benefits for the use of this network configuration. It is important to recognise that a P2P network consists of computers joining in any order at any time, and with varying specifications; as such, nobody is in control of it. This raises concerns over security and privacy, so it is not uncommon for there to still be a small network of servers governing a P2P online gaming network.


Advantages and Drawbacks

Now that some initial research has been completed on two very common topologies that utilise the TCP/IP suite, it is logical to draw conclusions about the advantages and drawbacks of each, and how these translate to the development of a networked multiplayer game. As with the previous sections, the client/server topology shall be analysed first.

Client/Server – Advantages

In the thesis produced by Merabti and El Phalibi, (2004) it is clear that there are two primary forms of a client/server network regularly deployed for multiplayer online gaming. The first occurs where an individual player opts to host the game on their machine; from here additional players can be invited to join the game, or options will be provided to publish the server information on the Internet. This description is often applied to smaller multiplayer RPGs or to FPSs, such as Half-Life. The other solution provided by the authors is as discussed in the previous section detailing this topology – where tasks can be split up and handed to dedicated servers for computing. This zoned server architecture also has fantastic benefits for clients connecting to a multiplayer online game, as it will typically reduce the effects of network latency. If a developer is able to split up computationally expensive tasks for a multiplayer game – such as scripting or chat channel management (as these will come under increasing pressure from a large number of connected clients) – then they will likely consider creating server shards to run in parallel with the servers that manage and operate these other game features. A shard 'allow[s] many distinct copies of the game to coexist,' write Merabti and El Phalibi. (2004) If shards are placed in the same geographical regions as their game audiences, those players will likely experience better gameplay. Additional benefits include stronger security protocols and fairly straightforward administration models, says McFarlane. (2005) This statement is supported by Alexander, (2003) who discusses the advantages of centralised control in operating an online service of this nature. Once correctly configured, client/server networks for multiplayer gaming can yield robustness and scalability, which would be expected by consumers subscribing to a multiplayer game service. (Merabti and El Phalibi, 2004)

Client/Server - Drawbacks

As discussed in the opening section about the client/server topology, one of the primary issues encountered by system administrators concerns the capacity that servers provide. Servers are limited by their hardware, which means finite numbers of connections and finite bandwidth available to consumers; these can become overloaded without a dedicated network team continually monitoring them. Failures in this area can usually be traced back to poor financial planning and a lack of flexibility. (Merabti and El Phalibi, 2004) Finance can be extended as a negative reason for selecting this architecture, as the creation and maintenance of server farms requires significant investment. This is a problem because it is difficult to predict the popularity of a multiplayer online game and the resources it will require. (McFarlane, 2005) Even if a game of this nature seems successful from the outset, within a few months it will either continue to grow or its subscriber base will shrink. If a developer has invested a large amount of money to provide support and maintenance for a large user base which has since diminished, they will be unlikely to continue operating it as a profitable business.

Peer-to-peer – Advantages

In the thesis written by McFarlane, (2005) two primary reasons for selecting this architecture are discussed. The first is financial: the money saved could otherwise be spent on staff to enhance game production, for example. The savings come predominantly from not requiring server hardware and not supporting its associated bandwidth costs. This mirrors one of the reasons why the client/server topology may be unsuitable for such a service – popularity cannot be predicted and the exact hardware requirements are subject to change after a game is first released.


The second reason presented by McFarlane stems from bandwidth savings – user communication forms a large part of any multiplayer game. P2P data packets commonly use the UDP protocol, (Gong, 2005) which is connectionless and goes straight from host to recipient. In an online game this saves large amounts of bandwidth, as all private messaging will only go to those intended, rather than being sent through a server cluster and then distributed back out to the correct clients.

Peer-to-peer - Drawbacks

The technical advantages previously outlined also form part of one of the drawbacks presented by McFarlane (2005) for this architecture. He states that a 'pure P2P system does not require that any particular subset of participants remain active in order for the system to continue to function.' If the game can function without the provider, then where is their revenue coming from? This reason can be extended to cover the privacy and security responsibilities that dedicated servers would carry in such a system – if sensitive data is not stored and safeguarded by the game provider, then using this model in its pure form will be unattractive to responsible gamers. One of the primary uses for P2P architecture given by Merabti and El Phalibi (2004) is file-sharing applications – as any user of such a program will likely understand, in order to connect between peers the program requires their IP addresses. This data can be easily obtained when working on a P2P network and could be used to exploit machines; keeping hackers and malicious users away from legitimate clients soon becomes a serious issue for the operation of a multiplayer game.

3.4 Networking Technical Challenges

This section will summarise a number of factors that a responsible multiplayer games developer would be aware of during the development and post-production stages of their product. A number of purely technical challenges will be discussed, together with the impact that failures in these areas may have on the community and the consequences for the developer to bear. Additionally, user-imposed issues such as cheating and griefing – forms of emergent gameplay – mean the developer needs mechanisms to deal with players that circumvent expected gameplay behaviour. (Kosak, 2004) The information detailed in this section will refer back to some of the work already completed, and combine it with personal experience of playing a range of online multiplayer FPS, RPG and real-time strategy (RTS) games.

Latency, Jitter and Packet Loss

As outlined in section 3.2, IP is the network protocol used by the TCP and UDP transport protocols to carry data packets between hosts and destinations. Latency is the name given to the time it takes for data to make the round trip from source to destination and back to the source, hopefully with confirmation of delivery. As one may expect, latency can fluctuate depending on network load or in response to interference somewhere along the route taken to transport packets. This fluctuation is referred to as jitter – the lower this value (usually given in milliseconds [ms], as latency is), the better for multiplayer gaming. If a data packet using either transport protocol does not reach its destination then that packet is lost. These three features of IP can have serious effects on gameplay, as Armitage, Branch and Claypool (2006) write:

 Latency – the real-time interactivity is disturbed, making it harder to react to situational changes within the game
 Jitter – can make it difficult for players and the game engine to compensate for long-term average latency
 Packet loss – game-state updates are lost
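These three quantities can be derived from a set of round-trip time samples. The small Python routine below is an illustrative calculation of the author's own, not taken from the source; jitter is taken here as the mean absolute deviation from the average latency, which is one of several common working definitions.

```python
def link_quality(rtt_samples_ms, packets_sent):
    """Summarise latency, jitter and loss from round-trip time samples (ms)."""
    received = len(rtt_samples_ms)
    avg_latency = sum(rtt_samples_ms) / received
    # Jitter taken as mean absolute deviation from the average latency
    jitter = sum(abs(s - avg_latency) for s in rtt_samples_ms) / received
    # Loss rate: fraction of probes that never came back
    loss = round(1 - received / packets_sent, 3)
    return avg_latency, jitter, loss

# Four replies came back out of five probes sent
print(link_quality([40.0, 60.0, 50.0, 50.0], packets_sent=5))  # prints (50.0, 5.0, 0.2)
```

A game could run such a calculation continuously over recent samples to decide, for example, how aggressively client-side prediction should compensate for the connection.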

Bandwidth

In section 3.3 two popular network architectures were studied and the benefits and downfalls of each were highlighted. Bandwidth was mentioned in both, as it is directly related to the financial implications for a multiplayer game provider – particularly if the game is sold as a recurring service and not a standalone title with added multiplayer functionality. A developer in this area would seek to reduce the future bandwidth costs of their game by ensuring that all network code written is as efficient as possible – additional expenditure may be required while developing the title to enforce this, but they will likely see a return on the investment from producing solid network code. This has a further benefit: the hardware involved with running the code – be it game servers or administrative tools – is more likely to remain stable during the operation of the game and more able to handle the potential demand, assuming the game is at least a moderate success. This will have a positive impact on the gaming community they are building, or have already established.
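A back-of-envelope calculation shows why efficient network code matters financially. All figures in the sketch below are hypothetical – the update rate, message size and player count are invented purely for illustration.

```python
# Illustrative bandwidth estimate: a server sending a fixed number of
# game-state updates per second of a fixed size to each connected player.
def monthly_bandwidth_gb(players, updates_per_sec, bytes_per_update):
    """Rough outbound server bandwidth per 30-day month, in decimal GB."""
    seconds_per_month = 60 * 60 * 24 * 30
    total_bytes = players * updates_per_sec * bytes_per_update * seconds_per_month
    return total_bytes / 10**9

# 64 players, 20 updates/s, 256 bytes per update (all assumed values)
print(round(monthly_bandwidth_gb(64, 20, 256), 1))   # prints 849.3
```

Halving the update size in this model halves the monthly figure, which is precisely the kind of saving that efficient network code buys a game provider at scale.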

Cheating and Griefing

This area is not immediately viewed as a technical challenge within the domain of developing a multiplayer game – but the tools used to combat these kinds of players do fall into this category. As with real-world society, rules and laws exist for our safety and well-being – all games will contain varying amounts of these too, and there will be reasons why players have limited control over some features while others are more relaxed. This helps to maintain a number of things inside a game world – steady character development and the economy, for example. Cheating can fall into a number of groups depending on the type of cheat being used – some will allow players to see into hidden areas or through walls, while more malicious code will attempt to cripple opponents' Internet connections in order to gain advantage over them and their peers. This is one of the main reasons for such players to employ any kind of cheat method – advantage and superiority over the rest of the population. A large number of cheating types are further explored by Armitage, Branch and Claypool, (2006) including server-side, client-side and network-layer cheats. Griefing is the term given to a player inside a game who deliberately abuses or harasses another participant for an undefined reason. Such reasons could include an earlier defeat by that player, or perhaps the same reasons that bullies pick on others at school – to gain attention and to appear dominant. Developers of online games will usually create a detailed terms-of-service (TOS) document that accompanies their game product, which users will be expected to read before being allowed to install the game or connect to it. Inside this document they will find a list of potential actions which the game provider may take against their account if they are found to be breaking game rules, or for cheating or griefing offences. Such consequences could range from temporary bans through to permanent suspension of their account, or limited access to services related to the game in question.

Security and Privacy

Section 3.3 also mentioned the security and privacy issues faced when using the relevant network architectures to develop an online multiplayer game. This is due to the ways in which these architectures operate and what sets them apart from one another. As a consumer of a multiplayer game, one would expect that the information held by the games provider on its customers is stored securely, inaccessible to external hackers and enclosed in physically secure data centres. These facilities are costly to set up and maintain – and as already discussed, it is very hard to predict the success of a game, particularly in the online multiplayer market. It is also important to note that as the majority of games will appeal to a global market, the data protection laws in different countries will likely differ, and adherence to these is equally crucial to success.

System Administration

This area encompasses some of the issues already brought to attention in the previous security and privacy section, and issues surrounding network hardware such as latency and bandwidth. Administration for a multiplayer game can be further extended to cover customer service, server maintenance, content updates or patches, and data storage. Specialists in these areas will likely be required to maintain the smooth operation of an online game service, or the appropriate contacts must be in place to respond to incidents in these areas. Larger games will consequently require more support staff to assist the online and offline communities that spawn from their customer base. Some of these areas are discussed further, in terms of the P2P architecture requirements for such a game service, in the paper by Merabti and El Phalibi. (2004)


4 – Conclusions

4.1 Results and Recommendations

The origins of multiplayer gaming have been traced back to the late 1950s, where a simple oscilloscope demonstration brought innovation from technology that was designed for other purposes. Today, the online multiplayer gaming market is worth billions of dollars and represents a significant amount of the revenue generated by the whole games industry. As a result of the global attention this receives, and based on the success of present-day titles, a variety of developers are interested in tapping into this opportunity. However, initial research has shown there is a lack of academic publications catering for the development of such products, despite their increasing popularity. This thesis commenced by discovering how the first multiplayer games came into existence, and set out a plan to capture research into a number of relevant areas that a developer would likely consider before commencing development on such a title. Less experienced, first-time or smaller developers in particular may not have access to the knowledge held inside more established studios – ultimately, a competitor is unlikely to share such data. The sections covered in this report were all deemed important to meet this requirement of information gain – to provide an insight into such areas for consideration and, in some situations, to discuss the benefits and drawbacks of using particular network protocols or programming languages. Such information is likely assumed in the industry, so this paper has helped to bridge the gap for anyone who wanted to understand why some organisations operate in particular ways, or why so many smaller developers fail with multiplayer online gaming projects. It is perhaps best to summarise the non-historic areas contained within this report, to state why they were important, and to retain the information as a list of recommendations for those wishing to pursue multiplayer game development or those interested in game or network technologies.


1. The importance of game engines and consideration of middleware solutions: Evaluating the resources available to a games developer is paramount to the success of a project – from staff capabilities to in-house technology; before a project can be fully budgeted, it is crucial to analyse whether it is feasible at all. Commissioning the use of middleware technology should equally be assessed, as it could solve a number of development issues or prove to be a burden if underestimated.

2. Project management: Although typically dealt with by project leaders, in the pre-production phase of such a project there should be significant communication between the majority of development staff, and likely the game publishers. It is desirable that thorough idea-generation phases and concept selections are evaluated before moving on to building game prototypes, or the production phase. A number of PM methodologies that could be used to develop a game were examined – it is important that the methodology matches the resources at the developer's disposal.

3. Protocols and architecture: This technology forms the dominant underlying component of developing and maintaining a successful online multiplayer game. It is crucial that – largely dependent on resources – the correct architecture(s) are selected for use in developing such a product, else the choice will likely have drastic repercussions for the developer and their relationship with the publisher and the gaming community.

4. Technical challenges: In addition to using the best-for-purpose network architectures, the other domains that address network administration must be given equal attention. If these are at least accounted for in the game design, even though they are subject to change during the development cycle, then there is a better chance of success – assuming the title is not cancelled and satisfies the publisher's requirements for completion.


4.2 Future Development

Despite the successes of the investigation, there will always remain room for further development. This is primarily due to the scope of this particular project – it would have been irrelevant to discuss issues that lie deeper into the subjects outlined in this paper, as this is meant as a more introductory view of top-level issues concerning multiplayer game development. There are a number of additional sections that could follow on from the research presented here, these include:

 Investigations into the development of game engine graphical effects, and their associated costs of running on different platforms and in different programming languages
 Analysis of various middleware solutions and their implementations
 Investigations into modifications of the TCP and UDP transport protocols – as described in this paper they have clear differences, but do converging protocols already exist?
 Further investigations into the game production cycle and the importance of solid project management – do different game genres affect this?
 A separate study into the benefits of grid computing, how this could be applied to a client/server configuration, and a comparison with a P2P setup
 An investigation into the architectures used in different game genres and how this affects their operation


5 – Reflection

5.1 Aims, Objectives and Successes

The aims of this project, as defined in section 1.2 and the abstract, were to explore existing research into network protocols and architecture, PM methodologies and game development technology. From here, the findings and conclusions drawn from the research carried out would be used to conclude the report and provide a list of recommendations. This list should be useful for those who wish to learn more about the development of multiplayer networked games, those already in the industry at any level who seek concise information regarding these areas, and as an aggregation of the existing material in the domain. These topics and the recommendation objective formed the start and end points of the project deliverable presented in section 4.1. Based on the findings delivered, it is believed that the objectives of this project have been satisfied and that the recommendations provided will be of use to any individual or organisation covered by the description in the previous paragraph. A number of suggestions have been raised in section 4.2 regarding the future development, or next steps, that a project like this could progress with. Any of these proposals would have added to the success of this project – but as stated in section 4.2, they may be considered too in-depth for a project of this introductory nature.

5.2 Research Undertaken

The research presented in this project encompasses a wide range of media and subject matter – from books to theses and news articles to user guides, the research has been a major influence on the recommendations provided in section 4.1. This is coupled with the project objectives, which included broadening the existing research and contributing to personal development. It has been both fascinating and exciting to have learned so much about some of the foundational subjects that have such a great influence on the development of multiplayer games. The range of future development areas identified in section 4.2 shows there is great room for further academic research into this area, and it would be pleasant to read about these in the near future.

5.3 Problems Encountered

During the progress of this investigation, no significant problems were encountered, as the research has all been taken from existing, published sources. This has been paired with existing knowledge from personal experience of playing a range of game genres online: knowing how some game operators maintain their TOS, and what this means for players who disobey them – deliberately or otherwise. As shown in the appendix at section 8.1, this project was first conceived as an application-based venture. The primary objective would have been to create a multiplayer game prototype, after research had been carried out in some of the areas studied in this report. The prototype could have taken any form and been written in any one of the languages covered in section 2.4, but after some preliminary research the project scope had to be altered, and this investigation was conceptualised. The principal reason behind this decision – based on the four programming languages detailed in this report – was that such a game would likely be written in Python. The list of its advantages is given in section 2.4, and it offered some of the best possibilities for rapid prototyping. However, this language was not familiar, and would likely have required a large amount of time to become proficient enough with to produce a functional game prototype. The time allocated to the project did not have the flexibility to accommodate learning a new language, and the report and remaining research would have suffered. Regardless, there remains a possibility to work on such a project in the future, which could also be added to the development works listed in section 4.2.


6 – References

ALEXANDER, Thor (ed.) (2003) Massively Multiplayer Game Development. Charles River Media.

ARMITAGE, Grenville, BRANCH, Philip and CLAYPOOL, Mark (2006) Networking and Online Games. John Wiley & Sons, Ltd, England.

BARTLE, Richard (2003) Designing Virtual Worlds. New Riders, USA.

BATES, Bob (2004) Game Design, 2nd ed. Thomson Course Technology.

BERNERS-LEE, Tim and CAILLIAU, Robert (1990) [online] WorldWideWeb: Proposal for a HyperText Project. Last accessed 22 April 2010 at http://www.w3.org/Proposal.html

CALLAHAM, John (2007) [online] Rage: id Tech 5 First Look. Last accessed 22 April 2010 at: http://www.firingsquad.com/games/id_software_rage/

CAMPIONE, Mary and WALRATH, Kathy (1996) [online] The Java Tutorial, 5th ed. (draft) Addison-Wesley. Last accessed 22 April 2010 at: http://scv.bu.edu/Doc/Java/tutorial/networking/datagrams/definition.html

CHATFIELD, Tom (2009) [online] Videogames now outperform Hollywood movies. The Guardian. 27 September 2009. Last accessed 22 April 2010 at http://www.guardian.co.uk/technology/gamesblog/2009/sep/27/videogames-hollywood

Develop Online (2007) [online] The Rise of Middleware 2.0. Last accessed 22 April 2010 at: http://www.develop-online.net/features/13/Rise-of-Middleware-20

EBERLY, David (2000) 3D Game Engine Design: A Practical Approach to Real-Time Computer Graphics. Morgan Kaufmann Publishers.


Elite (2010) [online] Elite History. Last accessed 22 April 2010 at: http://elite.frontier.co.uk/history/

GONG, Yiming (2005) [online] Identifying P2P users using traffic analysis. Last accessed 22 April 2010 at: http://www.symantec.com/connect/articles/identifying-p2p-users-using-traffic-analysis

HAGUE, James (1997) [online] Halcyon Days: Interviews with Classic Computer and Video Game Programmers. Last accessed 22 April 2010 at: http://www.dadgum.com/halcyon/index.html

HSIAO, Tsun-Yu and YUAN, Shyan-Ming (2005) [online] Practical middleware for massively multiplayer online games. IEEE Internet Computing, Volume 9, Issue 5, pages 47-54. Journal article from IEEE Xplore, last accessed 11 April 2010 at: http://ieeexplore.ieee.org

ID Software (2010) [online] id Software - id History. Last accessed 22 April 2010 at: http://www.idsoftware.com/business/history/

KOSTER, Raph (2002) [online] Online World Timeline. Last accessed 22 April 2010 at http://www.raphkoster.com/gaming/mudtimeline.shtml

KRAKOWIAK, Sacha (2003) [online] What is Middleware. Last accessed 22 April 2010 at: http://middleware.objectweb.org/

KUSHNER, David (2003) Masters of Doom. Random House, New York.

LOMAX, Paul et al. (2006) Visual Basic 2005: In a Nutshell, 3rd ed. O'Reilly.

McCONNELL, Steve (1996) Rapid Development: Taming Wild Software Schedules. Microsoft Press.


McFARLANE, Roger (2005) Network Software Architectures for Real-Time Massively-Multiplayer Online Games. MSc thesis, School of Computer Science, McGill University, Montreal.

MERABTI, M. and EL RHALIBI, A. (2004) [online] Peer-to-peer architecture and protocol for a massively multiplayer online game. GlobeCom Workshops, pages 519-528. Conference paper from IEEE Xplore, last accessed 11 April 2010 at http://ieeexplore.ieee.org

MOORE, Gordon (1965) [online] Cramming more components onto integrated circuits. Last accessed 22 April 2010 at ftp://download.intel.com/museum/Moores_Law/Articles-Press_Releases/Gordon_Moore_1965_Article.pdf

MORRIS, Dave and ROLLINGS, Andrew (2004) Game Architecture and Design: A New Edition. New Riders, Indiana.

NAUR, Peter and RANDELL, Brian. (eds.) (1969) Software Engineering. Garmisch, Germany, 7-11 October 1968. Brussels, Scientific Affairs Division, NATO. 231pp.

NTB (2010) [online] North Texas Businesses - Top Businesses - Company Profiles: id Software. Last accessed 22 April 2010 at: http://www.northtexasbusinesses.com/industry-top-companies-id-software.asp

O'NEILL, John (2008) [online] My Turn: The Real Cost of Middleware. Last accessed 22 April 2010 at: http://www.gamedaily.com/articles/features/my-turn-the-real-cost-of-middleware/71334/?biz=1

PLATO User's Guide (1981, revised) [online] Control Data Corporation, Minnesota, USA. Last accessed 9 April 2010 at http://www.bitsavers.org/pdf/cdc/plato/97405900C_PLATO_Users_Guide_Apr81.pdf


POSTEL, J and REYNOLDS, J (1983) [online] Telnet Protocol Specification. Last accessed 22 April 2010 at http://www.ietf.org/rfc/rfc854.txt

Python (2010a) [online] Python FAQ - 1.2: Why was Python created in the first place? Last accessed 22 April 2010 at: http://www.python.org/doc/faq/general/#why-was-python-created-in-the-first-place

Python (2010b) [online] Python FAQ - 1.3: What is Python good for? Last accessed 22 April 2010 at: http://www.python.org/doc/faq/general/#what-is-python

RILEY, Sean (2004) Game Programming with Python. Charles River Media.

SCHILDT, Herbert (2005) Java: A Beginner's Guide, 3rd ed. McGraw-Hill.

STEVENS, William (1994) TCP/IP Illustrated. Addison-Wesley.

STROUSTRUP, Bjarne (2008) [online] C++ Compilers. Last accessed 19 April 2010 at: http://www2.research.att.com/~bs/compilers.html

STROUSTRUP, Bjarne (2009a) [online] C++ Applications. Last accessed 11 April 2010 at: http://www2.research.att.com/~bs/applications.html

STROUSTRUP, Bjarne (2009b) [online] Stroustrup: FAQ - Why was C++ invented? Last accessed 19 April 2010 at: http://www2.research.att.com/~bs/bs_faq.html#invention

TANENBAUM, Andrew (2003) Computer Networks, 4th ed. Prentice Hall.

US Department of Energy - Video Games - Did they begin at Brookhaven? (1981) [online] Last accessed 22 April 2010 at http://www.osti.gov/accomplishments/videogame.html


WOOLLEY, David (1994) [online] PLATO: The Emergence of Online Community. Last accessed 22 April 2010 at http://thinkofit.com/plato/dwplato.htm

YU, Y.C.A. and LAW, K.L.E. (2007) [online] Grid computing on Massively Multi-User Online Platform. Computer Communications and Networks, ICCCN, 13-16 Aug 2007, pages 135-140. Conference paper from IEEE Xplore, last accessed 11 April 2010 at http://ieeexplore.ieee.org


7 – Bibliography

HETLAND, Magnus L (2008) Beginning Python: From Novice to Professional, 2nd ed. Apress.

JONES, Richard (2005) [online] Rapid Game Development in Python. In: Open Source Developers' Conference, 2005. Last accessed April 22 2010 at: http://richard.cgpublisher.com/product/pub.84/prod.11

LOBÃO, A.S. and HATTON, E (2003) .NET Game Programming with DirectX 9.0. Apress.

MacDONALD, Matthew (2003) Peer-to-Peer with VB .NET. Apress.

McGUGAN, Will (2007) Beginning Game Development with Python and Pygame: From Novice to Professional. Apress.

TERDIMAN, Daniel (2006) [online] 'World of Warcraft' battles server problems. Last accessed 22 April 2010 at: http://news.cnet.com/World-of-Warcraft-battles-server-problems/2100-1043_3-6063990.html

http://5years.doomworld.com/
A celebratory fansite marking the fifth anniversary of Doom's release. Includes interviews with many of the original staff and a copy of the Doom bible.

http://unthought.net/c++/c_vs_c++.html
An in-depth article on the advantages of C++ over C, including a programming challenge from a public mailing list to write a simple program that would run faster in either C or C++.

http://www.gridcafe.org/index.html
A site full of information on the benefits of Grid computing and associated development work.


8 – Appendices

8.1 Appendix A – Original Project Specification

SHEFFIELD HALLAM UNIVERSITY
FACULTY OF ARTS, COMPUTING, ENGINEERING AND SCIENCES (ACES)
BSc (Hons) Games Software Development (Final Year)

PROJECT DEFINITION

Student: Luke Salvoni
Date: 13th November, 2009
Supervisor: Dr. Pete Collingwood
Level of Project: BSc (Hons) Games Software Development
Title of Project: The investigation and development of a networked multiplayer game prototype.
Type of Project: Application based

ELABORATION

Multiplayer video games have been in existence for over three decades, where real time network games were developed on a device originally designed as an electronic learning tool. Since then, there has been explosive growth in computer network communications which led to mainstream multiplayer titles developing LAN versions of their games.

Today, network gaming can be executed using a variety of different protocols and on a range of different devices. But how do they differ from one another? Are they scalable? My aim is to create a multiplayer, distributed game prototype on the PC; in order to achieve this I will need to carry out a range of research encompassing game implementation and development technology. From here, I will create my game prototype using relevant techniques and ideas learned about through my research.

PROJECT OBJECTIVES

• Gain an in-depth awareness of different development technologies and strategies for networked multiplayer games

• Analyse and compare the various technologies and techniques and how they can be combined; then select the most appropriate development technologies and use them for the prototype production

• Design and build a functional networked multiplayer game prototype

• Test and evaluate the game prototype

• Produce relevant game manual/documentation

• Produce a comprehensive project report and evaluate the success of the project, making suitable recommendations for further development

Page 42 of 43

DELIVERABLE

The deliverable for this project will be the design and development of a networked multiplayer game prototype, for use on a PC.

TASK PLAN

1. Investigate and identify a suitable range of game development technologies and deployment strategies; then select the most appropriate language/s and IDE/s for the prototype development
Deadline: Fri 27th November, 2009

2. Determine and assess the usage of relevant network protocols and how these can be implemented for use with network computer games, and utilised with the selected development tools for best performance
Deadline: Wed 16th December, 2009

3. Investigate a suitable range of game implementation techniques and their applications, and how these can be utilised in my prototype and in conjunction with the selected technologies
Deadline: Wed 30th December, 2009

4. Collate the previous research and analyses and produce detailed game specifications
Deadline: Mon 4th January, 2010

5. Build the game prototype
Deadline: Mon 18th January, 2010

6. Test and debug, as required, the game prototype against the requirements and game purpose
Deadline: Mon 16th March, 2010

7. Produce relevant game documentation as per game specifications
Deadline: Fri 2nd April, 2010

8. Critically reflect on project in terms of:
- Project plan
- Research into network protocols and technologies
- Selection of methods and tools
- Future development of game prototype
- Completed prototype against the game specifications (see #4)
Deadline: Fri 9th April, 2010

9. Complete writing of project report; final proofreading and submission to printers
Deadline: Fri 16th April, 2010

PROJECT ETHICS

All ethical issues pertaining to the project have been considered, and an appropriate course of action will be followed as necessary. At this stage I do not believe I will use human subjects for research or testing purposes, and I am content that any work I undertake using copyrighted material or software will be credited and referenced, and any terms of use adhered to.
