
Outline

• Network Games
  – Architectures
  – Compensation techniques
  – Cheating
  – Cloud games (Slides for Final Class)
• Peer-to-Peer Systems
  – Overview
  – P2P file sharing

Communication Architectures
• Split-screen
  – Limited players
• All peers equal
  – Easy to extend
  – Doesn't scale (LAN only)
• Central server
  – Clients connect only to server
  – Server may be bottleneck
• Server pool
  – Improved server scalability
  – More complex

Data and Control Architectures
• Want consistency
  – Same state on each node
  – Needs tightly coupled, low latency, small number of nodes
• Want responsiveness
  – More computation locally to reduce network delay
  – Loosely coupled (asynchronous)
• In general, cannot do both → Tradeoffs

"Relay" Architecture Abstraction
• Want control to propagate quickly so can update data (responsiveness)
• Want to reflect same data on all nodes (consistency)

Relay Architecture Choices
• Two-way relay (example: dumb terminal, send and wait for response)
• Short-circuit relay (example: smart terminal, send and echo)


Network Game Architectures
• Centralized
  – Use only two-way relay (no short-circuit)
  – One node holds data, so view is consistent at all times
  – Lacks responsiveness
• Distributed and Replicated
  – Allow short-circuit relay, provides responsiveness
  – What about consistency? → Make design decisions
    • Replicated has copies, used when predictable (e.g., behavior of non-player characters)
    • Distributed has local node only, used when unpredictable (e.g., behavior of players)

Outline
• Network Games
  – Architectures (done)
  – Compensation techniques (next)
  – Cheating
  – Cloud games
• Peer-to-Peer Systems
  – Overview
  – P2P file sharing

Interest Management – Auras
• Nodes express area of interest to them
  – Do not get messages for outside areas
  – Only world information in circle sent, even if world is larger
  – Side benefit → can prevent cheating (later)

Dead Reckoning
• Based on ocean navigation techniques ("dead" == "deduced (ded.)")
• Predict position based on last known position plus direction
  – Only send updates when position deviates past threshold (see sketch below)
• When prediction differs and must adjust, get "warping" or "rubber-banding" effect
  – Some techniques move smoothly to place over short time
(Figure: predicted position vs. actual position, with a "warp" correction.)
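A minimal sketch of threshold-based dead reckoning on the sending side, assuming a 2D world and constant-velocity prediction (the "DeadReckoner" name, the 0.5-unit threshold, and the "send" callback are illustrative, not from the slides):

    import math

    class DeadReckoner:
        """Tracks the position remote nodes would predict for this entity,
        and decides when a real update must be sent."""

        def __init__(self, threshold=0.5):
            self.threshold = threshold          # max allowed deviation (world units)
            self.last_sent_pos = (0.0, 0.0)     # position in the last update sent
            self.last_sent_vel = (0.0, 0.0)     # velocity in the last update sent
            self.time_of_send = 0.0

        def predicted(self, now):
            """Position remote nodes extrapolate: last position + velocity * dt."""
            dt = now - self.time_of_send
            return (self.last_sent_pos[0] + self.last_sent_vel[0] * dt,
                    self.last_sent_pos[1] + self.last_sent_vel[1] * dt)

        def maybe_send(self, now, actual_pos, actual_vel, send):
            """Send an update only when actual position deviates past threshold."""
            px, py = self.predicted(now)
            deviation = math.hypot(actual_pos[0] - px, actual_pos[1] - py)
            if deviation > self.threshold:
                self.last_sent_pos, self.last_sent_vel = actual_pos, actual_vel
                self.time_of_send = now
                send(actual_pos, actual_vel)    # network send, e.g., a UDP datagram

On the receiving side, rather than snapping ("warping") to the corrected position, the smoother techniques mentioned above interpolate from the predicted position to the corrected one over a short interval to hide the rubber-banding.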

Time Delay
• Server delays processing of events
  – Wait until all messages from clients arrive
  – (Note, game plays at highest round-trip time)
• Server sends messages to more distant client first, delays messages to closer
  – Needs accurate estimate of round-trip time
(Figure: commands from Client 1 and Client 2 arrive; server processes both client commands. Time runs downward.)

Time Warp
• With network latency, must lead opponent to hit (even with "instant" weapon!)
• Instead, knowing latency, roll back (warp) to when action took place
  – Usually, estimate latency as ½ round-trip time
• Client 100 ms behind
• Still hits (note blood)
• (Boxes are bounding boxes)
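A minimal sketch of server-side time warp, assuming the server keeps a short history of each entity's bounding box every tick; the "LagCompensator" name and the caller-supplied "ray_hits" intersection test are illustrative:

    import bisect

    class LagCompensator:
        """Server-side time warp: rewind hit detection to the moment
        the shooting client actually fired."""

        def __init__(self):
            self.times = {}   # entity_id -> [timestamps], sorted
            self.boxes = {}   # entity_id -> [bounding boxes], parallel list

        def record(self, entity_id, t, box):
            # Called every server tick with each entity's bounding box.
            self.times.setdefault(entity_id, []).append(t)
            self.boxes.setdefault(entity_id, []).append(box)

        def box_at(self, entity_id, t):
            """Bounding box at the latest snapshot taken at or before time t."""
            times = self.times.get(entity_id, [])
            i = bisect.bisect_right(times, t) - 1
            return self.boxes[entity_id][i] if i >= 0 else None

        def resolve_shot(self, now, shooter_rtt, target_id, ray_hits):
            # Estimate one-way latency as half the round-trip time (as on the
            # slide) and test the shot against where the target WAS back then.
            fire_time = now - shooter_rtt / 2.0
            box = self.box_at(target_id, fire_time)
            return box is not None and ray_hits(box)

This is why the shooter "still hits" despite being 100 ms behind: the server rewinds the target's bounding box before testing the shot, at the cost of the inconsistencies on the next slide.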

Time Delay

https://developer.valvesoftware.com/wiki/Source_Multiplayer_Networking


Time Warp Notes
• Inconsistency
  – Player targeted
  – Moves around corner
  – Warp back → hit
  – Bullets seem to "bend" around corner! → "Magic" bullets
• Fortunately, player often does not notice
  – Doesn't see opponent
  – May be just wounded

Outline
• Network Games
  – Architectures (done)
  – Compensation techniques (done)
  – Cheating (next)
  – Cloud games
• Peer-to-Peer Systems
  – Overview
  – P2P file sharing

Cheating
• Unique to games
  – Other multi-person applications don't have it
  – e.g., Distributed Interactive Simulation (DIS): not public, "employees" so considered trustworthy
• Cheaters want:
  – Vandalism – create havoc (relatively few)
    • Mostly, game design to prevent (e.g., no friendly fire)
  – Dominance – gain advantage (more)
    • Next slides

Packet and Traffic Tampering
• Packet interception – prevent some packets from reaching cheater
  – e.g., suppress damage packets, so cheater is invulnerable
• Packet replay – repeat event over for added advantage
  – e.g., multiple bullets or rockets if otherwise limited
• Solutions (see sketch below):
  – MD5 checksum or encrypt packets
  – Authoritative host keeps state within bounds
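A minimal sketch of per-packet integrity checking against both attacks, assuming a per-session shared secret; it uses a keyed checksum (HMAC, here with MD5 as the slide suggests, though modern code would prefer SHA-256) plus a sequence number to reject replays:

    import hmac, hashlib

    SECRET = b"per-session shared secret"   # assumed exchanged at login

    def seal(seq, payload):
        """Attach sequence number and keyed checksum before sending."""
        msg = seq.to_bytes(4, "big") + payload
        tag = hmac.new(SECRET, msg, hashlib.md5).digest()   # 16-byte tag
        return msg + tag

    def open_packet(data, last_seq):
        """Verify checksum and reject replayed (old) sequence numbers."""
        msg, tag = data[:-16], data[-16:]
        if not hmac.compare_digest(tag, hmac.new(SECRET, msg, hashlib.md5).digest()):
            return None   # tampered packet
        seq = int.from_bytes(msg[:4], "big")
        if seq <= last_seq:
            return None   # replay attempt
        return seq, msg[4:]

The checksum defeats tampering (a modified packet fails verification), while the monotonically increasing sequence number defeats replay; interception (dropped packets) still needs an authoritative host that notices missing acknowledgments.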

Packet Tampering
• Reflex augmentation – enhance cheater's reactions
  – e.g., aiming proxy monitors opponents' movement packets; when cheater fires, improve aim
• Tough to detect
  – e.g., PunkBuster – scan for "known" hacks
  – False positives?
(Figure: aimbot vs. human aiming traces.)

S. Yeung and J. Lui. "Dynamic Bayesian approach for detecting cheats in multi-player online games", Springer Multimedia Systems, Vol. 14, No. 4, Sep. 2008.

Information Exposure
• Allows cheater to gain access to replicated, hidden game data (e.g., status of other players)
  – Passive, since does not alter traffic
  – e.g., ignore "fog of war" in RTS, or "wall hack" to see through walls in FPS
• Cannot be defeated by network alone
• Instead:
  – Sensitive data should be encoded
  – Kept in hard-to-detect memory location
  – Centralized server may detect cheating (e.g., attack enemy could not have seen)


Outline
• Network Games
  – Architectures (done)
  – Compensation techniques (done)
  – Cheating (done)
  – Cloud games (next)
• Peer-to-Peer Systems
  – Overview
  – P2P file sharing

Cloud-based Games
• Connectivity and capacity of networks growing
• Opportunity for cloud-based games
  – Game processing on servers in cloud
  – Stream game video down to client
  – Client displays video, sends player input up to server
(Figure: thin client and cloud servers; game frames flow down, player input flows up.)

Why Cloud-based Games?
• Potential elastic scalability
  – Overcome processing and storage limitations of clients
  – Avoid potential upfront costs for servers, while supporting demand
• Ease of deployment
  – Client "thin", so inexpensive ($100 for OnLive console vs. $400 for Playstation 4 console)
  – Potentially less frequent client hardware upgrades
  – Games for different platforms (e.g., Xbox and Playstation) on one device
• Piracy prevention
  – Since game code is stored in cloud, server controls content and content cannot be copied
  – Unlike other solutions (e.g., DRM), still easy to distribute to players
• Click-to-play
  – Game can be run without installation

Cloud Game – Modules (1 of 2)
• Input (i) – receives control messages from players
• Game logic – manages game content
• Networking (n) – exchanges data with server
• Rendering (r) – renders game frames
• How to put in cloud?

Cloud Game – Modules (2 of 2)
• Possible "cuts" between client and cloud:
  1. All game logic on player, cloud only relays game information (traditional network game)
  2. Player only gets input and displays frames (remote rendering)
  3. Player gets input and renders frames (local rendering)

Application Streams vs. Game Streams
• Traditional thin client applications (e.g., x-term, remote login shell):
  – Relatively casual interaction (e.g., typing or mouse clicking)
  – Infrequent display updates (e.g., character updates or scrolling text)
• Computer games:
  – Intense interaction (e.g., avatar movement and shooting)
  – Frequently changing displays (e.g., 360 degree panning)
• Approximate traffic analysis:
  – 70 kb/s traditional network game
  – 700 kb/s virtual world
  – 2000-7000 kb/s live video (HD)
  – 1000-7000 kb/s pre-recorded video
  – Cloud-based games? 7000 kb/s (HD)
• Challenge: Latency, since player input requires round-trip to server before player sees effects
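To put those rates in perspective, a quick back-of-the-envelope conversion to data volume per hour (assuming kb/s on the slide means kilobits per second):

    # Rough data volume per hour for the per-stream rates on this slide.
    rates_kbps = {
        "traditional network game": 70,
        "virtual world": 700,
        "cloud-based game (HD)": 7000,
    }
    for name, kbps in rates_kbps.items():
        gb_per_hour = kbps * 1000 * 3600 / 8 / 1e9   # kilobits -> gigabytes
        print(f"{name}: ~{gb_per_hour:.2f} GB/hour")
    # traditional network game: ~0.03 GB/hour
    # virtual world: ~0.32 GB/hour
    # cloud-based game (HD): ~3.15 GB/hour

So a cloud game stream carries roughly two orders of magnitude more data than a traditional network game.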


Outline
• Network Games (done)
  – Architectures (done)
  – Compensation techniques (done)
  – Cheating (done)
  – Cloud games (done)
• Peer-to-Peer Systems (next)
  – Overview
  – P2P file sharing

Definition of Peer-to-Peer (P2P)
• Significant autonomy from central servers
• Exploits resources at edges of Internet
  – Storage and content
  – Multicast routing
  – CPU cycles
  – Human knowledge (e.g., recommendations, classification)
• Resources at edge may have intermittent connectivity

P2P Includes
• P2P communication
  – Instant messaging
  – Voice-over-IP (e.g., Skype)
• P2P multicast routing
  – e.g., Mbone, Yoid, Scattercast
• P2P computation
  – e.g., seti@home, folding@home
• P2P systems built on overlays
  – e.g., PlanetLab
• P2P file sharing
  – e.g., Napster, Gnutella, KaZaA, eDonkey, BitTorrent …

P2P File Sharing – General
• Alice runs P2P client on her laptop
• Registers her content in P2P system
• Asks for "Hey Jude"
• Application displays other peers with copy
• Alice chooses one, Bob
• File is copied from Bob's computer to Alice's
• While Alice downloads, others upload

Example: Searching
• 1000's of nodes; set of nodes may change
(Figure: Publisher inserts Key="title", Value=MP3 data…; Client issues Lookup("title") across Internet nodes N1–N6.)
• Needles versus haystacks
  – Searching for top 40 pop song? Or obscure punk track from '81 nobody's heard of?
• Search expressiveness
  – Whole word? Regular expressions? File names? Attributes? Whole-text search?

P2P File Sharing Capabilities
• Allows Alice to show directory in her file system
  – Anyone can retrieve file from it
  – Like Web server
• Allows Alice to copy files from others
  – Like Web client
• Allows users to search nodes for content based on keyword matches
  – Like search engine (e.g., Google)


P2P File Sharing Systems

                  Central       Flood       Super-node flood   Route
    Whole File    Napster       Gnutella                       Freenet
    Chunk Based   BitTorrent                KaZaA              (DHTs)
                  (swarm)                   (bytes)            eDonkey2k, New BT

Napster: Publish
• Peer sends insert(X, 123.2.21.23) to centralized index (napster.com)
• Peer at 123.2.21.23 publishes: "I have X, Y, and Z!"

Napster: Search
• Client sends search(A) to centralized index (napster.com)
  – Query: "Where is file A?"
  – Reply: napster.com returns matching hosts (123.2.0.18, 163.2.1.0, …)
• Client "pings" each host, picks closest
• Fetch file A directly from chosen peer

Napster: Discussion
• Pros
  – Simple
  – Search scope is O(1)
  – Controllable (pro or con?)
• Cons
  – Single point of failure
  – Server maintains O(N) state
  – Server does all processing
  – (Napster's server farm had difficult time keeping up with traffic)
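A minimal sketch of a Napster-style central index, assuming peers register (file, address) pairs and search is a dictionary lookup (the "CentralIndex" name is illustrative):

    from collections import defaultdict

    class CentralIndex:
        """Napster-style directory server: O(1) search scope,
        but O(N) state and a single point of failure."""

        def __init__(self):
            self.where = defaultdict(set)   # file name -> set of peer addresses

        def insert(self, filename, peer_addr):
            # Peer publishes "I have X" -- e.g., insert("X", "123.2.21.23")
            self.where[filename].add(peer_addr)

        def search(self, filename):
            # Returns candidate peers; the client then pings each and
            # fetches the file directly from the closest one.
            return sorted(self.where[filename])

    index = CentralIndex()
    index.insert("X", "123.2.21.23")
    print(index.search("X"))   # ['123.2.21.23']

Note the server only indexes; the file transfer itself is peer-to-peer, which is why the cons above are about index state and processing, not file bandwidth.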

Query Flooding (e.g., Gnutella)
(Figure: Query "Where is file A?" floods hop-by-hop through the overlay; the node with file A sends Reply "I have file A.")

Flooding Discussion
• Pros
  – Fully de-centralized
  – Search cost distributed
  – Processing @ each node permits powerful search semantics
• Cons
  – Search scope is O(N)
  – Search time is O(???) – depends upon "height" of tree
  – Nodes leave often, network unstable
• Hop-limited search works well for haystacks (see sketch below)
  – For scalability, does NOT search every node; may have to re-issue query later
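A minimal sketch of hop-limited (TTL) query flooding over an in-memory overlay graph; the node attributes (.addr, .files, .neighbors), the duplicate-suppression set, and the TTL of 3 are illustrative:

    def flood_query(node, filename, query_id, ttl, seen, results):
        """Simulated Gnutella-style flood. Each (node, query) pair
        is processed at most once."""
        if (node.addr, query_id) in seen:    # node already saw this query
            return
        seen.add((node.addr, query_id))
        if filename in node.files:           # hit: record who has the file
            results.append(node.addr)
        if ttl > 0:                          # hop limit bounds search scope
            for neighbor in node.neighbors:
                flood_query(neighbor, filename, query_id, ttl - 1, seen, results)

    # Usage: results = []
    #        flood_query(start_node, "A", query_id=1, ttl=3,
    #                    seen=set(), results=results)

With a small TTL the query finds popular files ("haystacks") after a few hops, but may miss rare "needles" and must be re-issued with a larger TTL, which is exactly the scope/time tradeoff listed above.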


Flooding with Supernodes (e.g., KaZaA)
• Architecture
  – Hierarchical
  – Cross between Napster and Gnutella
• Some nodes better connected, longer connected than others
  – Use them more heavily → "Super Nodes"
• "Smart" query flooding
  – Only flood through Super Nodes
  – Only one Super Node replies

Supernodes: Publish
(Figure: peer at 123.2.21.23 sends insert(X, 123.2.21.23) / "I have X!" to its Super Node.)

Supernodes: Search
(Figure: client asks its Super Node "Where is file A?"; search(A) is flooded among Super Nodes; replies return 123.2.22.50 and 123.2.0.18.)

Supernode Flooding Discussion
• Pros
  – Takes into account node heterogeneity
    • Bandwidth
    • Host computational resources
    • Host availability
  – May take into account network locality
  – Scales better
• Cons
  – Still no real guarantees on search scope or search time
• Similar behavior to plain flooding, but better (see sketch below)
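A minimal sketch of the two-tier idea, combining the central-index and flooding sketches above: each supernode indexes its leaves (Napster-like) and floods queries only to other supernodes (Gnutella-like). The class and method names are illustrative:

    class SuperNode:
        """Supernode indexes its leaves' files and floods queries
        only to other supernodes, never to leaves."""

        def __init__(self, addr):
            self.addr = addr
            self.index = {}        # file name -> leaf address (local index)
            self.neighbors = []    # other supernodes (overlay links)

        def register_leaf(self, leaf_addr, files):
            # Leaf publishes its file list to its supernode on join.
            for f in files:
                self.index[f] = leaf_addr

        def search(self, filename, ttl=3, seen=None):
            seen = set() if seen is None else seen
            if self.addr in seen:
                return None
            seen.add(self.addr)
            if filename in self.index:       # only one supernode replies
                return self.index[filename]
            if ttl > 0:
                for sn in self.neighbors:
                    hit = sn.search(filename, ttl - 1, seen)
                    if hit:
                        return hit
            return None

Because only supernodes participate in the flood, the effective network "height" shrinks and weak leaves are spared query traffic, though the scope/time guarantees are no stronger than plain flooding.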

Fetching in Parallel and Swarming (e.g., BitTorrent)
• When have file ID, get list of peers with ID
  – Tracker tracks peers participating in torrent
• Download in parallel from multiple peers
• "Swarming"
  – Download from others downloading same object at same time (tit-for-tat)
  – Use "rarest first" algorithm to increase availability (see sketch below)
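A minimal sketch of rarest-first piece selection, assuming each peer advertises the set of piece indices it holds (the data layout and function name are illustrative):

    from collections import Counter

    def rarest_first(my_pieces, peer_bitfields):
        """Pick the next piece to request: the one held by the fewest peers.
        my_pieces: set of piece indices already downloaded.
        peer_bitfields: dict peer_addr -> set of piece indices that peer has."""
        counts = Counter()
        for pieces in peer_bitfields.values():
            counts.update(pieces)
        # Candidates: pieces we lack but at least one peer has.
        candidates = [(n, p) for p, n in counts.items() if p not in my_pieces]
        if not candidates:
            return None
        _, piece = min(candidates)   # fewest copies first keeps rare pieces alive
        return piece

    peers = {"a": {0, 1, 2}, "b": {1, 2}, "c": {2}}
    print(rarest_first(my_pieces={2}, peer_bitfields=peers))   # 0 (only "a" has it)

Requesting the rarest piece first spreads every piece across the swarm quickly, so the torrent survives even if the original seeder leaves.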


BitTorrent: Publish/Join
(Figure: new peer contacts the Tracker to join the swarm.)

BitTorrent: Fetch
(Figure: peer downloads chunks in parallel from multiple swarm members.)

BitTorrent: Summary

• Pros
  – Works reasonably well in practice
  – Gives peers incentive to share resources; avoids freeloaders
• Cons
  – Central tracker server needed to bootstrap swarm
  – Tracker is a design choice, not a requirement
    • Newer variants use a "distributed tracker" – a Distributed Hash Table (DHT)
