CS 268: Computer Networking

L-18 DNS and the Web

DNS and the Web

• DNS
• CDNs
• Readings
  • DNS Performance and the Effectiveness of Caching
  • Development of the Domain Name System


Naming

• How do we efficiently locate resources?
  • DNS: name → IP address
  • Service location: description → host
• Other issues
  • How do we scale these to the wide area?
  • How to choose among similar services?


Overview

• DNS

• Server Selection and CDNs


Obvious Solutions (1)

Why not centralize DNS?
• Single point of failure
• Traffic volume
• Distant centralized database
• Single point of update

• Doesn’t scale!


Obvious Solutions (2)

Why not use /etc/hosts?
• Original name-to-address mapping
  • Flat namespace
  • /etc/hosts
  • SRI kept main copy
  • Downloaded regularly
• Count of hosts was increasing: machine per domain → machine per user
  • Many more downloads
  • Many more updates


Domain Name System Goals

• Basically building a wide-area distributed database
• Scalability
• Decentralized maintenance
• Robustness
• Global scope
  • Names mean the same thing everywhere
• Don't need
  • Atomicity
  • Strong consistency


DNS Records

RR format: (class, name, value, type, ttl)

• DB contains tuples called resource records (RRs)
• Classes = Internet (IN), Chaosnet (CH), etc.
• Each class defines value associated with type

For the IN class:
• Type=A
  • name is hostname
  • value is IP address
• Type=NS
  • name is domain (e.g. foo.com)
  • value is hostname of authoritative name server for this domain
• Type=CNAME
  • name is an alias name for some "canonical" (the real) name
  • value is canonical name
• Type=MX
  • value is name of mailserver associated with name
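To make the RR format concrete, here is a minimal sketch (not from the lecture) that stores a tiny zone as (class, name, value, type, ttl) tuples and answers lookups from it; all names and addresses are invented for illustration.

from collections import namedtuple

# RR format from the slide: (class, name, value, type, ttl)
RR = namedtuple('RR', ['rr_class', 'name', 'value', 'rr_type', 'ttl'])

# a tiny made-up zone
zone = [
    RR('IN', 'foo.com',     'ns1.foo.com',  'NS',    86400),
    RR('IN', 'www.foo.com', '1.2.3.4',      'A',     600),
    RR('IN', 'ftp.foo.com', 'www.foo.com',  'CNAME', 600),
    RR('IN', 'foo.com',     'mail.foo.com', 'MX',    3600),
]

def lookup(name, rr_type, rr_class='IN'):
    # return all records matching (class, name, type)
    return [r for r in zone
            if r.name == name and r.rr_type == rr_type and r.rr_class == rr_class]

print(lookup('www.foo.com', 'A'))   # -> the A record with value 1.2.3.4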


DNS Design: Hierarchy Definitions

• Each node in hierarchy stores a list of names that end with same suffix
• Suffix = path up tree
• E.g., given this tree, where would the following be stored:
  • Fred.com
  • Fred.edu
  • Fred.berkeley.edu
  • Fred.cs.berkeley.edu
  • Fred.cs.mit.edu

[Figure: name hierarchy tree. Root at top; children org, net, edu, com, uk; under edu: gwu, cmu, berkeley, bu, mit; under berkeley: cs, eecs; under mit: cs.]


DNS Design: Zone Definitions

• Zone = contiguous section of name space
• E.g., complete tree, single node, or subtree
• A zone has an associated set of name servers

[Figure: tree with root; org, ca, net, edu, com, uk; gwu, cmu, ucb, bu, mit; cs, eecs, radlab; three example zones are marked: a subtree, a single node, and a complete tree.]


DNS Design: Cont.

• Zones are created by convincing owner node to create/delegate a subzone
• Records within zone stored in multiple redundant name servers
  • Primary/master name server updated manually
  • Secondary/redundant servers updated by zone transfer of name space
  • Zone transfer is a bulk transfer of the "configuration" of a DNS server; uses TCP to ensure reliability
• Example:
  • CS.Berkeley.EDU created by Berkeley.EDU administrators


Servers/Resolvers

• Each host has a resolver
  • Typically a library that applications can link to
  • Local name servers hand-configured (e.g. /etc/resolv.conf)
• Name servers
  • Either responsible for some zone or…
  • Local servers
    • Do lookup of distant host names for local hosts
    • Typically answer queries about local zone


DNS: Root Name Servers

• Responsible for "root" zone
• Approx. a dozen root name servers worldwide
  • Currently {a-m}.root-servers.net
• Local name servers contact root servers when they cannot resolve a name
  • Configured with well-known root servers


DNS Message Format

12-byte header:
  • Identification | Flags
  • No. of Questions | No. of Answer RRs
  • No. of Authority RRs | No. of Additional RRs

Variable-length body:
  • Questions: name, type fields for a query (variable number of questions)
  • Answers: RRs in response to query (variable number of resource records)
  • Authority: records for authoritative servers (variable number of resource records)
  • Additional Info: "helpful" info that may be used (variable number of resource records)

DNS Header Fields

• Identification
  • Used to match up request/response
• Flags
  • 1-bit to mark query or response
  • 1-bit to mark authoritative or not
  • 1-bit to request recursive resolution
  • 1-bit to indicate support for recursive resolution
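As a sketch of how these fields sit on the wire, the fixed 12-byte header can be unpacked with Python's struct module; the bit positions follow RFC 1035 (the four 1-bit flags above are QR, AA, RD, and RA).

import struct

def parse_dns_header(msg: bytes):
    # six unsigned 16-bit fields, network byte order (RFC 1035)
    ident, flags, qd, an, ns, ar = struct.unpack('!6H', msg[:12])
    return {
        'id': ident,                 # matches response to request
        'qr': (flags >> 15) & 1,     # 0 = query, 1 = response
        'aa': (flags >> 10) & 1,     # authoritative answer
        'rd': (flags >> 8) & 1,      # recursion desired
        'ra': (flags >> 7) & 1,      # recursion available
        'counts': (qd, an, ns, ar),  # questions/answers/authority/additional
    }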


Typical Resolution

[Figure: resolution of www.cs.berkeley.edu. The client asks the local DNS server, which queries the root & edu DNS server (returns NS ns1.berkeley.edu), then the ns1.berkeley.edu DNS server (returns NS ns1.cs.berkeley.edu), then the ns1.cs.berkeley.edu DNS server (returns A www=IPaddr).]


Typical Resolution

• Steps for resolving www.berkeley.edu
  • Application calls gethostbyname() (RESOLVER)
  • Resolver contacts local name server (S1)
  • S1 queries root server (S2) for www.berkeley.edu
  • S2 returns NS record for berkeley.edu (S3)
    • What about the A record for S3? This is what the additional information section is for (PREFETCHING)
  • S1 queries S3 for www.berkeley.edu
  • S3 returns A record for www.berkeley.edu
• Can return multiple A records → what does this mean?
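A rough sketch of this walk using the third-party dnspython package (an assumption, not part of the lecture). It starts at a well-known root server and follows referrals, ignoring caching and error handling, and naively assumes each referral carries IPv4 glue in the additional section.

import dns.message
import dns.query   # third-party: pip install dnspython

def iterate(qname, server='198.41.0.4'):   # a.root-servers.net
    # follow referrals from the root until an answer arrives (no caching)
    while True:
        resp = dns.query.udp(dns.message.make_query(qname, 'A'), server, timeout=3)
        if resp.answer:   # the authoritative server returned record(s)
            return [rr.to_text() for rrset in resp.answer for rr in rrset]
        # referral: take a next-server address from the additional ("glue") section
        server = resp.additional[0][0].to_text()

print(iterate('www.berkeley.edu'))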


Lookup Methods

Recursive query:
• Server goes out and searches for more info
• Only returns final answer or "not found"

Iterative query:
• Server responds with as much as it knows
• "I don't know this name, but ask this server"

Workload impact on choice?
• Local server typically does recursive
• Root/distant server does iterative

[Figure: requesting host surf.eurecom.fr asks local name server dns.eurecom.fr, which resolves gaia.cs.umass.edu via the root name server, intermediate name server dns.umass.edu, and authoritative name server dns.cs.umass.edu; numbered arrows 1-8 contrast a recursive query with iterated queries.]


Workload and Caching

• What workload do you expect for different servers/names?
  • Why might this be a problem? How can we solve this problem?
• DNS responses are cached
  • Quick response for repeated translations
  • Other queries may reuse some parts of lookup
    • NS records for domains
• DNS negative queries are cached
  • Don't have to repeat past mistakes
  • E.g. misspellings, search strings in resolv.conf
• Cached data periodically times out
  • Lifetime (TTL) of data controlled by owner of data
  • TTL passed with every record
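A minimal sketch of TTL-driven caching, including negative caching; resolve is a stand-in for the real lookup and is assumed to return (answer, ttl), with answer None meaning the name does not exist.

import time

cache = {}   # (name, type) -> (expiry, answer); answer None = negative entry

def cached_lookup(name, rtype, resolve):
    key = (name, rtype)
    if key in cache:
        expiry, answer = cache[key]
        if time.time() < expiry:
            return answer          # served from cache (maybe a cached "no such name")
        del cache[key]             # cached data timed out
    answer, ttl = resolve(name, rtype)        # miss: do the real lookup
    cache[key] = (time.time() + ttl, answer)  # owner of the data controls ttl
    return answer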



Subsequent Lookup Example

[Figure: subsequent lookup of ftp.cs.berkeley.edu. The client asks the local DNS server, which has the NS record for cs.berkeley.edu cached, so it skips the root & edu and berkeley.edu servers and queries the cs.berkeley.edu DNS server directly, which returns ftp=IPaddr.]


Reliability

• DNS servers are replicated
  • Name service available if ≥ one replica is up
  • Queries can be load-balanced between replicas
• UDP used for queries
  • Need reliability → must implement this on top of UDP!
  • Why not just use TCP?
• Try alternate servers on timeout (see the sketch below)
  • Exponential backoff when retrying same server
• Same identifier for all queries
  • Don't care which server responds
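A sketch of that retry discipline; send_query(server, timeout) is an assumed helper that performs one UDP request/response with the given timeout, raising socket.timeout on loss.

import socket

def query_with_retry(servers, send_query, base_timeout=1.0, max_rounds=4):
    # try alternate servers on timeout; back off exponentially between rounds
    timeout = base_timeout
    for _ in range(max_rounds):
        for server in servers:            # rotate through the replicas
            try:
                return send_query(server, timeout)
            except socket.timeout:
                continue                  # this replica timed out; try the next
        timeout *= 2                      # exponential backoff before retrying
    raise TimeoutError('no DNS server answered')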


Prefetching

• Name servers can add additional data to any response
• Typically used for prefetching
  • CNAME/MX/NS typically point to another host name
  • Responses include address of host referred to in "additional section"


Root Zone

• Generic Top Level Domains (gTLD) = .com, .net, .org, etc.
• Country Code Top Level Domains (ccTLD) = .us, .ca, .fi, .uk, etc.
• Root servers ({a-m}.root-servers.net) also used to cover gTLD domains
  • Load on root servers was growing quickly!
  • Moving .com, .net, .org off root servers was clearly necessary to reduce load → done Aug 2000


New gTLDs

• .info → general info
• .biz → businesses
• .aero → air-transport industry
• .coop → business cooperatives
• .name → individuals
• .pro → accountants, lawyers, and physicians
• .museum → museums
• Only new ones active so far = .info, .biz, .name


New Registrars

• Network Solutions (NSI) used to handle all registrations, root servers, etc.
  • Clearly not the democratic (Internet) way
• Now a large number of registrars can create new domains → however, NSI still handles the root servers


DNS Experience

• 23% of lookups receive no answer
  • These are retransmitted aggressively → most packets in trace belong to unanswered lookups!
  • Correct answers tend to come back quickly/with few retries
• 10-42% negative answers → most = no name exists
  • Inverse lookups and bogus NS records
• Worst 10% lookup latency got much worse
  • Median 85 → 97 ms, 90th percentile 447 → 1176 ms
• Increasing share of low-TTL records → what is happening to caching?


DNS Experience

• Hit rate for DNS ≈ 80% → computed as 1 − (#DNS lookups / #connections)
  • Most Internet traffic is Web
  • What does a typical page look like? → average of 4-5 embedded objects → needs 4-5 transfers → accounts for the 80% hit rate!
• 70% hit rate for NS records → i.e. don't go to root/gTLD servers
  • NS TTLs are much longer than A TTLs
  • NS record caching is much more important to scalability
• Name distribution = Zipf-like = 1/x^a
• A records → TTLs = 10 minutes perform similarly to TTLs = infinite
• 10-client hit rate ≈ 1000+ client hit rate


Some Interesting Alternatives

• CoDNS
  • Lookup failures
    • Packet loss
    • LDNS overloading
    • Cron jobs
    • Maintenance problems
  • Cooperative name lookup scheme
    • If local server OK, use local server
    • When failing, ask peers to do lookup
• Push DNS
  • Top of DNS hierarchy is relatively stable
  • Why not replicate much more widely?


Overview

• DNS

• Server selection and CDNs


CDN

• Replicate content on many servers
• Challenges
  • How to replicate content
  • Where to replicate content
  • How to find replicated content
  • How to choose among known replicas
  • How to direct clients towards a replica
    • DNS, HTTP 304 response, etc.
• Akamai


Server Selection

• Service is replicated in many places in network
• How to direct clients to a particular server?
  • As part of routing → anycast, cluster load balancing
  • As part of application → HTTP redirect
  • As part of naming → DNS
• Which server?
  • Lowest load → to balance load on servers
  • Best performance → to improve client performance
    • Based on geography? RTT? Throughput? Load?
  • Any alive node → to provide fault tolerance


Routing Based

• Anycast
  • Give service a single IP address
  • Each node implementing service advertises route to address
  • Packets get routed from client to "closest" service node
    • Closest is defined by routing metrics
    • May not mirror performance/application needs
    • What about the stability of routes?


Routing Based

• Cluster load balancing
  • Router in front of cluster of nodes directs packets to server
  • Can only look at global address (L3 switching)
  • Often want to do this on a connection-by-connection basis – why?
    • Forces router to keep per-connection state
    • L4 switching – transport headers, port numbers
• How to choose server
  • Easiest to decide based on arrival of first packet in exchange
  • Primarily based on local load
  • Can be based on later packets (e.g., HTTP GET request) but makes system more complex (L7 switching)


Application Based

• HTTP supports a simple way to indicate that a Web page has moved
• Server gets GET request from client
  • Decides which server is best suited for particular client and object
  • Returns HTTP redirect to that server
• Can make informed, application-specific decision
• May introduce additional overhead → multiple connection setups, name lookups, etc.
• While a good solution in general, HTTP redirect has some design flaws – especially with current browsers
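As a sketch of this mechanism, a toy Python redirector that answers every GET with a 302 pointing at a replica; pick_replica and the replica hostname are made up, standing in for real load/geography logic.

from http.server import BaseHTTPRequestHandler, HTTPServer

def pick_replica(client_ip, path):
    # placeholder selection logic: a real server would use load, geography, etc.
    return 'http://replica2.example.com'

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        target = pick_replica(self.client_address[0], self.path) + self.path
        self.send_response(302)               # HTTP redirect
        self.send_header('Location', target)
        self.end_headers()

if __name__ == '__main__':
    HTTPServer(('', 8080), Redirector).serve_forever()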


Naming Based

• Client does name lookup for service
• Name server chooses appropriate server address
• What information can it base the decision on?
  • Server load/location → must be collected
  • Name service client
    • Typically the local name server for client
• Round-robin
  • Randomly choose replica
  • Avoid hot-spots
• [Semi-]static metrics
  • Geography
  • Route metrics
• How well would these work?


How Akamai Works

• Clients fetch html document from primary server
  • E.g., fetch index.html from cnn.com
• URLs for replicated content are replaced in html
  • E.g., http://cnn.com/foo.jpg replaced with http://aXYZ.g.akamaitech.net/cnn.com/foo.jpg
• Client is forced to resolve aXYZ.g.akamaitech.net hostname


How Akamai Works

• How is content replicated?
• Akamai only replicates static content
  • Serves about 7% of the Internet traffic!
• Modified name contains original file name
• Akamai server is asked for content
  • First checks local cache
  • If not in cache, requests file from primary server and caches file


How Akamai Works

• Root server gives NS record for akamai.net
• akamai.net name server returns NS record for g.akamaitech.net
  • Name server chosen to be in region of client's name server
  • TTL is large
• g.akamaitech.net name server chooses server in region
  • Should try to choose server that has file in cache – how to choose?
  • Uses aXYZ name and consistent hash
  • TTL is small


Hashing

• Let the CDN nodes be numbered 1 ... m
• Client uses a good hash function to map a URL to 1 ... m
  • Say hash(url) = x; the client fetches content from node x
• Advantages
  • No duplication (but not fault tolerant)
  • One-hop access
• Any problems?
  • What happens if a node goes down?
  • What happens if a node comes back up?
  • What if different nodes have different views?


Robust Hashing

• Let there be 90 documents on nodes 1..9; node 10, which was dead, comes alive again
• What % of documents are now in the wrong node? (checked in the sketch below)
  • Documents 10, 19-20, 28-30, 37-40, 46-50, 55-60, 64-70, 73-80, 82-90 all move
  • Disruption coefficient = ½
• Unacceptable → use consistent hashing, the idea behind Akamai!
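The slide doesn't state the placement rule, but an even range partition of the 90 documents over the live nodes reproduces exactly the movements listed above; a quick check:

import math

DOCS = range(1, 91)                   # 90 documents

def node_for(doc, m):
    # even range partition: with m nodes, node i holds 90/m consecutive docs
    return math.ceil(doc * m / 90)

moved = [d for d in DOCS if node_for(d, 9) != node_for(d, 10)]
print(len(moved), len(moved) / 90)    # 45 0.5 -> disruption coefficient = 1/2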


Consistent Hash

• "View" = subset of all hash buckets that are visible
• Desired features
  • Balanced – in any one view, load is equal across buckets
  • Smoothness – little impact on hash bucket contents when buckets are added/removed
  • Spread – small set of hash buckets that may hold an object regardless of views
  • Load – across all views, # of objects assigned to a hash bucket is small

Consistent Hash – Example

• Construction
  • Assign each of C hash buckets to random points on a mod 2^n circle, where hash key size = n
  • Map object to random position on circle
  • Hash of object = closest clockwise bucket
• Smoothness → addition of bucket does not cause much movement between existing buckets
• Spread & Load → small set of buckets that lie near object
• Balance → no bucket is responsible for large number of objects

[Figure: hash circle mod 2^n with bucket points at positions 0, 4, 8, 12, 14.]
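A compact sketch of this construction (the hash function, ring size, and bucket names are illustrative choices, not Akamai's actual scheme):

import bisect
import hashlib

def point(key: str) -> int:
    # map a string to a position on the 2^32 circle
    return int.from_bytes(hashlib.sha1(key.encode()).digest()[:4], 'big')

class ConsistentHash:
    def __init__(self, buckets):
        # each bucket gets a pseudo-random point on the circle
        self.ring = sorted((point(b), b) for b in buckets)

    def bucket_for(self, url: str):
        # hash of object = closest clockwise bucket (wrap around at the top)
        i = bisect.bisect(self.ring, (point(url),)) % len(self.ring)
        return self.ring[i][1]

ring = ConsistentHash(['cache1', 'cache2', 'cache3'])
print(ring.bucket_for('http://cnn.com/foo.jpg'))
# adding 'cache4' later only moves objects that fall between cache4's
# point and its counter-clockwise neighbor (smoothness)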


How Akamai Works

[Figure: the end-user requests index.html from cnn.com (the content provider); to fetch the embedded object it resolves the Akamai hostname via the DNS root server, the Akamai high-level DNS server, and the Akamai low-level DNS server, then sends "Get /cnn.com/foo.jpg" to the chosen Akamai server, which fetches foo.jpg from cnn.com on a miss (steps 1-12).]


Akamai – Subsequent Requests

[Figure: on a later request, the client again fetches index.html from cnn.com, but name resolution goes straight to the cached Akamai low-level DNS server, and /cnn.com/foo.jpg is fetched from the chosen Akamai server; the DNS root server and Akamai high-level DNS server are skipped.]


Coral: An Open CDN

[Figure: browsers are served by a mesh of Coral nodes, each running an HTTP proxy (httpprx) and a DNS server (dnssrv); only one Coral node fetches from the origin server.]

Pool resources to dissipate flash crowds

• Implement an open CDN
• Allow anybody to contribute
• Works with unmodified clients

• CDN only fetches once from origin server

Using CoralCDN

• Rewrite URLs into "Coralized" URLs

• www.x.com → www.x.com.nyud.net:8090

• Directs clients to Coral, which absorbs load

• Who might "Coralize" URLs?
  • Web server operators Coralize URLs
  • Coralized URLs posted to portals, mailing lists
  • Users explicitly Coralize URLs
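The rewrite itself is mechanical; a sketch (note it drops any userinfo or port already present in the URL):

from urllib.parse import urlsplit, urlunsplit

def coralize(url: str) -> str:
    # rewrite a URL so it is served through CoralCDN (nyud.net:8090)
    parts = urlsplit(url)
    return urlunsplit(parts._replace(netloc=parts.hostname + '.nyud.net:8090'))

print(coralize('http://www.x.com/pic.jpg'))
# -> http://www.x.com.nyud.net:8090/pic.jpg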


CoralCDN components

• DNS redirection: the resolver looks up www.x.com.nyud.net and gets back a proxy near the client (e.g., 216.165.108.10)
• Cooperative Web caching: a proxy fetches data from a nearby proxy rather than the origin server when it can

[Figure: browser → resolver → Coral dnssrv (DNS redirection); Coral httpprx nodes fetch from one another and, once, from the origin server.]

Functionality Needed

• DNS: given network location of resolver, return a proxy near the client
  put(network info, self)
  get(resolver info) → {proxies}

• HTTP: given URL, find proxy caching object, preferably one nearby
  put(URL, self)
  get(URL) → {proxies}
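A centralized stand-in for these two indices, just to pin down the interface; Coral actually implements put/get on a distributed index (see the DHT discussion below), not a single dict.

from collections import defaultdict

# key -> set of nodes; one such index serves DNS (resolver info), one serves HTTP (URLs)
index = defaultdict(set)

def put(key, node):
    index[key].add(node)

def get(key):
    return index[key]

# an HTTP proxy advertises a URL it has cached...
put('http://www.x.com/a.gif', 'proxy7.nyud.net')
# ...and another proxy asks which proxies cache that URL
print(get('http://www.x.com/a.gif'))   # -> {'proxy7.nyud.net'}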


Use a DHT?

• Supports put/get interface using key-based routing
• Problems with using DHTs as given:
  • Lookup latency
  • Transfer latency
  • Hotspots

[Figure: NYU and Columbia, both in NYC, may be routed through nodes in Japan or Germany to reach each other.]

Coral Contributions

• Self-organizing clusters of nodes
  • NYU and Columbia prefer one another to Germany

• Rate-limiting mechanism
  • Everybody caching and fetching the same URL does not overload any node in the system

• Decentralized DNS redirection
  • Works with unmodified clients

No centralized management or a priori knowledge of proxies’ locations or network configurations


Overview

• DNS

• Service location


Service Location

• What if you want to look up services with more expressive descriptions than DNS names?
  • E.g. "please find me printers in cs.berkeley.edu" instead of laserjet1.cs.berkeley.edu
• What do descriptions look like?
• How is the searching done?
• How will it be used?
  • Search for a particular service?
  • Browse available services?
  • Composing multiple services into a new service?


Service Descriptions

• Typically done as hierarchical attribute-value pairs
  • Type = printer → memory = 32MB, lang = PCL
  • Location = Berkeley → building = Soda
• Hierarchy based on attributes or attribute-values?
  • E.g. country → state, or country=USA → state=CA and country=Canada → province=BC?
• Can be done in something like XML
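A sketch of matching a query against such descriptions, with nested dicts standing in for the hierarchical attribute-value pairs; the printer attributes come from the slide, while the matching rule (a description matches if it agrees on every queried attribute) is an assumption.

services = [
    {'type': 'printer', 'memory': '32MB', 'lang': 'PCL',
     'location': {'country': 'USA', 'state': 'CA', 'building': 'Soda'}},
]

def matches(desc, query):
    # a description matches if it agrees with every attribute in the query,
    # recursing into nested (hierarchical) attributes
    for attr, want in query.items():
        have = desc.get(attr)
        if isinstance(want, dict):
            if not (isinstance(have, dict) and matches(have, want)):
                return False
        elif have != want:
            return False
    return True

print([s for s in services
       if matches(s, {'type': 'printer', 'location': {'building': 'Soda'}})])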


Service Discovery (Multicast)

• Services listen on well-known discovery group address
• Client multicasts query to discovery group
• Services unicast replies to client
• Tradeoffs
  • Not very scalable → effectively broadcast search
  • Requires no dedicated infrastructure or bootstrap
  • Easily adapts to availability/changes
  • Can scope request by multicast scoping and by information in request
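A sketch of this pattern with Python's socket module; the group address, port, and message format are invented for illustration (real protocols like SLP use registered groups and ports).

import socket
import struct

GROUP, PORT = '239.1.2.3', 5007   # hypothetical well-known discovery group

def serve():
    # service: join the discovery group, unicast replies to queriers
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(('', PORT))
    mreq = struct.pack('4sl', socket.inet_aton(GROUP), socket.INADDR_ANY)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        query, client = s.recvfrom(1024)
        if query == b'DISCOVER printer':
            s.sendto(b'printer at 10.0.0.5', client)   # unicast reply

def discover():
    # client: multicast a query, collect unicast replies until timeout
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)  # scope the search
    s.settimeout(2.0)
    s.sendto(b'DISCOVER printer', (GROUP, PORT))
    try:
        while True:
            reply, server = s.recvfrom(1024)
            print(server, reply)
    except socket.timeout:
        pass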


Service Discovery (Directory Based)

• Services register with central directory agent
  • Soft state → registrations must be refreshed or they expire
• Clients send query to central directory → replies with list of matches
• Tradeoffs
  • How do you find the central directory service?
    • Typically using multicast-based discovery!
    • SLP also allows directory to do periodic advertisements
  • Needs dedicated infrastructure
  • How do directory agents interact with each other?
  • Well suited for browsing and composition → knows full list of services


Service Discovery (Routing Based)

• Client issues query to overlay network
  • Query can include both service description and actual request for service
• Overlay network routes query to desired service[s]
  • If query is only a description, subsequent interactions can be outside overlay (early binding)
  • If query includes request, client can send subsequent queries via overlay (late binding)
    • Subsequent requests may go to different service agents
    • Enables easy fail-over/mobility of service
• Tradeoffs
  • Routing on complex parameters can be difficult/expensive
  • Can work especially well in ad-hoc networks
  • Can late binding really be used in many applications?


Wide Area Scaling

• How do we scale discovery to the wide area?
• Hierarchy?
  • Hierarchy must be based on some attribute of services
  • All services must have this attribute
  • All queries must include (implicitly or explicitly) this attribute
• Tradeoffs
  • What attribute? Administrative (like DNS)? Geographic? Network topology?
  • Should we have multiple hierarchies?
  • Do we really need hierarchy? Search engines seem to work fine!


Other Issues

• Dynamic attributes
  • Many queries may be based on attributes such as load or queue length
  • E.g., print to the printer with the shortest queue
• Security
  • Don't want others to serve/change queries
  • Also, don't want others to know about existence of services
  • Randy's home SLP server is advertising the $50,000 MP3 stereo system (come steal me!)

