Towards securing the Internet of Things with QUIC
Lars Eggert, Technical Director, Networking
2020-02-23

QUIC: a fast, secure, evolvable transport protocol for the Internet
§ Fast: better user experience than TCP/TLS for HTTP and other content
§ Secure: always-encrypted end-to-end security, resist pervasive monitoring
§ Evolvable: prevent the network from ossifying, can deploy new versions easily
§ Transport: support TCP semantics & more (realtime media, etc.), provide better abstractions, avoid known TCP issues
[Diagram: protocol stack labels HTTP/2, TLS, CC, UDP]

What motivated QUIC?

The Internet hourglass (classical version)
§ Inspired by the OSI "seven-layer" model, minus presentation (6) and session (5)
§ "IP on everything": all link tech looks the same (approx.)
§ Transport layer provides communication abstractions to apps:
  § unicast/multicast
  § multiplexing
  § streams/messages
  § reliability (full/partial)
  § flow/congestion control
  § ...
[Hourglass diagram: email, WWW, phone ... and SMTP, HTTP, RTP ... (Layer 7); TCP, UDP ... (Layer 4); IP (Layer 3); ethernet, PPP ..., CSMA, async, SONET ... (Layer 2); copper, fiber, radio ... (Layer 1). Boardwatch Magazine, Aug. 1994.]
Steve Deering. Watching the Waist of the Protocol Hourglass. Keynote, IEEE ICNP 1998, Austin, TX, USA. http://www.ieee-icnp.org/1998/Keynote.ppt

The Internet hourglass (2015 version, ca.): in the meantime...
§ The waist has split: IPv4 and IPv6
§ TCP is drowning out UDP
§ HTTP and TLS are de facto part of the transport
§ Consequence: web apps on IPv4/6 are even less flexible
• The interface at the endpoint is largely the same as it has been: "the network is a file descriptor"
• The waist of the hourglass has crept up to HTTP; transport is squeezed in the middle
• Way out: applications implement their own new transport features
[Hourglass diagram: Applications (Layer 7); HTTP, TLS, TCP (Layer 4); IPv4, IPv6 (Layer 3); Link (Layer 1/2)]
B. Trammell and J. Hildebrand, "Evolving Transport in the Internet," IEEE Internet Computing, vol. 18, no. 5, pp. 60-64, Sept.-Oct. 2014.

What happened?
§ Transport slow to evolve (esp. TCP)
  § Fundamentally difficult problem
§ Network made assumptions about what traffic looked like & how it behaved
  § Tried to "help" and "manage"
  § TCP "accelerators" & firewalls, DPI, NAT, etc.
§ The web happened
  § Almost all content on HTTP(S)
  § Easier/cheaper to develop for & deploy on
  § Amplified by mobile & cloud
  § Baked-in client/server assumption
[Cycle diagram: middlebox boom (TCP), slow transport evolution, rise of the web, Internet ossification]

TCP is not aging well
§ We're hitting hard limits (e.g., TCP option space; see the sketch below)
  § 40 B total (15 * 4 B - 20 B)
  § Used: SACK-OK (2), timestamp (10), window scale (3), MSS (4)
  § Multipath needs 12, Fast Open 6-18 ...
§ Incredibly difficult to evolve, cf. Multipath TCP
  § New TCP must look like old TCP, otherwise it gets dropped
§ TCP is already very complicated
§ Slow upgrade cycles for new TCP stacks (kernel update required)
  § Better with more frequent update cycles on consumer OSes
  § Still high-risk and invasive (reboot)
§ TCP headers not encrypted or even authenticated: middleboxes can still meddle
  § TCP-MD5 and TCP-AO in practice only used for (some) BGP sessions
[Photo: hourglass, by Ere at Norwegian Wikipedia (own work), public domain, via Wikimedia Commons]
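The option-space numbers above are easy to check. Below is a small, purely illustrative Python sketch; the option sizes are the ones listed on the slide (not parsed from real packets) and the variable names are mine:

```python
# Back-of-the-envelope check of the TCP option space figures on the slide.
TCP_MAX_HEADER = 15 * 4    # data offset field is 4 bits -> at most 15 32-bit words
TCP_BASE_HEADER = 20       # fixed TCP header without options
OPTION_SPACE = TCP_MAX_HEADER - TCP_BASE_HEADER   # = 40 bytes

# Options commonly carried on a SYN today (sizes from the slide).
typical_syn_options = {
    "MSS": 4,
    "SACK-permitted": 2,
    "timestamps": 10,
    "window scale": 3,
}
used = sum(typical_syn_options.values())           # = 19 bytes

print(f"option space: {OPTION_SPACE} B, used: {used} B, "
      f"free: {OPTION_SPACE - used} B")
# Multipath TCP's MP_CAPABLE needs another 12 B and a TCP Fast Open cookie
# option 6-18 B, so the remaining 21 B are exhausted very quickly.
```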
Middleboxes meddle
Example: TCP accelerators

[The slide reproduces an excerpt from Ladiwala et al., "Transparent TCP acceleration" (full citation below):]

In this section, we first describe the overall idea of transparent TCP acceleration from the perspective of an end-to-end connection traversing the network. Then, we view TCP acceleration from a router's point of view.

3.1. Network topology

Fig. 1(a) illustrates a conventional TCP connection where only the end-systems participate in Layer 4 processing. The network performs Layer 3 forwarding on datagrams and does not alter any of the Layer 4 segments. Fig. 1(b) illustrates how TCP acceleration nodes (denoted by 'A') change this paradigm. An accelerator node terminates TCP connections and opens a second connection to the next Layer 4 node. This allows the accelerator node to shield the TCP interactions (e.g., packet loss) from one connection to another. As a result, the feedback control loops, which implement the fundamental mechanisms of reliability, flow control, and congestion control, are smaller with lower delay. As a result, accelerated TCP can react faster and achieve higher throughput than conventional TCP.

3.2. Node architecture

Before we quantify the performance improvement from TCP acceleration in Section 4, we discuss how an accelerator node implements this functionality.

3.2.1. Acceleration example

To illustrate the behavior of an individual TCP accelerator node, Fig. 2 shows a space-time diagram for an example connection over conventional routers and TCP accelerators. For simplicity, unidirectional traffic with 1-byte packets is assumed. The initial sequence number is assumed to be 1. As Fig. 2(a) illustrates in this example, conventional routers just forward segments without interacting on the transport layer. In contrast, the TCP accelerator node in Fig. 2(b) actively participates in the TCP connection (e.g., responds to SYN, DATA, ACK, and FIN segments). By receiving packets and acknowledging them to the sender before they have arrived at the receiver, the TCP accelerator effectively splits one TCP connection into two connections with shorter feedback loops. In order to be able to retransmit packets that may get lost after an acknowledgment has been sent to the sender, the accelerator node requires a buffer (shown on the side of Fig. 2). The following example shows the typical behavior of the TCP accelerator:

Immediate response to sender: SYN and DATA packets are immediately buffered and acknowledged. The only exception is the first arrival of DATA 3, where no buffer space is available.

Local retransmission: When packets are lost, they are locally retransmitted (e.g., DATA 3). Due to a shorter RTT for both connections, a shorter timeout can more quickly detect the packet loss.

Flow control back pressure: When the connection from the accelerator node is slower than the one to it, buffer space will fill up and no additional packets can be acknowledged and stored. This will cause the sender to detect packet loss and slow down.

The most important observation in Fig. 2 is that the end-systems do not see any difference to a conventional TCP connection (other than packet order and performance).
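[Aside, not part of the excerpt:] a small arithmetic sketch of why the split feedback loops detect loss faster. The RTT values and the mid-path placement of the accelerator are assumptions made for illustration, and the RTO formula is a simplified one in the spirit of RFC 6298:

```python
# Illustrative arithmetic only: shorter per-connection RTT -> shorter timeout.
e2e_rtt = 0.200              # assumed end-to-end RTT in seconds
split_rtt = e2e_rtt / 2      # assume the accelerator sits roughly mid-path

def rto(srtt, rttvar):
    # Simplified retransmission timeout: srtt + 4 * rttvar (RFC 6298 spirit)
    return srtt + 4 * rttvar

print(rto(e2e_rtt, e2e_rtt / 4))      # ~0.4 s for the unsplit connection
print(rto(split_rtt, split_rtt / 4))  # ~0.2 s on each half of the split path
```

With the accelerator in the middle, each half of the path can use a timeout derived from its own, shorter RTT, so a lost segment is locally retransmitted roughly twice as fast in this example.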
3.2.2. Acceleration algorithm

The detailed interactions of a TCP accelerator node with a flow of packets from a connection are shown in Algorithm 1. The steps of the algorithm are:

Lines 1-2: A packet is received and classified. The variable p represents the packet and f represents the flow to which the packet belongs.

Lines 3 and 35-36: If a packet is not a TCP packet, it is forwarded without further consideration.

Lines 4-10: If a packet is a SYN packet (indicating connection setup) and no flow record exists, then a flow record is established. A SYN/ACK is returned to the sender and the SYN is forwarded towards the destination ("upstream" and "downstream", respectively). Since the SYN can get lost, a timer needs to be started for that packet. If the SYN/ACK gets lost, the original sender will retransmit the SYN and cause a retransmission of the SYN/ACK.

Lines 11-19: If data are received from the upstream sender and buffer space is available, then the packet is buffered and forwarded downstream. If no buffer space is available, the TCP accelerator needs to propagate back-pressure to slow down the sender. In this case, the packet is not acknowledged and is perceived as a packet drop by the sender. The congestion control mechanism of the sender will slow down the sending rate until buffer space becomes available. Packets are only forwarded when the downstream connection does not have too much outstanding data (lines 15-18).

Lines 20-26: If an ACK is received, then buffer space in the complementary flow (the flow in the opposite direction, denoted f̄) can be released. This reduces the amount of outstanding data and (potentially several) packets can be transmitted from the buffer space of f̄.

Lines 27-31: If a FIN is received, connection teardown can be initiated.

Lines 32-33: If a packet does not match the above criteria, it is handled as an exception.

Lines 39-40: Whenever a timeout occurs, the packet that initiated the timer is retransmitted.

In addition to the state maintenance described above, an RTT estimator needs to be maintained for each flow according to the TCP specification.

Algorithm 1. TCP acceleration algorithm (only the first line is reproduced on the slide)
1: p <- receive packet
[...]

[Fig. 1: Conventional and accelerated TCP connections. Systems that implement TCP functionality are marked with 'A'.]
[Fig. 2: Message sequence chart of an example connection comparing conventional and accelerated TCP connections.]

Sameer Ladiwala, Ramaswamy Ramaswamy, and Tilman Wolf. Transparent TCP acceleration. Computer Communications, Volume 32, Issue 4, 2009, pages 691-702.
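To make the control flow of Algorithm 1 easier to follow, here is a minimal, self-contained Python toy model of the dispatch logic described above. It is a sketch of my reading of the excerpt, not the paper's code: packets are plain dicts, "sending" just appends to lists, ACK handling is folded into a single flow record instead of the complementary flow f̄, and exception and timeout handling (lines 32-33 and 39-40) are omitted.

```python
# Toy model of a transparent TCP accelerator's packet dispatch (illustration only).
class Accelerator:
    def __init__(self, buf_limit=4):
        self.flows = {}          # flow id -> per-flow state
        self.buf_limit = buf_limit
        self.to_sender = []      # segments emitted upstream (locally generated ACKs)
        self.to_receiver = []    # segments forwarded downstream
        self.timers = []         # (flow id, packet) pairs awaiting acknowledgment

    def flow(self, p):
        # Create a flow record on first contact (simplification of lines 4-10).
        return self.flows.setdefault(p["flow"], {"buffer": [], "next_ack": 1})

    def handle(self, p):                       # lines 1-2: receive and classify
        if p.get("proto") != "tcp":            # lines 3, 35-36: non-TCP passes through
            self.to_receiver.append(p)
            return
        f = self.flow(p)
        if p.get("syn"):                       # lines 4-10: connection setup
            self.to_sender.append({"flow": p["flow"], "synack": True})
            self.to_receiver.append(p)         # forward SYN downstream
            self.timers.append((p["flow"], p)) # SYN may be lost downstream
        elif "data" in p:                      # lines 11-19: data segment
            if len(f["buffer"]) < self.buf_limit:
                f["buffer"].append(p)
                f["next_ack"] = p["seq"] + 1
                self.to_sender.append({"flow": p["flow"], "ack": f["next_ack"]})
                self.to_receiver.append(p)
            # else: no ACK is sent; the sender perceives loss and slows down
            # (back pressure), as described for lines 11-19.
        elif "ack" in p:                       # lines 20-26: ACK frees buffered data
            f["buffer"] = [q for q in f["buffer"] if q["seq"] >= p["ack"]]
        elif p.get("fin"):                     # lines 27-31: connection teardown
            self.flows.pop(p["flow"], None)

# Example: the accelerator answers SYN and DATA 1 before the receiver sees them.
acc = Accelerator()
acc.handle({"proto": "tcp", "flow": 1, "syn": True})
acc.handle({"proto": "tcp", "flow": 1, "data": b"x", "seq": 1})
print(acc.to_sender)   # locally generated SYN/ACK and ACK 2
```

Running the example shows the key property of Fig. 2: the SYN/ACK and the ACK for DATA 1 come from the accelerator, not the receiver, which is exactly the middlebox behavior the slide is warning about.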