
International Journal of Enhanced Research in Management & Computer Applications,

IJERMCA: 2319-7463, Vol. 5 Issue 1, March-2016

The Performance Evaluation of QUIC, TCP and SCTP within Cloud and Cloudlet Environments


Nawel Kortas1

1 IsitCom, Department of Computer Science, H. Sousse, University of Sousse, Tunisia



Abstract: This paper presents a performance evaluation of Quick UDP Internet Connections (QUIC), SCTP, TCP and TCP-Reno within Cloud and Cloudlet services. We first present the transport protocols and a comparison between the Cloud and the Cloudlet. Secondly, we assess the performance of QUIC compared to SPDY and TCP in terms of reduced transport time. We then present an energy-aware simulation setting using the GreenCloud simulator, which is designed to capture the details of the energy consumed by datacenter components such as switches and servers. The simulation results obtained for TCP, SCTP and Reno show the value of each protocol when employing power-management schemes, such as voltage scaling and dynamic shutdown, under different architectures. Finally, we present throughput and latency results for TCP and QUIC within cloud and cloudlet services, using the CloudSim simulator and browser services.

Keywords: QUIC, TCP, TCP-Reno, SCTP, Cloud, Cloudlet, SPDY, file sharing services, transport protocols.


Cloud computing [1] is TCP/IP-based and builds on the rapid development and integration of computer technologies. Without standard interconnect protocols and mature datacenter assembly technologies, cloud computing would not have become a reality. The cloud server processes and maintains all the service data, and users need to contact the cloud server whenever they want to consume data or contribute data that they have generated. The goal of this paper is to present a performance evaluation of well-known transport protocols within cloud and cloudlet services. We first compare the transport protocols, then compare Cloud services and the cloudlet [2]. Section 4 presents the simulation environment and the structure of the simulator. Section 5 presents the performance evaluation, the experimental setup and its results. Finally, Section 6 concludes the paper.



A. TCP

TCP is mainly required for Internet applications such as the World Wide Web, e-mail, remote administration and file transfer. TCP is able to recover data that is damaged, lost, duplicated, or delivered out of order by the Internet communication system. TCP ensures reliable data transfer using sequence numbers and acknowledgements from the receiver. The basic idea is that each octet is assigned a sequence number within the segment in which it is transmitted. After sending a segment over the network, the TCP sender preserves a copy of it on a retransmission queue and starts a timer; if no acknowledgement arrives before the timer expires, the segment is retransmitted. TCP also provides flow and congestion control mechanisms through the use of a congestion window [3].
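The retransmission-queue behaviour described above can be sketched as follows. This is a minimal illustrative model, not a real TCP implementation: segment numbers, the RTO value and the single-timer-per-segment design are simplifying assumptions.

```python
# Illustrative sketch of a TCP-style retransmission queue: each sent segment
# is kept, together with a deadline, until it is acknowledged; segments whose
# deadline has passed without an acknowledgement are candidates for resend.

class RetransmitQueue:
    def __init__(self, rto: float = 1.0):
        self.rto = rto          # retransmission timeout (seconds), illustrative
        self.pending = {}       # seq -> (segment copy, deadline)

    def send(self, seq: int, segment: bytes, now: float) -> None:
        """Keep a copy of the segment and start its timer."""
        self.pending[seq] = (segment, now + self.rto)

    def ack(self, seq: int) -> None:
        """An acknowledgement removes the copy from the queue."""
        self.pending.pop(seq, None)

    def expired(self, now: float) -> list:
        """Sequence numbers whose timer has expired and must be resent."""
        return [s for s, (_, deadline) in self.pending.items() if now >= deadline]
```

A real TCP stack acknowledges byte ranges cumulatively and adapts the timeout from measured RTTs; this sketch only shows the keep-until-acked principle.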

B. TCP-Reno

TCP-Reno [4] preserves the basic principles of TCP but adds mechanisms for the early detection of packet loss and congestion. The logic is that when the sender receives a duplicate acknowledgement, either the segment in question has already been received, or the next expected segment has been delayed in the network and has not reached the receiver; when the sender receives several duplicate acknowledgements, the packet has most probably been lost. TCP-Reno therefore introduces the Fast Retransmit algorithm, which resends the missing segment after three duplicate acknowledgements without waiting for the retransmission timer to expire.
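The fast-retransmit trigger can be sketched as follows. This is an illustrative model of the duplicate-ACK counting only; real Reno also halves the congestion window and enters fast recovery, which is omitted here.

```python
# Sketch of TCP-Reno's fast-retransmit trigger: three duplicate ACKs for the
# same sequence number are treated as a loss signal, so the sender resends
# the missing segment immediately instead of waiting for the timer.

DUP_ACK_THRESHOLD = 3

class RenoSender:
    def __init__(self):
        self.last_ack = None        # highest ACK seen so far
        self.dup_acks = 0           # duplicates of that ACK
        self.retransmitted = []     # segments resent via fast retransmit

    def on_ack(self, ack_seq: int) -> None:
        if ack_seq == self.last_ack:
            self.dup_acks += 1
            if self.dup_acks == DUP_ACK_THRESHOLD:
                # Fast retransmit: resend the segment the receiver is missing.
                self.retransmitted.append(ack_seq)
        else:
            # New cumulative ACK: progress was made, reset the counter.
            self.last_ack = ack_seq
            self.dup_acks = 0
```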


C. SCTP

Stream Control Transmission Protocol (SCTP) [5, 6] has congestion control and retransmission mechanisms similar to those of TCP, and was primarily designed for wired networks. SCTP is a reliable transport protocol with multi-homing and multi-streaming mechanisms.

Table 1: Comparison of SCTP and TCP

Services/Features                TCP        SCTP
Connection-oriented              Yes        Yes
Full duplex / Flow control       Yes        Yes
Reliable data transfer           Yes        Yes
Partially reliable transfer      No         Optional
Unordered data delivery          No         Yes
Congestion control               Yes        Yes
Selective ACKs                   Optional   Yes
Preservation of boundaries       No         Yes
Path MTU discovery               Yes        Yes
Application PDU bundling         Yes        Yes
Multistreaming                   No         Yes
Multihoming                      No         Yes

Multistreaming within an SCTP association separates flows of logically different data into independent streams. This separation enhances application flexibility by allowing the application to identify semantically different flows of data and have the transport layer manage these flows (which, as the authors of [6] argue, should be the responsibility of the transport layer, not the application layer).

Multihoming support provided by SCTP improves the robustness of the association, because there may be several network paths within the same association. If one IP path fails, traffic can still be sent over another path. Within an association, one path is marked as primary; if this path breaks down, the sender starts sending data over one of the other paths. In other words, an SCTP connection can continue transmitting data via a backup path when a network path disruption occurs, and this mechanism is transparent to the upper application layer.
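The primary/backup path selection just described can be sketched as plain failover logic. This is a conceptual model, not a real SCTP API (using SCTP sockets from Python would require a third-party binding such as pysctp); the addresses are made up for illustration.

```python
# Conceptual sketch of SCTP-style multihoming failover: an association keeps
# a list of destination addresses, the first one being the primary path.
# When the active path is reported down, sending transparently switches to
# the next reachable path.

class Association:
    def __init__(self, paths):
        self.paths = list(paths)   # first entry is the primary path
        self.down = set()          # paths currently marked unreachable

    def active_path(self):
        """Return the primary path, or the first reachable backup."""
        for p in self.paths:
            if p not in self.down:
                return p
        raise ConnectionError("all paths in the association are down")

    def path_failed(self, path):
        """Mark a path as failed; future sends use the next path."""
        self.down.add(path)
```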

D. The QUIC Protocol

Quick UDP Internet Connections (QUIC) [7][8] is a protocol proposed by Google and designed to provide security and reliability along with reduced connection and transport latency. Google has already deployed the QUIC protocol on its servers and provides a client implementation in the Chrome web browser.

Figure 1 shows the main architectural differences between HTTP over TCP, SPDY over TCP and QUIC over UDP. The main idea behind QUIC is to use UDP to overcome the SPDY inefficiencies discussed in Section 1, together with a purpose-built encryption protocol named "QUIC Crypto" that provides a transport security service similar to TLS.

Application layer    HTTP    SPDY    QUIC
Transport layer      TCP     TCP     UDP
Internet layer                IP

Figure 1: HTTP, SPDY and QUIC architecture

Since UDP is unreliable and does not implement congestion control, QUIC implements retransmissions and congestion control at the application layer. Furthermore, QUIC retains most of SPDY's design choices in order to inherit their benefits [10].


QUIC [7][8][9][10] stands for "Quick UDP Internet Connections". It is an experimental web protocol from Google, extending the research embodied in SPDY and HTTP/2.

Google's QUIC emulates TCP over UDP, with a data encoding that includes Forward Error Correction (FEC) codes as a way of performing a limited amount of repair of the data stream in the face of packet loss without retransmission. QUIC also performs bandwidth estimation as a means of rapidly reaching an efficient sending rate.

[Stack diagram: application, presentation + session, transport, and network (IP) layers.]

Figure 2: Presentation of basic protocol stacks for QUIC, SPDY and HTTP/2 over TLS

SPDY further assists QUIC by multiplexing application sessions within a single end-to-end transport session. This approach avoids the startup overhead of each TCP session, and exploits the observation that TCP takes some time to establish the bottleneck capacity of the network. QUIC overlaps with the scope of TCP and TLS, as it includes features similar to those of TLS and emulates the reliable packet transmission and congestion avoidance mechanisms of TCP.

By using UDP, QUIC is able to eliminate the head-of-line (HOL) blocking issue affecting the SPDY multiplexed streams. With TCP, all packets have to wait until a lost packet is retransmitted before the data can be delivered to the application (TCP does not allow out-of-order delivery); UDP, on the other hand, can deliver every received packet to the application without waiting for the retransmission.

All the inadequacies mentioned above are due to the use of TCP as the transport protocol. This has prompted Akamai and Google to propose new reliable protocols on top of UDP. Akamai has proposed a hybrid HTTP-and-UDP content delivery protocol that operates in its CDN [13]. Google is aiming at reducing the TCP handshake time [15] and, more recently [12], at improving the TCP loss recovery mechanism. Unfortunately, these improvements are not available in the default version of TCP. Motivated by this, Google has proposed QUIC [14] over UDP to replace HTTP over TCP. QUIC has already been deployed by Google on its servers, such as the ones powering YouTube, and is available in the Chrome web browser, which runs on billions of desktop and mobile devices. This puts Google in the position of driving a switch of a sizable amount of traffic from HTTP over TCP to QUIC over UDP.

E. Connection startup latency

Figure 3: Startup latency

Figure 3(a) shows the time required to set up a TCP connection: it takes one RTT for the handshake, plus at least one or two extra RTTs in the case of an encrypted connection over TLS. When QUIC is used (Figure 3(b)), the time taken to set up a connection is at most one RTT; if the client has already talked to the server before, the startup latency drops to zero RTT even for an encrypted connection (QUIC uses its own encryption scheme, named QUIC-Crypto). QUIC-Crypto decrypts packets independently of one another: this avoids the sequential decoding dependency that would undermine QUIC's ability to provide out-of-order delivery and reduce HOL blocking.
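The handshake counts above translate into a simple back-of-envelope latency comparison. The RTT value and the worst-case assumption of two TLS round trips are illustrative choices, not measurements from the paper:

```python
# Back-of-envelope connection-setup latency, following Figure 3: TCP needs
# one RTT for its handshake, TLS over TCP adds up to two more, QUIC needs
# one RTT for a first contact and zero RTTs for a repeat connection.

SETUP_RTTS = {
    "tcp": 1,           # SYN / SYN-ACK before data can flow
    "tcp+tls": 3,       # TCP handshake + two TLS round trips (worst case)
    "quic-new": 1,      # first contact with an unknown server
    "quic-repeat": 0,   # cached server credentials: 0-RTT startup
}

def setup_latency_ms(rtt_ms: float, protocol: str) -> float:
    """Connection-setup delay in milliseconds for a given round-trip time."""
    return SETUP_RTTS[protocol] * rtt_ms
```

With a 100 ms round trip, an encrypted TCP connection costs 300 ms of setup before the first byte of application data, while a repeat QUIC connection costs none.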

F. QUIC and TCP

A single lost packet in a basic TCP connection stalls all of the SPDY streams multiplexed over that connection. By comparison, a single lost packet across X parallel HTTP connections only stalls one out of the X connections. With UDP, QUIC can support out-of-order delivery, so that a lost packet will typically stall at most one stream [11]. TCP's congestion avoidance via a single congestion window also puts SPDY over TCP at a disadvantage compared to several HTTP connections, each with a separate congestion window: separate congestion windows are not impacted as much by a single packet loss. We therefore expect QUIC to handle congestion more equitably for a set of multiplexed connections.
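The head-of-line effect described above can be illustrated with a toy delivery model. The packet and stream layout is made up; the point is only that ordered single-stream delivery stalls everything behind a loss, whereas per-stream ordering stalls only the affected stream:

```python
# Toy model of head-of-line blocking. Packets are (number, stream, data).
# "TCP-like" delivery is globally ordered: nothing past a lost packet can be
# delivered. "QUIC-like" delivery orders per stream: only the stream that
# owns the lost packet is stalled.

def deliverable_tcp(packets, lost):
    """Globally ordered delivery: stop at the first missing packet."""
    out = []
    for n, stream, data in packets:
        if n in lost:
            break
        out.append((stream, data))
    return out

def deliverable_quic(packets, lost):
    """Per-stream ordering: a loss blocks later packets of that stream only."""
    blocked = set()
    out = []
    for n, stream, data in packets:
        if n in lost:
            blocked.add(stream)
        elif stream not in blocked:
            out.append((stream, data))
    return out
```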

TCP, and TLS/SSL, regularly require one or more round-trip times (RTTs) to establish a connection. QUIC can frequently reduce connection costs towards zero RTTs (for example, saying "hello" and sending the data request without waiting for a reply).

The TCP stack is built into the kernel of operating systems. Given how slowly users around the world upgrade their OS, significant adoption of client-side TCP changes is unlikely in less than 5-15 years. QUIC permits us to test and experiment with new ideas and to obtain results sooner, and we expect that QUIC features will migrate into TCP and TLS if they prove effective.

G. Forward Error Correction

Another attractive feature of QUIC is the Forward Error Correction (FEC) module, which copes with packet losses. The FEC module can be particularly effective in further reducing HOL blocking over a single QUIC stream by promptly recovering a lost packet, especially at high RTTs, where retransmissions would considerably increase the HOL latency. It works as follows: one FEC packet is computed at the end of a series of packets as the XOR sum of the packets' payloads; these packets compose an FEC group [16]. QUIC also feeds bandwidth estimation in each direction into congestion avoidance, and paces packet transmissions evenly to reduce packet loss; the packet-level error correction codes reduce the need to retransmit lost packet data. Finally, QUIC aligns cryptographic block boundaries with packet boundaries, so that the impact of packet loss is further contained.
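The XOR parity scheme just described can be sketched directly. The packet contents and group size are illustrative, not the real QUIC wire format; the sketch only demonstrates that XOR parity recovers any single lost packet of a group without retransmission:

```python
# Sketch of QUIC-style XOR FEC: one parity packet per FEC group lets the
# receiver rebuild a single lost packet from the surviving ones.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def make_fec_packet(group):
    """Parity packet = XOR of all payloads in the group (padded to max len)."""
    size = max(len(p) for p in group)
    parity = bytes(size)
    for p in group:
        parity = xor_bytes(parity, p.ljust(size, b"\x00"))
    return parity

def recover_lost(received, parity):
    """XOR the parity with the surviving packets to rebuild the missing one."""
    missing = parity
    for p in received:
        missing = xor_bytes(missing, p.ljust(len(parity), b"\x00"))
    return missing
```

Because XOR is its own inverse, the parity of a group minus one packet equals that packet; the scheme repairs exactly one loss per FEC group, which is why QUIC falls back to retransmission when more packets are lost.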


A. Cloud computing

Cloud computing services have earned a place in modern computing technology because of their powerful computing capabilities (CPU, memory, storage media), their networking capabilities such as grid and cluster computing, and the underlying virtualization techniques [17], as shown in Figure 4. Cloud computing [18] is TCP/IP-based and builds on the development and integration of computer technologies such as fast microprocessors, large memories, high-speed networks and reliable system architectures.

Figure 4: Cloud computing virtualization

Cloud computing, as a new technology, saves the end user significant computing effort. In the case of IaaS (Infrastructure as a Service) [19], the end user no longer needs to purchase the computer equipment required to complete a task; instead, the user can rent all the equipment needed and use it for as long as required. In addition to the IaaS offerings of cloud providers, application developers can take advantage of the multiple development platforms available in the cloud to develop their own applications and deploy them online, which is known as PaaS (Platform as a Service) [20].

B.  Why Cloudlet

One solution to overcome these resource limitations is mobile cloud computing [21]. By leveraging infrastructure such as Amazon's EC2 cloud or Rackspace [21], computationally expensive tasks can be offloaded to the cloud. However, these clouds are usually far from the mobile user, and the high WAN latency makes this approach insufficient for real-time applications. To cope with this high latency, Satyanarayanan [22] introduced the concept of cloudlets: trusted, resource-rich computers in the near vicinity of the mobile user (e.g. near or collocated with the wireless access point). Mobile users can then rapidly instantiate custom virtual machines (VMs) on the cloudlet running the required software in a thin-client fashion [23].

Figure 5: The cloudlet infrastructure and use

Even if cloudlets may solve the latency issue, the VM-based cloudlet approach has two important drawbacks that we have not yet mentioned. The first is that it remains dependent on service providers actually deploying such cloudlet infrastructure in LAN networks. To relax this constraint, we suggest a more dynamic cloudlet concept in which every device in the LAN can participate in the cloudlet. Alongside the cloudlet infrastructure provided by service providers in the mobile network, or by a company as a corporate cloudlet, all devices in a home network can share their resources and form a home-network cloudlet; on a train, different users can likewise share resources in an ad hoc cloudlet.