
International Journal of Enhanced Research in Management & Computer Applications, IJERMCA: 2319-7463, Vol. 5 Issue 1, March-2016

The Performance Evaluation of Using QUIC, TCP and SCTP within Cloud and Cloudlet Environments

Nawel Kortas1

1 ISITCom, Department of Computer Science, H. Sousse, University of Sousse, Tunisia

ABSTRACT

This paper presents a performance evaluation of Quick UDP Internet Connections (QUIC), SCTP, TCP and TCP-Reno within Cloud and Cloudlet services. We first present the transport protocols and a comparison between Cloud and Cloudlet. Secondly, we assess the performance of QUIC compared to SPDY and TCP in terms of reduced transfer time. Then, we present an energy-aware simulation setting using the GreenCloud simulator, which is designed to capture the details of the energy consumed by data center components such as switches and servers. The simulation results obtained for TCP, SCTP and TCP-Reno show the value of each protocol in exploiting power management schemes, such as voltage scaling and dynamic shutdown, with different architectures. Finally, we present throughput and latency results for TCP and QUIC within Cloud and Cloudlet services using the CloudSim simulator and browser services.

Keywords: QUIC, TCP, TCP-Reno, SCTP, Cloud, Cloudlet, SPDY, file sharing services, transport protocols.

1. INTRODUCTION

Cloud computing [1] is TCP/IP based and builds on major developments and integrations of computer technologies. Without standard interconnect protocols and mature data center assembly technologies, cloud computing would not have become a reality. Cloud servers process and maintain all the service data: users contact the cloud server whenever they want to consume data or contribute data they have generated. The goal of this paper is to present a performance evaluation of the well-known transport protocols within Cloud and Cloudlet services. We first compare the transport protocols, then compare Cloud services with Cloudlets [2]. Section 4 presents the simulation environment and the structure of the simulator. Section 5 presents the performance evaluation, the experimental setup and its results. Finally, Section 6 concludes the paper.

2. TRANSPORT PROTOCOLS

A. TCP

TCP is mainly used by Internet applications such as the World Wide Web, e-mail, remote administration and file transfer. TCP is able to recover data that is damaged, lost, duplicated, or delivered out of order by the Internet communication system. TCP ensures reliable data transfer using sequence numbers and acknowledgements from the receiver. The basic idea is that each octet is assigned a sequence number in the segment that carries it. The TCP sender preserves a copy of each segment on a retransmission queue after sending it over the network and starts a timer; if the timer expires before an acknowledgement arrives, the segment is retransmitted. TCP also provides flow and congestion control mechanisms through the use of a congestion window [3].
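The sequence-number and retransmission-queue mechanism described above can be sketched as a toy model; the class and method names here are illustrative, not taken from any real TCP stack.

```python
# Toy model of TCP-style reliability: each segment gets a sequence
# number, sits on a retransmission queue until acknowledged, and is
# flagged for retransmission when its timer expires.

class ReliableSender:
    def __init__(self, timeout=1.0):
        self.timeout = timeout
        self.next_seq = 0
        self.retransmit_queue = {}   # seq -> (payload, send_time)

    def send(self, payload, now):
        seq = self.next_seq
        self.next_seq += len(payload)          # TCP numbers octets, not segments
        self.retransmit_queue[seq] = (payload, now)
        return seq

    def ack(self, ack_seq):
        # Cumulative ACK: everything below ack_seq has been delivered.
        for seq in list(self.retransmit_queue):
            if seq < ack_seq:
                del self.retransmit_queue[seq]

    def expired(self, now):
        # Segments whose timer ran out and must be retransmitted.
        return [seq for seq, (_, t) in self.retransmit_queue.items()
                if now - t >= self.timeout]

sender = ReliableSender()
sender.send(b"hello", now=0.0)   # seq 0
sender.send(b"world", now=0.1)   # seq 5
sender.ack(5)                    # first segment delivered
print(sender.expired(now=2.0))   # -> [5]: second segment timed out
```

The cumulative-ACK loop mirrors how an acknowledgement for byte N implicitly confirms every byte below N.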

B. TCP-Reno

TCP-Reno [4] preserves the basic principle of TCP but adds mechanisms for the early detection of packet loss and congestion. The logic is that when the sender receives a single duplicate acknowledgement, the corresponding segment may already have been received and the next expected segment merely delayed in the network; however, if the sender receives several duplicate acknowledgements, this probably means that the packet is lost. TCP-Reno therefore introduces the Fast Retransmit algorithm.

C. SCTP

Stream Control Transmission Protocol (SCTP) [5, 6] has congestion control and retransmission mechanisms similar to those of TCP, and both were primarily designed for wired networks. SCTP is a reliable transport protocol with multi-homing and multi-streaming mechanisms.
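The Fast Retransmit trigger can be sketched as follows, assuming the conventional threshold of three duplicate ACKs; the state layout and constants are illustrative only.

```python
# Sketch of TCP-Reno's fast retransmit / fast recovery trigger: three
# duplicate ACKs are taken as evidence of loss, the missing segment is
# retransmitted immediately, and cwnd is halved instead of collapsing
# to one segment as a timeout would force.

def on_ack(state, ack_seq):
    if ack_seq == state["last_ack"]:
        state["dupacks"] += 1
        if state["dupacks"] == 3:                 # fast retransmit threshold
            state["ssthresh"] = max(state["cwnd"] // 2, 2)
            state["cwnd"] = state["ssthresh"]     # fast recovery: halve, don't reset
            state["retransmit"] = ack_seq         # resend the segment the peer wants
    else:
        state["last_ack"] = ack_seq               # new data acknowledged
        state["dupacks"] = 0
    return state

state = {"last_ack": 100, "dupacks": 0, "cwnd": 16,
         "ssthresh": 64, "retransmit": None}
for _ in range(3):                                # three duplicate ACKs for seq 100
    state = on_ack(state, 100)
print(state["retransmit"], state["cwnd"])         # -> 100 8
```

Halving the window rather than resetting it is what lets Reno keep the pipeline partially full after an isolated loss.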

Table 1: Comparison of SCTP and TCP

Services/Features                TCP       SCTP
Connection-oriented              Yes       Yes
Full duplex / Flow control       Yes       Yes
Reliable data transfer           Yes       Yes
Partial-reliable data transfer   No        Optional
Unordered data delivery          No        Yes
Congestion control               Yes       Yes
Selective ACKs                   Optional  Yes
Preservation of boundaries       No        Yes
Path MTU discovery               Yes       Yes
Application PDU bundling         Yes       Yes
Multistreaming                   No        Yes
Multihoming                      No        Yes

Multistreaming within an SCTP association separates flows of logically different data into independent streams. This separation enhances application flexibility by allowing the application to identify semantically different flows of data and have the transport layer manage these flows (which, the authors of [6] argue, should be the responsibility of the transport layer, not the application layer).

The multihoming support provided by SCTP improves the robustness of the association, because there may be several network paths within the same association. If one IP path fails, traffic can still be sent using another. In an association, one path is marked as primary; if this path breaks down, the sender starts sending data over one of the other paths. SCTP can thus continue transmitting data via a backup path when a network path disruption occurs, and this mechanism is transparent to the upper application layer.

D. The QUIC Protocol
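The primary/backup failover just described can be sketched as a small model of an association; the addresses and class names are hypothetical, not from any SCTP API.

```python
# Toy model of SCTP multihoming: an association tracks several paths,
# marks one as primary, and transparently fails over to a backup when
# the primary is reported down.

class Association:
    def __init__(self, paths):
        self.paths = {p: True for p in paths}   # path -> reachable?
        self.primary = paths[0]

    def path_failed(self, path):
        self.paths[path] = False
        if path == self.primary:
            # Fail over to any remaining reachable path.
            for p, up in self.paths.items():
                if up:
                    self.primary = p
                    break

    def send(self, chunk):
        # Upper layers never see which path carried the chunk.
        return (self.primary, chunk)

assoc = Association(["10.0.0.1", "192.168.1.1"])
assoc.path_failed("10.0.0.1")                   # primary interface breaks down
print(assoc.send(b"data"))                      # -> ('192.168.1.1', b'data')
```

The point of the sketch is the last line: the caller's `send` is unchanged even though the path switched, which is exactly the transparency the text describes.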

Quick UDP Internet Connections (QUIC) [7][8] is a protocol proposed by Google and designed to provide security and reliability along with reduced connection and transport latency. Google has already deployed QUIC in its servers and ships a client implementation in the Chrome web browser. Figure 1 shows the main architectural differences between HTTP over TCP, SPDY over TCP and QUIC over UDP. The main idea behind QUIC is to use UDP to overcome the SPDY inefficiencies discussed below, along with a purpose-built encryption protocol named "QUIC Crypto" which provides a transport security service similar to TLS.

[Figure 1 shows three stacks: HTTP and SPDY at the application layer over TCP, and QUIC over UDP, all over IP at the Internet layer.]

Figure 1: HTTP, SPDY and QUIC architecture

Since UDP is unreliable and does not implement congestion control, QUIC implements retransmissions and congestion control at the application layer. Furthermore, QUIC retains most of the SPDY design choices in order to inherit their benefits [10].

E. QUIC and SPDY

QUIC [7][8][9][10] stands for "Quick UDP Internet Connections". It is an experimental web protocol from Google and an extension of the research embodied in SPDY and HTTP/2. QUIC uses a TCP-like emulation over UDP whose data encoding includes Forward Error Correcting codes (FEC) as a way of performing a limited amount of repair of the data stream in the face of packet loss, without retransmission. QUIC performs bandwidth estimation as a means of rapidly reaching an efficient sending rate.

[Figure 2 shows the protocol stacks from the application, presentation and session layers down to the transport and network (IP) layers.]

Figure 2: Presentation of basic protocol stacks for QUIC, SPDY and HTTP/2 over TLS

SPDY further assists QUIC by multiplexing application sessions within a single end-to-end transport protocol session. This approach avoids the startup overhead of each TCP session, and leverages the observation that TCP takes some time to discover the bottleneck capacity of the network. QUIC overlaps with the scope of TCP and TLS, as it includes features similar to TLS and emulates the reliable packet transmission and congestion avoidance mechanisms of TCP. By using UDP, QUIC is able to eliminate the head-of-line (HOL) blocking issue affecting the SPDY multiplexed streams: with TCP, all packets have to wait until a lost packet is retransmitted before the data can be delivered to the application (TCP does not allow out-of-order delivery), whereas with UDP every received packet can be delivered to the application without waiting for the retransmission. All the inadequacies mentioned above are due to the use of TCP as the transport protocol. This has prompted Akamai and Google to propose new reliable protocols on top of UDP. Akamai has proposed a hybrid HTTP and UDP content delivery protocol that operates in its CDN [13]. Google is aiming at reducing TCP handshake time [15] and, more recently in [12], at improving the TCP loss recovery mechanism. Regrettably, the improvements mentioned above are not available in the default version of TCP. Motivated by this, Google has proposed QUIC [14] over UDP to replace HTTP over TCP. QUIC has already been deployed by Google in its servers, such as the ones powering YouTube, and runs in the Chrome web client on billions of desktop and mobile devices. This puts Google in the position of driving a switch of a sizable amount of traffic from HTTP over TCP to QUIC over UDP.

F. Connection startup latency

Figure 3: Startup latency

Figure 3(a) shows the time required to set up a TCP connection: it takes one RTT for the handshake and at least one or two extra RTTs in the case of an encrypted connection over TLS. When QUIC is used (Figure 3(b)), the time taken to set up a connection is at most one RTT; if the client has already talked to the server before, the startup latency drops to zero RTTs even for an encrypted connection (QUIC uses its own encryption scheme named QUIC-Crypto). QUIC-Crypto decrypts packets independently: this avoids a sequential decoding dependency which would damage QUIC's ability to provide out-of-order delivery and reduce HOL blocking.

G. QUIC and TCP

A single lost packet in a basic TCP connection stalls all of the multiplexed SPDY streams over that connection. By comparison, a single lost packet for X parallel HTTP connections will only stall one out of X connections. With UDP, QUIC can support out-of-order delivery, so that a lost packet will typically impact (stall) at most one stream [11]. TCP's congestion avoidance via a single congestion window also puts SPDY over TCP at a disadvantage compared to several HTTP connections, each with a separate congestion window: separate congestion windows are not impacted as much by a packet loss, and we expect QUIC to handle congestion more equitably for a set of multiplexed connections. TCP, and TLS/SSL, regularly require one or more round-trip times (RTTs) to establish a connection. QUIC can frequently reduce connection costs towards zero RTTs (for example, say hello and send the data request without waiting). TCP support is built into the kernel of operating systems.
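The round-trip accounting above can be written out as simple arithmetic; the 50 ms path RTT is a hypothetical value chosen only for illustration.

```python
# Back-of-envelope connection-setup cost, in round trips, for the
# cases discussed above: TCP needs one RTT for its handshake, TLS adds
# two more in the worst case, and QUIC needs one RTT on first contact
# and zero on repeat connections.

def setup_rtts(protocol, encrypted=False, repeat=False):
    if protocol == "tcp":
        return 1 + (2 if encrypted else 0)   # TCP handshake (+ TLS)
    if protocol == "quic":
        return 0 if repeat else 1            # QUIC-Crypto rides the first RTT
    raise ValueError(protocol)

rtt_ms = 50                                  # hypothetical path RTT
print(setup_rtts("tcp", encrypted=True) * rtt_ms)   # -> 150
print(setup_rtts("quic", repeat=True) * rtt_ms)     # -> 0
```

On a 50 ms path, an encrypted TCP connection thus spends 150 ms before any data flows, while a repeat QUIC connection can send data immediately.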
Taking into account how slowly users around the world upgrade their OS, significant adoption of client-side TCP changes is unlikely in less than 5-15 years. QUIC permits us to test and experiment with new ideas and to get results sooner, and we are hopeful that QUIC features will migrate into TCP and TLS if they prove effective.

H. Forward Error Correction

Another attractive feature of QUIC is the Forward Error Correction (FEC) module that copes with packet losses. The FEC module can be particularly effective in further reducing HOL blocking over a single QUIC stream by promptly recovering a lost packet, especially in the case of high RTT, where retransmissions could considerably affect the HOL latency. It works as follows: one FEC packet is computed at the end of a series of packets as the XOR sum of the

packets' payloads; these packets compose an FEC group [16]. QUIC feeds bandwidth estimates in each direction into congestion avoidance and then paces packet transmission events to reduce packet loss; it also uses packet-level error correction codes to reduce the need to retransmit lost packet data. QUIC aligns cryptographic block boundaries with packet boundaries, so that the impact of packet loss is further contained.
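The XOR-based FEC group described above can be sketched directly; fixed, equal-size payloads keep the XOR well defined, and the packet contents are illustrative.

```python
# Sketch of QUIC-style XOR FEC: one parity packet protects a group, so
# any single lost packet in the group can be rebuilt without a
# retransmission.

def xor_payloads(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def fec_packet(group):
    parity = bytes(len(group[0]))            # all-zero starting value
    for pkt in group:
        parity = xor_payloads(parity, pkt)
    return parity

def recover(received, parity):
    # XOR of the parity with every surviving packet yields the lost one.
    missing = parity
    for pkt in received:
        missing = xor_payloads(missing, pkt)
    return missing

group = [b"pkt1", b"pkt2", b"pkt3"]
parity = fec_packet(group)                   # XOR sum of the group
lost = group.pop(1)                          # pkt2 is lost in transit
print(recover(group, parity))                # -> b'pkt2'
```

Note that XOR parity only repairs a single loss per group, which is why the text frames FEC as a complement to, not a replacement for, retransmission.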

3. CLOUD SERVICES AND CLOUDLET

A. Cloud computing

Cloud computing services have earned a place in modern computing technology because of their powerful computing capabilities (CPU, memory, storage media), their networking capabilities such as grid and cluster computing, and virtualization techniques [17], as shown in Figure 4. Cloud computing [18] is TCP/IP based and builds on major developments and integrations of computer technologies such as fast microprocessors, huge memories, high-speed networks and reliable system architectures.

Figure 4: Cloud computing virtualization

Cloud computing, as a new technology, lets the end user avoid significant computing effort in achieving their goals. In the case of IaaS (Infrastructure as a Service) [19], the end user no longer needs to purchase the computer equipment required to complete their objective; they can rent all the equipment they need and use it as much as they want. In addition to IaaS, application developers can take advantage of the multiple development platforms available on the cloud to develop their own applications and deploy them online, which is known as PaaS (Platform as a Service) [20].

B. Why Cloudlet

One solution to overcome the resource limitations of mobile devices is mobile cloud computing [21]. By leveraging infrastructure such as Amazon's EC2 cloud or Rackspace [21], computationally expensive tasks can be offloaded to the cloud. However, these clouds are usually far from the mobile user, and the high WAN latency makes this approach insufficient for real-time applications. To cope with this high latency, Satyanarayanan [22] introduced the concept of cloudlets: trusted, resource-rich computers in the near vicinity of the mobile user (e.g. near or collocated with the wireless access point). Mobile users can then rapidly instantiate custom virtual machines (VMs) on the cloudlet, running the required software in a thin-client fashion [23].

Figure 5: The cloudlet infrastructure and use.

Even if cloudlets may solve the latency issue, the VM-based cloudlet approach has two important drawbacks. The first is the dependence on service providers actually deploying such cloudlet infrastructure in LAN networks. To relax this constraint, we suggest a more dynamic cloudlet concept, in which all devices in the LAN can participate in the cloudlet. Next to the cloudlet infrastructure provided by service providers in the mobile network, or by a corporation as a corporate cloudlet, all devices in a home network can share their resources and form a home-network cloudlet; on a train, different users can likewise share resources in an ad hoc cloudlet.

C. Cloudlets as a solution to the network latency problem

Clouds are usually far from the mobile user, and the high WAN latency makes them deficient for real-time applications. To cope with this high latency, Satyanarayanan [22] introduced the concept of cloudlets: trusted, resource-rich computers in the near vicinity of the mobile user (e.g. near or collocated with the wireless access point). Mobile users are able to quickly instantiate custom virtual machines on the cloudlet, running the required software in a thin-client fashion [25]. As defined in Caceres' paper [23], cloudlets are decentralized and widely dispersed Internet infrastructure whose compute cycles and storage resources can be leveraged by nearby mobile computers. A cloudlet might be a cluster of multicore computers with gigabit internal connectivity and a high-bandwidth wireless LAN; it can also be a very powerful multi-core server with Internet connectivity, depending on the application scenario. Cloudlets were initially designed to assist mobile users directly connected to them in terms of storage and processing [23][24].

D. Comparison of Cloud vs. Cloudlet

Table 2: Comparison of Cloudlet vs. Cloud

Feature      Cloudlet                                    Cloud
State        Only soft state                             Hard and soft state
Management   Self-managed; little to no                  Professionally administered;
             professional attention                      24x7 operator
Environment  "Datacenter in a box" at                    Machine room with power
             business premises                           conditioning and cooling
Latency      Low LAN latency                             High Internet latency
Sharing      Few users at a time                         100-1000s of users at a time
Distance     Near to the mobile users                    Far from the mobile users
Ownership    Decentralized ownership by local business   Centralized ownership by Amazon, Yahoo!, etc.

4. STRUCTURE OF THE SIMULATOR

A. GreenCloud simulator

GreenCloud [26] is an extension of the network simulator NS2 [26], developed for the study of cloud computing environments. GreenCloud offers users a detailed, fine-grained model of the energy consumed by the elements of the data center, such as servers and switches.


Moreover, GreenCloud offers a thorough investigation of workload distributions. Specific attention is devoted to packet-level simulation of communications in the data center infrastructure, which provides the finest-grain control. The simulator produces results with fixed packet sizes and different traffic types such as CBR (Constant Bit Rate), TFRC (TCP-Friendly Rate Control) and cloud-user traffic. Figure 6 presents the structure of the GreenCloud extension mapped onto the three-tier data center architecture.

Figure 6: Architecture of the GreenCloud simulator [26]

B. CloudSim simulator

The experiments in this section were performed with the CloudSim [27] simulator, a framework for modeling and simulating cloud computing and cloudlet infrastructures and services.

Figure 7. The main parts and relations of CloudSim.

The CloudSim simulator has many advantages: it can simulate many cloud entities, such as datacenters, hosts and brokers, and it offers a repeatable and controllable environment. Moreover, we do not need to pay much attention to hardware details and can concentrate on algorithm design.

The simulated datacenter and its components can be built by coding, and the simulator is very convenient for algorithm design [27]. The user can easily set the number of computational nodes (hosts) and their resource configuration, which includes processing capacity, amount of RAM, available bandwidth, power consumption and scheduling algorithms. The main parts relevant to the experiments in this section, and the relationships between them, are shown in Figure 7, while the functions of these components are explained in Table 3.

Table 3: CloudSim components and their functions

Component                  Function
Cloud Information Service  An entity that registers, indexes and discovers resources.
Datacenter                 Models the core hardware infrastructure offered by Cloud providers.
Datacenter Broker          Models a broker responsible for mediating negotiations between SaaS and Cloud providers.
Host                       Models a physical server.
Vm                         Models a virtual machine that runs on a Cloud host to execute cloudlets.
Cloudlet                   Models the Cloud-based application services.
Vm Allocation              A provisioning policy, run at datacenter level, that allocates VMs to hosts.
Vm Scheduler               The policies for allocating processor cores to VMs; runs on every Host in a Datacenter.
Cloudlet Scheduler         Determines how to share the processing power among cloudlets on a virtual machine; runs on VMs.

We conduct different experiments to explore performance metrics related to the execution of cloudlets in a cloud computing infrastructure. We explore the overhead associated with cloudlet allocation to VMs and with cloudlet execution, and we analyze the impact of VM deployment on the execution of cloudlets in diverse scenarios.
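The entity relationships of Table 3 can be sketched as a minimal stand-in model. This is not the CloudSim API (which is Java); all class names, MIPS ratings and cloudlet lengths below are illustrative.

```python
# Minimal stand-in for the CloudSim entities used in the experiments:
# VMs provide processing capacity (MIPS), and cloudlets (tasks) are
# allocated to VMs, whose capacity they may share.

class Vm:
    def __init__(self, mips):
        self.mips = mips

def allocate(cloudlets, vms):
    # One-to-one round-robin allocation of cloudlets to VMs.
    return {c: vms[i % len(vms)] for i, c in enumerate(cloudlets)}

def execution_time(length_mi, vm, sharers=1):
    # A VM time-shared by N cloudlets gives each 1/N of its MIPS.
    return length_mi / (vm.mips / sharers)

vms = [Vm(mips=1000), Vm(mips=1000)]
plan = allocate(["c1", "c2", "c3", "c4"], vms)
# Two cloudlets share each VM, so a 10,000-MI task takes twice as long.
print(execution_time(10_000, vms[0], sharers=2))   # -> 20.0
```

This captures the basic effect the experiments measure: as more cloudlets share the same VMs, per-cloudlet execution time grows.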

5. PERFORMANCE EVALUATION

A. Experimental setup

We provide an example that simulates an energy-aware data center with DNS (dynamic network shutdown), in which servers left idle are put into sleep mode, and DVFS (dynamic voltage and frequency scaling) enabled or disabled, while the number of switches and cloud users is varied. Table 4 summarizes the simulation setup parameters.

Table 4: Simulation setup parameters

Data center architectures

Parameter / Topology     1           2           3
Core nodes (C1)          2           2           2
Aggregation nodes (C2)   4           4           4
Access switches (C3)     3           3           3
Servers (S)              30          30          30
Link (C1-C2)             10 GE       10 GE       10 GE
Link (C2-C3) / (C3-S)    1 GE        1 GE        1 GE
Architecture             Three-tier  Three-tier  Three-tier
DNS                      No          Yes         Yes
DVFS                     No          No          Yes

B. Results Using GreenCloud simulator

- Topology 1

As shown in Table 4, we fix the number of computing nodes at 30 while the number of cloud users, the network switches and the transport protocol are varied.

Figure 8: Variance of energy – Three-tier – 30 servers

Figure 9: Variance of energy – High-speed – 30 servers

Figure 8 presents the variance of energy consumption with the three-tier architecture. As shown, SCTP and TCP-Reno have almost the same energy consumption with 6, 8 and 9 cloud users; this could be due to the multi-streaming used by SCTP and the multi-association used by TCP-Reno, which yield the same performance for the same CPU (central processing unit) usage. However, with 5 and 10 cloud users, TCP-Reno consumes less total energy than SCTP, which could be due to Reno's ability to detect multiple packet losses and to enter fast recovery when it receives multiple duplicate acknowledgements. Conversely, with 7 cloud users SCTP consumes less energy than TCP-Reno, which could be due to SCTP's multihoming. We can also see that TCP consumes more energy than SCTP and TCP-Reno; we might explain this by the higher goodput of SCTP and the add-on mechanisms of TCP-Reno. SCTP may virtually always be faster than TCP because of its larger framing unit (MTU vs. MSS). We then increase the link rate from 1 GE to 10 GE between aggregation and access nodes, and from 10 GE to 100 GE between core and aggregation nodes. As Figure 9 shows, SCTP and TCP-Reno have almost the same energy consumption with 5, 6, 8, 9 and 10 cloud users; however, with 4 and 7 cloud users, SCTP consumes less total energy than TCP-Reno, and this difference may be due to the overhead incurred by SCTP in framing messages. TCP, being a byte-stream protocol, tries to buffer data in the kernel in order to send out Maximum Transmission Unit (MTU) sized packets. SCTP can also put more than one data chunk [22] in a packet, provided the chunks can all be accommodated in the packet.

- Topology 2

Figure 10: Variance of energy – High-speed – 1536 servers

Figure 11: Variance of energy – TCP-Reno – 1536 servers

We then increase the number of servers from 30 to 1536. As shown, varying the number of servers does not produce an important change in energy consumption for TCP and TCP-Reno, because this data center does not include aggregation switches: the core switches are connected to the access network directly using 1 GE links and interconnected using 10 GE links. TCP was never designed for use within a data center, and some of its weaknesses become particularly acute in such environments, as we shall discuss. The three-tier high-speed (3THS) architecture mainly improves on the 3T architecture by providing more bandwidth in the core and aggregation parts of the network. We change the architecture from three-tier to three-tier high-speed and keep TCP-Reno as the transport protocol. As shown, with the 3T architecture TCP-Reno consumes less energy than with the 3THS architecture, due to the increase in link rate from 1 GE to 10 GE between aggregation and access nodes and from 10 GE to 100 GE between core and aggregation nodes. In the 3T topology, where the fastest links are 10 G, the core and aggregation switches consume a few kilowatts, while in the 3THS topology faster switches are needed, which consume tens of kilowatts.

- Topology 3

As set out in Table 4, we compare the impact of DNS and DVFS on energy consumption. The servers left idle are put into sleep mode (DNS) [20], while on underloaded servers the supply voltage is reduced (DVFS) [20]. The time required to change the power state in either mode is 100 ms.
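The effect of the two schemes can be sketched with a rough server power model; the wattages and the cubic frequency/voltage scaling exponent are illustrative assumptions, not GreenCloud's calibrated values.

```python
# Rough model of the two power-management schemes compared here:
# DNS puts idle servers to sleep, while DVFS scales the dynamic power
# component with load (dynamic power grows roughly with the cube of
# the frequency/voltage).

P_PEAK, P_IDLE, P_SLEEP = 300.0, 200.0, 10.0    # watts per server

def server_power(load, dns=False, dvfs=False):
    """load in [0, 1]; returns the server's power draw in watts."""
    if load == 0 and dns:
        return P_SLEEP                           # dynamic shutdown
    if dvfs:
        # Idle floor plus a cubic dynamic component scaled by load.
        return P_IDLE + (P_PEAK - P_IDLE) * load ** 3
    return P_IDLE + (P_PEAK - P_IDLE) * load     # linear baseline

loads = [0.0] * 20 + [0.3] * 10                  # 20 idle, 10 lightly loaded servers
baseline = sum(server_power(l) for l in loads)
managed = sum(server_power(l, dns=True, dvfs=True) for l in loads)
print(round(baseline), round(managed))           # -> 6300 2227
```

Even this crude model reproduces the qualitative result of the experiments: most of the savings come from DNS on idle servers, with DVFS trimming the remainder on underloaded ones.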

Figure 12: Variance of energy – without DVFS and DNS

Figure 13: Variance of energy – with DNS

Figure 12 presents the variance of energy without the use of DVFS and DNS. We can observe that SCTP and TCP-Reno have almost the same energy consumption with 4, 6 and 8 cloud users; this could be due to the multi-streaming used by SCTP and the multi-association used by TCP-Reno, which yield the same performance for the same CPU utilization. However, with 5 and 10 cloud users, TCP-Reno consumes less total energy than SCTP, which could be due to its earlier detection of packet loss; conversely, with 7 cloud users SCTP consumes less energy than TCP-Reno, which could be due to SCTP's multihoming. In Figure 13, we enable DNS and, as shown, we obtain almost the same energy consumption with SCTP and TCP-Reno. This is due to the DNS technique shutting down two thirds of the servers: there is no longer much traffic to send, so SCTP's multihoming and multistreaming are not needed. We can also observe that TCP-Reno has lower energy consumption with 5 and 10 cloud users, which could be due to the Fast Retransmit and Fast Recovery mechanisms it provides. TCP again consumes more energy than SCTP and TCP-Reno; we might explain this by the higher goodput of SCTP and the add-on mechanisms of TCP-Reno. SCTP is basically a connection-oriented protocol and is well suited to traffic that mostly needs reliable delivery, with unreliable delivery for only a few packets. Congestion Avoidance, Fast Retransmit and Fast Recovery make TCP-Reno more effective than SCTP and TCP.

Figure 14: Link access switches to host – with DVFS

Figure 14 presents the link from access switches to hosts. It can clearly be seen that the TCP connection suffered severe losses during the congested period, whereas SCTP can use an alternate path to retransmit data over its streams and quickly recover the lost packets. SCTP reduces link access to 70% relative to TCP's links; this could be due to multihoming, as SCTP can keep transmitting data when a network interface breaks down, whereas TCP must disconnect. We can also observe that TCP-Reno presents better performance due to the constant update of the window size: TCP-Reno reduces link access to 80% relative to TCP's links. TCP-Reno continues to increase its window size by one during each round-trip time and halves it on loss; this is called additive increase, multiplicative decrease.

C. Results using Cloud shared services

[Figure 15 plots throughput (Mbit/s) versus file size (MB) for downloads with QUIC and with TCP.]

Figure 15: The Throughput results of downloading files from Cloud shared file services

Figure 15 presents the throughput results for downloading files from a browser with and without QUIC. As shown, we measure the highest throughput with the browser integrating the QUIC protocol. The average results are as follows: browser with QUIC, 5.918 Mbit/s; browser without QUIC, 1.655 Mbit/s; that is, the browser with QUIC achieves 357.59% of the throughput of the browser without QUIC.

[Figure 16 plots latency (s) versus file size (MB) for downloads with QUIC and with TCP.]

Figure 16: The latency results of downloading files from Cloud shared file services

Figure 16 presents the latency results for downloading files from a browser with and without QUIC. As shown, we measure the lowest latency with the browser integrating the QUIC protocol. The average latency results are as follows: browser with QUIC, 0.517998 ms; browser without QUIC, 1.225861 ms; that is, a ratio of 42.56% between the results.

- The throughput results of downloading files from the Cloudlet and other similar services

Figure 17 presents the download rates for the Cloudlet and other Cloud services. As shown, we measure the highest download rates with the local Cloudlet. For the rest of the services, the throughput results are between 0.458962174 Mbit/s and 0.643545455 Mbit/s. Each experiment is conducted seven times to derive a precise download rate for each service.

[Figure 17 plots throughput (Mbit/s) versus file size (MB) for the Cloudlet, ShareFile, MegaUpload, RapidShare, MediaFire and HotFile.]

Figure 17: The throughput results of downloading files from the Cloudlet and Cloud file sharing services

D. Results using the CloudSim simulator

Figure 18: Cloudlets execution time for different numbers of VMs

Page | 12 International Journal of Enhanced Research in Management & Computer Applications, IJERMCA: 2319-7463, Vol. 5 Issue 1, March-2016

Figure 18 presents how the execution time of applications (cloudlets) changes with the number of VMs. We conduct nine experiments with equally increasing numbers of cloudlets and VMs. A single VM is allocated to a single cloudlet, so there is no VM-scheduling overhead for cloudlets. The figure indicates the average execution time of the cloudlets. Analysis of the results indicates that the average execution time of each cloudlet increases as the number of cloudlets and VMs grows.


Figure 19. Cloudlets Average Execution Time for Shared VMs and Non-Shared VMs

The comparison of cloudlet execution time in the two scenarios shows that the average execution time of a cloudlet increases in either scenario. Further inspection of the increase shows that the average execution time is higher with shared VMs than with non-shared VMs, because extra overhead is associated with sharing VMs among multiple cloudlets.


Figure 20. Static deployment of VM performance for input submission of 100 cloudlets

Figure 20 presents the static deployment of VM performance for an input submission of 100 cloudlets. The average execution and simulation times decrease considerably; in particular, for more than 50 VMs the values change at a lower rate. In the case of 100 VMs, the total execution time drops drastically and almost becomes equal to the average execution time. This means that, for this particular configuration, the system reaches a stable state once the number of VMs exceeds 100.

Figure 21 shows the time-shared and space-shared cloudlet execution for different numbers of VMs. As shown, space-shared outperforms time-shared in terms of the average time taken by a cloudlet to complete execution. This is because in time-shared mode a cloudlet is repeatedly removed from the execution state to the waiting state and then brought back from the waiting queue to execution, which degrades overall execution time.
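The difference between the two scheduling modes can be sketched as a toy computation. The cloudlet lengths, VM capacity and the 25% context-switching overhead are all hypothetical values chosen for illustration, not CloudSim defaults.

```python
# Toy comparison of the two scheduling modes discussed above:
# space-shared runs cloudlets one after another on a VM, while
# time-shared runs them concurrently, each getting an equal slice of
# the VM's capacity plus a hypothetical 25% context-switch overhead.

def space_shared(lengths, mips):
    # Sequential execution: average of the completion times.
    t, finish = 0.0, []
    for mi in lengths:
        t += mi / mips
        finish.append(t)
    return sum(finish) / len(finish)

def time_shared(lengths, mips, overhead=0.25):
    # Equal slices: with equal lengths every cloudlet finishes together,
    # slowed by the per-cloudlet switching overhead.
    n = len(lengths)
    base = [mi / (mips / n) for mi in lengths]
    return sum(t * (1 + overhead) for t in base) / n

lengths = [1000.0] * 4                     # four equal cloudlets, in MI
print(space_shared(lengths, mips=100))     # -> 25.0
print(time_shared(lengths, mips=100))      # -> 50.0
```

With any positive switching overhead, the time-shared average exceeds the space-shared one, matching the trend shown in Figure 21.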

[Figure: overall time taken to execute (ms) vs. number of VMs (0–25); series: Time shared, Space shared]

Figure 21. Cloudlets time-shared and space-shared for different number of VMs

6. CONCLUSION

In this paper, we presented the performance evaluation of the well-known transport protocols QUIC, TCP, TCP-Reno and SCTP within cloud and cloudlet services. We presented the simulation environment for energy-aware cloud computing data centers. The simulation results obtained with DNS (dynamic shutdown) and DVFS enabled and disabled demonstrate the impact of these two mechanisms on energy consumption. We observed that TCP-Reno performs better than TCP: the congestion avoidance, fast retransmit and fast recovery mechanisms of TCP-Reno also make it outperform SCTP. In addition, the results demonstrate that SCTP is a suitable protocol for the three-tier architecture for both reliable and unreliable delivery. We also observe that aggregating control and data connections into an SCTP multistreamed association explains the decrease of energy consumption in the SCTP graphs; however, SCTP's unreliable service adds unnecessary overhead. Moreover, with TCP-Reno, lost packets are detected earlier and the pipeline is not emptied every time a packet is lost. By using a cloudlet, users seamlessly exploit nearby computers to obtain the resource benefits of cloud computing without incurring WAN latency and throughput penalties. We also illustrated the performance of the cloudlet through throughput and latency results obtained with the CloudSim simulator, and evaluated the use of QUIC within cloud file-sharing services such as MegaUpload and RapidShare. As future work, we plan to evaluate performance over mobile and 3G connections and to investigate the concept of the cloudlet as a middleware for device communication within cloud computing.
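The Reno behaviour summarised above, where early loss detection via duplicate ACKs keeps the pipeline partially full, can be sketched as a simplified window-update rule (units are segments; real fast recovery temporarily inflates the window by the number of duplicate ACKs, which this sketch omits):

```python
def reno_on_loss(cwnd, ssthresh, event):
    """Return (cwnd, ssthresh) after a loss event ('dupack3' or 'timeout')."""
    if event == "dupack3":          # fast retransmit + fast recovery
        ssthresh = max(cwnd // 2, 2)
        cwnd = ssthresh             # resume sending from half the window
    elif event == "timeout":        # retransmission timeout
        ssthresh = max(cwnd // 2, 2)
        cwnd = 1                    # pipeline is drained; slow start again
    return cwnd, ssthresh

print(reno_on_loss(32, 64, "dupack3"))  # window halved, pipe kept flowing
print(reno_on_loss(32, 64, "timeout"))  # window collapses to one segment
```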

References

[1] Emmanouil Koukoumidis, Dimitrios Lymberopoulos, Karin Strauss, "Pocket Cloudlets", March 5, (2011).
[2] Chunye Gong, Jie Liu, Qiang Zhang and Zhenghu Gong, "The Characteristics of Cloud Computing", (2010).
[3] "A Comparative Analysis of TCP Tahoe, Reno, New-Reno, SACK and Vegas", (2011).
[4] R. Stewart et al., RFC 2960: Stream Control Transmission Protocol, October (2000).


[5] Vicuña Nelson, Jiménez Tania and Hayel Yezekael, "Performance of SCTP in Wi-Fi and WiMAX networks with multi-homed mobiles", October (2008).
[6] T. Andersson, "An Evaluation: Multiple TCP connections vs Single SCTP Association with Multiple Streams", (2003).
[7] "QUIC Protocol Official Website." [Online]. Available: https://www.chromium.org/quic, April (2015).
[8] Hussein Bakri, Colin Allison, Alan Miller, Iain Oliver, "HTTP/2 and QUIC for Virtual Worlds and the 3D Web", The 10th International Conference on Future Networks and Communications (FNC 2015), (2015).
[9] J. Roskind, "QUIC - Multiplexed stream transport over UDP - Design Document." [Online]. Available: https://docs.google.com/document/d/1RNHkx_VvKWyWg6Lr8SZ-saqsQx7rFVev2jRFUoVD34/mobilebasic?pli=1, (2015).
[10] G. Carlucci, L. De Cicco, and S. Mascolo, "HTTP over UDP: an Experimental Investigation of QUIC", 30th ACM/SIGAPP Symp. Appl. Comput., (2015).
[11] M. Fischlin and F. Günther, "Multi-Stage Key Exchange and the Case of Google's QUIC Protocol", in Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, pp. 1193-1204, (2014).
[12] R. Lychev, S. Jero, A. Boldyreva, and C. Nita-Rotaru, "How secure and quick is QUIC? Provable security and performance analyses", IEEE Symp. Secur. Priv., (2015).
[13] G. Carlucci, L. De Cicco, and S. Mascolo, "HTTP over UDP: an Experimental Investigation of QUIC", (2015).
[14] L. Popa et al., "HTTP as the narrow waist of the future internet", in Proc. of ACM SIGCOMM Workshop on HotNets, pages 6:1-6:6, Monterey, California, (2010).
[15] S. Radhakrishnan et al., "TCP Fast Open", in Proc. of CoNEXT '11, pages 21:1-21:12, Tokyo, Japan, (2011).
[16] J. Roskind, "Multiplexed Stream Transport over UDP", (2013).
[17] M. Armbrust et al., "A view of cloud computing", Communications of the ACM, (2010).
[18] P. Mell and T. Grance, "The NIST definition of cloud computing", (2009).
[19] S. Bhardwaj, L. Jain, and S. Jain, "Cloud computing: A study of infrastructure as a service (IaaS)", International Journal of Engineering and Information Technology, (2010).
[20] G. Kulkarni, J. Gambhir, and R. Palwe, "Cloud Computing - Software as Service", International Journal of Cloud Computing and Services Science (IJ-CLOSER), (2012).
[21] I. Giurgiu, O. Riva, D. Juric, I. Krivulev, and G. Alonso, "Calling the cloud: enabling mobile phones as interfaces to cloud applications", in Proceedings of the ACM/IFIP/USENIX 10th International Conference on Middleware, Middleware '09, pages 83-102, (2009).
[22] M. Satyanarayanan, "Mobile computing: the next decade", in Proceedings of the 1st ACM Workshop on Mobile Cloud Computing & Services, page 5:6, (2010).
[23] M. Satyanarayanan, P. Bahl, R. Caceres, and N. Davies, "The case for VM-based cloudlets in mobile computing", Pervasive Computing, IEEE, 8(4):14-23, (2009).
[24] CloudTrax (a cloud-based network controller) network planning guide, CloudTrax, (2012).
[25] M. S. Corson, R. Laroia, J. Li, V. Park, T. Richardson, and G. Tsirtsis, "Toward proximity-aware internetworking", Wireless Commun., 17(6):26-33, December (2010).
[26] D. Kliazovich, P. Bouvry, and S. U. Khan, "GreenCloud: a packet-level simulator of energy-aware cloud computing data centers", (2010).
[27] Y. Shi, X. Jiang, and K. Ye, "An Energy-Efficient Scheme for Cloud Resource Provisioning Based on CloudSim", Cluster Computing (CLUSTER), 2011 IEEE International Conference, Sept. (2011).

