Scheduling Latency-Sensitive Applications in Edge Computing

Vincenzo Scoca¹, Atakan Aral², Ivona Brandic², Rocco De Nicola¹ and Rafael Brundo Uriarte¹

¹IMT School for Advanced Studies Lucca, Italy
²Vienna University of Technology, Austria
[email protected], [email protected]

Keywords: Edge Computing, Scheduling, Latency-Sensitive Services, Live Video Streaming, Resource Selection

Abstract: Edge computing is an emerging technology that aims to include latency-sensitive and data-intensive applications, such as mobile or IoT services, into the cloud ecosystem by placing computational resources at the edge of the network. Close proximity to producers and consumers of data brings significant benefits in latency and bandwidth. However, edge resources are, by definition, limited in comparison to cloud counterparts; thus, a trade-off exists between deploying a service closest to its users and avoiding resource overload. We propose a score-based edge service scheduling algorithm that evaluates both network and computational capabilities of edge nodes and outputs the maximum-scoring mapping between services and resources. Our extensive simulation, based on a live video streaming service, demonstrates significant improvements in both network delay and service time. Additionally, we compare edge computing technology with state-of-the-art cloud computing and content delivery network solutions within the context of latency-sensitive and data-intensive applications. Our results show that edge computing enhanced with the suggested scheduling algorithm is a viable solution for achieving high quality of service and responsiveness in deploying such applications.
1 INTRODUCTION

The enhancement of smart devices and network technologies has enabled new internet-based services with strict end-to-end latency requirements (Bilal and Erbad, 2017), for example, 360-degree video and online gaming. Edge Computing is a promising solution for the provision of latency-sensitive applications since it offers a set of enabling technologies that move both computation and storage capabilities (i.e., micro cloud data centers) to the edge of the network (Shi et al., 2016). This change allows deployment of services close to end users, which reduces their response time.

To effectively exploit the advantages of the Edge infrastructure, service instances should be scheduled on the node which best fits service requirements, taking into account contextual knowledge, i.e., user, application and network information (Wang et al., 2017). Indeed, defining the optimal placement not only allows maximizing user experience by reducing end-to-end latency, but it also optimizes network traffic by reducing bandwidth consumption and related costs, and improves energy efficiency by reducing the need to offload tasks to the cloud, which incurs higher energy consumption compared to micro data centers.

Currently, the few existing scheduling solutions for Edge Computing are related to the Internet-of-Things, mostly focusing on the response time between service components rather than the optimization of user-related metrics (end-to-end service response time) (Skarlat et al., 2017). Scheduling approaches devised for similar paradigms, in particular cloud and content delivery networks (CDNs), are characterized by infrastructure limitations which make them unfit to meet the requirements of this type of latency-sensitive services. Specifically, cloud solutions are characterised by large distances between cloud data centers and final users, incurring high network delay that considerably degrades the service quality experienced by end users. Moreover, due to the high amount of traffic generated by this kind of services, there is a considerable risk of network congestion as well as bandwidth waste (Wang et al., 2017). In the case of CDNs, any computational task must be carried out in the cloud, and a distributed network of cache servers, located in close proximity to users, is only used for disseminating output data (e.g., web objects, rich media, etc.). However, the servers are still several hops away from final users, and since they must process requests on cloud servers, they do not considerably reduce the network delay when the offloaded task is computationally heavy (Bilal and Erbad, 2017).

In this paper, we propose a novel score-based edge scheduling framework specifically designed for latency-sensitive services in Edge. The proposed technique is latency, bandwidth, and resource aware. It schedules each service instance on the VM type whose computing and network capabilities can optimize the service response time experienced by end users. First, the eligibility of the VM types available on edge nodes to host a given service is evaluated based on VM network and resource capabilities; then, services are scheduled on the most suitable VMs according to these eligibility scores, guaranteeing optimal service quality.

We validate our approach based on a live video streaming service and evaluate how different deployment solutions, namely Cloud, CDN and Edge, affect user response time in a latency-sensitive and data-intensive scenario. The obtained results show that edge-based solutions effectively improve user response time, highlighting the need for a suitable scheduling algorithm to obtain optimized results. Moreover, we compare the performance of our approach with (Duong et al., 2012), a well-known latency-sensitive scheduling algorithm for clouds, which aims to minimize the overall delay experienced by end users by reducing the users' request waiting time, without explicitly considering network latency. The results show that edge scheduling algorithms must also take network latency into account to minimize the overall service delay experienced by end users.

Overall, the main contributions of this paper are: (i) a novel Edge scheduling framework for latency-sensitive services; (ii) the performance evaluation of different deployment solutions, namely Cloud, CDN and Edge, for latency-sensitive services, which shows the advantages of Edge computing in this context; and (iii) the evaluation of the proposed framework in comparison to a solution for latency-sensitive services in clouds, which shows the benefits of using an algorithm specifically devised for this architecture.

The rest of this paper is organised as follows. In the next section, we present the related works; in Sec. 3 we provide a motivating example discussing the benefits of an edge-based deployment approach for latency-sensitive applications such as a live video streaming service; in Sec. 4 we present the scheduling approach we devised; in Sec. 5 we describe the experimental setup used for the evaluation of our approach, together with the results we obtained, whereas conclusions and future works are reported in Sec. 6.

2 RELATED WORKS

The effectiveness of edge technologies has already been proved in many use cases, as reported by Wang et al. in (Wang et al., 2017); however, there are still open research challenges to be tackled, such as service scheduling on edge nodes. Recent works such as (Zhao et al., 2015; Guo et al., 2016; Mao et al., 2016) face the edge computation offloading problem, that is, the decision of scheduling a task on a mobile device or on a local/internet cloud. Nevertheless, these works do not take into account the actual service scheduling between edge nodes.

In the area of Fog Computing, in (Skarlat et al., 2017), the authors defined a scheduling approach for the placement of service modules on fog nodes, focusing only on the optimization of the response time among components, without taking into account the end-to-end service time. Aazam et al. in (Aazam and Huh, 2015), instead, defined a resource estimation and pricing model for IoT that estimates the amount of resources to be allocated for a given service, again without providing a solution for the selection of a node where to allocate the service.

Scheduling of latency-sensitive services, instead, has been widely studied in the Cloud. For example, VM placement solutions for distributed clouds have been developed in (Papagianni et al., 2013; Aral and Ovatman, 2016). These approaches seek the optimal VM placement on physical nodes which minimizes the network latency among them. However, these mapping methodologies rely only on service requirements and physical infrastructure knowledge, without considering user-related information, such as geo-localization, which is a critical feature for the scheduling process in edge. Hence, they are not directly adaptable to our problem. In the context of single cloud providers, the authors in (Piao and Yan, 2010; Zeng et al., 2014) developed service component placement methodologies that, similarly to the distributed scenario described before, optimize the service placement among servers in a single data center, minimizing the transfer time between service components. Therefore, these approaches also do not take into account user information, which would make them suitable for service scheduling in edge.

The authors in (Duong et al., 2012), however, integrated user information in the service allocation methodology they proposed. Indeed, they defined a provisioning algorithm based on queuing theory to identify the number of VMs to be deployed in order to minimize the user waiting time. However, this approach, intended for IaaS clouds, only defines the number of VMs needed to cope with the incoming load and is still missing VM placement policies, which leads to sub-optimal results.
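The score-based mapping idea outlined in the introduction can be sketched as a two-phase procedure: score every (service, VM type) pair on network and resource fit, then assign each service to the highest-scoring VM with remaining capacity. The sketch below is a minimal, hypothetical illustration; the `VMType`/`Service` fields, the weights, and the scoring formula are our own assumptions for exposition, not the actual model defined in Sec. 4.

```python
# Illustrative sketch of score-based service-to-VM scheduling.
# All names, fields, and weights are assumptions, not the paper's model.
from dataclasses import dataclass

@dataclass
class VMType:
    name: str
    latency_ms: float      # estimated network delay towards the service's users
    bandwidth_mbps: float  # bandwidth available on the hosting edge node
    cpu_cores: int         # computational capacity
    capacity: int = 1      # service instances this VM type can still host

@dataclass
class Service:
    name: str
    min_bandwidth_mbps: float
    min_cores: int

def eligibility(svc: Service, vm: VMType,
                w_lat: float = 0.6, w_bw: float = 0.25, w_cpu: float = 0.15) -> float:
    """Score a (service, VM) pair; 0.0 means the VM cannot host the service."""
    if vm.bandwidth_mbps < svc.min_bandwidth_mbps or vm.cpu_cores < svc.min_cores:
        return 0.0
    # Lower latency and larger resource headroom yield a higher score.
    return (w_lat / (1.0 + vm.latency_ms / 10.0)
            + w_bw * vm.bandwidth_mbps / (vm.bandwidth_mbps + svc.min_bandwidth_mbps)
            + w_cpu * vm.cpu_cores / (vm.cpu_cores + svc.min_cores))

def schedule(services: list, vms: list) -> dict:
    """Greedily map each service to the eligible VM type with the highest score."""
    placement = {}
    for svc in services:
        best = max((vm for vm in vms if vm.capacity > 0),
                   key=lambda vm: eligibility(svc, vm), default=None)
        if best is not None and eligibility(svc, best) > 0.0:
            best.capacity -= 1
            placement[svc.name] = best.name
    return placement

edge = VMType("edge-small", latency_ms=5, bandwidth_mbps=100, cpu_cores=2)
cloud = VMType("cloud-large", latency_ms=80, bandwidth_mbps=1000, cpu_cores=16)
stream = Service("live-stream", min_bandwidth_mbps=50, min_cores=2)
print(schedule([stream], [edge, cloud]))  # → {'live-stream': 'edge-small'}
```

With the latency term weighted most heavily, the nearby edge VM outscores the better-provisioned but distant cloud VM, which is the trade-off the paper's scheduler is designed to navigate.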
