
Latency-Driven Cooperative Task Computing in Multi-User Fog-Radio Access Networks

Ai-Chun Pang 1,2,3, Wei-Ho Chung 2, Te-Chuan Chiu 1, and Junshan Zhang 4
1 Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
2 Research Center for Information Technology Innovation, Academia Sinica, Taipei, Taiwan
3 Graduate Institute of Networking and Multimedia, National Taiwan University, Taipei, Taiwan
4 School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, AZ 85287, USA
E-mail: [email protected], [email protected], [email protected], [email protected]

Abstract—Fog computing is emerging as a promising solution to meet the increasing demand for ultra-low latency services in wireless networks. Taking a forward-looking perspective, we propose a Fog-Radio Access Network (F-RAN) model, which utilizes the existing infrastructure, e.g., small cells and macro base stations, to achieve ultra-low latency by joint computing across multiple F-RAN nodes and near-range communications at the edge. We treat the low-latency design as an optimization problem, which characterizes the tradeoff between communication and computing across multiple F-RAN nodes. Since this problem is NP-hard, we propose a latency-driven cooperative task computing algorithm with a one-for-all concept for the simultaneous selection of the F-RAN nodes to serve, with proper heterogeneous resource allocation for multi-user services. Considering the limited heterogeneous resources shared among all users, we advocate the one-for-all strategy, in which every user takes the others' situations into consideration and seeks a "win-win" solution. The numerical results show that low-latency services can be achieved by F-RAN via latency-driven cooperative task computing.

Index Terms—Fifth-generation (5G) cellular networks, Fog computing, Ultra-low latency.

I. INTRODUCTION

Recently, the 5G wireless technology for the next-generation cellular system has garnered much attention; it aims at fulfilling the requirements of massive machine-type communications, enhanced mobile broadband, and ultra-reliable and low-latency communications. Specifically, many applications (such as augmented/virtual reality and vehicle automation) are demanding in terms of high bandwidth and low latency. These applications need intensive computations to accomplish object tracking, content analytics, and intelligent decisions for better accuracy, performance, and user experience. Cloud computing can utilize abundant computing resources for handling complex tasks, but one significant challenge therein is to achieve ultra-low latency, due to the possibly large network delay incurred when time-sensitive data traffic traverses the Internet backbone [1].

To tackle these challenges, a new paradigm, called fog computing, is emerging. It is an architecture that extends cloud computing to the edge of the network so that ultra-low latency can be achieved at the edge [2]. Indeed, there have been recent efforts on fog computing by academic/industry projects and standardization activities, e.g., Cloudlet [3], mobile edge computing [4], the FP7 European Project TROPIC [5], and the OpenFog Consortium [6]. In our recent work [7], we have proposed the Fog-Radio Access Network (F-RAN), which leverages the resources of the current infrastructure in the radio access network (RAN), such as base stations and small cells, to promptly respond to low-latency requests from mobile devices. In the F-RAN, the F-RAN nodes handle wireless connectivity as well as application service provisioning, which creates a potential new business model for telecommunication operators to cooperate with application/service providers.

There are a number of critical challenges in the Fog system [8]–[13]. Ottenwalder et al. propose a placement and migration scheme that guarantees end-to-end latency and reduces the network overhead by making migration decisions early in a coexisting cloud-and-fog environment [9]. Sardellitti et al. design a computation offloading algorithm that minimizes the overall users' energy consumption by shifting the workloads to the remote, powerful cloud server [10]. Deng et al. consider the cooperation between the cloud and the fog, and tackle the workload allocation problem to minimize the power consumption of the cloud server [11]. Intharawijitr et al. propose to minimize the blocking probability among all requested workloads in the entire system by analyzing feasible selection policies for assisting fog nodes [12]. Nishio et al. also propose a framework for heterogeneous resource sharing that takes all heterogeneous resources, such as CPUs, communication bandwidth, and storage, into consideration from the perspective of "time" [13]. However, the above studies consider the hybrid cloud-and-fog scenario only.

In our prior work [14], F-RAN is proposed to achieve ultra-low latency by joint computing across multiple F-RAN nodes and near-range communications among F-RAN nodes. With high-bandwidth wireless access such as millimeter wave, the large amount of application data delivered from one F-RAN node to another does not need to traverse backhaul links, leading to a significant reduction in network latency. By distributing computing-intensive tasks to multiple F-RAN nodes, the computing latency can be substantially reduced, which nevertheless comes at the cost of communication delay. Intuitively, the more F-RAN nodes are selected for the computing task, the smaller the computing latency, but the larger the communication delay. The joint consideration of distributed computing and wireless networking naturally gives rise to a computing-communication tradeoff. It is worth noting that [14] considers the single-user scenario only and presents some preliminary results on F-RAN cooperative computing.
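This tradeoff can be made concrete with a toy model (our own illustrative simplification and numbers, not the formulation or parameters of [14]): splitting a task of W CPU cycles over n cooperative nodes shrinks the computing time, but if each node must receive the full D-bit input over an equal share B/n of the master node's bandwidth, the communication time grows linearly in n, so an intermediate n minimizes the total latency.

```python
# Toy illustration of the computing-communication tradeoff (NOT the
# paper's model): n cooperative nodes split a task of W CPU cycles,
# but each node must receive the full D-bit input over an equal share
# B/n of the master node's bandwidth. All numbers are hypothetical.
W = 5e9      # task size: 5 Gcycles
f = 2e9      # per-node CPU speed: 2 Gcycles/s
D = 8e6      # input data size: 8 Mbit
B = 1e9      # master node's total wireless bandwidth: 1 Gbit/s

def service_latency(n):
    compute = W / (n * f)        # shrinks as more nodes cooperate
    communicate = D / (B / n)    # grows: each node only gets rate B/n
    return communicate + compute

best = min(range(1, 21), key=service_latency)
for n in (1, best, 20):
    print(f"n={n:2d}  latency={service_latency(n):.3f} s")
```

With these numbers a single node is compute-bound (about 2.5 s), while too many nodes become communication-bound; the sweet spot lies in between, which is exactly the balance the cooperative task computing problem must strike.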
In this paper, we turn our attention to a multi-user F-RAN, where the computing and communication resources are inherently heterogeneous, making it challenging to quantify the tradeoff therein. To achieve ultra-low latency in such a scenario, we consider a framework in which multiple F-RAN nodes jointly execute distributed computing after receiving their assigned computing tasks from one coordinator, called the master F-RAN node, which communicates with each F-RAN node wirelessly. This architecture targets a teamwork scenario: there is a joint computing task, and every cooperative F-RAN node is responsible for a subtask. The master F-RAN node should therefore intelligently decide which F-RAN nodes to select, considering the limited computing power and communication resources of each F-RAN node. Specifically, more cooperative F-RAN nodes provide higher computing power and hence reduce the total computing latency. However, each cooperative F-RAN node then obtains fewer radio resources from the master F-RAN node, and as a result the total communication latency increases. Therefore, one main issue of cooperative task computing is how to strike a good balance between computing power and communication resources, which together determine the total service latency.

Moreover, our target F-RAN scenario aims to serve multiple users simultaneously, which requires heterogeneous resource allocation among all users. The latency-driven cooperative task computing problem is first cast as an optimization problem, and an algorithm based on dynamic programming, namely CTC-DP, is proposed for cooperative task computing in the special case with a single user. Next, we design a heuristic algorithm, CTC-All, which combines the CTC-DP approach with the "one-for-all" concept to provide an approximate solution for both heterogeneous resource allocation and cooperative task computing in the general case with multiple users. In the multi-user CTC-All algorithm, the communication and computing resources for each user are pre-allocated by heterogeneous resource allocation, and then the single-user CTC-DP with the "one-for-all" concept is applied to solve cooperative task computing among all users based on the assigned heterogeneous resources. Since the total service latency is decided by the bottleneck, i.e., the last user to finish his/her cooperative task computing, every user should be considerate of the others and seek a "win-win" solution, as the "one-for-all" strategy suggests. We conduct a series of experiments, based on practical parameter

The remainder of this paper is organized as follows. Section II presents the system model and the formal formulation of the optimization problem. In Section III, we show that the problem is NP-hard and propose efficient algorithms for the special and general cases, with an analysis of their time complexity. Simulation results and useful insights are discussed in Section IV. Section V concludes this work.

Fig. 1. Scenario of ultra-low latency service with F-RAN.

II. SYSTEM MODEL AND PROBLEM FORMULATION FOR LATENCY MINIMIZATION

A. System Model

We consider a scenario with densely deployed F-RAN nodes serving ultra-low latency, computing-intensive services, e.g., Augmented Reality (AR). Since a single F-RAN node has only limited computing power and often requires a long time to complete extensive computing tasks, one potential solution is to execute the tasks via distributed computing over multiple F-RAN nodes. With this motivation, we propose to utilize multiple F-RAN nodes to accelerate joint data processing and transmission for ultra-low latency.

In the scenario with multiple F-RAN nodes shown in Fig. 1, the target users first send their data to the closest F-RAN node, also known as the master F-RAN node, which coordinates with the other F-RAN nodes. The master F-RAN node decides which F-RAN nodes to select for service provisioning and assigns the individual processing data/computing tasks. Upon task completion on all F-RAN nodes, the master F-RAN node collects, unifies, and sends back the outcomes to the target users. Finally, the target users execute the applications on their end devices within ultra-low latency. Compared with the input data size for each F-RAN node, the output data size is small, and its transmit time can therefore be omitted.
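The resulting objective structure can be sketched as follows (an illustrative simplification with hypothetical numbers, not the paper's CTC-DP/CTC-All formulation): since a user's subtasks run in parallel on its cooperative F-RAN nodes, the user's service latency is set by its slowest node's transmit-plus-compute time, and the system-level latency that a "one-for-all" strategy should minimize is that of the last-finishing user.

```python
# Illustrative sketch (NOT the paper's algorithms): per-user latency
# is the max over that user's cooperative nodes of transmit time plus
# computing time, and the system objective is the bottleneck user.

def user_latency(subtasks):
    # subtasks: list of (bits, rate_bps, cycles, cpu_hz), one per node
    return max(bits / rate + cycles / cpu
               for bits, rate, cycles, cpu in subtasks)

def system_latency(users):
    # the last user to finish defines the total service latency
    return max(user_latency(s) for s in users)

# Two hypothetical users with different node assignments.
u1 = [(1e6, 1e8, 2e9, 2e9),   # 0.01 s transmit + 1.0 s compute
      (1e6, 5e7, 1e9, 1e9)]   # 0.02 s transmit + 1.0 s compute
u2 = [(2e6, 1e8, 1e9, 2e9)]   # 0.02 s transmit + 0.5 s compute
print(system_latency([u1, u2]))
```

Because only the bottleneck user matters, giving one user more resources helps only if that user is currently the slowest; this is why each user must take the others' situations into account rather than greedily minimizing its own latency.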