Human-centric Composite Quality Modeling and Assessment for Virtual Desktop Clouds

Yingxiao Xu, Prasad Calyam, David Welling, Saravanan Mohan, Alex Berryman, Rajiv Ramnath
University of Missouri, Ohio Supercomputer Center/OARnet, The Ohio State University, USA; Fudan University, China
[email protected]; [email protected]; fdwelling, smohan, [email protected]; [email protected]

Abstract—There are several motivations (e.g., mobility, cost, security) that are fostering a trend to transition users' traditional desktops to thin-client based virtual desktop clouds (VDCs). Such a trend has led to a rising importance for human-centric performance modeling and assessment within user communities in industry and academia that are increasingly adopting desktop virtualization. In this paper, we present a novel reference architecture and its easily-deployable implementation for modeling and assessment of objective user QoE (Quality of Experience) within VDCs, without the need for expensive and time-consuming subjective testing. The architecture novelty is in our approach to integrate finite state machine representations for user workload generation, and slow-motion benchmarking with deep packet inspection of application task performance affected by network health, i.e., QoS (Quality of Service) variations, to derive a "composite quality" metric model of user QoE. We show how this metric is customizable to a particular user group profile with different application sets, and can be used to: (i) identify dominant performance indicators for troubleshooting bottlenecks, and (ii) effectively obtain both 'absolute' and 'relative' objective user QoE measurements needed for pertinent selection/adaptation of thin-client encoding configurations within VDCs. We validate the effectiveness of our composite quality modeling and assessment methodology using subjective and objective user QoE measurements in a real-world VDC featuring RDP/PCoIP thin-client protocols, and actual users for a virtual classroom lab use case within a federated university system.

Fig. 1. Virtual desktop cloud system

I. INTRODUCTION

There are several motivations (e.g., mobility, cost, security) that are fostering a trend to transition users' traditional desktops to thin-client based virtual desktop clouds (VDCs) [1] [2]. With the increase in mobile devices with significant computation power and connections to high-speed wired/wireless networks, thin-client technologies for virtual desktop (VD) access are being integrated into mobile devices. In addition, users are increasingly consuming data-intensive content that involves big data (e.g., scientific data analysis) and multimedia streaming (e.g., IPTV) applications, and thin-clients are needed for these applications that require sophisticated server-side computation platforms (e.g., GPUs). Further, using a thin-client may be more cost effective than owning a full-blown PC due to lower device maintenance, and savings through central management of desktop support in terms of operating system, application and security upgrades at the server-side.

Fig. 1 shows the various system components in a VDC. At the server-side, a hypervisor framework (e.g., ESXi, Xen) is used to create pools of virtual machines that host user VDs with popular applications (e.g., Excel, Internet Explorer, Media Player) as well as advanced applications (e.g., Matlab, Moldflow). Users of a common desktop pool use the same set of applications, but maintain their distinctive and personal datasets. The VDs on the server-side share common physical hardware and attached storage drives. At the client-side, users connect to a server-side unified resource broker via the Internet using various TCP (e.g., RDP) and UDP (e.g., PCoIP) based thin-client devices. The unified resource broker handles all connection requests through authentication of users by Active Directory (or other directory service) lookups, and allows authorized users to access their entitled VDs with appropriate resource allocation amongst distributed data centers.
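The broker workflow just described (authenticate via a directory lookup, resolve the user's entitled pool, allocate a VD across data centers) can be sketched as follows. This is a minimal illustration only; the class, method names, and entitlement rule (`ResourceBroker`, `connect`, `directory_pool`) are our own assumptions, not the API of any actual VDC product.

```python
# Illustrative sketch of the unified resource broker workflow described
# above. All names and data structures are hypothetical.

class ResourceBroker:
    def __init__(self, directory, pools, data_centers):
        self.directory = directory        # stand-in for an Active Directory lookup
        self.pools = pools                # pool name -> set of applications
        self.data_centers = data_centers  # data center name -> current VD count

    def connect(self, user, credentials):
        # Step 1: authenticate the user via a directory service lookup
        if self.directory.get(user) != credentials:
            raise PermissionError("authentication failed for %s" % user)
        # Step 2: resolve the desktop pool the user is entitled to
        pool = self.directory_pool(user)
        # Step 3: allocate a VD in the least loaded data center
        dc = min(self.data_centers, key=self.data_centers.get)
        self.data_centers[dc] += 1
        return {"user": user, "pool": pool, "data_center": dc}

    def directory_pool(self, user):
        # Hypothetical entitlement rule: every user maps to one shared pool
        return "classroom-lab"
```

The "least loaded data center" policy here is only one possible allocation rule; a production broker would also weigh locality and path quality to the client.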
This material is based upon work supported by VMware and the National Science Foundation under award numbers CNS-1050225 and CNS-1205658. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the views of VMware or the National Science Foundation.

To allocate and manage VDC resources for large-scale user workloads of desktop delivery with satisfactory user Quality of Experience (QoE), VDC service providers (CSPs) need to suitably provision and adapt the cloud platform CPU, memory and network resources to deliver satisfactory user-perceived 'interactive response times' (a.k.a. timeliness) as well as 'streaming multimedia quality' (a.k.a. coding efficiency). They need to ensure satisfactory user QoE when user workloads are bursty during "flash crowds" or "boot storms", and also when users access VDs from remote sites with varying end-to-end network path performance. For this, they need tools for VDC capacity planning to avoid system and network resource overprovisioning. For instance, they need frameworks and tools to benchmark VD application resource requirements so that they can provision adequate resources to meet user QoE expectations (e.g., < 500 ms for MS Office application open time). If excess system and network resources are provisioned beyond what is adequate, users will not perceive the benefit (e.g., a user will not perceive the difference if application open time is 250 ms or 500 ms). Overprovisioning can become expensive even at the scale of tens of users, given that each VD requires substantial resources (e.g., 1 GHz CPU, 2 GB RAM, 2 Mbps end-to-end network bandwidth). Hence, CSPs need frameworks and tools to avoid overprovisioning, and ultimately derive inherent benefits such as reduced data center costs and energy savings.
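The capacity-planning arithmetic above can be made concrete with the per-VD figures quoted in the text (1 GHz CPU, 2 GB RAM, 2 Mbps per desktop). The helper names and the example user counts are illustrative; only the per-VD requirements come from the paper.

```python
# Back-of-the-envelope VDC capacity planning using the per-VD figures
# quoted above. Helper names and example inputs are illustrative.

PER_VD = {"cpu_ghz": 1.0, "ram_gb": 2.0, "net_mbps": 2.0}

def aggregate_demand(num_users, per_vd=PER_VD):
    """Total resources needed to host num_users concurrent desktops."""
    return {k: v * num_users for k, v in per_vd.items()}

def overprovision_waste(provisioned, num_users, per_vd=PER_VD):
    """Resources provisioned beyond adequate demand; per the argument
    above, this surplus yields no user-perceivable QoE benefit."""
    need = aggregate_demand(num_users, per_vd)
    return {k: max(0.0, provisioned[k] - need[k]) for k in need}
```

For 50 users this yields a demand of 50 GHz of CPU, 100 GB of RAM and 100 Mbps of aggregate bandwidth, which illustrates why overprovisioning becomes expensive even at the scale of tens of users.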
CSPs also need frameworks and tools to monitor resource allocations in an on-going manner so as to detect and troubleshoot user QoE bottlenecks. Given that CSPs can, to a large extent, control the CPU and memory resource allocations and right-provision them on the server-side, they most critically need frameworks and tools to detect and troubleshoot network health related issues on the paths over the Internet between the thin-client and the server-side VD. With pertinent network measurement data analysis enabled through such frameworks and tools, they can perform resource adaptations that involve selecting appropriate thin-client protocol configurations that are resilient to network health degradations, and deliver optimum user QoE.
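The adaptation step described here can be sketched as a simple decision over measured path health. The thresholds and configuration labels below are invented purely for illustration; the paper drives such decisions from its composite quality metric rather than from fixed cutoffs.

```python
# Sketch of network-health-driven selection of a thin-client encoding
# configuration. Thresholds and configuration names are hypothetical.

def select_configuration(latency_ms, loss_pct):
    """Map measured path QoS to a resilient encoding configuration."""
    if latency_ms < 50 and loss_pct < 0.1:
        return "high-quality encoding"   # e.g., near-lossless, full frame rate
    if latency_ms < 200 and loss_pct < 1.0:
        return "balanced encoding"       # e.g., lossy codec, reduced frame rate
    return "conservative encoding"       # e.g., aggressive compression, caching
```

In practice such a policy would also choose between transport behaviors (TCP-based RDP versus UDP-based PCoIP), since, as noted below, the two degrade differently under loss and latency.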
It is important to note that the on-going monitoring should be done without expensive and time-consuming subjective testing involving actual users of the VDC. From the thin-client perspective, remote display protocols are sensitive to network health, i.e., QoS (Quality of Service) variations, and consume as much end-to-end network bandwidth as is available. They employ different underlying TCP or UDP based protocols that exhibit varying levels of user QoE robustness under degraded network conditions [3]. Also, thin-client protocol configuration (and the resulting VD application performance) is highly dependent on application content.

As part of the offline benchmarking methodology, we leverage finite state machine representations for user workload task state characterization coupled with slow-motion benchmarking [4] for deep packet inspection of VD application task performance affected by QoS variations. To define VD application task states and identify them within network traces during deep packet inspection, we use a concept of 'marker packets' that are instrumented within the traffic between the thin-client and server-side VD ends. We describe how our framework implementation in the form of a "VDBench benchmarking engine" is easily-deployable within existing VD hypervisor environments (e.g., ESXi, Hyper-V, Xen) and can be used to instrument a wide variety of existing Windows and Linux platform based thin-clients (e.g., embedded Windows 7, Windows/Linux VNC, Linux Thinstation, Linux Rdesktop) [8] [9] for monitoring VD user QoE through joint performance analysis of system, network and application context.
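The two ideas above can be sketched together: (a) a finite state machine over user workload task states, and (b) marker packets that bracket each task so its duration can be recovered from a captured trace. The particular states, marker payload format, and trace representation below are our own illustrative assumptions, not the paper's actual encoding.

```python
# Sketch of FSM-based workload task states and marker-packet-based task
# identification in a trace. States and formats are hypothetical.

WORKLOAD_FSM = {
    "idle":       ["app_open"],
    "app_open":   ["typing", "video_play"],
    "typing":     ["app_close"],
    "video_play": ["app_close"],
    "app_close":  ["idle"],
}

def valid_transition(state, next_state):
    """Check a workload transition against the task-state machine."""
    return next_state in WORKLOAD_FSM.get(state, [])

def task_durations(trace):
    """trace: list of (timestamp_ms, payload) tuples. A payload such as
    'MARKER:app_open:start' brackets a task; the matching ':end' marker
    closes it, giving the task's duration from the trace alone."""
    starts, durations = {}, {}
    for ts, payload in trace:
        if payload.startswith("MARKER:"):
            _, task, edge = payload.split(":")
            if edge == "start":
                starts[task] = ts
            elif edge == "end" and task in starts:
                durations[task] = ts - starts.pop(task)
    return durations
```

Because the markers travel inside the thin-client traffic itself, the same trace can be analyzed under slowed-down (slow-motion) replay to isolate per-task performance under controlled QoS conditions.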
By using our novel offline benchmarking methodology within a closed-network testbed, we derive a novel "composite quality" metric model of user QoE and show how it is customizable to particular user group profiles with different application sets, and can be used during online monitoring to: (i) identify dominant performance indicators for troubleshooting bottlenecks amongst the numerous factors affecting user QoE in VD application tasks, and (ii) effectively obtain both absolute and relative objective user QoE measurements needed for pertinent selection/adaptation of thin-client encoding configurations. The absolute objective user QoE measurements allow comparison of one thin-client protocol's performance (e.g., RDP) to that of another thin-client protocol (e.g., PCoIP) under a particular QoS condition characterized by latency and packet loss in the path between the thin-client and the server-side. In contrast, the relative objective user QoE measurements allow comparison of a thin-client protocol's performance for degraded QoS conditions with reference to ideal QoS conditions (e.g., low latency and zero loss).

We validate the effectiveness of our composite quality modeling and assessment methodology using subjective and objective user QoE measurements in a real-world VDC featuring RDP/PCoIP thin-client protocols, and actual users for a virtual classroom lab use case within a federated university system.
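The absolute/relative distinction above can be illustrated numerically. Relative QoE compares a protocol's task performance under degraded QoS against its own performance under ideal QoS, and a weighted combination over a user group's application tasks yields a single composite score; comparing two protocols' scores under the same QoS condition gives the absolute comparison. The weights, task names, and timings below are invented for illustration; the paper's actual composite quality model is derived from benchmarking data, not assumed here.

```python
# Sketch of relative QoE scoring and a weighted composite over a user
# group profile. All numeric values and weights are hypothetical.

def relative_qoe(ideal_time_ms, degraded_time_ms):
    """1.0 means no degradation; the score falls toward 0 as a task
    slows down relative to its ideal-QoS baseline."""
    return min(1.0, ideal_time_ms / degraded_time_ms)

def composite_quality(task_scores, weights):
    """Weighted average of per-task relative QoE, with weights reflecting
    how much a user group profile exercises each application task."""
    total = sum(weights.values())
    return sum(task_scores[t] * w for t, w in weights.items()) / total
```

Computing this composite for RDP and for PCoIP under the same latency/loss condition then supports the absolute comparison described above, while tracking one protocol's composite across degraded conditions supports the relative one.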