
FoggyCache: Cross-Device Approximate Computation Reuse

Peizhen Guo, Bo Hu, Rui Li, and Wenjun Hu
Yale University

Abstract

Mobile and IoT scenarios increasingly involve interactive and computation intensive contextual recognition. Existing optimizations typically resort to computation offloading or simplified on-device processing.

Instead, we observe that the same application is often invoked on multiple devices in close proximity. Moreover, the application instances often process similar contextual data that map to the same outcome.

In this paper, we propose cross-device approximate computation reuse, which minimizes redundant computation by harnessing the "equivalence" between different input values and reusing previously computed outputs with high confidence. We devise adaptive locality sensitive hashing (A-LSH) and homogenized k nearest neighbors (H-kNN). The former achieves scalable and constant-time lookup, while the latter provides high-quality reuse and a tunable accuracy guarantee. We further incorporate approximate reuse as a service, called FoggyCache, in the computation offloading runtime. Extensive evaluation shows that, when given a 95% accuracy target, FoggyCache consistently harnesses over 90% of reuse opportunities, which translates to reduced computation latency and energy consumption by a factor of 3 to 10.

ACM Reference Format:
Peizhen Guo, Bo Hu, Rui Li, and Wenjun Hu. 2018. FoggyCache: Cross-Device Approximate Computation Reuse. In The 24th Annual International Conference on Mobile Computing and Networking (MobiCom '18), October 29-November 2, 2018, New Delhi, India. ACM, New York, NY, USA, 16 pages. https://doi.org/10.1145/3241539.3241557

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
MobiCom '18, October 29-November 2, 2018, New Delhi, India
© 2018 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 978-1-4503-5903-0/18/10...$15.00
https://doi.org/10.1145/3241539.3241557

1 Introduction

The vision of the Internet of Things has gradually transitioned into reality, with solutions for smart scenarios and new IoT devices entering the mainstream consumer market. Many of these upcoming applications revolve around interacting with the environment [5], providing personal assistance to the device user [35], and/or automating what used to be labor-intensive tasks [79].

For example, speech recognition based assistance is now common on smartphones (e.g., Siri, Cortana) and in homes (e.g., Alexa and Google Assistant). These applications sense the environment, process the data on the local device or a backend server, and then act on the result. Since contextual sensing and speech/image recognition are often the key steps for these applications, on-device processing tends to be computationally expensive and energy-hungry in order to achieve sufficient accuracy. While offloading to a server [20] addresses these two issues, latency becomes a concern given the real-time nature of the applications. Much prior effort has investigated mechanisms to improve the specific processing logic of either local computation [49] or offloading [19] for individual applications.

Instead, we pursue an orthogonal avenue. A closer look at these applications suggests there is redundancy in such computation across devices. The same applications are used by multiple devices over time, often in a similar context (e.g., common locations). Redundancy elimination across devices can then simultaneously achieve low latency and accurate results. This is a promising optimization technique independent of the specific processing logic. Note that this does not replace or favor either local computation or offloading. The goal is to avoid unnecessary computation, either locally or remotely, as opportunities arise.

However, there is a defining difference between our cases and traditional redundancy elimination. We do not have exact matches between the input-output relations. Instead, the most common input types, i.e., images, speech, and sensor readings, come from analog sources. The input values are rarely exactly identical, but correlated temporally, spatially, or semantically, and mapped to the same output. In other words, we need to gauge the reusability of the previous output based on the similarity of the input. We refer to this paradigm as fuzzy redundancy and highlight the notion of approximate computation reuse. Section 2 discusses such redundancy in detail in the context of several motivating scenarios. While traditional, precise redundancy elimination is well studied and widely employed in storage systems [24], networking [70], and data analytics systems [33, 65], existing techniques are ill-suited to the IoT scenarios at hand due to the computation and the approximate matching involved.

Approximate computation reuse involves several steps and challenges: capturing and quantifying the input similarity in a metric space, fast search for the most similar records, and reasoning about the quality of previous output for reuse.

Step one is straightforward. Existing, domain-specific techniques can already turn these raw input values into feature vectors, and we can then define a metric to compute the distance between them, for example, the Euclidean distance. There are two implications, however. First, leveraging these feature extraction techniques decouples the application-specific processing from generic system-wide procedures applicable to any such applications. Second, the app developer can use well-established techniques and libraries, and there is no need to manually annotate or manage the input features.

The other two challenges arise from two fundamental constraints regardless of the underlying scenario: (i) the input data distributions are dynamic and not known in advance, and (ii) similarity in the input does not directly guarantee reusability among the output.

To address (i), we propose a variant of locality sensitive hashing (LSH), which is commonly used for indexing high-dimensional data. The standard LSH is agnostic to the data distribution and does not perform well for skewed or changing distributions. Therefore, our adaptive locality sensitive hashing (A-LSH) dynamically tunes the indexing structure as the data distribution varies, and achieves both very fast and scalable lookup speed and constant lookup quality regardless of the exact data distribution.

For (ii), we propose a variant of the well-known k nearest neighbor (kNN) algorithm. kNN is a suitable baseline since it makes no assumptions about the input data distribution and works for almost all cases. However, kNN performs poorly in a high-dimensional space due to the curse of dimensionality …

… server. To maximize reuse opportunities, we further optimize the client-server cache synchronization with stratified cache warm-up on the client and speculative cache entry generation on the server.

FoggyCache is implemented on the Akka cluster framework [2], running on Ubuntu Linux servers and Android devices respectively. Using ImageNet [67], we show that A-LSH achieves over 98% lookup accuracy while maintaining constant-time lookup performance. H-kNN achieves the pre-configured accuracy target (over 95% reuse accuracy) and provides tunable performance. We further evaluate the end-to-end performance with three benchmarks, simplified versions of real applications corresponding to the motivating scenarios. Given a combination of standard image datasets, speech segments, and real video feeds, and an accuracy target of 95%, FoggyCache consistently harnesses over 90% of all reuse opportunities, reducing computation latency and energy consumption by a factor of 3 to 10.

In summary, the paper makes the following contributions:

First, we observe cross-device fuzzy redundancy in upcoming mobile and IoT scenarios, and highlight eliminating such redundancy as a promising optimization opportunity.

Second, we propose A-LSH and H-kNN to quantify and leverage the fuzzy redundancy for approximate computation reuse, independent of the application scenarios.

Third, we design and implement FoggyCache, which provides approximate computation reuse as a service and achieves a factor of 3 to 10 reduction in computation latency and energy consumption with little accuracy degradation.

2 Motivation

2.1 Example scenarios

Smart home. Many IoT devices connected to a smart home service platform [22] run virtual assistance software that takes audio commands to control home appliances. The intelligence of such software is supported by inference functions, such as speech recognition, stress detection, and speaker identification [1]. Statistics [8] show that a small set of popular audio commands, e.g., "turn on the light", are often repeatedly invoked. Say two household members issue this command to their respective devices in different rooms. Currently, each command triggers the entire processing chain. However, processing both is unnecessary, as the two commands are semantically
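The reuse pipeline the introduction outlines (feature vectors compared under a Euclidean metric, an LSH index for fast candidate search, and a kNN vote that reuses a cached output only when it dominates the neighborhood) can be sketched as below. This is a minimal illustrative sketch, not the FoggyCache implementation: the class name, the `bucket_width` and `dominance` parameters, and the fixed random-projection hash are our own simplifications. In particular, A-LSH adapts its indexing structure to the observed data distribution, and H-kNN's homogeneity treatment is more elaborate than the simple majority threshold used here.

```python
import random
from collections import Counter, defaultdict

class ApproxReuseCache:
    """Toy approximate-reuse cache: a random-projection LSH index over
    feature vectors, queried with a kNN vote that reuses a cached output
    only when one value clearly dominates among the nearest neighbors."""

    def __init__(self, dim, num_hashes=6, bucket_width=1.0, k=5, dominance=0.8):
        self.k = k
        self.dominance = dominance      # required homogeneity among neighbors
        self.width = bucket_width       # quantization step of each hash
        # Random projection directions forming the LSH signature.
        self.planes = [[random.gauss(0, 1) for _ in range(dim)]
                       for _ in range(num_hashes)]
        self.buckets = defaultdict(list)  # signature -> [(features, output)]

    def _signature(self, vec):
        # Project onto each random direction and quantize into buckets;
        # nearby vectors tend to share the same signature.
        return tuple(int(sum(p * x for p, x in zip(plane, vec)) // self.width)
                     for plane in self.planes)

    def insert(self, features, output):
        """Record a computed (input features -> output) pair for later reuse."""
        self.buckets[self._signature(features)].append((features, output))

    def lookup(self, features):
        """Return a cached output, or None if no confident reuse is possible
        (in which case the caller computes from scratch and inserts)."""
        candidates = self.buckets.get(self._signature(features), [])
        if not candidates:
            return None
        # k nearest neighbors by squared Euclidean distance within the bucket.
        nearest = sorted(
            candidates,
            key=lambda rec: sum((a - b) ** 2 for a, b in zip(rec[0], features))
        )[: self.k]
        value, count = Counter(out for _, out in nearest).most_common(1)[0]
        # Homogeneity check: reuse only if one output dominates the vote.
        return value if count / len(nearest) >= self.dominance else None
```

On a miss (`lookup` returns `None`), the application runs the expensive recognition locally or offloaded, then calls `insert` so that subsequent similar inputs, possibly from other devices sharing the cache, can skip the computation.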