NNStreamer: Efficient and Agile Development of On-Device AI Systems

MyungJoo Ham∗, Jijoong Moon, Geunsik Lim, Jaeyun Jung, Hyoungjoo Ahn, Wook Song, Sangjung Woo, Parichay Kapoor, Dongju Chae, Gichan Jang, Yongjoo Ahn, and Jihoon Lee
Samsung Research, Samsung Electronics, Seoul, Korea
{myungjoo.ham, jijoong.moon, geunsik.lim, jy1210.jung, hello.ahn, wook16.song, sangjung.woo, pk.kapoor, dongju.chae, gichan2.jang, yongjoo1.ahn, jhoon.it.lee}@samsung.com
∗The corresponding author.

arXiv:2101.06371v1 [cs.LG] 16 Jan 2021

Abstract—We propose NNStreamer, a software system that handles neural networks as filters of stream pipelines, applying the stream processing paradigm to deep neural network applications. A new trend accompanying the widespread adoption of deep neural network applications is on-device AI: processing neural networks on mobile or edge/IoT devices instead of cloud servers. Emerging privacy issues, data transmission costs, and operational costs signify the need for on-device AI, especially when a massive number of devices is deployed. NNStreamer efficiently handles neural networks with complex data stream pipelines on devices, significantly improving overall performance with minimal effort. Besides, NNStreamer simplifies implementations and allows reusing off-the-shelf media filters directly, which reduces developmental costs significantly. We are already deploying NNStreamer for a wide range of products and platforms, including the Galaxy series and various consumer electronic devices. The experimental results suggest a reduction in developmental costs and enhanced performance of pipeline architectures and NNStreamer. It is an open-source project incubated by Linux Foundation AI, available to the public and applicable to various hardware and software platforms.

Index Terms—neural network, on-device AI, stream processing, pipe and filter architecture, open source software

I. INTRODUCTION

We have witnessed the proliferation of deep neural networks in the last decade. With ever-growing computing power, embedded devices have started to run neural networks, often assisted by hardware accelerators [2], [7], [18], [20], [21], [27], [32], [38]. Such accelerators are already common in the mobile industry [2], [32]. Running AI mechanisms directly on embedded devices is called on-device AI [26]. On-device AI can be highly attractive with the following advantages of in-place data processing:

• Avoid data privacy and protection issues by not sharing data with cloud servers.
• Reduce data transmissions, which can be crucial for processing video streams in real-time.
• Save operating costs of servers, which is especially crucial with millions of devices deployed.

Limited computing power, high data bandwidth, and short response time are significant challenges of on-device AI: e.g., AR Emoji [31], Animoji [1], robotic vacuums, and live video processing. With more sophisticated AI applications, multiple input streams and neural networks may exist simultaneously, as in the complex camera systems of today's high-end smartphones. Numerous neural networks may share inputs, and outputs of a neural network may be inputs of others or of the network itself. Composing a system with multiple networks allows training and reusing smaller networks, which may reduce costs, increase performance, enhance robustness, or help construct modular neural networks [33], [34], [36]. Managing data flows and networks may become highly complicated with interconnections of networks and other nodes, along with fluctuating latency, complex topology, and synchronization. Such interconnections are data streams between nodes; thus, we may describe each node as a filter and a system as a pipeline: "pipe and filter architectures" [29].

The primary design choice is to employ and adapt a multimedia stream processing framework for constructing neural network pipelines rather than constructing a new stream framework. The following significant problems and requirements, which are among those observed in our on-device AI projects, have already been addressed by conventional multimedia frameworks for years:

P1. Fetching and pre-processing input streams may be extremely complicated; i.e., video inputs may have varying formats, sizes, color balances, frame rates, and sources determined at run-time. Besides, with multiple data streams, processors, and algorithms, data rates and latency may fluctuate, and synchronizing data streams may become extremely difficult.
P2. Components should be highly portable. We have to reuse components and their pipelines for a wide range of products.
P3. It should be easy to construct and modify pipelines even if there are filters executed in parallel requiring synchronization. The execution should be efficient for embedded devices.
P4. We want to reuse a wide range of off-the-shelf multimedia filters.

Some other significant problems and requirements are either introduced by reusing conventional multimedia stream processing frameworks (P5) or not addressed by such frameworks (P6 and P7):

P5. Input streams should support not only audio and video, but also general tensors and binaries. They should also support recurrences, which are not allowed by conventional media frameworks.
P6. Different neural network frameworks (NNFW) such as TensorFlow and Caffe may coexist in prototypes. We want to integrate the whole system as a pipeline for such prototypes as well.
P7. Easily import third-party components. Hardware accelerators often require converted neural networks with their dedicated formats and libraries instead of general formats and NNFWs [9], [24].

We choose GStreamer [16] as the basis framework. GStreamer is a battle-proven multimedia framework for various products and services and has hundreds of off-the-shelf filters. It is highly portable and modular, and virtually everything can be updated in a plug-and-play fashion. To address P1 to P7, we provide numerous GStreamer plugins, data types, and tools, described in Section III, which allow interacting with various NNFWs, hardware accelerators, and other software components, or manipulating stream paths and data.

Our major contributions include:

• Show that applying the stream processing paradigm to complex multi-model and multi-modal AI systems is viable and beneficial in various environments, and provide an easy-to-use, efficient, portable, and ready-to-deploy solution.
• Provide standard representations of tensor data streams that interconnect different frameworks and platforms and off-the-shelf media filters to AI systems with minimal effort, which efficiently allows processing complex pipelines and attaching AI mechanisms to applications.
• Allow developers to add arbitrary neural network frameworks, hardware accelerators, models, and other components easily with the given frameworks and code templates. Then, make the proposed mechanism product-ready and release it to various platforms and products.

NNStreamer is an open-source project incubated by Linux Foundation AI, released for Tizen, Android, Ubuntu, …

… along with hundreds of off-the-shelf filters, which NNStreamer inherits. GStreamer is highly modular; every filter and path control is a plugin attachable at run-time. Various systems whose reliability and performance are crucial use GStreamer. For example, the BBC uses GStreamer for its broadcasting systems [8]. Samsung (Tizen) and LG (WebOS) use it as the media engine of televisions. Centricular uses it for TVs, set-top boxes, medical devices, in-vehicle infotainment, and on-demand streaming solutions [11], [12], [17], [39], [40]. FFmpeg [5], another popular multimedia framework, is not modular, and everything is built-in; thus, it is not suitable for our purposes. StageFright [13] is the multimedia framework of Android, depending on Android services. Unlike GStreamer, it is not portable to general Linux systems and does not allow applications to construct arbitrary pipelines. AVFoundation [3] is the multimedia framework of iOS and macOS. AVFoundation may provide input frames to Core ML [4], the machine learning framework of iOS and macOS, to construct a neural network pipeline. However, app developers cannot apply neural networks as native filters of multimedia pipelines, and they need to implement interconnections between neural networks and multimedia pipelines. DirectShow [6] is the multimedia framework of Windows. DirectShow and AVFoundation are proprietary software for proprietary platforms; thus, we cannot alter them for the given purposes.

Google has proposed MediaPipe [25] to process neural networks as filters of pipelines. It supports Linux, Android, and iOS, but it is not portable enough. Its dependency on Google's in-house tool, Bazel, and its inflexible library requirements make it not portable to embedded systems; i.e., it is hard to share system libraries with other software. MediaPipe re-implements a pipeline framework and cannot reuse conventional media filters; thus, P1 to P4 are only partially met, while P5 is not an issue. Initially, it targeted server-side AI services, not embedded devices; thus, P1, P2, and P4 might not have been considered. Specifically, for in-house servers, they may restrict input formats (P1 and P4 are irrelevant) and consider homogeneous platforms and architectures (P2 is irrelevant). Another issue is that MediaPipe allows only a specific version of TensorFlow as its NNFW; e.g., TensorFlow 2.1 for MediaPipe 0.7.4. Such inflexibility makes integrating other NNFWs or hardware accelerators unnecessarily tricky. In Section IV, we show an example (E4) of how critical this can be. We expect …
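As a concrete illustration of how such plugins fit into a GStreamer pipeline, the following pipeline description chains a camera source through NNStreamer's tensor_converter and tensor_filter elements. The element names are real NNStreamer plugins, but the source element, model path, and input dimensions are illustrative and vary per device; running this requires GStreamer with the NNStreamer plugins installed.

```
# Camera -> scale to the model's input size -> convert frames to tensors ->
# run a TensorFlow-Lite model -> consume the result tensors.
# (v4l2src, the 224x224 RGB caps, and ./model.tflite are illustrative)
gst-launch-1.0 v4l2src ! videoconvert ! videoscale ! \
  video/x-raw,width=224,height=224,format=RGB ! \
  tensor_converter ! \
  tensor_filter framework=tensorflow-lite model=./model.tflite ! \
  tensor_sink
```

Everything before tensor_converter is an off-the-shelf GStreamer media filter (P4); swapping the framework= property is how a different NNFW or accelerator backend would be selected (P6 and P7).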
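The pipe-and-filter composition described in the introduction (each node a filter, each interconnection a data stream) can be sketched in plain Python. This is a minimal illustration of the paradigm, not NNStreamer's actual API; the `Pipeline` class and the toy filters below are hypothetical:

```python
from typing import Any, Callable, List

# A filter is any callable that transforms one buffer into another.
Filter = Callable[[Any], Any]

class Pipeline:
    """Compose filters in order: each filter's output feeds the next one's input."""

    def __init__(self, filters: List[Filter]) -> None:
        self.filters = filters

    def push(self, buffer: Any) -> Any:
        for f in self.filters:
            buffer = f(buffer)
        return buffer

# Toy stand-ins for pre-processing, inference, and post-processing filters.
def preprocess(frame):   # normalize 8-bit pixel values into [0, 1]
    return [x / 255.0 for x in frame]

def infer(tensor):       # stand-in for a neural network invocation
    return sum(tensor)

def postprocess(score):  # turn the raw score into a decision
    return "hot" if score > 1.0 else "cold"

pipe = Pipeline([preprocess, infer, postprocess])
print(pipe.push([128, 255, 64]))  # prints "hot"
```

Real stream frameworks generalize this linear chain with branching and merging paths, per-link format negotiation, and synchronization of streams whose rates fluctuate, which is precisely what P1 and P3 demand.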
