Object Detection-Based Variable Quantization Processing

Likun Liu* and Hua Qi*
Kyushu University

* Both authors contributed equally to this research.

Abstract. In this paper, we propose a preprocessing method for conventional image and video encoders that makes these existing encoders content-aware. After passing through our process, a higher quality parameter can be set on a traditional encoder without increasing the output size. A still frame or an image first goes through an object detector. The properties of the detection result then determine the parameters of the subsequent procedures; if no object is detected in the given frame, the system is bypassed. The processing method uses an adaptive quantization process to determine the portion of data to be dropped. This method is primarily based on JPEG compression theory and is best suited to JPEG-based encoders such as JPEG and Motion JPEG; however, other DCT-based encoders such as MPEG-4 Part 2, H.264, etc. can also benefit from it. In the experiments, we compare MS-SSIM at the same bitrate, as well as similar MS-SSIM at an enhanced bitrate. As this method is based on human perception, even at similar MS-SSIM the overall viewing experience is better than that of the directly encoded output.

1 Introduction

Presently, video content occupies more than 80% of global internet traffic [1]. As the volume of services delivered in video format grows exponentially, this percentage is expected to rise further in the foreseeable future. However, network bandwidth has not grown along with this trend everywhere in the world. Thus, better encoders are needed to deliver this content more broadly.

Over the past few decades, numerous image and video encoders [2,3,4,5,6] have emerged to suit these needs. Consumer video services primarily use lossy video codecs to save storage space and bandwidth. These lossy encoding methods are mainly based on the theory of human perception and rely heavily on processes such as quantization to reduce the data size. For a single image or video, the quantization matrix is invariant, since the same matrix is required to recover the image during the decoding stage. Such compression strategies apply the same process globally to every block in the frame, regardless of the actual content.

Recently, several deep neural network (DNN) based auto-encoders for image compression [7,8,9,10,11,12,13,14,15] have achieved relatively high performance in comparison with traditional methods. However, an inconvenient fact is that these encoders consume a massive amount of computing power to achieve their goal. Such methods may be a viable option for service providers who possess server clusters. Nevertheless, for applications like field live streaming, both the bandwidth and the computing power may be heavily constrained by the stringent on-site situation.

Another technology that has risen along with the mass utilization of GPU power is neural network-based object detection [16,17,18,19]. These algorithms have achieved relatively high precision in comparison with traditional recognition methods, and in the wake of recent optimization and miniaturization [20,21,22,23], object detection tasks can be completed within a reasonable computing budget.

Fig. 1: Visual comparison between video encoders. Picture (a) is the uncompressed frame; (b)-(d) are enlarged detail comparisons between H.264, H.265, and H.264 with our proposed preprocessing method, respectively.

To the best of our knowledge, we present the first object detection-based preprocessing method that makes conventional encoders content-aware. Fig. 1 presents a visual comparison between the conventional encoders' direct encoding output and the encoding output with our preprocessing method. Note the subtle difference in the shadow area: with our preprocessing method, the encoders are able to preserve more details in the region of interest. By keeping the detected objects in the scene untouched and processing the remaining parts with a relatively aggressive compression approach, we preserve more details of the objects in the scene and enhance the quality of the video under the same bandwidth condition. In short, we move bits from the trivial background to the main objects in the scene. The advantages of our proposed method are as follows:

- When integrated with a DCT-based encoder (JPEG, H.264, etc.), the quantization matrix of the encoder is no longer effectively invariant over the whole image. Since the negligible parts of a frame have already been processed with an aggressive quantization matrix, a relatively high quality factor can be set at the same compression rate.
- The object identification process is performed with a YOLOv4 [23] network. As this kind of object detection network is highly customizable in both network size and the categories to detect, our method can adapt to different scenarios accordingly.
- The aggressiveness of the quantization matrix varies automatically according to the content of the frame. Experimental results show that our approach achieves relatively high SSIM and SNR compared with other methods, and the framerate is higher under the same conditions.
- We have implemented all of our processing in parallel on the TensorFlow framework and achieved a remarkable 200x acceleration over the sequential pure-CPU version. The choice of the YOLOv4 detector allows the procedure to run at high speed even on a small GPU.
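To make the overall flow concrete, the following minimal sketch illustrates the detect-then-quantize gating described above. The helper callables detect_objects and quantize_region are hypothetical stand-ins for the detector and the adaptive quantizer, not our actual implementation:

```python
# Minimal sketch of the preprocessing flow (hypothetical helpers, not the
# actual implementation). A frame is only processed when the detector finds
# objects; otherwise the whole system is bypassed.
import numpy as np

def preprocess_frame(frame, detect_objects, quantize_region):
    """frame: HxWx3 uint8 array.
    detect_objects(frame) -> list of (x, y, w, h, confidence) boxes.
    quantize_region(frame) -> frame with aggressive DCT quantization applied.
    """
    detections = detect_objects(frame)
    if not detections:
        return frame  # bypass: no object to protect, encode directly

    # Mark every detected object as a region of interest.
    roi = np.zeros(frame.shape[:2], dtype=bool)
    for x, y, w, h, _conf in detections:
        roi[y:y + h, x:x + w] = True

    # Aggressively quantize only the background; detected objects are left
    # untouched so the downstream encoder spends its bits on them.
    out = frame.copy()
    out[~roi] = quantize_region(frame)[~roi]
    return out
```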
2 Related Work

2.1 Object Detection

Visual recognition has been a significant research hot spot in recent years, and numerous novel methods have been proposed. These methods [16,17,18,19,24] have reached an increasingly high level of accuracy as neural networks have expanded in both layers and parameters. To the best of our knowledge, the world's most accurate image recognition network at this point is FixEfficientNet-L2 [24], which reaches 98.7% (top-5) accuracy with 480 million parameters. However, these carefully designed high-accuracy networks are monumental and require a vast amount of computing resources for both training and inference. This deficiency makes them unsuitable for video compression or live streaming in most scenarios, since personal computers account for the vast majority of such applications.

In contrast to the region-proposal families of algorithms, a category of purely CNN-based detection algorithms stands out for its trade-off between accuracy and computing resources [20,21,22,23,25]. The recent YOLOv4 [23] reaches 65.7% AP50 on the MS COCO dataset while running at 65 FPS on a Tesla V100, which makes it feasible to use such a detector within an affordable computing power budget. Moreover, tiny objects that are hard for the YOLO detector to recognize are likely to be negligible objects in the scene that are insignificant to human perception.

2.2 Image Compression

A considerable number of image compression algorithms have been proposed as multimedia content has gradually come to dominate network traffic. Traditional lossy compression methods [2,4] consist of carefully designed handcrafted techniques, combining models of human perception, signal processing, and engineering experience. For instance, the widely used JPEG compression [2] adopts the YUV color space, since human eyes are more sensitive to luminance than to color. A frame is separated into the Y, Cb, and Cr channels, and each channel is quantized individually with its own quantization table. Although a large number of quantization parameters can be chosen for different tasks, one apparent defect of this approach is that it relies on user designation (a quality factor) or frequency evaluation, which can cause severe loss of detail under limited-bandwidth scenarios.

Another category of image compression methods that has risen along with neural networks is DNN-based image compression [7,8,9,10,11,12,13,14,15,26]. Some of these methods use recurrent neural networks (RNNs) [7,9,11] to build progressive image compression schemes, while others [8,10,12] exploit the power of CNNs. These methods have achieved slightly higher performance than traditional encoders in some applications. However, many of them still suffer from performance issues and cannot be applied in most scenarios.
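To make the role of the quantization table concrete, the sketch below shows textbook JPEG-style quantization of a single 8x8 block, with the standard luminance table scaled by a quality factor. This is an illustrative example of the scheme described above, not this paper's implementation:

```python
# Textbook JPEG-style quantization of one 8x8 block (illustrative only).
# Larger table entries discard more high-frequency detail.
import numpy as np
from scipy.fftpack import dct, idct

# Standard JPEG luminance quantization table (Annex K of the JPEG standard).
Q_LUMA = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def scaled_table(quality):
    """Scale the base table by a JPEG quality factor in [1, 100]."""
    s = 5000 / quality if quality < 50 else 200 - 2 * quality
    return np.clip(np.floor((Q_LUMA * s + 50) / 100), 1, 255)

def quantize_block(block, quality=50):
    """Round-trip one 8x8 pixel block through DCT quantization."""
    coeffs = dct(dct(block - 128.0, axis=0, norm='ortho'),
                 axis=1, norm='ortho')
    q = scaled_table(quality)
    quantized = np.round(coeffs / q)  # the lossy step: data is dropped here
    return idct(idct(quantized * q, axis=0, norm='ortho'),
                axis=1, norm='ortho') + 128.0
```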
2.3 Lossy Video Compression

In DCT-based video encoding, the quantization process differs slightly from that of image compression. For instance, H.264 [5] encoders use a quantizer parameter (QP) to control quantization. Each QP corresponds to a unique quantization step size (Qstep); in total, the H.264 encoder defines 52 Qstep values corresponding to the 52 QPs. Similar to the JPEG quantization process, the luminance channel and the chromatic channels are treated with different QPs: in general, the luminance QP ranges from 0 to 51, while the chromatic QP ranges from 0 to 39. Nevertheless, these encoders are still based on the same frequency-domain theory, and our method still functions well with them.

The more recent HEVC (H.265) [6] standard uses roughly the same encoding framework as H.264, but adds new coding tools in almost every module, including quadtree-based block division, inter-frame merge mode, AMVP, variable-size DCT, CABAC, loop filtering, SAO, etc. In theory, the variable-size DCT will further enhance encoding efficiency when combined with our proposed processing method. Both AVC (H.264) and HEVC (H.265) also support lossless encoding, which encodes each frame as a lossless still frame.
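For reference, the QP-to-Qstep relationship in H.264 is exponential: Qstep doubles for every increase of 6 in QP, with QP 4 corresponding to Qstep 1.0. A small sketch of this mapping, using the six base step sizes from the H.264 specification's step-size table (illustrative only):

```python
# Illustrative H.264 QP-to-Qstep mapping. Qstep doubles for every increase
# of 6 in QP; the six base values repeat, scaled by a doubling factor.
QSTEP_BASE = [0.625, 0.6875, 0.8125, 0.875, 1.0, 1.125]

def qstep(qp):
    """Return the quantization step size for an H.264 QP in [0, 51]."""
    if not 0 <= qp <= 51:
        raise ValueError("H.264 QP must lie in [0, 51]")
    return QSTEP_BASE[qp % 6] * (1 << (qp // 6))

# Examples: qstep(0) == 0.625, qstep(6) == 1.25, qstep(51) == 224.0
```

Because Qstep grows exponentially, a small QP reduction on the regions that matter preserves noticeably more detail than the same bit budget spread uniformly over the frame.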
