Compression Algorithm in Mobile Packet Core

Total pages: 16

File type: PDF, size: 1020 KB

Compression Algorithm in Mobile Packet Core
Master of Science in Computer Science, September 2020

Lakshmi Nishita Poranki
Faculty of Computing, Blekinge Institute of Technology, 371 79 Karlskrona, Sweden

This thesis is submitted to the Faculty of Computing at Blekinge Institute of Technology in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. The thesis is equivalent to 20 weeks of full-time studies. The author declares that they are the sole author of this thesis, that they have not used any sources other than those listed in the bibliography and identified as references, and that they have not submitted this thesis at any other institution to obtain a degree.

Contact Information:
Author: Lakshmi Nishita Poranki, e-mail: [email protected]
University advisor: Siamak Khatibi, Department of Telecommunications
Industrial advisor: Nils Ljungberg, Ericsson, Gothenburg
Faculty of Computing, Blekinge Institute of Technology, SE-371 79 Karlskrona, Sweden — Internet: www.bth.se, Phone: +46 455 38 50 00, Fax: +46 455 38 50 57

Abstract

Context: Data compression is a technique used both to speed up data transmission and to reduce the storage size of the transmitted data. It is a massive and ubiquitous technology; almost every communication company makes use of it. Data compression is categorized mainly into lossy and lossless compression. Ericsson is a telecommunication company that deals with millions of users' data, all of which is compressed with the deflate algorithm. Because of its compression ratio and speed, deflate is not optimal for Ericsson's present use case (compress twice, decompress once). This research is about finding the best alternative algorithm for the current use case, so that it can replace deflate.

Objectives: The objective of this research is to replace the deflate algorithm with one that compresses Serving GPRS Support Node-Mobility Management Entity (SGSN-MME) user data effectively. The main steps toward this goal are: investigate algorithms that fit the SGSN-MME compression patterns; select a few alternative algorithms to deflate; run an experiment that applies all selected algorithms to an SGSN-MME dataset; compare the results based on the compression factors; and, based on the measured performance, replace deflate with the most suitable algorithm.

Methods: In this research, a literature review was performed to investigate alternatives to the deflate algorithm. After selecting the algorithms, an experiment was conducted on data provided by Ericsson AB, Gothenburg, and the performance of each algorithm was evaluated on compression factors such as compression ratio and compression speed.

Results: The analysis of the experimental results shows that Z-standard is the best-performing algorithm, with optimal compressed sizes, compression ratio, and compression speed.

Conclusions: This research concludes by identifying an alternative algorithm that can replace deflate and that is suitable for the present use case.
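As a rough illustration of the evaluation the Methods paragraph describes — running several compressors over the same data and recording compression ratio, speed, and space saving — here is a minimal Python sketch. It is not the thesis's actual harness: the third-party zstandard, lz4, and brotli bindings, the chosen levels, and the sample file name sgsn_mme_sample.bin are assumptions for illustration only.

```python
import time
import zlib
import brotli              # pip install brotli
import lz4.frame           # pip install lz4
import zstandard           # pip install zstandard

def measure(name, compress, data):
    start = time.perf_counter()
    packed = compress(data)
    secs = time.perf_counter() - start
    ratio = len(data) / len(packed)              # compression ratio
    saving = 1 - len(packed) / len(data)         # space saving
    speed = len(data) / secs / 2**20             # MB/s, single run
    print(f"{name:8s} ratio={ratio:5.2f} saving={saving:6.1%} speed={speed:8.1f} MB/s")

data = open("sgsn_mme_sample.bin", "rb").read()  # hypothetical sample of node data
measure("zlib",   lambda d: zlib.compress(d, 6), data)
measure("lz4",    lz4.frame.compress, data)
measure("brotli", lambda d: brotli.compress(d, quality=5), data)
measure("zstd",   zstandard.ZstdCompressor(level=3).compress, data)
```

Levels matter: zlib 6, brotli 5, and zstd 3 are mid-range settings, and shifting them moves each codec along its own ratio/speed curve.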
Keywords: Compression Algorithms, Lossless Compression Algorithm, SGSN-MME node, Compression factors, performance of compression algorithm.

Acknowledgments

Firstly, I would like to express my prodigious gratitude to my university supervisor, Prof. Siamak Khatibi, Department of Telecommunications, for his worthwhile supervision, patience, suggestions, and incredible guidance throughout the entire period of the thesis. I would also like to express my warmest gratitude to my industrial supervisors, Erik Vargas and Nils Ljungberg, for their guidance, support, and insightful comments throughout my journey at Ericsson AB, Gothenburg. I would like to express my profound gratitude to my parents, Siva Rama Prasad Poranki and Vijaya Lakshmi Poranki, my sister Snehitha Poranki, and my colleague Lakshmi Venkata Sai Sri Tikkireddy, for their persistent and unparalleled love and continuous support. Last but not least, I would like to thank all of my friends who stood beside me during my good and bad times, helped me a lot to complete my thesis, and made my thesis journey so successful. Thank you very much, everyone!

Contents

Abstract
Acknowledgments
1 Introduction
  1.1 Problem Statement
  1.2 Outline
2 Background
  2.1 SGSN
  2.2 MME
  2.3 Ericsson SGSN components
  2.4 Performance factors
  2.5 Types of Lossless Compression Algorithms
    2.5.1 Huffman Algorithm
    2.5.2 Arithmetic Coding
    2.5.3 Shannon-Fano
    2.5.4 Lempel-Ziv
    2.5.5 LZ77-LZ78
    2.5.6 LZ4
    2.5.7 Brotli
    2.5.8 Zlib
    2.5.9 Z-standard
3 Methodology
  3.1 Literature review
  3.2 Experiment
    3.2.1 Hypothesis
    3.2.2 Experiment workspace
    3.2.3 Dataset Creation
    3.2.4 Dataset Pre-Processing
    3.2.5 Experiment
    3.2.6 Statistical Tests
4 Results and Analysis
  4.1 Results for Z-standard
  4.2 Results for LZ4
  4.3 Results for Brotli
  4.4 Results for Zlib (deflate)
  4.5 Comparison of Compression Ratio
  4.6 Comparison of Compression Speed
  4.7 Comparison of Space-saving
  4.8 Result Analysis
  4.9 Performing Statistical Tests
5 Discussion
  5.1 Answering Research Questions
  5.2 Validity Threats
6 Conclusion and Future Work
References

List of Figures

1.1 Lossy compression
1.2 Lossy vs Lossless compression
2.1 Ericsson SGSN components
2.2 Types of Compression
2.3 Flowchart of LZ4
3.1 Dataset Creation
4.1 Z-standard compression
4.2 LZ4 compression
4.3 Brotli compression
4.4 Zlib compression
4.5 Comparison of compression ratios
4.6 Comparison of compression speed
4.7 Comparison of space saving

List of Tables

2.1 The data format of the LZ4 sequence
4.1 Ranks for comparison of compression ratios of algorithms for each test case
4.2 Average ranks of algorithms

List of Abbreviations

ANS    Asymmetric Numeral System
AP     Application Procedures
ASCII  American Standard Code for Information Interchange
BSD    Berkeley Source Distribution
DP     Device Procedures
ETS    Erlang Term Storage
FPGA   Field Programmable Gate Array
FSE    Finite State Entropy
GGSN   Gateway GPRS Support Node
GIF    Graphics Interchange Format
GNU    GNU's Not Unix
GPRS   General Packet Radio Service
GSM    Global System for Mobile Communication (2G)
HTML   Hyper Text Markup Language
HTTP   Hyper Text Transfer Protocol
IP     Internet Protocol
IT     Information Technology
JPEG   Joint Photographic Experts Group
LTE    Long Term Evolution (4G)
LZW    Lempel-Ziv-Welch
MME    Mobility Management Entity
MP3    MPEG Audio Layer 3
MPEG   Moving Picture Experts Group
OTP    Open Telecom Platform
PIUs   Plug-In Units
SGSN   Serving GPRS Support Node
SLR    Systematic Literature Review
tANS   tabled variant of ANS
TIFF   Tagged Image File Format
UE     User Equipment
WCDMA  Wideband Code Division Multiple Access (3G)
WEM    Workspace Environment Management
Zstd   Z-standard

Chapter 1: Introduction

Digital communication is the field that deals with transmitting and receiving digital data. The digital data to be transmitted may come in different formats, such as text, audio, video, and images. Before transmission, all of the data must be converted into binary format, i.e., 0s and 1s. A data bitstream usually consists of many bits, often reaching millions, so large files can take on the order of minutes to transmit [33]. When a massive amount of data has to be transmitted, there is a chance of delay in reaching the destination; the data should be compressed to avoid such situations [44]. An incomparably vast amount of data is received, processed, and transmitted, which affects transmission speed and also leads to a shortage of storage [57][66]. For a long time, compression was a domain of a small group of engineers and scientists. But now, the data
Recommended publications
  • Data Compression: Dictionary-Based Coding
    Lecture slides by Heiko Schwarz (Freie Universität Berlin). [Slide diagram: a sliding window with a search buffer of N already-coded symbols and a look-ahead buffer of L not-yet-coded symbols, separated by the cursor — "We know the past but cannot control it. We control the future but..."] Last lecture, predictive lossless coding: a simple and effective way to exploit dependencies between neighboring symbols/samples. The optimal predictor is the conditional mean (which requires storage of large tables); affine and linear prediction have a simple structure, allow low-complexity implementations, have optimal prediction parameters given by the solution of the Yule-Walker equations, and work very well for real signals (e.g., audio, images). Efficient lossless coding for real-world signals combines affine/linear prediction (often with a block-adaptive choice of prediction parameters) and entropy coding of the prediction errors (e.g., arithmetic coding); using the marginal pmf often already yields good results, which can be improved by using conditional pmfs (with simple conditions). Dictionary-based coding targets text files, which contain a very high amount of dependencies: affine prediction does not work (it requires linear dependencies), and higher-order conditional coding should work well but is far too complex in memory. The alternative is to code not single characters but words or phrases. Example for English texts: the Oxford English Dictionary lists fewer than 230,000 words (including obsolete words), and on average a word contains about 6 characters, so the average codeword length per character would be limited by 1
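    The search-buffer/look-ahead-buffer picture above is the LZ77 sliding window. As a toy illustration (mine, not from the slides), the following Python sketch emits (offset, length, next-byte) triples by greedily matching the look-ahead buffer against the search buffer; real coders add entropy coding and much faster match search.

```python
def lz77_encode(data: bytes, window: int = 4096, max_len: int = 255):
    """Toy greedy LZ77: emit (offset, length, next_byte) triples.
    The last `window` bytes before the cursor form the search buffer;
    the bytes from the cursor onward form the look-ahead buffer."""
    i, out = 0, []
    while i < len(data):
        best_off = best_len = 0
        for j in range(max(0, i - window), i):        # scan the search buffer
            k = 0
            # a match may run into the look-ahead buffer (overlap is fine);
            # stop one byte early so every triple carries a real literal
            while (k < max_len and i + k < len(data) - 1
                   and data[j + k] == data[i + k]):
                k += 1
            if k > best_len:
                best_off, best_len = i - j, k
        out.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return out

def lz77_decode(triples) -> bytes:
    buf = bytearray()
    for off, length, nxt in triples:
        for _ in range(length):
            buf.append(buf[-off])                     # copy, possibly overlapping
        buf.append(nxt)
    return bytes(buf)

msg = b"to be or not to be, that is the question"
assert lz77_decode(lz77_encode(msg)) == msg
```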
  • Package 'Brotli'
    Package ‘brotli’ (May 13, 2018), version 1.2 — “A Compression Format Optimized for the Web”: a lossless compressed data format that uses a combination of the LZ77 algorithm and Huffman coding. Brotli is similar in speed to deflate (gzip) but offers more dense compression. License: MIT + file LICENSE. URL: https://tools.ietf.org/html/rfc7932 (spec), https://github.com/google/brotli#readme (upstream), http://github.com/jeroen/brotli#read (devel); BugReports: http://github.com/jeroen/brotli/issues. Author: Jeroen Ooms [aut, cre] (<https://orcid.org/0000-0002-4035-0289>) and Google, Inc [aut, cph] (Brotli C++ library); maintainer: Jeroen Ooms <[email protected]>. The package exposes brotli_compress(buf, quality = 11, window = 22) and brotli_decompress(buf), where buf is a raw vector with the data to compress/decompress, quality is a value between 0 and 11, and window is the log of the window size. Brotli decompression is at least as fast as for gzip while significantly improving the compression ratio; the price we pay is that compression is much slower than gzip. Brotli is therefore most effective for serving static content such as fonts and html pages. For binary (non-text) data, the compression ratio of Brotli usually does not beat bz2 or xz (lzma); however, decompression for these algorithms is too slow for browsers in e.g.
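    The excerpt documents the R binding; the same codec is reachable from Python. A minimal sketch, assuming the third-party brotli package (pip install brotli) — quality trades speed for density exactly as the R quality argument does:

```python
import brotli  # pip install brotli

html = b"<p>hello hello hello</p>" * 200
packed = brotli.compress(html, quality=11)      # 0..11; 11 = densest, slowest
assert brotli.decompress(packed) == html        # lossless round trip
print(len(html), "->", len(packed), "bytes")
```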
  • Lossless Audio Codec Comparison
    Contents: Introduction; 1 CD-audio test (CDs used; results for all CDs together; interesting quirks — mono encoded as stereo on Dan Brown's Angels and Demons, compressibility; convergence of the results); 2 High-resolution audio (Nine Inch Nails' The Slip; Howard Shore's soundtrack for The Lord of the Rings: The Return of the King; wasted bits); 3 Multichannel audio (the same Howard Shore soundtrack); appendices on the motivation for choosing these CDs and on the test setup (scripting and graphing, codecs and parameters used, MD5 checksumming), revision history, and bibliography. Introduction: While testing the efficiency of lossy codecs can be quite cumbersome (as results differ for each person), comparing lossless codecs is much easier. As the last well-documented and comprehensive test available on the internet was a few years ago, I thought it would be a good idea to update it. Besides comparing with CD-audio (which is often done to assess codec performance) and spitting out a grand total, this comparison also looks at extremes that occurred during the test and takes a look at 'high-resolution audio' and multichannel/surround audio. While the comparison was made to update the comparison page on the FLAC website, it aims to be fair and unbiased.
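    The MD5 checksumming step in the test setup is the standard way to prove a codec is lossless: the decoded audio must hash identically to the original. A minimal Python illustration (mine; zlib stands in for a real audio codec, and track01.wav is a hypothetical test file):

```python
import hashlib
import zlib

def md5(blob: bytes) -> str:
    return hashlib.md5(blob).hexdigest()

pcm = open("track01.wav", "rb").read()           # hypothetical test rip
restored = zlib.decompress(zlib.compress(pcm, 9))
assert md5(restored) == md5(pcm), "round trip changed the audio!"
```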
  • Cluster-Based Delta Compression of a Collection of Files
    Zan Ouyang, Nasir Memon, Torsten Suel, Dimitre Trendafilov; Department of Computer and Information Science, Polytechnic University, Brooklyn, NY 11201; Technical Report TR-CIS-2002-05, 12/27/2002. Abstract: Delta compression techniques are commonly used to succinctly represent an updated version of a file with respect to an earlier one. In this paper, we study the use of delta compression in a somewhat different scenario, where we wish to compress a large collection of (more or less) related files by performing a sequence of pairwise delta compressions. The problem of finding an optimal delta encoding for a collection of files by taking pairwise deltas can be reduced to the problem of computing a branching of maximum weight in a weighted directed graph, but this solution is inefficient and thus does not scale to larger file collections. This motivates us to propose a framework for cluster-based delta compression that uses text clustering techniques to prune the graph of possible pairwise delta encodings. To demonstrate the efficacy of our approach, we present experimental results on collections of web pages. Our experiments show that cluster-based delta compression of collections provides significant improvements in compression ratio as compared to individually compressing each file or using tar+gzip, at a moderate cost in efficiency. A shorter version of this paper appears in the Proceedings of the 3rd International Conference on Web Information Systems Engineering (WISE), December 2002.
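    This is not the paper's branching algorithm, but the pairwise-delta primitive it builds on can be sketched in Python with zlib's preset-dictionary feature: compressing a file with a related file preloaded as the dictionary turns shared content into cheap back-references. The HTML snippets are illustrative stand-ins.

```python
import zlib

def delta_compress(target: bytes, reference: bytes) -> bytes:
    """Deflate `target` with `reference` preloaded as the dictionary,
    so content shared with the reference becomes cheap back-references."""
    comp = zlib.compressobj(level=9, zdict=reference)
    return comp.compress(target) + comp.flush()

def delta_decompress(blob: bytes, reference: bytes) -> bytes:
    deco = zlib.decompressobj(zdict=reference)
    return deco.decompress(blob) + deco.flush()

v1 = b"<html><body>a page about delta compression</body></html>"
v2 = b"<html><body>a page about cluster-based delta compression</body></html>"
blob = delta_compress(v2, v1)
assert delta_decompress(blob, v1) == v2
print(len(v2), "->", len(blob), "bytes, encoded against reference v1")
```

    Note that deflate's 32 KB window limits how much of the reference can act as a dictionary, which is one reason dedicated delta encoders exist.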
  • Arithmetic Coding
    Arithmetic coding is the most efficient method to code symbols according to the probability of their occurrence. The average code length corresponds exactly to the possible minimum given by information theory; deviations caused by the bit-resolution of binary code trees do not exist. In contrast to a binary Huffman code tree, arithmetic coding offers a clearly better compression rate, but its implementation is more complex. In arithmetic coding, a message is encoded as a real number in an interval between zero and one. Arithmetic coding typically has a better compression ratio than Huffman coding, as it produces a single codeword for the whole message rather than several separate codewords. It differs from other forms of entropy encoding such as Huffman coding in that, rather than separating the input into component symbols and replacing each with a code, arithmetic coding encodes the entire message into a single number, a fraction n where 0.0 ≤ n < 1.0. Arithmetic coding is a lossless coding technique. There are a few disadvantages of arithmetic coding. One is that the whole codeword must be received to start decoding the symbols, and if there is a corrupt bit in the codeword, the entire message could become corrupt. Another is that there is a limit to the precision of the number which can be encoded, thus limiting the number of symbols that can be encoded within a codeword. There also exist many patents on arithmetic coding, so the use of some of the algorithms may require royalty fees. Arithmetic coding is part of the JPEG data format.
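    The "single fraction in [0, 1)" idea can be made concrete with a toy encoder — a sketch of the textbook interval-narrowing scheme, not production code. Exact fractions sidestep the precision limit the excerpt mentions; decoding walks the same intervals, so the decoder also needs the message length or a terminator symbol.

```python
from fractions import Fraction

def arith_encode(msg, probs):
    """Encode msg as an interval inside [0, 1); any number in the
    final interval identifies the whole message."""
    # cumulative sub-interval [lo, hi) for each symbol
    cum, acc = {}, Fraction(0)
    for sym, p in probs.items():
        cum[sym] = (acc, acc + p)
        acc += p
    low, high = Fraction(0), Fraction(1)
    for sym in msg:                  # narrow the interval once per symbol
        lo, hi = cum[sym]
        span = high - low
        low, high = low + span * lo, low + span * hi
    return low, high

probs = {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 4)}
low, high = arith_encode("abca", probs)
print(low, high)   # 11/32 23/64 -- any n with 11/32 <= n < 23/64 encodes "abca"
```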
  • The Basic Principles of Data Compression
    Author: Conrad Chung, 2BrightSparks. Introduction: Internet users who download or upload files from/to the web, or use email to send or receive attachments, will most likely have encountered files in compressed format. In this topic we will cover how compression works, the advantages and disadvantages of compression, as well as types of compression. What is compression? Compression is the process of encoding data more efficiently to achieve a reduction in file size. One type of compression available is referred to as lossless compression: the compressed file will be restored exactly to its original state with no loss of data during the decompression process. This is essential, as the file would otherwise be corrupted and unusable should data be lost. Another compression category, which will not be covered in this article, is “lossy” compression, often used in multimedia files for music and images, where data is discarded. Lossless compression algorithms use statistical modeling techniques to reduce repetitive information in a file. The methods include removing spacing characters, representing a string of repeated characters with a single character and a count (run-length encoding; a sketch follows), and replacing recurring characters with smaller bit sequences. Advantages/disadvantages of compression: compression of files offers many advantages. When compressed, the quantity of bits used to store the information is reduced. Files that are smaller in size will result in shorter transmission times when they are transferred on the Internet. Compressed files also take up less storage space. File compression can zip up several small files into a single file for more convenient email transmission.
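    The "string of repeated characters" method mentioned above is run-length encoding. A minimal Python sketch (mine, for illustration):

```python
def rle_encode(s: str) -> list[tuple[str, int]]:
    """Replace runs of a repeated character with (char, count) pairs."""
    out: list[tuple[str, int]] = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)   # extend the current run
        else:
            out.append((ch, 1))              # start a new run
    return out

def rle_decode(pairs) -> str:
    return "".join(ch * n for ch, n in pairs)

assert rle_decode(rle_encode("aaabccccd")) == "aaabccccd"
print(rle_encode("aaabccccd"))  # [('a', 3), ('b', 1), ('c', 4), ('d', 1)]
```

    RLE only pays off when runs are common (e.g., simple images); on ordinary text it can even expand the data, which is why the other listed methods exist.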
  • dsPIC DSC Speex Speech Encoding/Decoding Library
    dsPIC® DSC Speex Speech Encoding/Decoding Library User's Guide, © 2008-2011 Microchip Technology Inc., DS70328C. Note the following details of the code protection feature on Microchip devices: Microchip products meet the specification contained in their particular Microchip Data Sheet. Microchip believes that its family of products is one of the most secure families of its kind on the market today, when used in the intended manner and under normal conditions. There are dishonest and possibly illegal methods used to breach the code protection feature; all of these methods, to our knowledge, require using the Microchip products in a manner outside the operating specifications contained in Microchip's Data Sheets. Most likely, the person doing so is engaged in theft of intellectual property. Microchip is willing to work with the customer who is concerned about the integrity of their code. Neither Microchip nor any other semiconductor manufacturer can guarantee the security of their code; code protection does not mean that we are guaranteeing the product as “unbreakable.” Code protection is constantly evolving, and we at Microchip are committed to continuously improving the code protection features of our products. Attempts to break Microchip's code protection feature may be a violation of the Digital Millennium Copyright Act. If such acts allow unauthorized access to your software or other copyrighted work, you may have a right to sue for relief under that Act. Information contained in this publication regarding device applications and the like is provided only for your convenience and may be superseded by updates. Trademarks: the Microchip name and logo, the Microchip logo, and dsPIC.
  • Hardware Based Compression in Ceph OSD with BTRFS
    Weigang Li ([email protected]), Tushar Gohad ([email protected]), Data Center Group, Intel Corporation; 2016 Storage Developer Conference, © Intel Corp. All Rights Reserved. Credits: this work wouldn't have been possible without contributions from Reddy Chagam, Brian Will, Praveen Mosur, and Edward Pullin. Agenda: Ceph — a quick primer; storage efficiency and security features; offload mechanisms, software and hardware; compression in Ceph OSD with BTRFS; compression in BTRFS and Ceph; hardware acceleration with QAT; PoC implementation; performance results; key takeaways. Ceph is an open-source, object-based scale-out storage system: software-defined and hardware-agnostic, it runs on commodity hardware and provides object, block, and file support in a unified storage cluster. It is highly durable and available (replication, erasure coding) and replicates and re-balances dynamically (image source: http://ceph.com/ceph-storage). Scalability comes from CRUSH data placement with no single point of failure; enterprise features include snapshots, cloning, and mirroring; it is the most popular block storage for OpenStack use cases, with 10 years of hardening and a vibrant community (source: http://www.openstack.org/assets/survey/April-2016-User-Survey-Report.pdf). [Architecture diagram: multiple OSDs with btrfs/xfs/ext4/POSIX and Bluestore/KV backends over disks on commodity servers, plus monitor nodes.]
  • Multimedia Compression Techniques for Streaming
    International Journal of Innovative Technology and Exploring Engineering (IJITEE), ISSN 2278-3075, Volume-8, Issue-12, October 2019. Preethal Rao, Krishna Prakasha K, Vasundhara Acharya. Abstract: With the growing popularity of streaming content, streaming platforms have emerged that offer content in resolutions of 4k, 2k, HD, etc. Some regions of the world face terrible network reception, and delivering content and a pleasant viewing experience to the users of such locations becomes a challenge: audio/video streaming at the available network speeds is just not feasible for people in those locations. The only way is to reduce the data footprint of the concerned audio/video without compromising the quality, and there exist algorithms and techniques that attempt to realize exactly that. Fortunately, the field of compression is an active one when it comes to content delivery. With a lot of algorithms in play, which one actually delivers content while putting less strain on the users' network bandwidth? This paper carries out an extensive analysis of popular present-day algorithms to determine the best algorithm for streaming data. Most audio codecs, like MP3, AAC, etc., are lossy, as audio files are originally small in size and thus do not need heavy compression; in the lossy technique, the file size is reduced as far as possible, so quality may be compromised more than with the lossless technique. Popular codecs like MPEG-2, H.264, H.265, etc., make use of this. FLAC and ALAC are audio codecs which use the lossless technique for compression of large audio files. The goal of this paper is to identify existing techniques in audio-video compression for transmission and carry out a comparative analysis of the techniques based on certain parameters; a side outcome is a program that streams the audio/video file of our choice, while the main outcome is finding out the compression technique that performs best.
  • A Novel Coding Architecture for Multi-Line Lidar Point Clouds
    Xuebin Sun, Sukai Wang (Graduate Student Member, IEEE), and Ming Liu (Senior Member, IEEE), IEEE Transactions on Intelligent Transportation Systems. Abstract: Light detection and ranging (LiDAR) plays an indispensable role in autonomous driving technologies, such as localization, map building, navigation, and object avoidance. However, due to the vast amount of data, transmission and storage could become an important bottleneck. In this article, we propose a novel compression architecture for multi-line LiDAR point cloud sequences based on clustering and convolutional long short-term memory (LSTM) networks. LiDAR point clouds are structured, which provides an opportunity to convert the 3D data to a 2D array, represented as range images; thus, we cast 3D point cloud compression as a range image compression problem. Inspired by the high efficiency video coding (HEVC) algorithm, we design a novel compression architecture. From the introduction: LiDAR is used for preservation of historical relics, 3D sensing for smart cities, as well as autonomous driving. Especially for autonomous driving systems, LiDAR sensors play an indispensable role in a large number of key techniques, such as simultaneous localization and mapping (SLAM) [1], path planning [2], obstacle avoidance [3], and navigation. A point cloud consists of a set of individual 3D points, in accordance with one or more attributes (color, reflectance, surface normal, etc.). For instance, the Velodyne HDL-64E LiDAR sensor generates a point cloud of up to 2.2 billion points per second, with a range of up to 120 m.
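    The 3D-to-2D range image conversion the abstract relies on can be sketched by spherical binning: each point's azimuth picks a column, its elevation picks a row, and the cell stores the range. A minimal NumPy illustration (mine; the 64x1024 grid and the +3°/-25° vertical field of view are assumptions loosely modeled on the HDL-64E, not the paper's parameters):

```python
import numpy as np

def to_range_image(points, h=64, w=1024, fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project an (N, 3) point cloud to an h x w range image:
    azimuth selects the column, elevation the row, the cell stores range."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                            # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(rng, 1e-9))      # elevation
    fu, fd = np.radians(fov_up_deg), np.radians(fov_down_deg)
    col = ((yaw + np.pi) / (2 * np.pi) * w).astype(int).clip(0, w - 1)
    row = ((fu - pitch) / (fu - fd) * h).astype(int).clip(0, h - 1)
    img = np.zeros((h, w), dtype=np.float32)          # 0 = no return
    img[row, col] = rng                               # last point per cell wins
    return img

# e.g. to_range_image(np.random.randn(1000, 3) * 10) -> a (64, 1024) float image
```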
  • GNUPG Open Source Encryption on HP Nonstop
    Damian Ward, NonStop Solutions Architect / BITUG Vice Chairman. Introduction — what am I going to talk about today: about me; about VocaLink; history of encryption; PGP to GNUPG; PGP encryption walkthrough; installing GNUPG; use case; questions (please feel free to ask as we go through the presentation). About your presenter: Damian Ward, 20+ years HP NonStop and payments experience, a career spanning operations, application programming, system management, programme management, technical specialist, solutions architect, enterprise architect, and infrastructure architect. Specialities: HP NonStop systems and architecture, enterprise architecture, encryption, availability management, ATM systems, payments processing, capacity planning, system modelling, fraud, mobile and internet technologies, programming, emerging technologies, and robotics. BITUG Vice Chairman 2011, BITUG Chairman 2012. [Diagram: VocaLink cards-processing landscape — direct connections to the in-house processing system; indirect ATM acquirer and card issuer connections via VocaLink CSB or a third-party processor (FIS Connex Advantage); resilient telecommunication connections to each customer and to mobile operators; connections to overseas schemes and systems, the Post Office (via TNS CSB), and international acquiring/issuing banks via gateway connections to international schemes.]
  • Intel® Quick Sync Video and FFmpeg Installation and Validation Guide
    White paper. Introduction: Intel® Quick Sync Video technology on Intel® Iris™ Pro Graphics and Intel® HD Graphics provides transcode acceleration on Linux* systems in FFmpeg* 2.8 and later editions. This paper is a detailed step-by-step guide to enabling the h264_qsv, mpeg2_qsv, and hevc_qsv hardware-accelerated codecs in the FFmpeg framework. Performance note: the *_qsv implementations are intended to provide easy access to Intel hardware capabilities for FFmpeg users, but are less efficient than custom applications optimized for Intel® Media Server Studio. Document note: monospace type = command line inputs/outputs; pink = highlights to call special attention to important command line I/O details. Getting started:
    1. Install Intel Media Server Studio for Linux. Download from software.intel.com/intel-media-server-studio. This is a prerequisite for the *_qsv codecs, as it provides the foundation for encode acceleration. See the next chapter for more info on edition choices. Note: the Professional edition install is required for hevc_qsv.
    2. Get the latest FFmpeg source from https://www.FFmpeg.org/download.html. Intel Quick Sync Video support is available in FFmpeg 2.8 and later editions. The install steps outlined below were verified with FFmpeg release 3.2.2.
    3. Configure FFmpeg with "--enable-libmfx --enable-nonfree", build, and install. This requires copying include files to /opt/intel/mediasdk/include/mfx and adding a libmfx.pc file. More details below.
    4. Test transcode with an accelerated codec such as "-vcodec h264_qsv" on the FFmpeg command line.