Understanding Video Quality: Many Organizations Are Addressing the Requirements for Video Solutions


WHITE PAPER
Video Solutions: Understanding Video Quality

Many organizations are addressing the requirements for video solutions. While it is important to construct video application support systems quickly to address immediate needs, network operators need to select the right system the first time, to minimize the total life cycle cost of these systems.

Abstract

Operators need to clearly understand the requirements of their organization. This includes an understanding of the video image quality. While there are many other issues that will significantly impact network design, this paper provides information on factors that address image quality. The factors discussed include:

• Resolution
• Frames Per Second (FPS)
• Video Codec
• Packets Per Second (PPS)
• Bit Rate

An understanding of the effect of these factors on bandwidth requirements will enable network operators to correctly design communications infrastructure networks that will support video applications.

Intricacies of Video Solution Image Quality

Operators need to select video performance that is appropriate to the needs of the application. Different performance levels will require different cameras to collect images. In general, higher performance levels will require a higher bandwidth infrastructure to transport data from the camera to the video command center. Network operators need to clearly understand the camera performance required in order to design the communications infrastructure appropriately. If there is not sufficient capacity at any point in the communications infrastructure, video images may be delayed or lost, defeating the purpose of a video solution.

Resolution

Resolution is the number of “pixels” (picture elements) contained within each frame of the video. For example, the National Television System Committee (NTSC) standard specifies 525 analog scan lines of vertical resolution, of which 486 are typically visible. Horizontal resolution is variable, depending on the recording or display medium. NTSC defines an aspect ratio of 4:3, which provides a theoretical maximum resolution of 720 (horizontal) by 486 (vertical) when specified in terms of non-square (10:11) pixels. In square pixels this would be 648x486, or more commonly 640x480.

The resolution issue is complicated by additional common resolution standards for video encoding. Common Intermediate Format (CIF) is 352x288 @ 30 FPS. It represents a compromise between the NTSC frame rate (30 FPS) and PAL resolution. In practice, producing 352x288 from an NTSC camera source is difficult. As such, North American surveillance vendors have created a “modified” CIF format of 352x240 (“NTSC CIF”), which is easily derived from an NTSC video source by removing every other vertical scan line. Alternatives are Quarter CIF (QCIF) at 176x144 (or 176x120), 4CIF at 704x576 (or 704x480), and 16CIF at 1408x1152 (or 1408x960).

The interaction between encoded resolution and display resolution of the decoded video can be dramatic. By understanding the device which will be used to view the decoded video, the network operator can know the requirement for image collection and processing. The resolution of the display device has an impact on overall perceived video quality, since the encoded video will need to be scaled accordingly.

Frames Per Second (FPS)

Frames Per Second (FPS) is the number of “snapshots” of the video scene in one second. Recall that modern films are 24 FPS, NTSC is 30 FPS, and PAL is 25 FPS. Video surveillance cameras can be configured for a range of FPS. In many cases, 10 FPS is sufficient.
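To make the relationship between resolution, frame rate, and bandwidth concrete before turning to codecs, the sketch below computes the uncompressed bit rate implied by the NTSC-derived CIF-family frame sizes and frame rates discussed above. This is an illustrative sketch only: the 8-bit YUV 4:2:0 sampling assumption (12 bits per pixel) and the particular frame sizes chosen are assumptions of this example, not figures from the white paper.

```python
# Illustrative sketch: uncompressed bit rate for the NTSC-derived CIF-family
# frame sizes mentioned above, assuming 8-bit YUV 4:2:0 sampling (12 bits per
# pixel). The results show why a codec is needed before video touches the network.

RESOLUTIONS = {
    "QCIF (NTSC)": (176, 120),
    "CIF (NTSC)": (352, 240),
    "4CIF (NTSC)": (704, 480),
}

BITS_PER_PIXEL = 12  # assumed: YUV 4:2:0 with 8-bit samples

def raw_bitrate_kbps(width: int, height: int, fps: float) -> float:
    """Uncompressed video bit rate in kbps for a given frame size and frame rate."""
    return width * height * BITS_PER_PIXEL * fps / 1000.0

if __name__ == "__main__":
    for name, (w, h) in RESOLUTIONS.items():
        for fps in (10, 30):
            print(f"{name} @ {fps} FPS: {raw_bitrate_kbps(w, h, fps):,.0f} kbps uncompressed")
```

Even QCIF at 10 FPS works out to roughly 2,500 kbps uncompressed, which is why the encoded rates discussed in the sections that follow are one to two orders of magnitude lower.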
Video Codecs

One might think that stipulating the encoder's resolution and Frames Per Second (FPS) would exactly determine the bandwidth that is needed. However, even when stipulating both the resolution and FPS, the IP camera may be configured for a particular bandwidth within some bandwidth range. There are tradeoffs between configurations. In general, a higher bandwidth allocation for a given FPS and resolution will usually lead to better video quality.

An encoder device collects and produces a compressed video stream from the camera. The most common codecs used in video surveillance are MJPEG, MPEG4-SP, and H.264 (see Table 1).

Table 1: Common Surveillance Video Codecs

Codec Type                    Codec Characteristics
MJPEG                         • Independently decodable frames
                              • Greatest bandwidth required for a given video quality
MPEG-4 Simple Profile (SP)    • Frames are encoded in a dependent manner
                              • Designed for motion video
                              • Most common in today's surveillance industry
MPEG-4 Part 10, H.264, AVC    • Builds upon MPEG-4 to provide the best quality vs. bandwidth required tradeoff

Data Bit Rate

Packet size and transfer rate (Packets Per Second, PPS) are significant for video applications. The overhead of encapsulating a video stream into a packet stream can be substantial depending on the data requirements and network configuration. In general, encoded video will be encapsulated in the Real-time Transport Protocol (RTP), RTP encapsulated in the User Datagram Protocol (UDP), and UDP encapsulated in Internet Protocol version 4 (IPv4). Additionally, IP is encapsulated into some Data Link Layer protocol, such as Ethernet. The overhead sum of Ethernet/IP/UDP/RTP is 18+20+8+12 = 58 octets per packet, plus the video payload. The bit rate consumption in kbps is typically stated for a codec and its associated parameters (such as FPS and resolution).

Network Bandwidth

It is intuitive that bandwidth will be predominantly flowing in the uplink direction, from the video encoder to the network. The network operator will also need to allocate some bandwidth in the downlink direction, from the network to the camera. This downlink bandwidth is used to control the camera for Pan, Tilt, Zoom (PTZ) and other functions. In many networks, the network operator can designate the up/down ratio of data to the cameras in terms of a percentage. For many video surveillance applications, it is common to have 10% downstream (control signals to the camera) and 90% upstream (video images from the camera).

Putting It All Together

While there are many factors affecting the bandwidth consumption of a video stream, Table 2 depicts typical quality settings and their use of network bandwidth for a variety of applications. In each case, an MPEG4-SP codec is used.

Table 2: Typical Quality Settings and Network Bandwidth Consumption (MPEG4-SP)

Resolution   5 FPS      10 FPS     15 FPS      25 FPS      30 FPS
QCIF         25 kbps    50 kbps    75 kbps     125 kbps    150 kbps
CIF          100 kbps   200 kbps   300 kbps    500 kbps    600 kbps
4CIF         400 kbps   800 kbps   1200 kbps   2000 kbps   2400 kbps
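The 58-octet per-packet overhead and the 10%/90% downlink/uplink split described above can be combined into a quick capacity estimate for a camera deployment. The sketch below is a rough planning aid under stated assumptions: the 1,400-byte video payload per packet and the 16-camera example are illustrative choices, not values from the white paper, while the encoded bit rate is taken from Table 2.

```python
# Rough capacity sketch combining the figures above: 58 octets of
# Ethernet/IPv4/UDP/RTP overhead per packet, an assumed 1,400-byte video
# payload per packet, and the 90% uplink / 10% downlink split noted in the text.

OVERHEAD_BYTES = 18 + 20 + 8 + 12   # Ethernet + IPv4 + UDP + RTP = 58 octets
PAYLOAD_BYTES = 1400                # assumed video payload per packet (illustrative)

def stream_load(video_kbps: float) -> dict:
    """Estimate packet rate and on-the-wire bit rate for one camera stream."""
    payload_bytes_per_sec = video_kbps * 1000 / 8
    pps = payload_bytes_per_sec / PAYLOAD_BYTES
    wire_kbps = (payload_bytes_per_sec + pps * OVERHEAD_BYTES) * 8 / 1000
    return {"pps": pps, "wire_kbps": wire_kbps}

def site_estimate(video_kbps: float, cameras: int) -> dict:
    """Uplink/downlink totals for a site, using the 90%/10% split from the text."""
    per_camera = stream_load(video_kbps)
    uplink_kbps = per_camera["wire_kbps"] * cameras
    downlink_kbps = uplink_kbps / 9   # 10% of total traffic equals 1/9 of the uplink
    return {
        "pps_per_camera": per_camera["pps"],
        "uplink_kbps": uplink_kbps,
        "downlink_kbps": downlink_kbps,
    }

if __name__ == "__main__":
    # Example: 4CIF at 15 FPS (1200 kbps in Table 2) across 16 cameras.
    print(site_estimate(1200, cameras=16))
```

For the example shown (4CIF at 15 FPS across 16 cameras) the estimate is roughly 20 Mbps of uplink and about 2.2 Mbps of downlink, before any allowance for retransmission, management traffic, or headroom.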
Motorola, Inc. 1301 E. Algonquin Road, Schaumburg, Illinois 60196 U.S.A. www.motorola.com/motowi4
MOTOROLA and the stylized M Logo are registered in the U.S. Patent and Trademark Office. All other products or service names are the property of their registered owners. © Motorola, Inc. 2008.
Recommended publications
  • Adaptive Quantization Matrices for High Definition Resolutions in Scalable HEVC
    Original citation: Prangnell, Lee and Sanchez Silva, Victor (2016) Adaptive quantization matrices for HD and UHD display resolutions in scalable HEVC. In: IEEE Data Compression Conference, Utah, United States, 31 Mar - 01 Apr 2016. Permanent WRAP URL: http://wrap.warwick.ac.uk/73957 Copyright and reuse: The Warwick Research Archive Portal (WRAP) makes this work by researchers of the University of Warwick available open access under the following conditions. Copyright © and all moral rights to the version of the paper presented here belong to the individual author(s) and/or other copyright owners. To the extent reasonable and practicable the material made available in WRAP has been checked for eligibility before being made available. Copies of full items can be used for personal research or study, educational, or not-for-profit purposes without prior permission or charge. Provided that the authors, title and full bibliographic details are credited, a hyperlink and/or URL is given for the original metadata page and the content is not changed in any way. Publisher’s statement: © 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. A note on versions: The version presented here may differ from the published version or version of record; if you wish to cite this item, you are advised to consult the publisher’s version.
  • Dell 24 USB-C Monitor - P2421DC User’s Guide
    Dell 24 USB-C Monitor - P2421DC User’s Guide. Monitor Model: P2421DC. Regulatory Model: P2421DCc. NOTE: A NOTE indicates important information that helps you make better use of your computer. CAUTION: A CAUTION indicates potential damage to hardware or loss of data if instructions are not followed. WARNING: A WARNING indicates a potential for property damage, personal injury, or death. Copyright © 2020 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be trademarks of their respective owners. 2020 – 03 Rev. A01. Contents: About your monitor (package contents, product features, identifying parts and controls, monitor specifications, resolution specifications, supported video modes, preset display modes, Multi-Stream Transport (MST) modes, electrical specifications, physical characteristics, environmental characteristics, power management modes, plug and play capability, LCD monitor quality and pixel policy, maintenance guidelines, cleaning your monitor); Setting up the monitor (attaching the stand, connecting the DP cable, DP and USB-C Multi-Stream Transport (MST) connections, connecting the USB Type-C cable, organizing cables, removing the stand, wall mounting); Operating your monitor (power on, USB-C charging options, control buttons, OSD controls, using the On-Screen Display (OSD) menu).
  • MPEG Video in Software: Representation, Transmission, and Playback
    High Speed Networking and Multimedia Computing, IS&T/SPIE Symp. on Elec. Imaging Sci. & Tech., San Jose, CA, February 1994. MPEG Video in Software: Representation, Transmission, and Playback. Lawrence A. Rowe, Ketan D. Patel, Brian C. Smith, and Kim Liu, Computer Science Division - EECS, University of California, Berkeley, CA 94720 ([email protected]). Abstract: A software decoder for MPEG-1 video was integrated into a continuous media playback system that supports synchronized playing of audio and video data stored on a file server. The MPEG-1 video playback system supports forward and backward play at variable speeds and random positioning. Sending and receiving side heuristics are described that adapt to frame drops due to network load and the available decoding capacity of the client workstation. A series of experiments show that the playback system adds a small overhead to the stand-alone software decoder and that playback is smooth when all frames or very few frames can be decoded. Between these extremes, the system behaves reasonably but can still be improved. 1.0 Introduction: As processor speed increases, real-time software decoding of compressed video is possible. We developed a portable software MPEG-1 video decoder that can play small-sized videos (e.g., 160 x 120) in real-time and medium-sized videos within a factor of two of real-time on current workstations [1]. We also developed a system to deliver and play synchronized continuous media streams (e.g., audio, video, images, animation, etc.) on a network [2]. Initially, this system supported 8kHz 8-bit audio and hardware-assisted motion JPEG compressed video streams.
  • Digital Video Quality Handbook (May 2013)
    Digital Video Quality Handbook, May 2013. Executive Summary: Under the direction of the Department of Homeland Security (DHS) Science and Technology Directorate (S&T), First Responders Group (FRG), Office for Interoperability and Compatibility (OIC), the Johns Hopkins University Applied Physics Laboratory (JHU/APL) worked with the Security Industry Association (including Steve Surfaro) and members of the Video Quality in Public Safety (VQiPS) Working Group to develop the May 2013 Video Quality Handbook. This document provides voluntary guidance for providing levels of video quality in public safety applications for network video surveillance. Several video surveillance use cases are presented to help illustrate how to relate video component and system performance to the intended application of video surveillance, while meeting the basic requirements of federal, state, tribal and local government authorities. Characteristics of video surveillance equipment are described in terms of how they may influence the design of video surveillance systems. In order for the video surveillance system to meet the needs of the user, the technology provider must consider the following factors that impact video quality: 1) Device categories; 2) Component and system performance level; 3) Verification of intended use; 4) Component and system performance specification; and 5) Best fit and link to use case(s). An appendix is also provided that presents content related to topics not covered in the original document (especially information related to video standards) and to update the material as needed to reflect innovation and changes in the video environment. The emphasis is on the implications of digital video data being exchanged across networks with large numbers of components or participants.
  • Detectability Model for the Evaluation of Lossy Compression Methods on Radiographic Images Vivek Ramaswami Iowa State University
    Retrospective Theses and Dissertations, Iowa State University Capstones, Theses and Dissertations, 1996. Detectability model for the evaluation of lossy compression methods on radiographic images, Vivek Ramaswami, Iowa State University. Follow this and additional works at: https://lib.dr.iastate.edu/rtd. Part of the Electrical and Electronics Commons. Recommended Citation: Ramaswami, Vivek, "Detectability model for the evaluation of lossy compression methods on radiographic images" (1996). Retrospective Theses and Dissertations. 250. https://lib.dr.iastate.edu/rtd/250. This Thesis is brought to you for free and open access by the Iowa State University Capstones, Theses and Dissertations at Iowa State University Digital Repository. It has been accepted for inclusion in Retrospective Theses and Dissertations by an authorized administrator of Iowa State University Digital Repository. For more information, please contact [email protected]. Detectability model for the evaluation of lossy compression methods on radiographic images, by Vivek Ramaswami. A thesis submitted to the graduate faculty in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE. Major: Electrical Engineering. Major Professors: Satish S. Udpa and Joseph N. Gray. Iowa State University, Ames, Iowa, 1996. Copyright © Vivek Ramaswami, 1996. All rights reserved. Graduate College, Iowa State University: This is to certify that the Master's thesis of Vivek Ramaswami has met the thesis requirements of Iowa State University. Signature redacted for privacy. TABLE OF CONTENTS: Acknowledgements; 1 Introduction; 2 Image Compression Methods; 2.1 Introduction; 2.2 Vector quantization; 2.2.1 Classified vector quantizer; 2.2.2 Coding of shade blocks.
  • (A/V Codecs) REDCODE RAW (.R3D) ARRIRAW
    What is a Codec? Codec is a portmanteau of either "Compressor-Decompressor" or "Coder-Decoder," which describes a device or program capable of performing transformations on a data stream or signal. Codecs encode a stream or signal for transmission, storage or encryption and decode it for viewing or editing. Codecs are often used in videoconferencing and streaming media solutions. A video codec converts analog video signals from a video camera into digital signals for transmission. It then converts the digital signals back to analog for display. An audio codec converts analog audio signals from a microphone into digital signals for transmission. It then converts the digital signals back to analog for playing. The raw encoded form of audio and video data is often called essence, to distinguish it from the metadata information that together make up the information content of the stream and any "wrapper" data that is then added to aid access to or improve the robustness of the stream. Most codecs are lossy, in order to get a reasonably small file size. There are lossless codecs as well, but for most purposes the almost imperceptible increase in quality is not worth the considerable increase in data size. The main exception is if the data will undergo more processing in the future, in which case the repeated lossy encoding would damage the eventual quality too much. Many multimedia data streams need to contain both audio and video data, and often some form of metadata that permits synchronization of the audio and video. Each of these three streams may be handled by different programs, processes, or hardware; but for the multimedia data stream to be useful in stored or transmitted form, they must be encapsulated together in a container format.
  • EN User Manual (Customer Care and Warranty, Troubleshooting & FAQs, Table of Contents)
    499P9 www.philips.com/welcome EN User manual, Customer care and warranty, Troubleshooting & FAQs. Table of Contents: 1. Important (safety precautions and maintenance, notational descriptions, disposal of product and packing material); 2. Setting up the monitor (installation, operating the monitor, built-in Windows Hello™ pop-up webcam, MultiClient Integrated KVM, MultiView, removing the base assembly for VESA mounting); 3. Image Optimization (SmartImage, SmartContrast, Adaptive Sync); 4. HDR; 5. Technical Specifications (resolution & preset modes); 6. Power Management; 7. Customer care and warranty (Philips’ Flat Panel Displays Pixel Defect Policy, Customer Care & Warranty); 8. Troubleshooting & FAQs (troubleshooting, general FAQs, Multiview FAQs).
  • HERO6 Black Manual
    USER MANUAL. JOIN THE GOPRO MOVEMENT: facebook.com/GoPro, youtube.com/GoPro, twitter.com/GoPro, instagram.com/GoPro. TABLE OF CONTENTS: Your HERO6 Black; Getting Started; Navigating Your GoPro; Map of Modes and Settings; Capturing Video and Photos; Settings for Your Activities; QuikCapture; Controlling Your GoPro with Your Voice; Playing Back Your Content; Using Your Camera with an HDTV; Connecting to Other Devices; Offloading Your Content; Video Mode (Capture Modes, Settings, Advanced Settings); Photo Mode (Capture Modes, Settings, Advanced Settings); Time Lapse Mode (Capture Modes, Settings, Advanced Settings); Advanced Controls; Connecting to an Audio Accessory; Customizing Your GoPro; Important Messages; Resetting Your Camera; Mounting; Removing the Side Door; Maintenance; Battery Information; Troubleshooting; Customer Support; Trademarks; HEVC Advance Notice; Regulatory Information. YOUR HERO6 BLACK: 1. Shutter Button; 2. Camera Status Light; 3. Camera Status Screen; 4. Microphone; 5. Side Door; 6. Latch Release Button; 7. USB-C Port; 8. Micro HDMI Port (cable not included); 9. Touch Display; 10. Speaker; 11. Mode Button; 12. Battery; 13. microSD Card Slot; 14. Battery Door. For information about mounting items that are included in the box, see Mounting (page 87).
  • The Evolution of Premium Vascular Ultrasound
    Ultrasound EPIQ 5: The evolution of premium vascular ultrasound. Philips EPIQ 5 ultrasound system. The new challenges in global healthcare: Unprecedented advances in premium ultrasound performance can help address the strains on overburdened hospitals and healthcare systems, which are continually being challenged to provide a higher quality of care cost-effectively. The goal is quick and accurate diagnosis the first time and in less time. Premium ultrasound users today demand improved clinical information from each scan, faster and more consistent exams that are easier to perform, and allow for a high level of confidence, even for technically difficult patients. Performance: More confidence in your diagnoses, even for your most difficult cases. EPIQ 5 is the new direction for premium vascular ultrasound, featuring an exceptional level of clinical performance to meet the challenges of today's most demanding practices. Our most powerful architecture ever applied to vascular ultrasound: EPIQ performance touches all aspects of acoustic acquisition and processing, allowing you to truly experience the evolution to a more definitive modality. (Images: carotid artery bulb; superficial varicose veins.) The evolution in premium vascular ultrasound: Supported by our family of proprietary PureWave transducers and our leading-edge Anatomical Intelligence, this platform offers our highest level of premium performance. Key trends in global ultrasound: • The need for more definitive premium ultrasound with exceptional image… • A demand to automate most operator functions
  • Impact of Adaptation Dimensions on Video Quality
    Impact of Adaptation Dimensions on Video Quality. Jens Brandt and Lars Wolf, Institute of Operating Systems and Computer Networks (IBR), Technische Universität Braunschweig, Germany. [email protected]
    Abstract—The number and types of mobile devices which are capable of presenting digital video streams is increasing constantly. In most cases the devices are trade-offs between powerful all-purpose computers and small mobile devices which are ubiquitously available and range from cellular phones to notebooks. This great heterogeneity of mobile devices makes video streaming to such devices a challenging task for content providers. Each single device has its own capabilities and individual requirements, which need to be considered when sending a video stream to it. Thus, to support a great range of different devices, the video streams need to be adapted to the requirements of each device. To get an idea of how different adaptation methods may affect the experience of users watching a streamed video on a mobile device, we inspect the influence of three major adaptation…
    …well as the network conditions, may vary more often over time compared to static devices. Thus, the experience of watching video streams on mobile devices differs to a great extent from those scenarios related to static displays usually found in the area of home entertainment. For sequences from soccer games McCarthy et al. observed that potential users preferred lower frame rates but higher detail resolution on mobile devices [2]. For other genres similar findings were presented for scalable video coding by Eichhorn and Ni in [3]. Besides concentration solely on the temporal resolution of a video stream, in our work we additionally investigated the effect of adapting the spatial and the detail resolution as well.
  • On the Accuracy of Video Quality Measurement Techniques
    On the accuracy of video quality measurement techniques. Deepthi Nandakumar (Amazon Video, Bangalore, India), Yongjun Wu (Amazon Video, Seattle, USA), Hai Wei (Amazon Video, Seattle, USA), Avisar Ten-Ami (Amazon Video, Seattle, USA); [email protected], [email protected], [email protected], [email protected]
    Abstract—With the massive growth of Internet video streaming, it is critical to accurately measure video quality subjectively and objectively, especially HD and UHD video which is bandwidth intensive. We summarize the creation of a database of 200 clips, with 20 unique sources tested across a variety of devices. By classifying the test videos into 2 distinct quality regions, SD and HD, we show that the high correlation claimed by objective video quality metrics is led mostly by videos in the SD quality region. We perform detailed correlation analysis and statistical hypothesis testing of the HD subjective quality scores, and establish that the commonly used ACR methodology of subjective testing is unable to capture significant quality differences, leading to poor measurement accuracy for both subjective and objective…
    …and therefore measuring and optimizing for video quality in these high quality ranges is an imperative for streaming service providers. In this study, we lay specific emphasis on the measurement accuracy of subjective and objective video quality scores in this high quality range. Globally, a healthy mix of devices with different screen sizes and form factors is projected to contribute to IP traffic in 2021 [1], ranging from smartphones (44%), to tablets (6%), PCs (19%) and TVs (24%). It is therefore necessary for streaming service providers to quantify the viewing experience based on the device, and possibly optimize the encoding and delivery process accordingly.
  • A Deblocking Filter Hardware Architecture for the High Efficiency Video Coding Standard
    A Deblocking Filter Hardware Architecture for the High Efficiency Video Coding Standard. Cláudio Machado Diniz1, Muhammad Shafique2, Felipe Vogel Dalcin1, Sergio Bampi1, Jörg Henkel2. 1Informatics Institute, PPGC, Federal University of Rio Grande do Sul (UFRGS), Porto Alegre, Brazil; 2Chair for Embedded Systems (CES), Karlsruhe Institute of Technology (KIT), Germany. {cmdiniz, fvdalcin, bampi}@inf.ufrgs.br; {muhammad.shafique, henkel}@kit.edu
    Abstract—The new deblocking filter (DF) tool of the next generation High Efficiency Video Coding (HEVC) standard is one of the most time consuming algorithms in video decoding. In order to achieve real-time performance at low-power consumption, we developed a hardware accelerator for this filter. This paper proposes a high throughput hardware architecture for the HEVC deblocking filter employing hardware reuse to accelerate filtering decision units with a low area cost. Our architecture achieves either higher or equivalent throughput (4096x2048 @ 60 fps) with 5X-6X lower area compared to state-of-the-art deblocking filter architectures. Keywords—HEVC coding; Deblocking Filter; Hardware
    …encoder configuration: (i) Random Access (RA) configuration with Group of Pictures (GOP) equal to 8; (ii) Intra period for each video sequence is defined as in [8] depending upon the specific frame rate of the video sequence, e.g. 24, 30, 50 or 60 frames per second (fps); (iii) each sequence is encoded with four different Quantization Parameter (QP) values, QP={22,27,32,37}, as defined in the HEVC Common Test Conditions [8]. Fig. 1 shows the accumulated execution time (in % of total decoding time) of all functions included in the C++ class TComLoopFilter that implement the DF in the HEVC decoder software. The DF contributes up to 5%-18% of the total decoding time, depending on video sequence and QP.