8K TVs Top TV Line-ups for a Reason
Chris Chinnock, Insight Media
May 21, 2019
Analyst Blog, Broadcast, Consumer Electronics, Display Industry

The latest and greatest thing in TVs is a new crop of models with 8K resolution. That's 7680×4320 pixels, or four times as many pixels as a 4K TV and 16 times as many as a 1080p TV. Do these new 8K TVs offer a differentiated experience? Can you see the difference? Is there 8K content, and does it matter? Is it worth the higher price? These are all valid questions to ask if you are in the market for a new TV. In this article, I'll talk about some of the common misconceptions around 8K, discuss what is included in these new 8K TVs, and conclude with the items to look for if you want to consider buying one.

You Can't See the Difference – or Can You?

Much of the criticism of 8K TV has been that you can't see the extra pixels when viewing the TV from typical viewing distances of 8-10 feet. This assessment is based upon a standard measure of visual acuity – i.e. how well we can see based on the Snellen eye chart. The argument is that adjacent pixels in an 8K TV are so close together that we simply can't resolve them. While the science behind this conclusion is solid, human vision is far more complex than this simple acuity metric suggests.

The reality is that you can see the difference between a 4K and an 8K TV image. NHK has done research comparing displayed images at various CPDs (cycles per degree, or line pairs per degree) to real objects. The goal was to find the CPD at which viewers judge the displayed image to look like the real object. Their research suggests that at around 150 CPD, displayed images look like the real object. This clearly suggests that there is more to vision than simple acuity or the Snellen eye chart test, where 20/20 vision corresponds to 30 CPD.

If simple visual acuity does not fully describe our ability to "see" resolution, what are the mechanisms? There appear to be two other factors at play: Vernier acuity and the brain.

Vernier acuity, or hyperacuity, refers to the ability to discern slight misalignments between lines – an ability that should not be possible according to simple acuity descriptions of human vision. Hyperacuity means we can perceive fine details even at fairly long viewing distances. A classic way to demonstrate this is to show two line pairs. One pair has two black lines on a light background, perfectly parallel. In the second pair, one line is misaligned by just a single pixel, and many people can see this – even at some distance away. Because modern displays are pixelated, a line that does not align with the pixel grid will show stair-stepping. While our normal visual acuity may not resolve this, our Vernier acuity can. As a result, if 4K and 8K images are displayed on 4K and 8K displays of the same size and viewed at the same distance – all other factors being equal – the 8K image will look sharper or crisper than the 4K image, even when viewed at 8-10 feet. That's because the pixel spacing on the 8K TV is half that of the 4K TV, so there will be less stair-stepping – i.e. the 8K display will render a smoother line than the 4K display in this example.

It would also appear that the brain does processing on the data sent from the eyes. In the above example, the reduced stair-stepping is reinforced in the brain to create a more analog-like image and hence an increased sense of the "realness" of the image.
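To put rough numbers on these acuity thresholds, here is a minimal sketch that computes the angular resolution, in cycles (line pairs) per degree, of a 65-inch 16:9 panel viewed from 9 feet for 1080p, 4K and 8K pixel grids. The 65-inch size, the 9-foot distance and the small-angle approximation are illustrative assumptions on my part; the 30 CPD figure for 20/20 vision and the roughly 150 CPD "looks real" threshold come from the article.

```python
import math

def cycles_per_degree(diag_in, h_pixels, distance_in, aspect=(16, 9)):
    """Angular resolution of a display in cycles (line pairs) per degree,
    as seen from a given viewing distance. Uses the small-angle approximation."""
    aw, ah = aspect
    width_in = diag_in * aw / math.hypot(aw, ah)      # physical screen width
    pixel_pitch = width_in / h_pixels                 # inches per pixel
    deg_per_pixel = math.degrees(pixel_pitch / distance_in)
    pixels_per_degree = 1.0 / deg_per_pixel
    return pixels_per_degree / 2.0                    # one cycle = two pixels

distance = 9 * 12  # 9 feet in inches, the viewing distance cited in the article
for name, h_px in [("1080p", 1920), ("4K", 3840), ("8K", 7680)]:
    cpd = cycles_per_degree(65, h_px, distance)
    print(f"65-inch {name}: {cpd:6.1f} CPD  (20/20 acuity ~30 CPD, 'looks real' ~150 CPD)")
```

Under these assumptions, a 65-inch 1080p screen sits right at the classic 30 CPD limit of 20/20 acuity, which is where the "you can't see it" argument comes from, while 8K roughly quadruples that figure and approaches the roughly 150 CPD range that NHK associates with images that read as real.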
Given a high-resolution input to the eyes, the brain does a good job of filling in any missing details (and must work less hard than it does with lower-resolution inputs). This processing also creates an increased sense of depth. In other words, simple visual acuity does not tell the whole story of why 8K images look better than 4K even at longer distances. Higher-order processes come into play that increase the sense of depth and realness of such images.

Two recent studies support this. In one study, by Dr. YungKyong Park of Ewha Womans University in Seoul, 4K and 8K 65" TVs were set up side by side and calibrated to 500 nits of peak luminance. Observers were pre-tested to confirm their simple acuity was 20/20 and that they had normal color vision. All 120 observers sat 9 feet from the displays in a dark room – typical nighttime TV viewing conditions. All participants were shown the same 16 images and 3 videos representing a diverse range of visuals. In the results, 8K display performance was rated 35% higher overall, with perceived image quality increasing by 30% and depth perception by 60% from 4K to 8K. What is most fascinating is that rather than pointing out the increased sharpness or contrast associated with higher resolution, participants highlighted differences related to sensory perception – i.e. objects look cooler, warmer, more delicious, heavier. The researchers concluded that this hyperrealism effect connects the perceptual aspects of the image (contrast, color expression and resolution) with the cognitive aspects perceived by the brain (weight, temperature, reality, space, depth and high image quality). It is very interesting that increased resolution has a stronger emotional impact.

A separate study, by Dr. Kyoung-Min Lee of Seoul National University, looked at the effects of super-resolution (8K) displays from the point of view of the brain. His main conclusions were:

• Super-resolution reduces information loss, creating a more realistic image
• Super-resolution displays increase the dynamic signal-to-noise ratio, reducing cognitive loading and increasing the immersive effect

Having more pixels reduces jaggies in lines and moiré effects, leading to naturally sharper edges. These sharper edges make it easier to separate objects, allowing for an increased sense of depth. This effect is evident on native 8K content and on upscaled/restored content as well.

More pixels are also very beneficial for creating more realistic colors. Subtle hue changes can result in visible banding even when there is sufficient bit depth. Having four pixels to render a hue transition instead of one leads to a smoother and more lifelike image. This is illustrated in the graphic below: the left side shows slight changes in hue, while the right side shows the 8-bit RGB values for each color. The top image is a 6×6 matrix of pixels representing the 8K case. A slight change in green hue runs left to right, with a slight change in red hue from top to bottom. The bottom case is a 4K display, where the average of 4 pixels is represented in a 3×3 matrix of pixels. This shows that there can be more banding on a 4K display than on an 8K display. A smart image restoration algorithm can try to reproduce the more subtle hue changes of the 8K example from a 4K input source. The two images below are screenshots from a Samsung 8K TV with and without the image restoration algorithms applied, showing the reduction in banding artifacts for subtle hue changes.
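As a toy version of the 6×6 versus 3×3 example described above, the sketch below builds a small patch whose green channel steps by one 8-bit code value per pixel left to right and whose red channel steps by one code value top to bottom (standing in for the 8K case), then averages each 2×2 block down to a 3×3 patch (the 4K case). The specific RGB values are illustrative assumptions, not the values in the article's graphic.

```python
import numpy as np

# "8K" case: a 6x6 patch with a subtle horizontal green ramp and a subtle
# vertical red ramp, one 8-bit code value per pixel; blue held constant.
rows = np.arange(6)
cols = np.arange(6)
patch_8k = np.zeros((6, 6, 3), dtype=np.uint8)
patch_8k[..., 0] = 120 + rows[:, None]   # red: +1 code value per row
patch_8k[..., 1] = 180 + cols[None, :]   # green: +1 code value per column
patch_8k[..., 2] = 90

# "4K" case: each 4K pixel covers the area of four 8K pixels, so average
# every 2x2 block down to a 3x3 patch and round back to 8-bit values.
patch_4k = patch_8k.reshape(3, 2, 3, 2, 3).mean(axis=(1, 3)).round().astype(np.uint8)

print("green values across the top row, 8K patch:", patch_8k[0, :, 1])
print("green values across the top row, 4K patch:", patch_4k[0, :, 1])
```

The 8K patch walks through six green values one code value apart, while the averaged 4K patch covers the same range in three steps that are two code values apart, the kind of coarser staircase that reads as banding on a subtle gradient.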
Glints, or specular reflections, also add realism to an image, but they are often tiny parts of the image. Being able to define a high-luminance glint in finer detail on an 8K versus a 4K display allows such subtle components to be more accurately reproduced – increasing realism. All of these benefits are resolution dependent and apply even when comparing 4K and 8K high dynamic range (HDR) images. Yes, some of these benefits are subtle, but the brain is remarkable and can process such subtle improvements to create a more realistic and immersive image with more emotional impact.

What About 8K Content?

It is true that there is limited native 8K content today, but the same was true of native 4K content 5 years ago. Today, there is a decent amount of native 4K HDR content available, and I believe 8K content will come along at a similar pace in the coming years. Japan is already broadcasting 8K content on an 8K satellite channel and is gearing up to broadcast the full 2020 Summer Olympics from Tokyo in 8K. With this precedent set, it will be hard for major sporting events not to be captured in 8K. China is expected to be a major market for 8K TVs, so content creation is expected to heat up there soon, and in Korea as well. In Europe, Rakuten in Spain has announced the first 8K streaming service, and SES Astra may be offering an 8K satellite service soon as well.

Streaming service providers led the adoption of 4K, and I expect them to lead the adoption of 8K. None have made public announcements yet, but with the introduction of improved compression technologies in the next 2 years allowing 8K streaming at acceptable data rates, don't be surprised to see these companies vying to be the leader in 8K streaming (the sketch at the end of this section gives a rough sense of the data rates involved). And Sony has announced that the next PlayStation platform, the PS5, will be 8K capable; it is also expected to arrive in 2020.

As you can see, all the pieces are coming together to drive creation of 8K content in 2020 and beyond. But the reality is that almost all content for the next 2 years is going to be at 4K and 2K resolution, so isn't that a problem? In short, no.
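To see why those improved compression technologies matter, here is a back-of-the-envelope sketch of the raw data rate of an 8K/60 signal and the compression ratio needed to squeeze it into a streaming-friendly bitrate. The 10-bit 4:2:0 signal format and the 80 Mbps delivery target are illustrative assumptions, not figures from the article.

```python
# Rough data-rate arithmetic for 8K streaming.
width, height, fps = 7680, 4320, 60
bit_depth = 10                      # bits per sample (HDR-grade)
bits_per_pixel = bit_depth * 1.5    # 4:2:0 chroma subsampling: Y + 1/4 Cb + 1/4 Cr

raw_bps = width * height * fps * bits_per_pixel
target_bps = 80e6                   # assumed delivery bitrate for an 8K stream

print(f"uncompressed 8K/60 10-bit 4:2:0: {raw_bps / 1e9:.1f} Gbps")
print(f"compression needed for {target_bps / 1e6:.0f} Mbps delivery: {raw_bps / target_bps:.0f}:1")
```

Even with chroma subsampling, the uncompressed signal runs to roughly 30 Gbps, so delivering it at tens of megabits per second requires a compression ratio in the hundreds, which is why next-generation codecs are central to 8K streaming.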