Video Quality in the Digital OR
Barco | Whitepaper
http://www.barco.com/healthcare

Table of contents
Introduction
   Enabling bright outcomes
   Importance of color quality during surgery
   Video integration in the Digital OR
Resolution
   Resolution Size
   Scaling and aspect ratio
   Deinterlacing
Framerate
   Introduction
   Framerate at high resolution
   Effect on latency
   Effect on eye fatigue
Color encoding schemes
   RGB
   YUV
   Chroma subsampling
Color gamut
   Definition
Viewing angle
Color calibration
   White balance
   Color calibration on surgical displays
Color depth
   Dithering
Compression
Latency
Conclusion
About the author
References
Definitions

Introduction

Enabling bright outcomes
The human eye is a fantastically complicated instrument that performs its own image processing before the signal is relayed to the brain. The eye is very sensitive to color, motion, light, and similar cues. The scope of this document is to explain how to deliver the best possible image quality to the surgeon's eye, so that the best possible outcomes can be reached for the patient.

Importance of color quality during surgery
More and more surgery is performed with minimally invasive procedures, which limit the size of the incisions needed and reduce wound-healing time, associated pain, and the risk of infection. Minimally invasive procedures have been enabled by advances in various medical technologies and instrumentation, including endoscopic cameras and surgical displays. Some procedures, such as laparoscopy and arthroscopy, are only performed this way. As explained in the whitepaper “An Introduction to Color for Medical Imaging”¹, surgery and examinations that use endoscopes need an exact representation of colors. The endoscope combined with the display can be considered an extension of the doctor's eyes. The color of a wound, for example, indicates whether it is healing.
Video integration in the Digital OR
In the surgical industry, multiple manufacturers produce different types of video sources and surgical displays. At first glance, the spec sheets of these sources and displays may look similar, but the quality specifications of these products can differ considerably. It is important to understand these specifications when choosing the best possible product for your OR. Today a modern OR is often equipped with a medical video integration system that connects all components in the OR, including the video sources and displays. Again, multiple manufacturers produce different types of integrated solutions which may look similar at first glance, but the quality specifications reveal important differences. Does it make sense to purchase a 4K endoscopic camera and connect an HD display to it? Is it a good choice to purchase best-in-class sources and displays but compromise on quality in the video distribution system? Is it an issue to have multiple displays in the OR, each showing the same source with different color settings? This document describes the various video quality specifications to consider when choosing the best product for your needs.

Resolution

Resolution Size
Over the years, many different video resolutions have been generated by different medical instrumentation. While the aspect ratio may vary, the general trend is that newer applications use higher resolutions. The table below shows some commonly used resolutions and their aspect ratios.

Name        Resolution     Aspect ratio
VGA         640 x 480      4:3
HD          1280 x 720     16:9
Full HD     1920 x 1080    16:9
UHD         3840 x 2160    16:9
True 4K     4096 x 2160    256:135 (approx. 1.9:1)

Many applications used in the OR have a long lifetime. Therefore, many devices with lower resolutions and legacy video interfaces (e.g. S-Video, Composite and VGA) are still in use today. The content sent out by these devices is nevertheless very important for many procedures. The video integration system in the Digital OR must therefore be capable of supporting these legacy video interfaces and resolutions alongside newer technology.
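The aspect ratio of any resolution can be derived by reducing the width and height by their greatest common divisor. A minimal Python sketch (the function name is my own, chosen for illustration):

```python
from math import gcd

def aspect_ratio(width: int, height: int) -> str:
    """Reduce a pixel resolution to its simplest width:height ratio."""
    g = gcd(width, height)
    return f"{width // g}:{height // g}"

# A few resolutions commonly found in the OR:
for w, h in [(640, 480), (1280, 720), (1920, 1080), (3840, 2160), (4096, 2160)]:
    print(f"{w} x {h} -> {aspect_ratio(w, h)}")
# 640 x 480 -> 4:3; 1920 x 1080 -> 16:9; 4096 x 2160 -> 256:135
```

Note that True 4K (4096 x 2160) reduces to 256:135, which is slightly wider than the 16:9 of UHD; this difference is exactly why aspect-ratio handling matters when mixing sources and displays.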
The benefits of 4K are described in the whitepaper “4K in the operating room”². 4K surgical displays developed today typically have a native timing of UHD (3840x2160) or True 4K (4096x2160). Since today's ORs contain devices with a variety of interfaces and resolutions, look for an integration solution which supports a wide range of video interfaces and resolutions for all your devices, both old and new. Confirm that this solution does not change the resolution at any point in the video chain, except at the display itself, where the image is scaled to the display's native resolution.

Scaling and aspect ratio
When an image is upscaled from a low-resolution source to a higher-resolution display, the scaling algorithm must interpolate new pixels that match the surrounding pixels as closely as possible. When an image is downscaled from a high-resolution source to a lower-resolution display, the scaling algorithm must calculate average pixel values to output the image correctly. In both cases the quality of the image is reduced, because in most cases all pixels must be recalculated. There are many techniques to make this scaling as good as possible, with errors low enough for the result to appear visually lossless. If a scaled image is scaled a second time, the result gets worse, because the errors of the scaling algorithms accumulate. Therefore an installation should scale the video as little as possible. The worst-case scenario is a system with many scalers and conversion boxes which scale the image multiple times before it is shown on a display. The best-case scenario is a system where the resolution of the source is identical to the resolution of the display. In such a scenario, the image does not need to be scaled and there is no loss in image quality. When scaling is needed, the aspect ratio should always be preserved to ensure an accurate image. The images below show an example of scaling done with an incorrect aspect ratio.
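Aspect-ratio-preserving scaling can be sketched in a few lines of Python. The function below (the name is my own, for illustration) computes the largest scaled size that fits a source inside a display without distortion, plus the black borders that must fill the remainder:

```python
def letterbox_fit(src_w: int, src_h: int, dst_w: int, dst_h: int):
    """Scale (src_w, src_h) to fit inside (dst_w, dst_h) while
    preserving the aspect ratio. Returns the scaled image size and
    the border widths (left/right) and heights (top/bottom)."""
    scale = min(dst_w / src_w, dst_h / src_h)   # limiting dimension wins
    out_w, out_h = round(src_w * scale), round(src_h * scale)
    pad_x = (dst_w - out_w) // 2   # black borders left and right
    pad_y = (dst_h - out_h) // 2   # black borders top and bottom
    return out_w, out_h, pad_x, pad_y

# A 4:3 VGA source on a 16:9 Full HD display gets borders left/right:
print(letterbox_fit(640, 480, 1920, 1080))   # (1440, 1080, 240, 0)
# Matching source and display resolutions need no scaling at all:
print(letterbox_fit(1920, 1080, 1920, 1080)) # (1920, 1080, 0, 0)
```

The second call illustrates the best-case scenario from the text: when source and display resolutions match, the image passes through unscaled.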
Correct aspect ratio | Incorrect aspect ratio (stretched)

When the aspect ratio of the source is not the same as the aspect ratio of the display, black borders must be added at the top and bottom, or at the left and right, of the image to keep the aspect ratio correct. Look for a video integration solution with as few conversion boxes as possible, and with displays that match the resolution of your most critical sources (typically the endoscopic camera). Confirm that the aspect ratio is not changed at any point in the video chain.

Deinterlacing
Video interfaces can use two types of scanning methods to draw the picture: interlaced and progressive. All digital displays are progressive-scan displays. Even if the video interface delivers the video with interlaced scanning, the display converts it to progressive scan before the image is shown. This process is called deinterlacing.

• Progressive scan
In this scanning method, the entire image is drawn at once. No deinterlacing is required.

• Interlaced scan
In this scanning method, only half of the picture is sent in each field. In the first field, the odd lines are transmitted. In the second field, the even lines are transmitted. The two fields combined result in a full frame.

Field 1 | Field 2

An interlaced image transmitted at 60 fields per second results in 30 complete frames per second. When video transmission was invented, the electronics in displays were not fast enough to process 60 complete images per second, and the bandwidth of the interfaces transmitting the image to the display did not allow it either. With an interlaced scanning method, only half of the image needs to be processed at a time. In those days, the display first drew the odd lines and then the even lines. An additional benefit was that the image was refreshed 60 times per second even though only 30 frames were sent.
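The odd/even field structure described above can be sketched in plain Python, modeling a frame as a list of lines (the function name is my own, for illustration):

```python
def split_fields(frame):
    """Split a progressive frame (a list of lines) into two interlaced
    fields: field 1 carries the odd lines, field 2 the even lines
    (counting lines from 1, as the text does)."""
    field1 = frame[0::2]  # lines 1, 3, 5, ...
    field2 = frame[1::2]  # lines 2, 4, 6, ...
    return field1, field2

frame = [f"line {i}" for i in range(1, 7)]
f1, f2 = split_fields(frame)
print(f1)  # ['line 1', 'line 3', 'line 5']
print(f2)  # ['line 2', 'line 4', 'line 6']
```

Each field carries half the lines, so transmitting 60 fields per second moves the same data as 30 full frames per second, which is exactly the bandwidth saving interlacing was designed for.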
Even though the complete frame did not contain new video content, the viewer perceived this as a higher framerate, resulting in less flicker and more fluent video. Today, digital displays convert the video to progressive scan before displaying the image, so this benefit is no longer relevant. Interlaced video is common in analog broadcast standards like PAL/NTSC and was used on early computer systems. Interlaced video is rarely used in digital video interfaces, with the exception of the SDI standard. The DVI standard does not define interlaced timings, but some CCUs send out 1080i on this interface.

• Deinterlacing
A deinterlacing algorithm buffers one field and combines it with the other field to create a full frame. There are different techniques to do this, each with its own advantages and disadvantages. Almost all deinterlacing techniques add some latency, because both fields are needed before the full frame can be built and shown on the display.

• Weaving
Weaving simply combines the two fields. The advantage is that the source image is left untouched: the output shows exactly what the source provided. A disadvantage is that fast-moving parts of the frame create an artefact called “combing”.

• Blending
Blending calculates the average pixel values of consecutive fields. Combing is avoided this way, but another artefact called “ghosting” appears: fast-moving parts are shown twice. This method also causes a quality loss in the complete image, because the image must be downsized and then upsized, losing vertical resolution. Another disadvantage is that it adds some extra latency.

• Selective blending
This method is a combination of weaving and blending. Areas of the frame that do not change use weaving; areas that change use blending. To do this, a motion detection algorithm must indicate which parts of the frame are moving and which are not.
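The two basic techniques can be sketched as follows, with frames modeled as lists of pixel rows (function names are my own; real deinterlacers operate on hardware pixel streams, so this is only a conceptual sketch):

```python
def weave(field1, field2):
    """Weaving: interleave the two fields back into one full frame.
    Every source pixel is preserved, but motion between the fields
    shows up as 'combing' on fast-moving edges."""
    frame = []
    for odd_line, even_line in zip(field1, field2):
        frame.append(odd_line)
        frame.append(even_line)
    return frame

def blend(field1, field2):
    """Blending (simplified): average corresponding lines of the two
    fields. The half-height result would then be upscaled back to full
    height, which is where vertical resolution is lost."""
    return [[(a + b) / 2 for a, b in zip(l1, l2)]
            for l1, l2 in zip(field1, field2)]

field1 = [[10, 10], [30, 30]]  # odd lines of a tiny 4-line frame
field2 = [[20, 20], [40, 40]]  # even lines
print(weave(field1, field2))  # [[10, 10], [20, 20], [30, 30], [40, 40]]
print(blend(field1, field2))  # [[15.0, 15.0], [35.0, 35.0]]
```

Selective blending would add a motion detector on top of these two, choosing `weave` for static regions and `blend` for moving ones, which is why it avoids the worst artefacts of each at the cost of extra processing.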