ECE 634 – Digital Video Systems Spring 2021


Fengqing Maggie Zhu, Assistant Professor of ECE
MSEE 334, [email protected]

Video Basics
ECE634 – Spring 2021, Jan 19, 2021

Outline
• Color Perception
• Video Capture and Display
• Analog and Digital Video

Color Perception

Light Sources
• Illuminating sources:
  – Emit light (e.g. the sun, light bulbs, TV monitors)
  – Perceived color depends on the emitted frequencies
  – Follow the additive rule: R + G + B = White
• Reflecting sources:
  – Reflect incoming light (e.g. color dyes, matte surfaces, cloth)
  – Perceived color depends on the reflected frequencies (= emitted frequencies − absorbed frequencies)
  – Follow the subtractive rule: R + G + B = Black

Eyes – The Human Visual System
• The most intelligent, high-end camera:
  – Lens: cornea, lens
  – Lens control: zonula (muscle group)
  – Aperture control: iris, pupil
  – Photo sensor: retina, fovea

Human Perception of Color
• Photoreceptors in the retina (the surface at the rear of the eyeball):
  – Cones: function under bright light and can perceive color tone
    • Red (~570 nm), green (~535 nm), and blue (~445 nm) cones
    • Their responses are passed via the optic nerve fibers to the visual cortex for processing
  – Rods: work under low light and perceive only luminance information
• Human color sensation:
  – Luminance: brightness
  – Chrominance: hue (color tone) and saturation (color purity)

Spectral Sensitivity Curves (figure)

Trichromatic Color Mixing
• Trichromatic color mixing theory: any color can be obtained by mixing three properly chosen primary colors in the right proportions:
  C = Σ_k T_k C_k, k = 1, 2, 3, where the T_k are the tristimulus values
• Primary colors for illuminating sources:
  – Red, Green, Blue (RGB)
  – A color monitor works by exciting red, green, and blue phosphors using separate electron guns
• Primary colors for reflecting sources:
  – Cyan, Magenta, Yellow (CMY)
  – A color printer works using cyan, magenta, yellow, and black (CMYK) dyes

Color Representation Models
• Specify the tristimulus values associated with the three primary colors:
  – RGB
  – CMY
• Specify the luminance and chrominance:
  – HSI (hue, saturation, intensity)
  – YIQ (used in NTSC color TV)
  – YCbCr (used in digital color TV)
• Amplitude specification:
  – 8 bits for each color component, or 24 bits total per pixel
  – A total of about 16 million colors
  – A true-color RGB display of size 1K×1K requires a display buffer of 3 MB

Color Space Conversion
• Conversion between different primary sets is linear (a 3×3 matrix)
• Conversions between primaries and XYZ/YIQ/YUV are also linear
  – XYZ colors are not realizable by actual stimuli
  – YIQ and YUV are derived from the XYZ coordinates
• Conversions to HSI/L*a*b* are nonlinear
  – Euclidean distance between coordinates is proportional to the actual color difference

Choosing Color Coordinates
• For display or printing: RGB or CMY, to produce more colors
• For analyzing color differences: HSI, for its linear relationship
• For processing perceptually meaningful color: L*a*b*
• For transmission or storage: YIQ or YUV, for a less redundant representation

Color in Images and Videos
• Images are commonly RGB, and each pixel location has 3 color values
  – (this ignores Bayer color sampling)
  – BE CAREFUL!!!
OpenCV loads images as BGR
• Videos are commonly YUV or YCbCr, with fewer chrominance samples than luminance samples
  – OpenCV automatically converts YUV video into consecutive RGB images, upsampling the color information

Video Capture and Display

Plenoptic Function (Light Field)
• Measures the intensity of light that passes through a particular point in space
• Covers every possible viewing position, with any viewing angle, at every moment in time:
  – 3 location coordinates
  – 2 angular directions
  – Time
  – Wavelength
• Light field (plenoptic) cameras (Adelson and Bergen '91)

Image Formation (Pinhole Camera)
• A video records the emitted and/or reflected light intensity from the objects in the scene, as observed by a viewing system (a human eye or a camera)
• A 3-D point (X, Y, Z) projects through the camera center onto the image plane at
  x = F X / Z,  y = F Y / Z
• The image of an object is reversed from its 3-D position, and the object appears smaller when it is farther away
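The projection equations above are easy to sanity-check numerically. A minimal sketch in Python (the function name and focal length are illustrative, not from the slides):

```python
def project(X, Y, Z, F):
    """Pinhole projection: map a 3-D point in camera coordinates
    (Z = depth in front of the camera) to image-plane coordinates."""
    if Z <= 0:
        raise ValueError("point must be in front of the camera (Z > 0)")
    return F * X / Z, F * Y / Z

# The same off-axis point projects closer to the image center as it
# moves farther away, i.e. the object appears smaller with distance.
x_near, y_near = project(1.0, 0.5, 2.0, F=0.05)  # (0.025, 0.0125)
x_far, y_far = project(1.0, 0.5, 4.0, F=0.05)    # (0.0125, 0.00625)
```

Doubling the depth Z halves both image coordinates, which is exactly the 1/Z size falloff described above.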
Video Signal
• A real-world scene is a continuous 3-D signal (temporal, horizontal, vertical)
• Film samples in time but is continuous in space (typically 24 frames/s)
• Analog video samples in time and vertically, but is continuous horizontally (about 30 frames/s or higher)
  – The number of lines controls the maximum vertical frequency that can be displayed at a given viewing distance
  – Video raster = a 1-D signal consisting of scan lines from successive frames
• Digital video samples in time, vertically, and horizontally

Progressive Scanning (figure: scan lines, horizontal retrace, vertical retrace)
• Progressive scan:
  – Captures consecutive lines
  – Captures a complete frame every Δt seconds
  – Also referred to as sequential or non-interlaced scanning
  – Used by TVs, monitors, and video projectors

Interlaced Scanning (figure: even and odd fields; horizontal and vertical retrace not shown)
• Interlaced scan:
  – Captures alternate lines, with each frame split into two fields
    • Odd lines are captured first (the odd field), then even lines (the even field)
  – Captures a complete frame every Δt seconds
  – Used in analog television

Why Interlace?
• To provide a trade-off between temporal and vertical resolution for a given, fixed data rate (number of lines per second)
• Interlace artifacts (figure: fields 0–3 forming frames 0–1)

Capturing Color
• Sensors:
  – CCD: charge-coupled device
  – CMOS: complementary metal-oxide-semiconductor
• Bayer grid
• Demosaicing
• Dynamic range

Video Display
• CRT (cathode ray tube) vs. LCD (liquid crystal display) vs. LED (light-emitting diode)
• Gamma correction: the non-linear relation between the camera output signal and the actual color values

Analog Video

History of TV in the US
• 1941: First NTSC broadcast, monochrome
  – 4:3 aspect ratio; interlacing
  – 60 Hz (60 fields per second)
  – 525 lines, but only 480 active lines
• 1953: Color NTSC
  – Backwards compatible with black-and-white TVs
• 1993: The Grand Alliance forms to design HDTV
• 1996: First public broadcast of HDTV
• 2000: First HDTV Super Bowl transmission
• 2009: Last analog transmission

TV at Purdue
• Roscoe George develops the first electronic television receiver in 1929
  https://www.earlytelevision.org/pdf/roscoe_george_and_television.pdf

Video Terminology
• Component video
  – The three color components are stored/transmitted separately
  – Uses RGB, YIQ (YUV), or YCbCr coordinates
  – Betacam (a professional tape recorder) uses this format
• Composite video
  – Converts RGB to YIQ (YUV) and multiplexes YIQ into a single signal
  – Used in most consumer analog video devices
• S-video
  – Y and C (I and Q) are stored separately
  – Used in consumer video devices

TV Broadcasting and Receiving (block diagram)
• Transmitter: RGB → YC1C2; the luminance, chrominance, and audio are multiplexed and modulated for broadcast
• Receiver: the signal is demodulated and demultiplexed back to YC1C2, then converted to RGB

Why Not Use RGB Directly?
• The R, G, B components are correlated
  – Transmitting the R, G, B components separately is redundant
  – More efficient use of bandwidth is desired
• The RGB → YC1C2 transformation
  – Decorrelates: Y, C1, C2 are uncorrelated
  – C1 and C2 require lower bandwidth
  – The Y (luminance) component can be received by black-and-white TV sets
• YIQ in NTSC
  – I: orange-to-cyan
  – Q: green-to-purple (where the human eye is less sensitive)
    • Q can be bandlimited further than I
  – Phase = arctan(Q/I) = hue; magnitude = sqrt(I² + Q²) = saturation
  – Hue is better retained than saturation

Different Color TV Systems

  Parameter                     NTSC               PAL         SECAM
  Field rate (Hz)               59.94 (60)         50          50
  Lines per frame               525                625         625
  Line rate (lines/s)           15,750             15,625      15,625
  Color coordinate              YIQ                YUV         YDbDr
  Luminance bandwidth (MHz)     4.2                5.0/5.5     6.0
  Chrominance bandwidth (MHz)   1.5 (I)/0.5 (Q)    1.3 (U,V)   1.0 (U,V)
  Color subcarrier (MHz)        3.58               4.43        4.25 (Db), 4.41 (Dr)
  Color modulation              QAM                QAM         FM
  Audio subcarrier (MHz)        4.5                5.5/6.0     6.5
  Total bandwidth (MHz)         6.0                7.0/8.0     8.0

Digital Video

ITU-R BT.601 Video Format (digital encoding of analog video)
• 525/60 (NTSC): 60 fields/s; 525 lines per frame with 480 active lines; 858 pels per line with 720 active pels
• 625/50 (PAL/SECAM): 50 fields/s; 625 lines per frame with 576 active lines; 864 pels per line with 720 active pels

Color Coordinate – YCbCr
• Scaled and shifted versions of analog YUV, so that the values fall in the range (0, 255)

Chrominance Subsampling Formats
• 4:4:4 – for every 2×2 Y pixels, 4 Cb and 4 Cr pixels (no subsampling)
• 4:2:2 – for every 2×2 Y pixels, 2 Cb and 2 Cr pixels (2:1 subsampling, horizontally only)
• 4:1:1 – for every 4×1 Y pixels, 1 Cb and 1 Cr pixel (4:1 subsampling, horizontally only)
• 4:2:0 – for every 2×2 Y pixels, 1 Cb and 1 Cr pixel (2:1 subsampling, both horizontally and vertically)

Digital Formats
• Pixel = "picture element", a point sample
• Digital video is a sequence of frames (x, y, t)
• Often denoted {lines}{i,p} or {lines}{i,p}{fps}
  – 1080i, 720p, 1080p60
• Temporal resolutions
  – Video: 25, 30, or 60 frames per second (fps); 50, 60, or 120 fields per second
  – Film: 24 or 48 fps
  – Animation: often lower
• Why use more fps?

Spatial Resolutions
• 2K, 4K, 8K, etc.