IEE 5037 Multimedia Communications Lecture 2: Basics of Video


IEE 5037 Multimedia Communications
Lecture 2: Basics of Video
Dept. Electronics Engineering, National Chiao Tung University
Reading: ch. 1, Video Processing and Communications
Adapted from Prof. Yao Wang's slides and Prof. Hang's slides

Outline
• Color perception and specification
• Video capture and display
• Analog raster video
• Analog TV systems
• Digital video

1. Color Perception and Specification
• Light -> color perception
• Human perception of color
• Types of light sources
• Trichromatic color mixing theory
• Specification of color
  – Tristimulus representation
  – Luminance/chrominance representation
• Color coordinate conversion

Light Is Part of the EM Wave
[Figure: the electromagnetic spectrum, from [Gonzalez02]]

Illuminating and Reflecting Light
• Illuminating sources:
  – emit light (e.g. the sun, light bulbs, TV monitors)
  – perceived color depends on the emitted frequencies
  – follow the additive rule: R+G+B = White
• Reflecting sources:
  – reflect incoming light (e.g. color dyes, matte surfaces, cloth)
  – perceived color depends on the reflected frequencies (= emitted frequencies − absorbed frequencies)
  – follow the subtractive rule: R+G+B = Black

Color Representation (N&H, Sec. 1.8)
• Colors (human perception) are related to the wavelength of light, but they are subjective, depending on the physical structure of the human eye.
• The human retina contains three different color receptors (cones): red, green, and blue → the basis of the trichromatic theory.
• Tristimulus: any color that appears to the human eye can be specified by a weighted combination of three primary colors.

Eye Anatomy
[Figure: from http://www.stlukeseye.com/Anatomy.asp]

Eye vs. Camera

  Camera component           | Eye component
  ---------------------------|-------------------------------------
  Lens                       | Lens, cornea
  Shutter                    | Iris, pupil
  Film                       | Retina
  Cable transferring images  | Optic nerve sending info to the brain

Human Perception of Color
• The retina contains photoreceptors
  – Cones: day vision, can perceive color tone
    • Red, green, and blue cones
    • Different cones have different frequency responses
    • Tri-receptor theory of color vision [Young1802]
  – Rods: night vision, perceive brightness only
  (From http://www.macula.org/anatomy/retinaframe.html)
• Color sensation is characterized by
  – Luminance (brightness)
  – Chrominance
    • Hue (color tone)
    • Saturation (color purity)

Human Vision
• Cross-section of the human eye (N&H, p. 263)
  – Cones: fat, clustered around the fovea; color, daylight vision
  – Rods: slender, dense; intensity, low-light vision

The Retina
• Schematic diagram of the retina (N&H, p. 263)

Retina Circuitry
• Photoreceptors respond to the light first. The produced signals are "processed" by the bipolar and ganglion cells. Then visual information (pulses) is passed through the optic nerves into the brain. (Carlson, p. 145~)

Primary Visual Path
• The left halves of the visual fields of both eyes are sent to the right hemisphere. (Carlson, p. 149)

Color Receptors
• Relative absorbance of light of various wavelengths by rods and cones (Carlson, p. 155)

Frequency Responses of Cones (from [Gonzalez02])

  Ci = ∫ C(λ) ai(λ) dλ,  i = r, g, b, y

Cones Sensitivity: R, G, B and Luminous
[Figure: relative sensitivity vs. wavelength (400–700 nm) of the red, green, and blue cones (blue ×20), together with the luminosity function]

Color Hue Specification

Trichromatic Theory
• Select 3 primary stimuli, for example R (700.0 nm), G (546.1 nm), and B (435.8 nm). Any color S can be represented as a combination of these three primaries R, G, and B:
  S = Rs·R + Gs·G + Bs·B

• Any 3 independent colors can be selected as primaries, as long as one is not a mix of the other two.
• Different sets of primaries are related by linear transformations.

Trichromatic Color Mixing
• Trichromatic color mixing theory
  – Any color can be obtained by mixing three primary colors in the right proportion:
      C = Σk Tk·Ck,  k = 1, 2, 3,  where the Tk are the tristimulus values
• Primary colors for illuminating sources:
  – Red, Green, Blue (RGB)
  – A color monitor works by exciting red, green, and blue phosphors using separate electron guns
• Primary colors for reflecting sources (also known as secondary colors):
  – Cyan, Magenta, Yellow (CMY)
  – A color printer works by using cyan, magenta, yellow, and black (CMYK) dyes

RGB vs. CMY
[Figure: additive RGB vs. subtractive CMY mixing]

[Figure: red, green, and blue components of a color image]

Color Representation Models
• Specify the tristimulus values associated with the three primary colors
  – RGB
  – CMY
• Specify the luminance and chrominance
  – HSI (hue, saturation, intensity)
  – YIQ (used in NTSC color TV)
  – YCbCr (used in digital color TV)
• Amplitude specification:
  – 8 bits for each color component, or 24 bits total for each pixel
  – Total of 16 million colors
  – A true RGB color display of size 1K×1K requires a display buffer memory of 3 MB

Color Coordinate Conversion
• Conversions between different primary sets are linear (3×3 matrices)
• Conversions between primaries and XYZ/YIQ/YUV are also linear
• Conversions to LSI/Lab are nonlinear
  – In the LSI and Lab coordinates, Euclidean distance is proportional to the actual color difference
• Conversion formulae between many color coordinates can be found in [Gonzalez92]

CIE Color Specification
• CIE (Commission Internationale de l'Eclairage — International Commission on Illumination)
• R (700.0), G (546.1), and B (435.8) — 1931
• X, Y, Z — 1931; chosen so that (1) all color matching functions are positive, and (2) Y = luminance. (N&H, p. 51)

  ⎡X⎤   ⎡ 2.365  −0.515   0.005⎤ ⎡R⎤        ⎡R⎤
  ⎢Y⎥ = ⎢−0.897   1.426  −0.014⎥ ⎢G⎥ = M⁻¹ ⎢G⎥
  ⎣Z⎦   ⎣−0.468   0.089   1.009⎦ ⎣B⎦        ⎣B⎦

CIE Color Specification (cont.)
(Basis vectors transformation)

  ⎡R⎤   ⎡0.490  0.177  0.000⎤ ⎡X⎤     ⎡X⎤
  ⎢G⎥ = ⎢0.310  0.813  0.010⎥ ⎢Y⎥ = M ⎢Y⎥
  ⎣B⎦   ⎣0.200  0.010  0.990⎦ ⎣Z⎦     ⎣Z⎦

Tristimulus values transformation (coordinates transformation):

  ⎡X⎤        ⎡R⎤    ⎡R⎤     ⎡X⎤
  ⎢Y⎥ = M⁻¹ ⎢G⎥ ;  ⎢G⎥ = M ⎢Y⎥
  ⎣Z⎦        ⎣B⎦    ⎣B⎦     ⎣Z⎦

Chromaticity Coordinates

  x = X / (X + Y + Z)
  y = Y / (X + Y + Z)
  z = Z / (X + Y + Z)

Note: x + y + z = 1. Hence (x, y, Y) completely specifies a color.
[Figure: 1931 CIE-xy chromaticity diagram (N&H, p. 51)]

CIE Chromaticity Diagram
• CIE XYZ diagram; CIE L*a*b* diagram
• Concept chart of L*a*b*: L: luminance; a: from green to red; b: from blue to yellow
• Source: http://www.experience.epson.com.au/help/understandingcolour/COL_G/0503_3.htm

Just Noticeable Color Difference
• MacAdam's ellipses (just-noticeable regions, enlarged ten times) on the CIE-xy chromaticity diagram (N&H, p. 293)

2. Video Capture and Display
• Light reflection physics
• Imaging operator
• Color capture
• Color display
• Component vs. composite video

Video Capture
• For natural images we need a light source (λ: wavelength of the source).
  – E(x, y, z, λ): incident light on a point (x, y, z are the world coordinates of the point)
• Each point in the scene has a reflectivity function.
  – r(x, y, z, λ): reflectivity function
• Light reflects from a point, and the reflected light is captured by an imaging device.
  – c(x, y, z, λ) = E(x, y, z, λ) × r(x, y, z, λ): reflected light
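The capture model above (reflected light c(λ) = E(λ)·r(λ), integrated against a sensor's spectral sensitivity as in Ci = ∫ C(λ) ai(λ) dλ) can be sketched numerically. The flat illuminant, Gaussian reflectance, and Gaussian cone-sensitivity curves below are illustrative assumptions, not measured data:

```python
import math

# Sketch of Ci = ∫ C(λ) ai(λ) dλ with C(λ) = E(λ) * r(λ),
# approximated by a Riemann sum over the visible range.
# All spectral curves here are illustrative stand-ins.

def gaussian(lam, center, width):
    return math.exp(-((lam - center) / width) ** 2)

wavelengths = range(400, 701, 5)           # 400–700 nm, 5 nm steps
E = {lam: 1.0 for lam in wavelengths}      # flat ("white") illuminant
r = {lam: gaussian(lam, 620, 40) for lam in wavelengths}  # reddish surface

# Assumed cone sensitivities a_i(λ), peaking near typical cone maxima
peaks = {"b": 445, "g": 540, "r": 580}

def response(channel):
    # Δλ = 5 nm Riemann-sum approximation of the integral
    return sum(E[lam] * r[lam] * gaussian(lam, peaks[channel], 60) * 5
               for lam in wavelengths)

C = {ch: response(ch) for ch in peaks}
print(C)  # a reddish surface under white light stimulates "r" the most
```

Changing the reflectance curve r(λ) shifts which cone response dominates, which is exactly how the tristimulus description of a surface color arises.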
(Courtesy of Onur Guleryuz)

More on Video Capture
• Reflected light to camera
  – Camera absorption function: ψ(X, t) = ∫ C(X, t, λ) ac(λ) dλ
  – Projection from 3-D to 2-D: X → x under P, with ψ(P(X), t) = ψ(X, t), or ψ(x, t) = ψ(P⁻¹(x), t)
• The projection operator is non-linear
  – Perspective projection
  – Orthographic projection

Perspective Projection Model
[Figure: a 3-D point (X, Y, Z) projects through the camera center C onto the image plane at focal distance F]

  x = F·X / Z,  y = F·Y / Z

The image of an object is reversed from its 3-D position. The object appears smaller when it is farther away.

How to Capture Color
• Need three types of sensors
• Complicated digital processing is incorporated in advanced cameras
[Figure 1.2: Schematic block diagram of a professional color video camera (lens, R/G/B CCDs, A/D conversion at 13.5 MHz, nonlinear processing, interpolation, color correction, matrix & encoder, D/A for analog and viewfinder outputs, digital output). Reprinted from Y. Hashimoto, M. Yamamoto, and T. Asaida, "Cameras and display systems," IEEE (July 1995), 83(7):1032–43. Copyright 1995 IEEE.]

Video Display
• CRT vs. LCD
• Need three light sources projecting the red, green, and blue components respectively

3. Analog Video
• Video raster
• Progressive vs. interlaced raster
• Analog TV systems

Raster Scan
• A real-world scene is a continuous 3-D signal (temporal, horizontal, vertical)
• Analog video is stored in the raster format
  – Sampling in time: consecutive sets of frames
    • To render motion properly, >= 30 frames/s is needed
  – Sampling in the vertical direction: a frame is represented by a set of scan lines
    • The number of lines depends on the maximum vertical frequency and the viewing distance; 525 lines in the NTSC system
  – Video raster = 1-D signal consisting of scan lines from successive frames
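The sampling parameters above fix the line rate of the raster. A quick back-of-the-envelope check, using the nominal NTSC numbers (525 lines per frame at 30/1.001 ≈ 29.97 frames/s):

```python
# Sketch: raster line rate and line duration from frame parameters.
# Nominal NTSC values; 30/1.001 is the exact color frame rate.
lines_per_frame = 525
frame_rate = 30 / 1.001            # ≈ 29.97 frames/s

line_rate = lines_per_frame * frame_rate   # lines per second
line_time_us = 1e6 / line_rate             # duration of one scan line, µs

print(f"line rate ≈ {line_rate:.0f} lines/s")   # ≈ 15734 lines/s
print(f"line time ≈ {line_time_us:.2f} µs")     # ≈ 63.56 µs
```

The resulting ~15.7 kHz line frequency is the fixed "data rate" (lines/s) that the interlaced scan discussed next trades between temporal and vertical resolution.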
Two Scan Methods: Progressive and Interlaced Scans
[Figure: a progressive frame (with horizontal and vertical retrace) vs. an interlaced frame (field 1 and field 2)]
Interlaced scan was developed to provide a trade-off between temporal and vertical resolution, for a given, fixed data rate (number of lines/sec).

Color TV Broadcasting and Receiving
[Figure: RGB → luminance/chrominance (YC) conversion; luminance, chrominance, and audio → modulation]
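The broadcast chain above converts camera RGB into a luminance/chrominance (YC) representation before modulation. A minimal sketch using the standard BT.601 luma weights and Cb/Cr color-difference scaling (the YCbCr variant is chosen here for illustration; the analog systems in this lecture use YIQ/YUV instead):

```python
# Sketch: RGB -> Y, Cb, Cr using the BT.601 luma weights.
# R, G, B are normalized to [0, 1]; Cb and Cr are zero-centered.
def rgb_to_ycbcr(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
    cb = 0.564 * (b - y)                    # blue color difference
    cr = 0.713 * (r - y)                    # red color difference
    return y, cb, cr

# Grays (r = g = b) carry no chrominance: Y ≈ 1, Cb ≈ Cr ≈ 0 for white
print(rgb_to_ycbcr(1.0, 1.0, 1.0))
```

Because the luma weights sum to 1, any gray maps to zero chrominance, which is what lets a monochrome receiver ignore the chrominance subcarrier entirely.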