Color Feature Integration with Directional Ringlet Intensity Feature Transform for Enhanced Object Tracking


COLOR FEATURE INTEGRATION WITH DIRECTIONAL RINGLET INTENSITY FEATURE TRANSFORM FOR ENHANCED OBJECT TRACKING

Thesis Submitted to The School of Engineering of the UNIVERSITY OF DAYTON in Partial Fulfillment of the Requirements for the Degree of Master of Science in Electrical Engineering

By Kevin Thomas Geary
Dayton, Ohio
December 2016

APPROVED BY:

Vijayan K. Asari, Ph.D. (Advisory Committee Chairman), Professor, Electrical and Computer Engineering
Eric J. Balster, Ph.D. (Committee Member), Associate Professor, Electrical and Computer Engineering
Theus H. Aspiras, Ph.D. (Committee Member), Research Engineer and Adjunct Faculty, Electrical and Computer Engineering
Robert J. Wilkens, Ph.D., P.E., Associate Dean for Research and Innovation, Professor, School of Engineering
Eddy M. Rojas, Ph.D., M.A., P.E., Dean, School of Engineering

ABSTRACT

Name: Geary, Kevin Thomas
University of Dayton
Advisor: Dr. Vijayan K. Asari

Object tracking, both in wide area motion imagery (WAMI) and in general use cases, is often subject to many different challenges, such as illumination changes, background variation, rotation, scaling, and object occlusions. As WAMI datasets become more common, so too do color WAMI datasets. When color data is present, it offers strong potential features for enhancing the capabilities of an object tracker. A novel color histogram-based feature descriptor is proposed in this thesis research to improve the accuracy of object tracking in challenging sequences where color data is available. The use of a three dimensional color histogram is explored, and various color spaces are tested. The three dimensional histogram is found to be effective but overly costly in calculation time when comparing reference features to test features. A reduced, two dimensional histogram is therefore proposed, created from three-channel color spaces by removing the intensity/luminosity channel before calculating the histogram. The two dimensional histogram is also evaluated as a tracking feature, and it is found that the HSV two dimensional histogram performs significantly better than histograms in other color spaces, and that it performs at a level very near that of the three dimensional histogram while being an order of magnitude less complex in the feature distance calculation.

The proposed color feature descriptor is then integrated with the Directional Ringlet Intensity Feature Transform (DRIFT) object tracker. The two dimensional HSV color histogram is enhanced further by using the DRIFT Gaussian ringlets as masks for the histogram, resulting in a set of weighted histograms as the color feature descriptor. This is calculated alongside the existing DRIFT features of intensity and Kirsch mask edge detection. The distance scores for the color feature and the DRIFT features are calculated separately, given equal weight, and then added together to form the final hybrid feature distance score.

The combined proposed object tracker, C-DRIFT, is evaluated on both challenging WAMI data sequences and challenging general-case tracking sequences that include head, body, object, and vehicle tracking. The evaluation results show that the proposed C-DRIFT algorithm significantly improves on the average accuracy of the DRIFT algorithm. Future work on the integrated algorithm includes integrated scale-change handling created from a hybrid of normalized color histograms and the existing DRIFT rescaling methods.
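As a rough illustration of the reduced color descriptor summarized above, the Python sketch below builds two dimensional hue-saturation histograms weighted by Gaussian ringlet masks. It is not the thesis implementation: OpenCV is assumed for the color conversion, and the function names, bin counts, number of rings, ring width, and normalization are illustrative assumptions only.

```python
import cv2
import numpy as np

def gaussian_ringlet_masks(h, w, n_rings=3):
    """Ring-shaped Gaussian weight masks over an h-by-w patch.
    An illustrative approximation of DRIFT's Gaussian ringlet filters,
    not the thesis implementation."""
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.sqrt(((yy - cy) / (h / 2.0)) ** 2 + ((xx - cx) / (w / 2.0)) ** 2)
    radii = np.linspace(0.0, 1.0, n_rings)      # ring centers, in normalized radius
    sigma = 1.0 / (2.0 * n_rings)               # assumed ring width
    return [np.exp(-((r - rho) ** 2) / (2.0 * sigma ** 2)) for rho in radii]

def ringlet_hs_histograms(patch_bgr, n_rings=3, h_bins=16, s_bins=16):
    """Reduced 2D hue-saturation histograms, one per ringlet mask.
    The value (intensity) channel is discarded, as described in the abstract.
    Expects an 8-bit BGR patch."""
    hsv = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2HSV)
    hue, sat = hsv[..., 0], hsv[..., 1]              # hsv[..., 2] (V) is dropped
    h_idx = (hue.astype(np.int32) * h_bins) // 180   # OpenCV hue range is [0, 180)
    s_idx = (sat.astype(np.int32) * s_bins) // 256
    feats = []
    for mask in gaussian_ringlet_masks(*hue.shape, n_rings=n_rings):
        hist = np.zeros((h_bins, s_bins), dtype=np.float64)
        np.add.at(hist, (h_idx, s_idx), mask)        # each pixel votes with its ring weight
        feats.append(hist / (hist.sum() + 1e-12))    # normalize each ring histogram
    return feats
```

Dropping the value channel is what gives the descriptor its tolerance to illumination change, while the radially symmetric ring masks are what let DRIFT-style features tolerate in-plane rotation.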
TABLE OF CONTENTS

ABSTRACT
LIST OF ILLUSTRATIONS
LIST OF TABLES
CHAPTER 1: INTRODUCTION
CHAPTER 2: LITERATURE SURVEY
  2.1 Image Registration Techniques
  2.2 Tracking Algorithm Feature Extraction
  2.3 Color Feature Extraction
CHAPTER 3: OVERVIEW OF DRIFT ALGORITHM
  3.1 DRIFT Tracking Technique
  3.2 DRIFT Tracking Results
CHAPTER 4: COLOR HISTOGRAM BASED FEATURES FOR TRACKING
  4.1 Three Dimensional Color Histogram Features
  4.2 Two Dimensional Color Histogram Features
  4.3 Color Feature Fusion with DRIFT
CHAPTER 5: OBJECT TRACKING EVALUATIONS
  5.1 Datasets and Testing Setup
  5.2 Testing Strategies and Results
  5.3 Discussion
CHAPTER 6: CONCLUSION
REFERENCES

LIST OF ILLUSTRATIONS

1.1 Comparison of grayscale intensities of colorful cars
1.2 Color feature extraction process
3.1 DRIFT object tracking method
3.2 STTF nonlinear enhancement process
3.3 Structure of the DRIFT feature descriptor
3.4 Gaussian ring kernel, with rings ρ = 3
3.5 Kirsch compass kernels
3.6 Object tracking on CLIF and LAIR datasets
4.1 RGB cube histogram
4.2 Structure of color space testing tracking algorithm
4.3 Sample frame from Egtest01 dataset
4.4 Heat map of HSV 2D histogram
4.5 Diagram of Color DRIFT structure
4.6 FAST feature comparison between a pair of frames
5.1 Frames from object tracking sequences corresponding to Table 5.1
5.2 Frames from object tracking sequences corresponding to Tables 5.2 and 5.3
5.3 Plot of thresholded overlap success for Visual Tracker sets
5.4 Plot of thresholded center error success for Visual Tracker sets

LIST OF TABLES

3.1 Object tracking frame detection accuracy for CLIF and LAIR sets
4.1 3D histogram tracking results by color space
4.2 2D histogram tracking results by color space
4.3 Comparison of tracker time per frame for 3D and 2D histograms
5.1 VIVID object tracking overlap
5.2 Visual Tracker Benchmark object tracking frame detection accuracy
5.3 Visual Tracker Benchmark object tracking average center error

CHAPTER 1

INTRODUCTION

In recent years, wide area motion imagery (WAMI) data has become increasingly common, with an abundance of use cases ranging from surveillance tasks to search-and-rescue operations to traffic pattern analysis. In many of these use cases, tracking of objects, especially vehicles, is necessary, and because of the large area covered by such imagery, computer vision techniques for automated tracking become increasingly important. However, object tracking in WAMI data is a challenging task due to many factors, including camera motion, variation in object illumination, object occlusion, changes in object scale, object rotation, and variation in background. Additionally, the resolution of individual objects in such imagery tends to be very low: although the WAMI images themselves are very high resolution, they are captured from high in the air, so even when multiple sensors are deployed in an array to capture the highest possible resolution image of the area, the resolution of objects such as vehicles remains relatively low. WAMI data can also contain sequences where, due to complex lighting conditions, the contrast of an object is very similar to that of the background. When any number of these complicating factors is present in the imagery, it becomes much more difficult to accurately track an object, compromising the purpose of the WAMI data.

Many of these challenging conditions are overcome by the Directional Ringlet Intensity Feature Transform (DRIFT) tracking algorithm [1-2]. The DRIFT algorithm consists of several stages. The tracker is initialized with the location of the target object and the size of its bounding box. Before reference features are calculated, the image goes through a preprocessing step in which intensity (illumination) and spatial enhancement are applied. The reference features of the object are then created from the grayscale image: intensity features are taken directly, and edge features are computed with a Kirsch mask. These features are then filtered using Gaussian ringlet filters to create the completed model [3]. Tracking begins by applying the same image enhancement techniques to the incoming frame. An enhanced sliding-window search is performed over the search area, with the same intensity and edge features calculated for each candidate region. Once all candidates have been accumulated, the earth mover's distance (EMD) is calculated to determine the distance score between each region and the reference. If the lowest distance is below a threshold, the location of that candidate region is recorded as the object location. If the distance score is below a second threshold, the reference model is updated. Lastly, a Kalman filter is applied
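The candidate search and the two-threshold update logic described above can be outlined as in the sketch below. This is only an illustrative outline under stated assumptions, not the DRIFT implementation: a plain L1 histogram difference stands in for the earth mover's distance, the `extract` argument is any feature function (for example the ringlet hue-saturation histograms sketched earlier), the step size and threshold values are placeholders, and the Kalman filter stage is omitted.

```python
import numpy as np

def feature_distance(cand_feats, ref_feats):
    """Sum of L1 differences over the ringlet histograms; a simple stand-in
    for the earth mover's distance (EMD) that DRIFT actually uses."""
    return sum(np.abs(c - r).sum() for c, r in zip(cand_feats, ref_feats))

def track_frame(frame, ref_feats, search_box, box_size, extract,
                step=2, t_detect=0.8, t_update=0.4):
    """One tracking step: slide a candidate window over the search area,
    score each candidate against the reference model, and apply the two
    thresholds described in the text (one to accept a detection, a stricter
    one to update the reference model). Threshold values are illustrative."""
    x0, y0, x1, y1 = search_box                 # search area corners (pixels)
    bw, bh = box_size                           # object bounding-box size
    best_box, best_d = None, np.inf
    for y in range(y0, y1 - bh + 1, step):
        for x in range(x0, x1 - bw + 1, step):
            cand = extract(frame[y:y + bh, x:x + bw])
            d = feature_distance(cand, ref_feats)
            if d < best_d:
                best_box, best_d = (x, y, bw, bh), d
    if best_box is not None and best_d < t_detect:   # first threshold: accept location
        if best_d < t_update:                        # second threshold: refresh model
            x, y = best_box[:2]
            ref_feats = extract(frame[y:y + bh, x:x + bw])
        return best_box, ref_feats
    return None, ref_feats                           # no confident match this frame
```

In practice the search area would be a small region around the previous object location rather than the whole frame, and in C-DRIFT the color histogram distance and the intensity/edge distance are given equal weight and summed to form the final hybrid score.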