Automated Fast Initial Guess in Digital Image Correlation

Strain: An International Journal for Experimental Mechanics (2014) 50, 28–36. doi: 10.1111/str.12063. © 2013 Wiley Publishing Ltd.

Z. Wang*, M. Vo†, H. Kieu* and T. Pan‡

*Department of Mechanical Engineering, The Catholic University of America, Washington, DC 20064, USA
†Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA
‡Department of Civil Engineering, The Catholic University of America, Washington, DC 20064, USA

ABSTRACT: A challenging task that has hampered the fully automatic processing of the digital image correlation (DIC) technique is the initial guess when large deformation and rotation are present. In this paper, a robust scheme combining the concepts of the scale-invariant feature transform (SIFT) algorithm and an improved random sample consensus (iRANSAC) algorithm is employed to conduct an automated fast initial guess for the DIC technique. The SIFT algorithm can detect a certain number of matching points from two images even when the corresponding deformation and rotation are large or the images have periodic and identical patterns. After the wrong matches are removed with the iRANSAC algorithm, the three closest non-collinear pairs of matching points serve as the basis of the initial guess calculation. The validity of the technique is demonstrated by both computer simulation and real experiment.

KEY WORDS: digital image correlation, improved random sample consensus, initial guess, scale-invariant feature transform

Introduction

The digital image correlation (DIC) technique is one of the most widely used methods for shape, motion and deformation measurements [1]. The DIC technique typically works by comparing and matching the grayscale images of an object captured from different views, at different times, or at different stages of deformation. By tracking every few pixels of interest in the reference and target images and then carrying out data interpolation, the DIC technique can determine the whole-field two-dimensional (2D) and three-dimensional (3D) shape, deformation and motion vector fields as well as their gradient maps. Because of its primary advantages of easy implementation and a wide range of measurement sensitivity and resolution, the DIC technique has found numerous applications in many fields [2].

The DIC technique generally employs a correlation criterion to detect the best image matching for a group of pixels (named a subset) centred at each interrogated pixel. There exist a variety of correlation criteria for the DIC analysis. For instance, a simple yet robust criterion named the parametric sum of squared difference criterion can be written as [3]

C = Σ_{i=1}^{N} [ a f(x_i, y_i) + b − g(x′_i, y′_i) ]²    (1)

where a is a scale factor, b is an offset of intensity, and f(x_i, y_i) and g(x′_i, y′_i) indicate the intensity values at the ith pixel in the reference subset and the matching pixel in the target subset, respectively. The task of the correlation analysis is to minimise the coefficient C in Equation (1) to find the best matching. For a representative pixel (x_0, y_0) to be analysed in the reference image, a square pattern of N = (2M + 1) × (2M + 1) pixels with its centre located at (x_0, y_0) is usually chosen as the reference subset. The corresponding subset in the target image, i.e. the target subset, is often of irregular shape. Denoting the shift between the centres of the two matching subset patterns as (ξ, η), the shape mapping function for the entire reference and target subsets can be expressed as

x′_i = x_i + ξ + ξ_x (x_i − x_0) + ξ_y (y_i − y_0)
y′_i = y_i + η + η_x (x_i − x_0) + η_y (y_i − y_0)    (2)

where ξ_x, ξ_y, η_x and η_y are the coefficients of the shape function. To determine all six unknowns of the shape function (ξ, η, ξ_x, ξ_y, η_x, η_y) as well as the scale and offset parameters (a, b) involved in Equation (1), the DIC technique often employs an iterative algorithm such as the Newton–Raphson or the Levenberg–Marquardt method to carry out the correlation optimisation. The iterative algorithm is capable of providing very fast and highly accurate correlation analysis [4]. The downside, however, is that a reasonably good initial guess for the six unknowns of the shape function at the starting or seed point is required.
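To make the criterion concrete, the following is a minimal NumPy/SciPy sketch of Equation (1) evaluated for a single subset, with the first-order shape function of Equation (2) used to warp the subset into the target image. The function name, argument layout and the use of bilinear interpolation are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def parametric_ssd(f_img, g_img, x0, y0, M, p, a=1.0, b=0.0):
    """Evaluate the parametric SSD criterion of Equation (1) for one subset.

    p = (xi, eta, xi_x, xi_y, eta_x, eta_y) holds the first-order shape
    function of Equation (2); a and b are the intensity scale and offset.
    """
    # Local subset coordinates relative to the subset centre (x0, y0)
    dy, dx = np.mgrid[-M:M + 1, -M:M + 1]
    xi, eta, xi_x, xi_y, eta_x, eta_y = p

    # Reference subset intensities f(x_i, y_i)
    f_sub = f_img[y0 + dy, x0 + dx]

    # Warped target coordinates (x'_i, y'_i) from the shape mapping function
    x_t = x0 + dx + xi + xi_x * dx + xi_y * dy
    y_t = y0 + dy + eta + eta_x * dx + eta_y * dy

    # Target intensities g(x'_i, y'_i) by bilinear interpolation
    g_sub = map_coordinates(g_img, [y_t.ravel(), x_t.ravel()], order=1)

    residual = a * f_sub.ravel() + b - g_sub
    return np.sum(residual ** 2)
```

An iterative solver such as Newton–Raphson would repeatedly evaluate this criterion (and its gradients) while updating the eight parameters, starting from the initial guess discussed next.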
When the shape change and the rotation of the target image with respect to the reference image are relatively small, the initial values of ξ_x, ξ_y, η_x and η_y can be set to zero. In this case, the initial values of ξ and η can be automatically detected by a full-field subset scanning process [5, 6]. On the contrary, when the shape change and/or the rotation are relatively large, the initial guess normally has to be conducted by manually selecting three pairs of matching pixels in the reference and target images through human–computer interaction, which yields six equations to solve for the six unknowns in Equation (2) as an initial guess. Another notable case where a manual initial guess is usually demanded is when each of the reference and target images has multiple identical or nearly identical regions; the reason is that the existing automatic initial guess methods may not be able to detect the correct matches in those regions. It is also noted that the shape mapping function in Equation (2) may have more than six parameters to include higher-order terms; nevertheless, the higher-order terms are negligible for the initial guess purpose.
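The six-equation solve mentioned above is small enough to write out directly. The sketch below, using the notation of Equation (2), recovers the six shape-function unknowns from three non-collinear matching point pairs; the helper name and array layout are hypothetical, not from the paper.

```python
import numpy as np

def initial_guess_from_matches(ref_pts, tgt_pts, x0, y0):
    """Solve Equation (2) for (xi, eta, xi_x, xi_y, eta_x, eta_y).

    ref_pts, tgt_pts: arrays of shape (3, 2) holding the (x, y) coordinates
    of three non-collinear matching points in the reference and target
    images; (x0, y0) is the seed point of the subset to be analysed.
    """
    ref = np.asarray(ref_pts, dtype=float)
    tgt = np.asarray(tgt_pts, dtype=float)

    # Each pair contributes one row [1, x_i - x0, y_i - y0]; the same design
    # matrix serves both the x- and y-equations of Equation (2).
    A = np.column_stack([np.ones(3), ref[:, 0] - x0, ref[:, 1] - y0])
    bx = tgt[:, 0] - ref[:, 0]          # x'_i - x_i
    by = tgt[:, 1] - ref[:, 1]          # y'_i - y_i

    xi, xi_x, xi_y = np.linalg.solve(A, bx)
    eta, eta_x, eta_y = np.linalg.solve(A, by)
    return xi, eta, xi_x, xi_y, eta_x, eta_y
```

Non-collinearity of the three reference points is what keeps the 3 × 3 design matrix invertible; collinear points would make the system singular, which is why the closest non-collinear matches are the ones used.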
Manual selection of points for the initial guess in the DIC analysis has hampered the fully automatic analysis capability of the technique. The situation can become worse when there are multiple regions of interest, each requiring a reliable initial guess. In this paper, a robust scheme combining the concepts of the scale-invariant feature transform (SIFT) algorithm and an improved random sample consensus (iRANSAC) algorithm is employed to achieve an automated fast initial guess for the DIC technique. The SIFT algorithm is widely used in computer vision to detect and describe local features in images, and it has recently been applied to deformation measurements [7, 8]. Despite that, the advantages of the SIFT algorithm for DIC measurements have not been fully utilised. The novel approach presented here has the ability to accurately and automatically detect a number of matching points from two images even when the deformation and rotation are large or the images have periodic and identical patterns.

Principle

Finding correct correspondences between images captured from different directions is traditionally a difficult problem. Recently, important advances have been made to tackle this problem. Among them, the SIFT feature is one of the most important [9]. The essence of the SIFT method is twofold: (i) identify feature locations in the image space that are invariant with respect to image translation, scaling and rotation; (ii) compute a local descriptor around each feature point in the image that captures the saliency of the point. The SIFT feature allows the correspondences of feature points between two images to be easily detected. The best matching pairs can be further identified, and the wrong matches can be eliminated by the iRANSAC algorithm.

Scale-invariant feature transform feature points

Keypoints detector

To achieve scale invariance, the interest points are obtained from the scale-space extrema of the difference of Gaussians (DoG). This can be conducted efficiently by constructing an image pyramid in the following procedure. First, the Gaussian pyramid is formed from the original image by successive smoothing and subsampling. Second, the DoG pyramid is built from the differences between the subsampled images and their Gaussian-smoothed versions at each pyramid level, as illustrated in Figure 1. Third, the extrema of the scale-space function are detected by comparing each pixel in the pyramid image with its 26 neighbour pixels. Last, a quadratic polynomial is fitted to the local sample points of each detected extremum to localise it with higher accuracy. This refinement is particularly important for estimating more accurately the scale information that is used for scale normalisation to achieve a scale-invariant feature. The interpolation fitting function is also useful in determining the stability of the local extremum.

Figure 1: Scale-space pyramid. The image is subsampled by two in every octave.

Keypoints descriptor

The SIFT descriptor exploits the local gradient information of the image intensities to summarise the image appearance in a local neighbourhood around each interest point. To obtain rotational invariance, each interest point is assigned an orientation corresponding to the dominant direction computed from a local histogram of gradient directions accumulated over a neighbourhood of the point. It is noted that, before being added to the histogram, each sample's gradient magnitude is weighted by a Gaussian window centred at the interrogated point with a size proportional to its detection scale. Next, a 4 × 4 square grid of size proportional to the detection scale of the interest point, centred at the interest point and oriented along the dominant orientation, is laid out in the image, as demonstrated in Figure 2. An eight-bin accumulated histogram is then generated from the image gradient directions within each grid cell. As a whole, these local histograms give rise to an image descriptor of 4 × 4 × 8 = 128 dimensions for each interest point. This resulting image descriptor is referred to as the SIFT descriptor. To improve the robustness of the descriptor against illumination changes, the 128-dimensional histogram is normalised to unit sum.
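The paper does not prescribe a particular SIFT implementation; as an illustration, the detector and descriptor stages described above can be exercised with OpenCV's built-in SIFT, as in the following sketch (the file names are placeholders).

```python
import cv2

# Detect SIFT keypoints and compute their 128-dimensional descriptors in the
# reference and target images (OpenCV is used here only as a stand-in; the
# paper does not specify an implementation).
ref_img = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
tgt_img = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
ref_kp, ref_desc = sift.detectAndCompute(ref_img, None)
tgt_kp, tgt_desc = sift.detectAndCompute(tgt_img, None)
```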
Feature point matching

With a set of image descriptors, the correspondences between images can be found by mutually matching their descriptors. […] algorithm [12] and group sample consensus (GroupSAC) algorithm [13], which are briefly described below.

Figure 3: Illustration of line fitting using random sample consensus to select reliable matchings.
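As a rough illustration of this matching stage, the sketch below (continuing the OpenCV example above) matches the descriptors with Lowe's ratio test and then discards wrong matches with plain RANSAC through a homography fit. Note that this substitutes ordinary RANSAC for the paper's improved RANSAC (iRANSAC), so it only approximates the pipeline described here; the function name and the ratio and reprojection thresholds are assumptions.

```python
import numpy as np
import cv2

def filter_matches_ransac(ref_kp, ref_desc, tgt_kp, tgt_desc, ratio=0.75):
    """Match SIFT descriptors and discard wrong matches.

    Lowe's ratio test prunes ambiguous matches; the remaining pairs are
    filtered with plain RANSAC via a homography fit (a stand-in for the
    paper's iRANSAC step).
    """
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(ref_desc, tgt_desc, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]

    ref_pts = np.float32([ref_kp[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    tgt_pts = np.float32([tgt_kp[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # mask marks the inlier matches that survive the RANSAC consensus
    _, mask = cv2.findHomography(ref_pts, tgt_pts, cv2.RANSAC, 3.0)
    return [m for m, keep in zip(good, mask.ravel()) if keep]
```

From the surviving inliers, the three closest non-collinear pairs around each seed point can then be handed to the initial-guess solve sketched earlier to start the DIC iteration.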