
applied sciences
Article

STN-Homography: Direct Estimation of Homography Parameters for Image Pairs

Qiang Zhou and Xin Li *

The State Key Laboratory of Fluid Power and Mechatronic Systems, Zhejiang University, Hangzhou 310027, China; [email protected]
* Correspondence: [email protected]

Received: 17 June 2019; Accepted: 22 November 2019; Published: 29 November 2019

Abstract: Estimating a 2D homography from a pair of images is a fundamental task in computer vision. Contrary to most convolutional neural network-based homography estimation methods, which use an alternative four-point homography parameterization, in this study we directly estimate the 3 × 3 homography matrix. We show that after coordinate normalization, the magnitude differences and variances of the elements of the normalized 3 × 3 homography matrix are very small. Accordingly, we present STN-Homography, a neural network based on the spatial transformer network (STN), to directly estimate the normalized homography matrix of an image pair. To decrease the homography estimation error, we propose hierarchical STN-Homography and sequence STN-Homography models, of which the sequence STN-Homography can be trained in an end-to-end manner. The effectiveness of the proposed methods is demonstrated in experiments on the Microsoft common objects in context (MSCOCO) dataset, where they significantly outperform the current state of the art. The average processing times of the three-stage hierarchical STN-Homography and the three-stage sequence STN-Homography models on a GPU are 17.85 ms and 13.85 ms, respectively. Both models satisfy the real-time processing requirements of most potential applications.

Keywords: homography; spatial transformer network; convolutional neural network

1. Introduction

Estimating a 2D homography (or projective transformation) from a pair of images is a fundamental task in computer vision.
A homography is a mapping between two images of the same planar surface acquired from different perspectives. Homographies play a vital role in robotics and computer vision applications, such as image stitching [1–3], simultaneous localization and mapping (SLAM) [4–6], three-dimensional (3D) camera pose reconstruction [7–9], and optical flow [10,11]. The basic approach to homography estimation is to use two sets of corresponding points and the direct linear transform (DLT) method. However, finding corresponding sets of points in images is not always easy, and a significant amount of research has been conducted on this topic. Feature extraction methods, such as the scale-invariant feature transform (SIFT) [12] and oriented FAST and rotated BRIEF (ORB) [13], are used to identify interest points, and point correspondences are then obtained with a matching framework. Commonly, a random sample consensus (RANSAC) [14] procedure is applied to the correspondence set to discard incorrect associations, and the best estimate is chosen after an iterative optimization process. A major problem of such methods, e.g., ORB+RANSAC, is that an accurate homography estimate relies heavily on the accurate localization and even distribution of the detected hand-crafted feature points, which is challenging in low-textured scenes.

Appl. Sci. 2019, 9, 5187; doi:10.3390/app9235187 www.mdpi.com/journal/applsci

Convolutional neural networks (CNNs) automate feature extraction and provide much more powerful features than conventional approaches; their superiority has been demonstrated on numerous occasions in various tasks [15–19]. Recently, attempts have been made to solve the matching problem with CNNs. Flownet [20] achieves optical flow estimation by using a parallel convolutional network model to independently extract features from each image.
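The DLT step of the classical pipeline described above can be sketched in a few lines of NumPy. This is a minimal, illustrative variant without Hartley's point normalization, and the function name `dlt_homography` is ours, not from the paper:

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate H such that dst ~ H @ src from >= 4 point pairs (basic DLT).

    src, dst: (N, 2) arrays of corresponding pixel coordinates.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Two rows per correspondence, derived from the cross product
        # [u, v, 1] x (H [x, y, 1]^T) = 0.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(A, dtype=np.float64)
    # h is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the arbitrary scale so that H[2, 2] == 1
```

In practice, this least-squares solve is wrapped in a RANSAC loop (e.g., OpenCV's `cv2.findHomography` with the `cv2.RANSAC` flag) so that mismatched feature pairs do not corrupt the estimate.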
A correlation layer is used to locally match the extracted features against each other and aggregate them with responses. Finally, a refinement stage consisting of deconvolutions maps the optical flow estimates back to the original image coordinates. Flownet 2.0 [21] uses Flownet models as building blocks in a hierarchical framework to solve the same problem. In view of the powerful feature extraction and matching capabilities of CNNs, several studies have addressed homography estimation with CNNs and achieved higher accuracy than the ORB+RANSAC method. HomographyNet [22] estimated the homography between two images from the relocation of a set of four points, known as the four-point homography parameterization. The model is based on the VGG architecture [23], with eight convolutional layers, a pooling layer after every two convolutions, two fully connected layers, and an L2 loss on the difference between the predicted and true four-point coordinates. Nowruzi et al. [24] used a hierarchy of twin convolutional regression networks to estimate the homography between a pair of images and improved the four-point prediction accuracy over [22]. Nguyen et al. [25] proposed an unsupervised learning algorithm that trains a deep CNN to estimate a planar homography, also based on the four-point parameterization. All of these studies chose the four-point parameterization because the 3 × 3 homography matrix H mixes the rotation, translation, scaling, and shear components of the homography transformation. The rotation and shear components tend to have much smaller magnitudes than the translation component, so although an error in their values can have a major impact on H, it has only a minor effect on an L2 loss over the elements of H, which is detrimental for training a neural network.
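For context, converting a four-point parameterization (the displacements of a patch's four corners) back to a full 3 × 3 matrix amounts to solving an 8 × 8 linear system, essentially what OpenCV's `cv2.getPerspectiveTransform` does internally. A hypothetical NumPy sketch (function name ours):

```python
import numpy as np

def four_point_to_homography(corners, offsets):
    """Convert a four-point parameterization (corner displacements)
    into the corresponding 3x3 homography with H[2, 2] fixed to 1.

    corners: (4, 2) patch corners in image_a.
    offsets: (4, 2) displacements of those corners in image_b.
    """
    dst = corners + offsets
    A, b = [], []
    for (x, y), (u, v) in zip(corners, dst):
        # With h33 = 1: u * (h31*x + h32*y + 1) = h11*x + h12*y + h13,
        # and analogously for v, giving two linear equations per corner.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.asarray(A, dtype=np.float64),
                        np.asarray(b, dtype=np.float64))
    return np.append(h, 1.0).reshape(3, 3)
```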
The four-point homography parameterization does not suffer from these problems.

In this study, we focus on using convolutional neural networks to estimate homography. Contrary to the existing CNN-based four-point homography estimation methods, we directly estimate the 3 × 3 homography matrix. The large magnitude differences and large variance among the element values of the homography matrix make it very difficult to estimate directly with neural networks. Our study finds that after the pixel coordinates are normalized, the magnitude differences and variance of the elements of the normalized homography matrix become very small. On this basis, we extend the affine transformation in the STN [26] to the homography transformation and propose the STN-Homography model to directly estimate the pixel-coordinate-normalized homography matrix. The contributions of this study are as follows: (1) We show that the 3 × 3 homography matrix can be learned directly and accurately with the proposed STN-Homography model after pixel coordinate normalization, rather than resorting to the alternative four-point parameterization. (2) We propose a hierarchical STN-Homography model that yields more accurate results than the state of the art. (3) We propose a sequence STN-Homography model that can be trained in an end-to-end manner and yields results superior to those of both the hierarchical STN-Homography model and the state of the art.

2. Dataset

To compare the homography estimation accuracy of our proposed model with [22,24], we also used the Microsoft common objects in context 2014 dataset (COCO 2014) [27]. First, all images were converted to grayscale and downsampled to a resolution of 320 × 240 pixels. To prepare training and test samples, we chose 118,000 images from the trainval set and 10,000 images from the test set of COCO 2014.
Subsequently, three samples were generated from each image (denoted image_a) to increase the dataset size. To achieve this, three random rectangles with a size of 128 × 128 pixels (excluding a 32-pixel boundary region) were chosen from each image. For each rectangle, a random perturbation within the range of 32 pixels was added to each corner point, which provided the target four-point homography values. The target homography was used with the OpenCV library to warp image_a to image_b, where image_b has the same size as image_a. Finally, the original corner point coordinates were used within the warped image pair (image_a and image_b) to extract the patches patch_a and patch_b. The normalized homography matrix $\tilde{H}_{ba}$ can then be calculated as

$$\tilde{H}_{ba} = M H_{ba} M^{-1}, \qquad (1)$$

where $H_{ba}$ is the homography matrix calculated from the previously generated four-point homography values and

$$M = \begin{bmatrix} 2/w & 0 & -1 \\ 0 & 2/h & -1 \\ 0 & 0 & 1 \end{bmatrix},$$

where w and h denote the width and height of patch_b (patch_a and patch_b have the same size of 128 × 128 pixels). We also used the target homography with the OpenCV library to warp patch_a into patch_a_t, which has the same size as patch_a and is used to calculate the L1 pixelwise photometric loss. The quadruplet of patch_a, patch_b, $\tilde{H}_{ba}$, and patch_a_t constitutes one training sample and is fed as input to the network. Please note that the network predicts the normalized $\tilde{H}_{ba}$, and Equation (1) must be inverted to recover the nonnormalized homography matrix $H_{ba}$. Given that a homography matrix can be multiplied by an arbitrary nonzero scale factor without altering the projective transformation, only the ratios of the matrix elements are significant, leaving $H_{ba}$ with eight independent ratios corresponding to eight degrees of freedom.
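Equation (1) and its inverse are straightforward to implement. The following illustrative sketch (function names ours) assumes the paper's 128 × 128 patch size as the default:

```python
import numpy as np

def _norm_matrix(w, h):
    # M maps pixel coordinates to [-1, 1]: x' = 2x/w - 1, y' = 2y/h - 1.
    return np.array([[2.0 / w, 0.0, -1.0],
                     [0.0, 2.0 / h, -1.0],
                     [0.0, 0.0, 1.0]])

def normalize_homography(H_ba, w=128, h=128):
    """Eq. (1): express H_ba in normalized coordinates, which squeezes
    the matrix elements into a small, similar range."""
    M = _norm_matrix(w, h)
    H_norm = M @ H_ba @ np.linalg.inv(M)
    return H_norm / H_norm[2, 2]

def denormalize_homography(H_norm, w=128, h=128):
    """Invert Eq. (1) to recover the pixel-coordinate homography."""
    M = _norm_matrix(w, h)
    H_ba = np.linalg.inv(M) @ H_norm @ M
    return H_ba / H_ba[2, 2]
```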
Furthermore, we always set the last element of $H_{ba}$ to 1.0. In the training samples of quadruplet data, we flattened

$$H_{ba} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & 1.0 \end{bmatrix}$$

and took the first eight elements as the training target. Figure 1 shows the value histogram of $\tilde{H}_{ba}$ in the training samples after the pixel coordinate normalization of Equation (1). From Figure 1, we can clearly observe that after normalization, the magnitude differences and variances of the eight independent elements of $\tilde{H}_{ba}$ are very small, which means that $\tilde{H}_{ba}$ can be easily regressed with a CNN.
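The scale fixing and flattening described above might look as follows (an illustrative sketch; names are ours):

```python
import numpy as np

def homography_to_target(H_norm):
    """Fix the arbitrary scale so that H[2, 2] == 1.0, then keep the
    first eight elements as the eight-value regression target."""
    H = H_norm / H_norm[2, 2]
    return H.flatten()[:8]

def target_to_homography(t):
    """Rebuild the 3x3 matrix from the eight regressed values."""
    return np.append(t, 1.0).reshape(3, 3)
```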