JOURNAL OF SPACECRAFT AND ROCKETS Vol. 58, No. 2, March–April 2021
Uncooperative Spacecraft Pose Estimation Using Monocular Monochromatic Images
Jian-Feng Shi∗
MDA, Brampton, Ontario L6S 4J3, Canada
and
Steve Ulrich†
Carleton University, Ottawa, Ontario K1S 5B6, Canada

https://doi.org/10.2514/1.A34775

Imaging cameras are cost-effective sensors for spacecraft navigation. Image-driven techniques that extract the target spacecraft from its background are efficient and require no pretraining. In this paper, we introduce several image-driven foreground extraction methods, including one that combines difference-of-Gaussian-based scene detection with graph-manifold-ranking-based foreground saliency generation. We successfully apply our foreground extraction method to infrared images from the STS-135 flight mission captured by the space shuttle's Triangulation and LIDAR Automated Rendezvous and Docking System (TriDAR) thermal camera. Our saliency approach demonstrates state-of-the-art performance and reduces processing time by an order of magnitude compared with traditional methods. Furthermore, we develop a new uncooperative spacecraft pose estimation method by combining our foreground extraction technique with level-set region-based pose estimation, enhanced with novel initialization and gradient descent schemes. Our method is validated using synthetically generated Envisat, Radarsat, and International Space Station model motion sequences. The proposed process is also validated with real rendezvous flight images of the International Space Station.
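The difference-of-Gaussian (DoG) scene detection step named in the abstract can be sketched as follows. This is an illustrative reconstruction only, not the authors' published implementation: the function name `dog_foreground_mask`, the sigma values, and the use of a mean-value binary threshold on the normalized response are assumptions for demonstration.

```python
# Illustrative DoG foreground detection sketch; sigma values and the
# mean-value threshold are assumed, not the authors' parameters.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_foreground_mask(image, sigma_fine=1.0, sigma_coarse=4.0):
    """Return a binary foreground mask from a grayscale image.

    The DoG response emphasizes band-pass structure (a target
    spacecraft) against a smooth background. The response magnitude
    is normalized to [0, 1] and binarized at its mean value.
    """
    img = image.astype(np.float64)
    dog = gaussian_filter(img, sigma_fine) - gaussian_filter(img, sigma_coarse)
    dog = np.abs(dog)
    rng = dog.max() - dog.min()
    if rng == 0.0:                      # flat image: no foreground
        return np.zeros_like(dog, dtype=bool)
    dog = (dog - dog.min()) / rng       # normalize response to [0, 1]
    return dog > dog.mean()             # mean-value binary threshold

# Example: a bright square "target" on a dark background
frame = np.zeros((64, 64))
frame[24:40, 24:40] = 1.0
mask = dog_foreground_mask(frame)
```

In a full pipeline along the lines the paper describes, a mask like this would seed the saliency and level-set stages rather than serve as the final segmentation.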
Nomenclature

A, B, C = pose quaternion matrices
Ã = unnormalized optimal affinity matrix
B, K_B = box filter kernel and kernel size, respectively
B(x) = box filter operator
C = image curve entity
CG(x), CL(x) = image converting functions from color to gray and from red-green-blue to International Commission on Illumination L*a*b*, respectively
D = degree matrix
D̂, D = maximum and minimum feature vector differences, respectively
D(x) = normalized difference of Gaussian function
d_ij = ith-row, jth-column element of the degree matrix
E = image energy function
F(x) = fast explicit diffusion function
F = coordinate frame vectrix
f = camera focal length
f, f = superpixel and optimized graph manifold ranking functions, respectively
f̂_i = unit vector of the i vector gradient
G = 11 × 11 Gaussian filter
G(x) = Gaussian distribution map generation function
H(z), δ(z) = Heaviside step and smoothed Dirac-delta functions, respectively
h_i = step size in the i direction, where i is x, y
h_i = feature vector for the ith superpixel
I, I = input color image and normalized image, respectively
I_Lab = image in CIE L*a*b* color space
K = intrinsic camera matrix
K_i, K_i = elliptical kernel and kernel size, respectively, where i is dilation d or erosion e
K_L = Laplace filter kernel size
K_s = 3 × 3 Scharr kernel
L(x) = Laplacian filter
M(x) = mean value binary threshold function
MB(x) = minimum boundary distance saliency response generation function
MC(x) = image moment centroid generation function
MD(x), ME(x) = morphological dilation and erosion functions, respectively
M_i = appearance model, where i is foreground f or background b
N(x) = image normalization function
O(x) = Otsu thresholding function
o_i = image center position in the i direction, where i is x, y
P_i = probability density function, where i is foreground f or background b
q, q_i = quaternion-represented target orientation and ith parameter quaternion, respectively
q̃ = nonnormalized quaternion
R = rotation from target body frame to camera frame
R(x) = function that applies contours and keeps only the maximum-perimeter region
RC(x) = regional contrast saliency response generation function
r_i(x) = likelihood of the pixel property, where i is foreground f or background b
r_tol = foreground/background classification ratio tolerance
S_bg, S_fg = background and foreground masks, respectively
S_d = simplified difference-of-Gaussian normalized response map
S_e, S_fe = Scharr-filtered edge response map and edge contour response, respectively

Presented as Paper 2018-5281 at the 2018 AIAA SPACE and Astronautics Forum and Exposition, Orlando, FL, September 17–19, 2018; received 17 February 2020; revision received 26 June 2020; accepted for publication 23 September 2020; published online 8 February 2021. Copyright © 2021 by Jian-Feng Shi and Steve Ulrich. Published by the American Institute of Aeronautics and Astronautics, Inc., with permission. All requests for copying and permission to reprint should be submitted to CCC at www.copyright.com; employ the eISSN 1533-6794 to initiate your request. See also AIAA Rights and Permissions www.aiaa.org/randp.
∗Senior GNC Engineer, 9445 Airport Road. Senior Member AIAA.
†Associate Professor, Department of Mechanical and Aerospace Engineering, 1125 Colonel By Drive. Senior Member AIAA.