
3D Face Geometry Capture Using Monocular Video

Shubham Agrawal
May 20, 2019

The Robotics Institute
Carnegie Mellon University
Pittsburgh, Pennsylvania 15213

Thesis Committee:
Simon Lucey, Chair
Martial Hebert
Ming-Fang Chang, Carnegie Mellon University

Thesis proposal submitted in partial fulfillment of the requirements for the degree of Master of Science in Robotics

© Shubham Agrawal, 2019

Abstract

Accurate reconstruction of facial geometry is one of the oldest tasks in computer vision. Despite being a long-studied problem, many modern methods fail to reconstruct realistic-looking faces or rely on highly constrained capture environments. High-fidelity face reconstruction has so far been limited to studio settings or expensive 3D scanners; unconstrained reconstruction methods, on the other hand, are typically limited by low-capacity models. We aim to capture face geometry with high fidelity using just a single monocular video sequence of the face. Our method reconstructs accurate face geometry of a subject from a video shot on a smartphone in an unconstrained environment. Our approach takes advantage of recent advances in visual SLAM, keypoint detection, and object detection to improve accuracy and robustness. By not being constrained to a model subspace, our reconstructed meshes capture important details while remaining robust to noise and topologically consistent. Our evaluations show that our method outperforms current single- and multi-view baselines by a significant margin, both in geometric accuracy and in capturing the person-specific details that make models look realistic. To further current work on single- and multi-view 3D face reconstruction, we also propose a dataset of video sequences of individuals, with the specific goal of improving deep-learning-based reconstruction techniques that use self-supervision as a training loss.
Acknowledgements

First and foremost, I would like to express my deepest thanks to my advisor, Dr. Simon Lucey. His patient guidance, continued encouragement, and immense knowledge were key motivating factors throughout my masters. His insights on choosing the right tool for a particular problem helped me hone my own intuition as a researcher and guided me through my masters.

It has also been an absolute privilege to work with and learn from all the brilliant people in my masters program cohort. I have had the good fortune to meet the most helpful and humble people at CMU RI.

Last, but not least, I would like to express my gratitude and indebtedness to my family for their love and support. Their support and hope brought me here in the first place.

Contents

1 Introduction
  1.1 Motivation
  1.2 Challenges
  1.3 Contributions
  1.4 Thesis Outline
2 Related Work
  2.1 3D Morphable Models (3DMMs)
  2.2 Single Image 3D Face Reconstruction
  2.3 SfM based Multi-view Reconstruction
  2.4 Photometric Stereo
3 Camera Pose Estimation
  3.1 Introduction
  3.2 Our Approach
4 Multi-view Stereo
  4.1 Introduction
  4.2 Our Approach
5 Mesh Fitting
  5.1 Introduction
  5.2 Point cloud constraints
  5.3 Landmark constraints
  5.4 Edge constraints
  5.5 Non-Rigid Iterative Closest Points
6 Mesoscopic Augmentations
  6.1 Introduction
  6.2 Our Approach
7 Experimental Results
  7.1 Quantitative Evaluation
  7.2 Expressions
8 Dataset
9 Conclusion

List of Figures

1.1 While machine learning based models for keypoint detection have greatly improved over the past few years, they remain brittle to images with face angles beyond a certain threshold and face geometry beyond what they were trained on. This in turn means that relying on landmarks for pose estimation does not lead to accurate pose estimates.
2.1 Visualization of a 3DMM mesh, and variation along the first few principal components.
2.2 Overview of the pipeline of the state-of-the-art multi-view algorithm of [26], based on prior-constrained structure from motion. The method uses landmarks to initialize poses in a bundle-adjustment system that minimizes photometric consistency between frames, while optimizing 3D structure in the constrained 3DMM space.
2.3 General approach taken by current SOTA image-to-image translation based deep networks for reconstruction. The training data of such networks is still limited by low-quality synthetic data.
4.1 Example point clouds generated at the end of our point cloud generation stage, with and without texture. The point clouds accurately capture the overall face geometry and the details in areas like the eyes and lips that make the person recognizable. However, the point clouds have missing data as well as noise, which requires a robust mesh fitting approach.
5.1 Comparison of mesh generation methods. a) Sample image. b) Generated point cloud. c) [35] can fill in gaps in the point cloud, but at the cost of overly smooth meshes. d) The depth fusion method of [25] can preserve details, but is unable to handle missing data. e) Our approach reconstructs meshes with consistent topology and correspondence between vertices, while capturing details of the point cloud and being robust to noise and missing data.
5.2 Exaggerated view of the point cloud constraints. For each vertex, the set of points within a small threshold of its normal (in green here) is found, and their median is used as the target 3D coordinate for the vertex.
5.3 a) We train a bounding box regressor (above) and a landmark detector (below) specifically for ears. This improves our reconstruction's overall accuracy while allowing us to capture ear size and contour. b) Visualization of edge constraints. Image edges in yellow; mesh vertices corresponding to edges projected in blue. Note that the mesh vertices fit the ear well because of the ear landmark detection.
5.4 The figure shows the non-monotonic decrease of the residual versus iteration during a registration; this non-convex behavior prevents the use of a black-box optimizer. The residual increases between some steps, as the reliability weights increase when the template aligns itself with the target and more points find a correspondence. A general optimizer cannot escape these local minima, while the method we use is robust to this behavior of the loss. In our method, convergence is declared when a threshold stiffness value is reached.
6.1 (Centre) Ours. (Right) Ours with modified mesoscopic augmentations.
7.1 Qualitative comparison against reconstructions of various single and multi-view methods. Left to right: sample frame and ground truth 3D, Pix2vertex [51], PRN [18], multi-view landmark fitting (4dface [28]), PCSfM [26], ours. For each method, the upper row shows the reconstructed mesh, front and profile, and the corresponding heatmap of error (accuracy) is shown in the lower row.
7.2 Effect of ear landmarking: ground truth mesh (white) overlapped with error heatmaps of PCSfM (left) and ours (right). Landmarking the ears greatly improves our fitting and reduces the geometric error in our reconstructions.
7.3 (Middle) Output from the Structure RGB-D Sensor [41]. Details like the eyes, nose, and lips are excessively smoothed out. (Right) Our reconstruction.
7.4 Our method naturally generalizes to any face geometry, including deformations caused by expressions.
8.1 For each subject, we record two video sequences under different lighting and backgrounds. For subjects where ground truth is not available, we self-validate the two reconstructed meshes to be consistent within a small tolerance.
8.2 (Middle) Output from the Structure RGB-D Sensor [41]. Details like the eyes, nose, and lips are excessively smoothed out. (Right) Our reconstruction.

List of Tables

7.1 Quantitative results against ground truth scans. We evaluate state-of-the-art single and multi-view reconstruction methods. As is common in MVS benchmarks, we evaluate the reconstructions in terms of average distance from reconstruction to ground truth (accuracy) and distance from ground truth to reconstruction (completion). All numbers in mm; lower is better. * denotes that the method needs camera intrinsics to be known in advance.
8.1 An overview of available 3D face datasets and the pose variation in the RGB images available in them.

Chapter 1

Introduction

1.1 Motivation

Reconstructing faces has been a problem of great interest in computer vision and graphics, with applications in a wide variety of domains: animation [29], entertainment [47], genetics, biometrics, medical procedures, and, more recently, augmented and virtual reality. Despite the long body of work, 3D face reconstruction remains an open and challenging problem, primarily because of the high level of detail required, owing to our sensitivity to facial features. Even slight anomalies in a reconstruction can make the output look unrealistic; hence, the accuracy of reconstructed face models is of utmost importance.

While accurate scans of facial geometry can be obtained using structured light or laser scanners, these are often prohibitively expensive, typically costing tens of thousands of dollars. The seminal work of Beeler [7] showed that a studio setup of cameras could be used to capture face geometry accurately. Since then, a variety of work has focused on using photometric stereo or multi-view stereo techniques in studio settings for face reconstruction and performance capture [13, 21].
Although accurate in their reconstructions, these studio setups are nontrivial to build, typically requiring a calibrated camera rig along with controlled lighting and backgrounds.