(12) United States Patent    Beeler et al.
(10) Patent No.: US 9,036,898 B1
(45) Date of Patent: May 19, 2015

(54) HIGH-QUALITY PASSIVE PERFORMANCE CAPTURE USING ANCHOR FRAMES

(75) Inventors: Thabo Beeler, Zurich (CH); Bernd Bickel, Zurich (CH); Fabian Hahn, Regensdorf (CH); Derek Bradley, Zurich (CH); Paul Beardsley, Zurich (CH); Bob Sumner, Zurich (CH); Markus Gross, Zurich (CH)

(73) Assignee: DISNEY ENTERPRISES, INC., Burbank, CA (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 535 days.

(21) Appl. No.: 13/287,774

(22) Filed: Nov. 2, 2011

Related U.S. Application Data
(60) Provisional application No. 61/433,926, filed on Jan. 18, 2011.

(51) Int. Cl.: G06T 17/00 (2006.01)

(56) References Cited

U.S. PATENT DOCUMENTS
2009/0153655 A1    6/2009  Ike et al. ................... 348/77
2010/0189342 A1*   7/2010  Parr et al. .................. 382/154
2010/0284607 A1*  11/2010  Van Den Hengel et al. ....... 382/154

OTHER PUBLICATIONS
Alexander, O., et al., "The digital emily project: photoreal facial modeling and animation," ACM SIGGRAPH, 2009, Courses 1-15.
Anuar, N., et al., "Extracting animated meshes with adaptive motion estimation," Proc. Vision, Modeling, and Visualization, 2004, pp. 63-71.
Beeler, T., et al., "High-quality single-shot capture of facial geometry," ACM Trans. Graphics, 2010, Proc. SIGGRAPH, 9 pages.
(Continued)

Primary Examiner — Stephen R Koziol
Assistant Examiner — Delomia Gilliard
(74) Attorney, Agent, or Firm — Kilpatrick Townsend & Stockton LLP

(57) ABSTRACT
High-quality passive performance capture using anchor frames derives in part from a robust tracking algorithm. Tracking is performed in image space, and an integrated result is used to propagate a single reference mesh to performances represented in the image space. Image-space tracking is computed for each camera in multi-camera setups.
Thus, multiple hypotheses can be propagated forward in time. If one flow computation develops inaccuracies, the others can compensate. This yields results superior to mesh-based tracking techniques because image data typically contains much more detail, facilitating more accurate tracking. Moreover, the problem of error propagation due to inaccurate tracking in image space can be dealt with in the same domain in which it occurs. There is therefore less complication of distortion due to parameterization, a technique used frequently in mesh-processing algorithms.

(52) U.S. Cl.
CPC ................ G06T 17/00 (2013.01)
(58) Field of Classification Search
CPC ................ G06T 17/00
USPC ............... 382/154
See application file for complete search history.

21 Claims, 20 Drawing Sheets

[Front-page figure: processing pipeline comprising an image acquisition stage 210, a mesh reconstruction stage 220, a propagation stage 250, and a mesh refinement stage 260, applied to a frame sequence.]

US 9,036,898 B1, Page 2

(56) References Cited (continued)

U.S. PATENT DOCUMENTS
2009/0003686 A1*   1/2009  Gu ........................... 382/154
2009/0016435 A1*   1/2009  Brandsma et al. .......... 375/240.12

OTHER PUBLICATIONS
Bickel, B., et al., "Multi-scale capture of facial geometry and motion," ACM Trans. Graphics, 2007, Proc. SIGGRAPH, 10 pages.
Blanz, V., et al., "Reanimating faces in images and video," Computer Graphics Forum, 2003, Proc. Eurographics, vol. 22, No. 3, pp. 641-650.
Bradley, D., et al., "High resolution passive facial performance capture," ACM Trans. Graphics, 2010, Proc. SIGGRAPH, 10 pages.
DeCarlo, D., et al., "The integration of optical flow and deformable models with applications to human face shape and motion estimation," CVPR, 1996, 8 pages.
Ekman, Paul, et al., "Facial Action Coding System. The Manual on CD Rom, HTML Demonstration Version," [online], published by A Human Face, 2002, ISBN 0-931835-01-1, retrieved on Oct. 3, 2013, retrieved from the internet: <URL: http://face-and-emotion.com/datafaceffacs/manual.html>, 8 pages.
Essa, I., et al., "Modeling, tracking and interactive animation of faces and heads using input from video," Proceedings of Computer Animation, 1996, 12 pages.
Furukawa, Y., et al., "Dense 3d motion capture for human faces," CVPR, 2009, 8 pages.
Guenter, B., et al., "Making faces," Computer Graphics, 1998, ACM Press, New York, SIGGRAPH 98 Proceedings, pp. 55-66.
Hernandez, C., et al., "Self-calibrating a real-time monocular 3d facial capture system," Proceedings International Symposium on 3D Data Processing, Visualization and Transmission, 2010, 8 pages.
Kraevoy, V., et al., "Cross-parameterization and compatible remeshing of 3d models," ACM Trans. Graph., 2004, vol. 23, pp. 861-869.
Li, H., et al., "3-d motion estimation in model-based facial image coding," IEEE Trans. Pattern Anal. Mach. Intell., 1993, vol. 15, No. 6, pp. 545-555.
Lin, I. C., et al., "Mirror mocap: Automatic and efficient capture of dense 3d facial motion parameters from video," The Visual Computer, 2005, vol. 21, No. 6, pp. 355-372.
Ma, W., et al., "Facial performance synthesis using deformation-driven polynomial displacement maps," ACM Trans. Graphics, 2008, Proc. SIGGRAPH Asia, vol. 27, No. 5, 10 pages.
Pighin, F. H., et al., "Resynthesizing facial animation through 3d model-based tracking," ICCV, 1999, pp. 143-150.
Popa, T., et al., "Globally consistent space-time reconstruction," Eurographics Symposium on Geometry Processing, 2010, 10 pages.
Rav-Acha, A., et al., "Unwrap mosaics: A new representation for video editing," ACM Transactions on Graphics, 2008, SIGGRAPH, Aug. 2008, 12 pages.
Sumner, R. W., et al., "Deformation transfer for triangle meshes," ACM Trans. Graph., 2004, vol. 23, pp. 399-405.
Wand, M., et al., "Efficient reconstruction of nonrigid shape and motion from real-time 3d scanner data," ACM Trans. Graph., 2009, vol. 28, No. 2, pp. 1-15.
Wang, Y., et al., "High resolution acquisition, learning and transfer of dynamic 3-d facial expressions," Computer Graphics Forum, 2004, vol. 23, No. 3, pp. 677-686.
Williams, L., "Performance-driven facial animation," Computer Graphics, 1990, Proceedings of SIGGRAPH 90, vol. 24, pp. 235-242.
Winkler, T., et al., "Mesh massage," The Visual Computer, 2008, vol. 24, pp. 775-785.
Zhang, L., et al., "Spacetime faces: High resolution capture for modeling and animation," ACM Transactions on Graphics, 2004, vol. 23, No. 3, pp. 548-558.

* cited by examiner

U.S. Patent, May 19, 2015, US 9,036,898 B1 (drawing sheets):

[Sheet 1, FIG. 1: block diagram of a design computer 110 coupled to object libraries 120 and to object modeling system(s) 130, object articulation system(s) 140, object animation system(s) 150, object simulation system(s) 160, and object rendering system(s) 170.]

[Sheet 2, FIG. 2: photographic figure; no machine-readable text.]

[Sheet 3, FIG. 3: flowchart. Begin 310; capture performance to generate a sequence of frames 320; identify objects in each frame in the sequence of frames 330; generate geometry for each identified object 340; store geometry generated for each identified object 350; end 360.]

[Sheet 4, FIG. 4: flowchart 400. Begin 410; determine a reference frame in a sequence of frames capturing a performance 420; identify anchor frames in the sequence of frames based on the reference frame 430; generate clips based on the anchor frames 440; store the generated clips 450; end 460.]

[Sheet 5, FIG. 5: flowchart. Begin 510; determine feature set of the reference frame 520; receive a plurality of target frames 530; perform correspondence matching between the feature set of the reference frame and each target frame 540; determine an amount of correspondence between the feature set of the reference frame and each target frame 550; generate information identifying each target frame as an anchor frame when the amount of correspondence satisfies selection criteria 560; end 570.]
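The anchor-frame selection of FIGS. 4 and 5 amounts to scoring each target frame against the reference frame's feature set and marking frames whose correspondence satisfies a selection criterion. A minimal sketch of that criterion follows; the function names, the correspondence ratio, and the 0.8 threshold are illustrative assumptions, not the patented implementation:

```python
# Hypothetical sketch of anchor-frame selection (FIGS. 4-5).
# The matcher and the threshold are illustrative assumptions.

def select_anchor_frames(reference_features, target_frames,
                         match_features, min_correspondence=0.8):
    """Return indices of target frames whose feature correspondence
    with the reference frame satisfies the selection criterion."""
    anchors = []
    for index, frame in enumerate(target_frames):
        # Correspondence matching between the reference feature set
        # and this target frame (step 540).
        matches = match_features(reference_features, frame)
        # Amount of correspondence as a fraction of reference
        # features successfully matched (step 550).
        correspondence = len(matches) / max(len(reference_features), 1)
        if correspondence >= min_correspondence:
            anchors.append(index)  # step 560: mark as anchor frame
    return anchors
```

In practice `match_features` would be a robust image-space feature matcher; any frame similar enough to the reference expression becomes an anchor, bounding the clips tracked in FIGS. 8-10.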
[Sheets 6-7, FIGS. 6-7: photographic figures; no machine-readable text.]

[Sheet 8, FIG. 8: flowchart 800. Begin 810; receive a plurality of clips 820; for each clip in the plurality of clips, perform image-space tracking between the reference frame and the anchor frames of the clip to generate motion estimation 830; for each clip in the plurality of clips, perform image-space tracking between the reference frame and unanchored frames of the clip to generate motion estimation 840; generate tracking information for each clip in the plurality of clips 850; end 860.]

[Sheet 9, FIG. 9: flowchart. Begin 910; perform feature matching between the reference frame and an anchor frame of a clip to generate forward motion estimation and backward motion estimation 920; filter feature matches based on selection criteria 930; perform re-matching of unmatched features 940; perform match refinement 950; generate feature tracking information from the reference frame to the anchor frame 960; end 970.]

[Sheet 10, FIG. 10: flowchart 1000. Begin 1010; perform incremental frame-by-frame feature matching from a preceding anchor frame to an unanchored frame of a clip to generate forward motion estimation 1020; perform match refinement 1030; perform incremental frame-by-frame feature matching from a succeeding anchor frame to the unanchored frame of the clip to generate backward motion estimation 1040; perform match refinement 1050; choose forward motion estimation or backward motion estimation based on selection criteria 1060; generate feature tracking information from the anchor frame to the unanchored frame 1070; end 1080.]
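The bidirectional pass of FIG. 10 yields two motion hypotheses per unanchored frame: one propagated forward from the preceding anchor frame and one propagated backward from the succeeding anchor frame. A minimal sketch of the per-feature selection step (1060) follows; all names and the scalar matching-error criterion are illustrative assumptions rather than the patented selection criteria:

```python
# Hypothetical sketch of choosing between forward and backward
# motion estimates (FIG. 10, step 1060). The per-feature matching
# error is an assumed stand-in for the cost used during refinement.

def choose_motion_estimates(forward, backward,
                            forward_error, backward_error):
    """Per feature, keep the forward estimate from the preceding
    anchor unless the backward estimate from the succeeding anchor
    has a lower matching error."""
    chosen = []
    for f, b, fe, be in zip(forward, backward,
                            forward_error, backward_error):
        chosen.append(f if fe <= be else b)
    return chosen
```

This is the sense in which multiple hypotheses let one flow computation compensate for inaccuracies in another: a feature whose forward track drifts can still be recovered from the backward track, so errors do not accumulate across an entire clip.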