
EUROGRAPHICS 2005 / M. Alexa and J. Marks (Guest Editors), Volume 24 (2005), Number 3

Garment Motion Capture Using Color-Coded Patterns

Submission id: paper1243

Abstract In this paper we present an image-based algorithm for surface reconstruction of moving garments from multiple calibrated video cameras. Using a color-coded cloth texture allows reliable matching of circular features between different camera views. We use an a priori known triangle mesh as surface model. By identifying the mesh vertices with texture elements we obtain a consistent parameterization of the surface over time without further processing. Missing data points resulting from self-shadowing are plausibly interpolated by minimizing a thin-plate functional. The deforming geometry can be used for different graphics applications, e.g. for realistic retexturing. We show results for real garments demonstrating the accuracy of the recovered flexible shape.

Categories and Subject Descriptors (according to ACM CCS): I.4.1 [Image Processing and Computer Vision]: Digitization and Image Capture; I.3.7 [Computer Graphics]: Animation

1. Introduction

Correct, realistic simulation and visualization of textiles has been at the focus of many research fields such as mathematics, physics, materials science, and computer graphics. A wide range of applications, from virtual actors for the film industry to virtual prototyping of cloth designs, has made this research field especially attractive. Physically based approaches aim at modeling the behavior of textiles based on physical models of textile dynamics. The methods used vary from linear elastic to nonlinear visco-elastic approaches. Much research concentrates on finding the right model to capture the physical behavior and represent it [CK02, EKS03, RBF03, MTCK*04]. Since the work of Baraff and Witkin [BW98], implicit numerical time integration has proved to be a powerful method for solving the stiff ordinary differential equations that arise in cloth simulations.

The obtained results have to be rendered photo-realistically (Figure 1) to convey a good look and feel of the simulated result. Here, research aims at modeling the visual behavior of cloth to enhance the realistic impression of the animation ([SSK03, MMS*04]). Current cloth simulation engines have become powerful enough to be used in quality-demanding applications relevant to the textile industry. Moreover, animations produced by the film industry include very realistic-looking cloth which cannot be distinguished from real captured cloth. Despite these efforts, and despite faster computers appearing every year, the known methods are still far from generating realistic, detailed animations in real time. In order to achieve real-time simulations, many simplifications, heuristics, and precalculations have to be employed; these produce physically reasonable results, but the calculated movements still do not look natural and lack the correct physical behavior ([VMT01]).

An alternative to physical simulation is motion capture. Correct physics is not an issue in this case. Rather than using complex models for the human body and the garment, the motion is measured directly, excluding numerical or physical inaccuracies. To close the gap between model and experiment, an automatic measurement technique is needed. Much research has been devoted to skeletal and facial motion capture, but cloth capture still remains an open problem.

This paper is organized as follows. In Section 2 we discuss related work regarding the acquisition of non-rigid surfaces. Section 3 explains the process of garment production. We then describe our method for shape reconstruction in Section 4. Section 5 explains our rendering method and Section 6 presents the obtained results. Finally, we summarize our findings in Section 7 and conclude with an outlook on potentially fruitful future work.

Figure 1: Simulation and visualization of virtual cloth with measured physical data on a captured person (taken from [MMS*04]).

2. Related work

The problem of capturing non-rigid motion has been addressed by a number of researchers. Carceroni and Kutulakos [CK01] obtain shape, reflectance and non-rigid motion of a dynamic 3D scene by an algorithm called surfel sampling. Experimental results for complex real scenes (a waving flag, skin, shiny objects) are shown. The reconstructed surfels are quite large, which gives a coarse sampling of the surface. A flow-based tracking method which does not require prior shape models is described in [TYAB01]. The method produces 3D reconstructions from single-view video by exploiting rank constraints on optical flow. Results for a shoe and a T-shirt tracking sequence are shown.

Bhat et al. [BTH*03] estimate the parameters for a cloth simulation by adjusting the simulation results to real-world footage. This is an elegant way to avoid parameter tuning by hand. Results for fabrics with different material properties are shown. By reducing non-rigid motion to several material parameters, this method is suitable mainly for qualitative reproduction. Pritchard and Heidrich [PH03] use an image-based approach to cloth motion. They use a calibrated stereo camera pair and obtain the surface parameterization by SIFT feature matching. Motion blur caused by fast motion reduces the accuracy of the matching.

Lobay and Forsyth [LF04] show that shape-from-texture techniques can be applied to cloth reconstruction. The results are based on still images, and a surface model with irradiance maps is reconstructed. Their shape-from-texture approach derives surface normals from the shape of the individual texture elements, which requires a regular texture pattern. The results look smooth but lack detail. Ebert et al. [AED03] use color-coded cloth textures for retexturing virtual clothing. Together with range scans of the garment, a parameterization of the mesh is obtained. The authors use a color code which has a limited number of codewords, so the pattern is repeated over the whole fabric. In this method the color code is only used for the parameterization of the surface.

The work by Guskov et al. [GKB03] is closest to ours. They use color-coded quad markers for the acquisition of non-rigid surfaces. Results for different surface types, including a T-shirt, are presented. The color code used has a limited number of codewords, so a tracking method based on Markov random fields is employed. The system achieves real-time performance. Tracking performance deteriorates for fast motion, and the quads have to be quite large, which limits surface resolution. In contrast, our method uses a color code with more codewords and can cope with fast motion. It makes use of a priori knowledge of the surface connectivity and the color-coded pattern. In our work we present the first results for complex motion of real garments.

3. Preliminary work

Our approach requires a custom-printed cloth pattern. We describe its production in the following. Additionally, a triangle mesh for the garment is constructed as input for the acquisition algorithm.

3.1. Color-coded patterns

Color codes are well known in the context of structured light reconstruction techniques [ZCS02]. In [PSGM03], a good overview of projection patterns including color codes is given. For two-dimensional cloth textures we need a pattern which encodes both axes. On the one hand, the pattern should be large enough for manufacturing garments. On the other hand, each point in the pattern must be identifiable and thus have a different codeword. We have chosen M-arrays [MOC*98], a color code which encodes each point in the pattern by its spatial neighborhood (Figure 2). In this code, each 3×3 neighborhood of a point is unique and can be used for point identification. The number of possible codewords depends on the number of colors c and is given by c^9 (1.9 million for c = 5). By choosing five well-distinguishable colors we are able to construct a pattern with a reasonable size for textile printing (76 × 251 points).

For pseudo-random code generation we adopt an incremental algorithm described in [MOC*98]. It begins by seeding the top-left 3×3 window of the pattern matrix with a random color assignment and fills up the matrix by incrementally adding random colors. In each step, the window property is verified. In our case, the windows may be rotated in the camera images. In order to make point identification invariant to rotations, all windows are also verified against rotated versions in 45 degree steps. This reduces the number of possible codewords but still allows patterns of reasonable size. The output of the algorithm is a pattern matrix M with entries for the five colors.

Figure 2: The pseudo-random color pattern used for our garments contains five colors: red, green, blue, orange and yellow.

The generated color pattern is printed on polyester fabric with a high-quality textile inkjet printer. The grid spacing between dots is 2 cm, with the dot diameter measuring 1.3 cm. The two garments, a skirt and a T-shirt, are manufactured by a tailor. During this process, we take photographs of the garment panel outlines for triangle mesh construction (Figure 3).

Figure 3: The four garment panels used for the T-shirt. After sewing, the outlines are further reduced due to the seams, which is important for mesh construction.

3.2. Mesh construction

Based on the photographs of the panels, we design the virtual garment counterparts. To this end we use the cloth simulation plugin tcCloth for Alias' Maya software [GMP*04]. With this tool we construct the corresponding meshes for the garment. First, we draw the border curves of each pattern using NURBS curves. These curves are assembled into cloth patterns from which single triangle meshes are constructed. We meet the additional constraint that interior mesh points are situated at the centers of the color dots. This is achieved by triangulating a regular quadratic mesh, representing the color-coded texture, inside the border curves drawn in the first step. Then, the necessary seams between the cloth patterns are specified, and the complete piece of cloth is constructed. In this step, we assert that the patterns are triangulated in such a way that two corresponding seam lines have the same number of vertices. More precisely, each seam is given by a list of pairs of vertices, these being the corresponding vertices on two not necessarily distinct planar patterns. Afterwards, the integrated cloth simulation based on [EKS03] is used to achieve a smooth triangle mesh as input for the reconstruction algorithm. A more detailed description of the mesh construction process can be found in [GMP*04, KFW04]. The correspondence between the mesh vertices inside the borders and the colored pattern dots is also established during mesh construction. The uv texture coordinates of a vertex correspond to its index in the pattern matrix M.

4. Cloth Motion Capture

Our system consists of eight synchronized video cameras arranged all around the person wearing the garment. In a first step, the acquired video images are processed for feature identification and matching between different camera views. Using geometric camera calibration, 3D surface points are reconstructed from the image feature positions. The acquired surface is then processed by hole filling and smoothing algorithms. In the following, we go through the processing pipeline.

4.1. Feature recognition

The first step in our method is the recognition of colored ellipses, the image projections of our pattern dots. Color classification should be robust against illumination variations. For this purpose, we convert RGB color into the HSV color space to separate luminance from chromatic properties. Hue represents the color information, while Saturation (whiteness) and Value (brightness) vary with illumination intensity. A common model for this variability are Gaussian distributions used in Bayesian color classifiers [VV03]. This method requires training data for the classifier under different illumination conditions, often hand-labeled in the images [GGW*98]. For a multi-camera setup this is a tedious procedure. We use a simpler approach which uses only hue for color classification. We assume that hue remains constant over a wide range of brightness levels and use nearest-neighbor classification to distinguish between five color classes. No additional thresholds as in Bayesian methods are needed. For estimation of the color class hues, a test pattern with the five cloth pattern colors is recorded for every camera (Figure 4).

Color detection is affected by camera noise and does not allow the projected dot shapes to be estimated exactly. We increase the robustness of feature detection by combining color with edge information. A Canny edge detector [Can86] is applied to the luminance images. This yields well-defined ellipse contours due to the high brightness contrast between the color dots and the cloth background. The contours are further processed by outer contour following [SA85]. A lower and an upper threshold for the feature area are used for filtering. For color classification, every pixel inside an ellipse votes for a color class and the ellipse color is determined by the winner-takes-all strategy. Color detection works at an error rate of 5-10 percent. Performance deteriorates mainly in shadowed areas. The centers of the color dots are calculated as the centers of gravity of the filled dot contours and are later used for reconstruction.

Figure 4: Test pattern with five colors for color hue calibration.

Figure 5: Left side: frontal view, nearest eight neighbors of the center dot. Right side: oblique view, different neighbors due to perspective foreshortening.

Figure 6: Left side: The local vectors u, v and u + v can change roles under certain viewing angles. In addition to the obvious eight neighbors, the green and blue neighborhoods (dashed lines) have to be considered as well. For example, the blue parallelogram can be obtained by shifting the base line AB in the u + v and −(u + v) directions.
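To make the window-uniqueness bookkeeping of Section 3.1 concrete, here is a minimal Python sketch of such an incremental generator. The demo dimensions, the letter color labels, and all function names are our own illustrations; the paper generates its 76 × 251 pattern with the same idea at scale.

```python
import random

COLORS = "RGBOY"       # five color classes; letters stand in for the printed colors
ROWS, COLS = 8, 12     # small demo size; the paper's pattern is 76 x 251 points

def ring_rotations(win):
    """All 8 rotations of a flat 3x3 window in 45-degree steps:
    the ring of eight neighbors is shifted around the fixed center."""
    ring_idx = [0, 1, 2, 5, 8, 7, 6, 3]          # clockwise ring positions
    ring = [win[i] for i in ring_idx]
    rots = []
    for k in range(8):
        w = list(win)
        for pos, idx in enumerate(ring_idx):
            w[idx] = ring[(pos + k) % 8]
        rots.append(tuple(w))
    return rots

def completed_windows(mat, r, c):
    """All fully assigned 3x3 windows that contain cell (r, c)."""
    wins = []
    for r0 in range(max(0, r - 2), min(r, ROWS - 3) + 1):
        for c0 in range(max(0, c - 2), min(c, COLS - 3) + 1):
            win = tuple(mat[r0 + i][c0 + j] for i in range(3) for j in range(3))
            if None not in win:
                wins.append(win)
    return wins

def generate_pattern(seed=0):
    """Fill the matrix cell by cell; a color is accepted only if every 3x3
    window it completes is new, counting rotated versions as identical."""
    rng = random.Random(seed)
    mat = [[None] * COLS for _ in range(ROWS)]
    used = set()                                 # codewords seen, with rotations
    for r in range(ROWS):
        for c in range(COLS):
            for color in rng.sample(COLORS, len(COLORS)):
                mat[r][c] = color
                new = set()
                for w in completed_windows(mat, r, c):
                    if w in used or w in new:
                        new = None               # codeword collision: reject color
                        break
                    new.update(ring_rotations(w))
                if new is not None:
                    used |= new
                    break
                mat[r][c] = None
            else:
                raise RuntimeError("dead end; restart with a different seed")
    return mat
```

This greedy sketch can in principle dead-end and restart, whereas a production generator would backtrack; for five colors the codeword space (c^9 / 8 under rotation) is so much larger than the pattern that collisions are rare.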

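The hue-only nearest-neighbor classification of Section 4.1 can be sketched in a few lines. The reference hue values below are illustrative placeholders for what would be measured once per camera from the test pattern; the function names are ours.

```python
import colorsys

# Reference hue per color class, as could be measured per camera from the
# test pattern of Figure 4. The numeric values here are illustrative only.
REFERENCE_HUES = {"red": 0.00, "orange": 0.08, "yellow": 0.15,
                  "green": 0.33, "blue": 0.66}

def hue_distance(a, b):
    """Distance on the circular hue axis (hue wraps around at 1.0)."""
    d = abs(a - b)
    return min(d, 1.0 - d)

def classify_pixel(r, g, b):
    """Nearest-neighbor color class using hue only: Saturation and Value,
    which vary with illumination intensity, are discarded."""
    h, _s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return min(REFERENCE_HUES, key=lambda c: hue_distance(h, REFERENCE_HUES[c]))
```

Because only hue is used, a brightly and a dimly lit dot of the same color map to the same class: `classify_pixel(200, 40, 30)` and `classify_pixel(100, 20, 15)` both return `"red"`, which is the illumination invariance the method relies on.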
4.2. Feature labeling

In the next step, every colored dot has to be identified by its window in the pattern matrix M and assigned an index (i, j). We use the garment panel outlines from Figure 3 to mask out all dots which are not part of the garment and use this as a priori knowledge for labeling.

The algorithm uses a region-growing strategy. As a first step, a seed color dot has to be identified in the image. For determining its neighbors, the easiest approach would consider the nearest eight dots in the image. However, this may give incorrect results when the dots are observed under perspective projection. Dots from the 5×5 neighborhood can come closer for oblique viewing angles (Figure 5). As cloth can show significant perspective effects at cloth folds, this has to be taken into account. While this complication could be avoided with a hexagonal grid with uniform neighbor distances, the reduced number of codewords (c^7) in this case would require additional colors. Thus, we have to examine the 5 × 5 neighborhood of the center dot and generate three hypotheses for its 3 × 3 neighborhood and the local lattice directions u, v and u + v (Figure 6). The algorithm for seed finding is as follows:

1. Determine the three nearest neighbors of the center to estimate the local u, v and u + v directions (their opposite-lying neighbors are determined as well).
2. Generate three different 3 × 3 hypotheses by shifting the base lines AB and CD along the local directions (Figure 6).
3. Filter out hypotheses which are not in the pattern matrix M. For searching, the neighbor dots are sorted by the polar angle around the center dot.

After this, we have several seed hypotheses available which are verified in the following region-growing step. In this manner, seed dots can also be found for cloth areas observed under oblique angles, which is important at cloth folds. The region-growing algorithm iterates over the following steps until no further dots are found (Figure 6, right side):

1. Compute the local vectors u, v from a 3×3 neighborhood.
2. Predict the neighbor dot position by adding u, v to the center position (for example, in Figure 6, u + v is added to the center of the black dot).
3. Label an image dot in the search window (dashed window in Figure 6) with an index (i, j) if it has the correct color and the new local vectors satisfy ||u′ − u|| ≤ c||u|| and ||v′ − v|| ≤ c||v|| with 0 ≤ c ≤ 1.

Step 2 implies a smoothness constraint for the lattice structure, because the neighboring dot has to lie in the specified search window. This is a reasonable assumption for smooth cloth surfaces. The inequalities in step 3 have the same purpose. The color test uses the a priori known pattern matrix M.
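A single expansion step of this predict-and-verify loop can be sketched as follows. This is a simplified illustration under our own naming; the full algorithm additionally checks the candidate's color against the pattern matrix M and grows in all four lattice directions.

```python
import math

def match_neighbor(center, u, dots, c=0.5):
    """One expansion step of the labeling stage (Section 4.2): predict the
    neighbor position center + u, then accept the closest detected dot whose
    implied lattice vector u' passes the smoothness test ||u' - u|| <= c||u||.
    `center` and `u` are 2D image vectors, `dots` a list of (x, y, color)."""
    pred = (center[0] + u[0], center[1] + u[1])       # predicted dot position
    best, best_d = None, None
    for x, y, color in dots:
        u_new = (x - center[0], y - center[1])        # candidate lattice vector
        if math.hypot(u_new[0] - u[0], u_new[1] - u[1]) <= c * math.hypot(*u):
            d = math.hypot(x - pred[0], y - pred[1])
            if best is None or d < best_d:
                best, best_d = (x, y, color), d
    return best                                       # None if growing stops here
```

Returning `None` when no dot passes the test is exactly how region growing stops at depth discontinuities, where the local lattice orientation changes abruptly.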


At depth discontinuities, region growing should stop. This is enforced by steps 2 and 3, because in this case the local lattice orientation is likely to change, and the chance of finding a dot with the correct color on the other side of the discontinuity is only 20 percent for five colors. However, some dots are labeled wrongly, which can be detected in the later reconstruction step. A seed hypothesis is verified if a region of sufficient size is found by the region-growing algorithm. Otherwise, the region growing is restarted with another seed hypothesis.

The region-growing approach works remarkably well and can cope with arbitrary image background, as it exploits the regular lattice structure in the image. Figure 7 shows the output after region growing. The region-growing algorithm considers only vertical and horizontal neighbors, which increases robustness. For most images only a few seed points are needed to label all dots. Algorithm performance deteriorates for oblique angles, but these areas do not deliver accurate measurements for reconstruction anyway.

Figure 7: Output of the labeling algorithm. The star-shaped crosses (left side, center) mark seed points where region growing starts. Every color dot is connected with its predecessor, yielding the tree structure shown as lines.

4.3. Reconstruction

After labeling, the image projections of the visible pattern dots are known. We employ the linear triangulation method from [HZ00] for reconstruction. All image measurements x_i = (x_i, y_i) = P_i X, i ∈ {0 ... n−1}, of a pattern point in n camera views are used. P_i denotes the 3 × 4 camera projection matrix of camera i with rows p_i^1T, p_i^2T, p_i^3T, and X the 3D reconstruction. A linear system AX = 0 is solved with singular value decomposition:

    [ x_0 p_0^3T − p_0^1T             ]
    [ y_0 p_0^3T − p_0^2T             ]
    [ ...                             ] X = 0        (1)
    [ x_{n−1} p_{n−1}^3T − p_{n−1}^1T ]
    [ y_{n−1} p_{n−1}^3T − p_{n−1}^2T ]

The solution X minimizes the reprojection error in the images in a least-squares sense. The maximum reprojection error is computed for every reconstructed point in all views. Outlier points are removed by comparing this error with an upper threshold. The reconstructed points deliver the vertex coordinates for the garment's triangle mesh.

4.4. Hole interpolation

The resulting surface contains holes in areas of missing data (Figure 8). One reason for missing data is self-shadowing; in these areas, feature detection must fail. Deep cloth folds may be seen by only one camera, in which case a reconstruction is also not possible. As a post-processing step we employ mesh interpolation with thin-plate splines to fill the hole areas. Thin-plate interpolation yields smooth surfaces, which makes this approach suitable for cloth surfaces. The thin-plate energy

    E(f) = ∫ f_uu² + 2 f_uv² + f_vv² du dv        (2)

for a function f : R² → R punishes strong bending, and the corresponding minimum-energy surface is given by ∆²f = 0. For triangle meshes, the uniform Laplacian for a mesh vertex p with vertex neighbors p_i is discretized as [KCVS98]:

    L(p) = (1/n) ∑_{i=0}^{n−1} (p_i − p)        (3)

The bi-Laplacian operator ∆² is then discretized as

    L²(p) = (1/n) ∑_{i=0}^{n−1} (L(p_i) − L(p))        (4)

For hole filling, we solve a linearly constrained minimization problem. We require two nested rings of boundary vertices which are held fixed. For each of these vertices p_i we add a row p_i = c_i to the linear system, where the c_i are the original vertex positions. This imposes a C¹ boundary condition on the problem. The hole vertices can move freely; a row L²(p_i) = 0 is added to the linear system for every free vertex. The corresponding matrix is sparse, and the linear system can be solved efficiently with iterative methods like GMRES [GL96].

The mesh holes are determined by a region-growing algorithm and filled sequentially. The uniform bi-Laplacian operator in Equation (4) leads to uniform edge lengths. To preserve the quadratic mesh structure, the vertex neighborhood is restricted to horizontal and vertical neighbors for interpolation. In order to remove noise artifacts, the mesh is slightly smoothed by Laplacian smoothing using the same neighborhood structure in the spatial and temporal domain. Outlier vertices are removed and interpolated from their neighbors by analyzing the spatial and temporal neighborhood.

Figure 9: Camera setup using eight cameras and two HMI lamps with softboxes.
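The homogeneous system of Equation (1) in Section 4.3 is solved by taking the right singular vector belonging to the smallest singular value. A minimal NumPy sketch, with function names of our own choosing:

```python
import numpy as np

def triangulate(points_2d, cams):
    """Linear triangulation (Equation 1): for every view i stack the rows
    x_i * p_i^3T - p_i^1T and y_i * p_i^3T - p_i^2T, then take the right
    singular vector for the smallest singular value of A."""
    A = []
    for (x, y), P in zip(points_2d, cams):
        A.append(x * P[2] - P[0])
        A.append(y * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(A))
    X = vt[-1]                      # null-space direction of A
    return X[:3] / X[3]             # dehomogenize

def max_reprojection_error(X, points_2d, cams):
    """Largest image-space error over all views, used for outlier rejection."""
    Xh = np.append(X, 1.0)
    errs = []
    for (x, y), P in zip(points_2d, cams):
        p = P @ Xh
        errs.append(float(np.hypot(p[0] / p[2] - x, p[1] / p[2] - y)))
    return max(errs)
```

With two or more views the stacked matrix has a one-dimensional null space for noise-free data, and for noisy data the SVD solution minimizes the algebraic error in a least-squares sense, matching the description above.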

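For intuition, the thin-plate hole filling of Section 4.4 can be demonstrated on a height field over a regular quad grid, using the discrete bi-Laplacian of Equation (4) restricted to horizontal and vertical neighbors as in the paper. This is a sketch under our own naming: a dense solver stands in for the sparse GMRES solver the paper uses, and fixed vertices around the hole play the role of the constrained boundary rings.

```python
import numpy as np

def fill_hole(z, free):
    """Thin-plate hole filling on a quad grid: every free vertex gets the
    equation L^2(z) = 0 (Equation 4); fixed vertices around the hole act as
    the constrained boundary and move to the right-hand side.
    `z` is a 2D array of heights, `free` a boolean mask of hole vertices."""
    rows, cols = z.shape
    order = list(zip(*np.nonzero(free)))
    idx = {p: k for k, p in enumerate(order)}

    def neighbors(r, c):
        nb = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
        return [(i, j) for i, j in nb if 0 <= i < rows and 0 <= j < cols]

    def lap(r, c):
        """Stencil of the uniform Laplacian L(z) = mean(neighbors) - z."""
        nb = neighbors(r, c)
        co = {(r, c): -1.0}
        for p in nb:
            co[p] = co.get(p, 0.0) + 1.0 / len(nb)
        return co

    A = np.zeros((len(order), len(order)))
    b = np.zeros(len(order))
    for (r, c), k in idx.items():
        # bi-Laplacian stencil: mean of L over the neighbors minus L here
        stencil = {}
        nb = neighbors(r, c)
        for p in nb:
            for q, w in lap(*p).items():
                stencil[q] = stencil.get(q, 0.0) + w / len(nb)
        for q, w in lap(r, c).items():
            stencil[q] = stencil.get(q, 0.0) - w
        for q, w in stencil.items():
            if q in idx:
                A[k, idx[q]] += w
            else:
                b[k] -= w * z[q]        # fixed vertex: known value to the rhs
    out = z.copy()
    out[free] = np.linalg.solve(A, b)   # paper: sparse iterative solve (GMRES)
    return out
```

Filling a hole punched into a planar height field reproduces the plane exactly, since a plane has zero thin-plate energy; on real cloth the same system smoothly bridges shadowed regions.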
5. Rendering

The visualization of the dynamic surfaces is implemented in hardware-accelerated OpenGL, which results in interactive frame rates (> 30 fps) on an Nvidia GeForce4 Ti graphics card. The reconstructed meshes can be rendered as wireframe, shaded, or retextured with various textures. The textures are created from real cloth pictures, which improves the visual appeal of our renderings. Furthermore, the viewpoint can be interactively chosen from one of the camera perspectives or a freely adjustable user-defined view. In the case of a camera perspective, the recorded images are used as background images; they are preprocessed in order to remove lens distortion effects. Since the reconstructed garments are stored in a standard 3D mesh file format, it is easy to incorporate the textured dynamic garments into new or existing virtual environments using 3D animation software.

6. Results

The scenes were recorded with eight synchronized Imperx 1004C video cameras arranged in a circle around the scene (Figure 9). We recorded with a frame rate of 25 frames per second and an image resolution of 1004 × 1004 pixels. A controlled lighting environment improves the reconstruction results significantly by reducing the effects of self-shadowing. The scene is illuminated by two HMI lamps with softboxes. We use HMI lamps because color recognition works best with a daylight spectral distribution. Standard tungsten halogen lamps have a higher output at the red end of the spectrum and reduce the distance of the used colors in color space.

For camera calibration we use Zhang's method [Zha99]. Our complete reconstruction method, from feature recognition to mesh post-processing, requires approximately 30 seconds per video frame on a Pentium IV 3.2 GHz. The garment triangle meshes have 3000-3500 vertices.

It is difficult to evaluate the reconstruction results when ground truth is not available. We render the obtained surface into the original video images to estimate the accuracy of the acquired shape (Figure 10). The reconstructions are quantitatively accurate. The largest discrepancies occur in dark cloth areas and near the image borders, due to decreasing camera calibration accuracy. Furthermore, the visible folds and the overall fall of the garment can be reproduced (Figures 11, 13 and 14). Results are best assessed in the accompanying video, which shows examples of slow and fast motion.

7. Conclusions

We have presented a method for robustly acquiring complex cloth motion. By using color-coded textures, we can establish reliable point correspondences between different camera views. A prior triangle mesh model enables us to plausibly fill in missing data. Our method provides a surface parameterization that allows retexturing and rendering the dynamic surfaces realistically from arbitrary viewpoints. A limitation of our system is that reconstruction is based on single frames. Color-coded patterns also require a considerable amount of work before recording can begin. In the future, we plan to exploit temporal coherence and to use more general cloth textures.

8. Acknowledgments


Figure 11: Reconstructed surface for different time instances.

Figure 10: Overlayed wireframe renderings of the surface in two camera perspectives.

References

[AED03] A. Ebert J. S., Disch A.: Innovative retexturing using cooperative patterns. In IASTED International Conference on Visualization, Imaging and Image Processing (VIIP 2003) (2003).

[BTH*03] Bhat K. S., Twigg C. D., Hodgins J. K., Khosla P. K., Popović Z., Seitz S. M.: Estimating cloth simulation parameters from video. In SCA '03: Proceedings of the 2003 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (2003), Eurographics Association, pp. 37–51.

[BW98] Baraff D., Witkin A. P.: Large Steps in Cloth Simulation. In Proc. SIGGRAPH '98 (1998), pp. 43–54.

[Can86] Canny J.: A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 8, 6 (1986), 679–698.

[CK01] Carceroni R. L., Kutulakos K. N.: Multi-view scene capture by surfel sampling: From video streams to non-rigid 3D motion, shape & reflectance. In ICCV (2001), pp. 60–67.

[CK02] Choi K.-J., Ko H.-S.: Stable but Responsive Cloth. In Proc. SIGGRAPH '02 (2002), pp. 604–611.

[EKS03] Etzmuss O., Keckeisen M., Strasser W.: A Fast Finite Element Solution for Cloth Modelling. Proc. Pacific Graphics (2003), 244–251.

[GGW*98] Guenter B. K., Grimm C., Wood D., Malvar H. S., Pighin F. H.: Making faces. In SIGGRAPH (1998), pp. 55–66.

[GKB03] Guskov I., Klibanov S., Bryant B.: Trackable surfaces. In SCA '03: Proceedings of the 2003 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (2003), Eurographics Association, pp. 251–257.

[GL96] Golub G. H., Van Loan C. F.: Matrix Computations. The Johns Hopkins University Press, 1996.

[GMP*04] Gruber M., Michel C., Pabst S., Wacker M., Keckeisen M., Kimmerle S.: tcCloth - An interactive cloth modeling and animation system. Proc. Graphiktag (2004), 361–372.

[HZ00] Hartley R., Zisserman A.: Multiple View Geometry in Computer Vision. Cambridge University Press, 2000.

[KCVS98] Kobbelt L., Campagna S., Vorsatz J., Seidel H.-P.: Interactive multi-resolution modeling on arbitrary meshes. In SIGGRAPH '98: Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques (1998), ACM Press, pp. 105–114.

[KFW04] Keckeisen M., Feurer M., Wacker M.: Tailor Tools for Interactive Design of Clothing in Virtual Environments. In Proceedings of ACM VRST (2004), pp. 182–185.

[LF04] Lobay A., Forsyth D. A.: Recovering shape and irradiance maps from rich dense texton fields. In CVPR (1) (2004), pp. 400–406.

[MMS*04] Müller G., Meseth J., Sattler M., Sarlette R., Klein R.: Acquisition, Synthesis and Rendering of Bidirectional Texture Functions. In Proceedings of Eurographics (2004), pp. 69–94. State of the Art Reports.

[MOC*98] Morano R. A., Ozturk C., Conn R., Dubin S., Zietz S., Nissanov J.: Structured light using pseudorandom codes. IEEE Trans. Pattern Anal. Mach. Intell. 20, 3 (1998), 322–327.

[MTCK*04] Magnenat-Thalmann N., Cordier F., Keckeisen M., Kimmerle S., Klein R., Meseth J.: Simulation of Clothes for Real-time Applications. In Proc. of Eurographics, Tutorial 1 (2004).

[PH03] Pritchard D., Heidrich W.: Cloth motion capture. Comput. Graph. Forum 22, 3 (2003), 263–272. (Proc. Eurographics 2003).

[PSGM03] Pagès J., Salvi J., García R., Matabosch C.: Overview of coded light projection techniques for automatic 3D profiling. In ICRA (2003), pp. 133–138.

[RBF03] Bridson R., Marino S., Fedkiw R.: Simulation of clothing with folds and wrinkles. In SCA '03: Proceedings of the 2003 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (2003), Eurographics Association, pp. 28–36.

[SA85] Suzuki S., Abe K.: Topological structural analysis of digital images by border following. CVGIP 30, 1 (1985), 32–46.

[SSK03] Sattler M., Sarlette R., Klein R.: Efficient and Realistic Visualization of Cloth. In Eurographics Symposium on Rendering 2003 (June 2003), pp. 167–177.

[TYAB01] Torresani L., Yang D. B., Alexander E. J., Bregler C.: Tracking and modeling non-rigid objects with rank constraints. In CVPR (1) (2001), pp. 493–500.

[VMT01] Volino P., Magnenat-Thalmann N.: Comparing Efficiency of Integration Methods for Cloth Animation. In Computer Graphics International Proceedings (2001), pp. 265–274.

[VV03] Vezhnevets V., Sazonov V., Andreeva A.: A survey on pixel-based skin color detection techniques. In Proc. Graphicon-2003 (2003), pp. 85–92.

[ZCS02] Zhang L., Curless B., Seitz S. M.: Rapid shape acquisition using color structured light and multi-pass dynamic programming. In The 1st IEEE International Symposium on 3D Data Processing, Visualization, and Transmission (June 2002), pp. 24–36.

[Zha99] Zhang Z.: Flexible camera calibration by viewing a plane from unknown orientations. In ICCV (1999), pp. 666–673.


Figure 12: Eight input camera views for the same moment in time.

Figure 13: The reconstructed surface faithfully represents the cloth folds visible in the input frames.

Figure 14: Reconstruction results for the T-shirt.

Figure 15: Arbitrary texture can be applied to the reconstructed dynamic surface.

submitted to EUROGRAPHICS 2005.