
Laser Scanning in Heritage Documentation: The Scanning Pipeline and its Challenges

Heinz Rüther, Roshan Bhurtha, Christoph Held, Ralph Schröder, and Stephen Wessels

Zamani Research Group, Division of Geomatics, School of Architecture, Planning and Geomatics, University of Cape Town, Private Bag, Rondebosch 7701, South Africa ([email protected]).

Photogrammetric Engineering & Remote Sensing, Vol. 78, No. 4, April 2012, pp. 309–316.
0099-1112/12/7804–309/$3.00/0 © 2012 American Society for Photogrammetry and Remote Sensing

Abstract
In an attempt to show the complexity of terrestrial laser scanning of heritage sites and dispel the not infrequently found perception of a largely automated tool with close to real-time processing capabilities, the individual steps of the data acquisition and processing pipeline, including registration, point cloud cleaning, surface reconstruction, hole filling, texturing, and final output, are discussed. Emphasized are areas in which further development and research are desirable. A brief discussion of the concepts and implementation of the African Cultural Heritage Sites and Landscapes Project, as executed by the Zamani Research Group at the University of Cape Town, is incorporated into the paper. This database integrates spatial and non-spatial data and focuses on architectural heritage sites and cultural landscapes. The spatial data comprise 3D laser scans, GISs for each site and its environment, stereo images, panoramas, ground plans, elevations and sections derived from laser scan models, contextual photography and videos, as well as 3D landscape models. The database is primarily designed as a resource for research and higher education. However, the spatial data acquired for the project are presently used in a number of restoration and conservation projects. An important further objective is the creation of a permanent digital record. The project, funded by the Andrew W. Mellon Foundation, is a joint initiative of the Zamani Research Group at the University of Cape Town and JStor, New York.

Introduction
Laser scanning has found its way into heritage documentation rather rapidly, and it has partly replaced some of the conventional methods for the spatial documentation of heritage sites (Fisher et al., 2011; Ruether et al., 2009). While the technique was exceptionally well received by technical and non-technical users, it is often misunderstood and its capabilities are over- or underrated. Laser-scan-generated 3D models, when displayed and manipulated on the computer screen, are generally well received and praised by archaeologists, conservation experts, and architects. However, when it comes to practical applications of such models, an element of bewilderment often replaces the original enthusiasm, and when asked how such a model can be utilized in practical terms, besides the obvious use as a means of familiarization with a site, the potential user often requests the generation of 2D and 3D line drawings derived from the model. In this process, large amounts of information remain unused, and the effort and cost to collect the data are wasted. It is therefore important to convey the capabilities and limitations of the technology and relate these to the needs of the 3D data user community. What is often overlooked is the complexity of the multi-stage data acquisition and processing pipeline associated with this technology, as well as the special requirements of heritage documentation in comparison to other applications in industry and modern architecture. The paper will attempt to address some of the issues associated with the application of laser scanning and suggest approaches based on extensive experience with data acquisition and processing with different scanners and in widely varying environments.

Laser Scanning Pipeline
While the acquisition of laser scan data is relatively simple, provided some basic principles of scan position geometry and choice of resolution are adhered to, the processing of the data is complex, only partly automated and time consuming, and the quality of the end product depends to a large extent on the experience of the data processor and the choice of appropriate algorithms. The huge data volumes prevalent in laser scanning complicate every step of the processing pipeline, especially so when applied to heritage documentation. In industrial and other applications it is often acceptable to replace detailed point clouds by geometric primitives, such as planes for walls and cylinders for pipes, thus dramatically reducing the data volume and the complexity of the modeling process. In heritage documentation every single point can be relevant, and any decimation, although sometimes unavoidable, is in principle unacceptable, as every surface detail may be essential for restoration, conservation, monitoring, and analysis. On the contrary, having scanned a heritage site at high resolution and in great detail, one often cannot help feeling that even denser point clouds would have been desirable. In the following, the steps of the laser scanning pipeline (Figure 1) are briefly discussed and some of the typical challenges associated with each step are highlighted.

Figure 1. The 3D modeling pipeline. Dotted lines mark optional steps.

Data Acquisition
Field procedures for laser scanning have changed significantly since the first instruments were employed for heritage documentation. In its earliest missions, some five years ago, the Zamani team completed between five and ten full-dome medium-resolution scans (1 to 2 cm point spacing at 20 m distance) per field day, and a typical scanning mission returned with 40 to 50 scans per site. In today's field campaigns with phase-based scanners, between 60 and 120 scans are acquired daily, and more than 1,000 individual scans can accumulate for a single site (Figure 2). Accordingly, point clouds have increased from between 20 to 50 million points per site to 7 billion points and more. The dramatic increase in the number of set-ups over time was a consequence of the development from early time-of-flight scanners, with scan times of two to three hours for a full-dome scan, to phase-based scanners, with scan times of three to six minutes for the same resolution. The highest number of scans with this resolution acquired in one day, while documenting a fortress on Mozambique Island with a Leica HDS 6100 scanner, was 160. This was possible not only because of the high scan rate of the phase-based scanner, but also because of the one-button operation and the built-in data storage and batteries, which did away with the time-consuming transport of laptop, batteries, and cables from station to station.

Having stated that the data acquisition process is in principle not difficult, it must be noted that only a full understanding of the data pipeline and the complexities of each step will guarantee that the principal criteria for a successful field campaign are met. These include:
• complete coverage of the structure/site, or at least as complete as physically possible. One has to accept that complete coverage of a monument or site is in most cases unachievable, but experience also shows that minimizing scan set-ups in the field can lead to difficulties when processing the data.
• appropriate choice of resolution, depending on the detail of the surface structure and on the variation in distance to the scanned surfaces, bearing in mind that resolution changes with distance from the scanner.
• sufficient overlap for ICP-based registration, where overlap areas must be chosen to contain sufficient surface detail to allow registration algorithms to find a unique solution. Overlaps should also be chosen to allow for similar angles of incidence from the scan stations.
• choice of scanner positions which avoid flat angles of incidence for all surfaces, but especially for overlap areas.
• if targets are employed, the choice of an economic target distribution where targets are visible from a maximum number of instrument setups, without, however, sacrificing geometric requirements.

Buildings are typically scanned with 1 to 2 cm point spacing, while terrain is captured with point intervals varying from 10 to 50 cm. Experience showed that preplanning scanner setups is impractical, and in most cases of heritage documentation, impossible. This is because one cannot assess, without being on site, optimal fields-of-view, overlap areas, resolution, and necessary additional scans to cover vital occluded detail. Therefore, the decision on suitable scanner positions is always made in situ.
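The interplay between the resolution criteria above and the change of point spacing with range is simple to quantify: for small angular increments the spacing is approximately the range multiplied by the angular step. The following Python sketch is purely illustrative (the function names are not from any scanner vendor's software) and reproduces the figures quoted above.

```python
import numpy as np

def point_spacing(range_m, angular_step_deg):
    """Approximate spacing between neighbouring scan points at a given range."""
    return range_m * np.radians(angular_step_deg)

def required_step_deg(range_m, spacing_m):
    """Angular increment needed to reach a desired point spacing at a given range."""
    return np.degrees(spacing_m / range_m)

# An increment chosen for 1 cm spacing at 20 m (about 0.029 degrees)
step = required_step_deg(20.0, 0.01)
# spreads to roughly 2.5 cm at 50 m, and shrinks to 1 mm on a surface only 2 m away
print(step, point_spacing(50.0, step), point_spacing(2.0, step))
```

This is why the choice of resolution has to be made with the most distant relevant surface in mind, and why nearby surfaces are often sampled far more densely than strictly necessary.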

Figure 2. Laser scan positions of the Saint Sebastian Fortress, Mozambique Island.


Scan Registration
The first step in producing a 3D model is the registration of the individual scans acquired in the field. This involves the transformation of all scans into a single uniform coordinate system. Registration is a vital element in the data processing pipeline, as the accuracy achieved in this stage influences all subsequent processing steps. There are two approaches to combine the individual scans into a single point cloud. One option relies on targets common to some, or all, of the scans, while the other approach aligns scans based on overlapping surfaces, using well-established variants of the Iterative Closest Point (ICP) algorithm (Besl and McKay, 1992; Chen and Medioni, 1992).

Targets are generally easy to identify and locate in point clouds during the registration process, thus reducing processing time, especially with software allowing for automatic target detection. Targets also provide high registration accuracies and reduce the danger of mis-registration. The disadvantage of the use of targets is the requirement of high-resolution sub-scans of the target area, which in turn increases scan times in the field, unless the entire scan is executed at high resolution. It is possible to locate targets outside the scanned object, provided that they are visible from more than one scan position. In heritage documentation, however, one encounters a number of practical problems when using targets. It is often difficult, if not impossible, to physically place targets on fragile or high walls and in difficult-to-access areas. The option of placing targets outside the object is obviously not possible when scanning the inside of buildings. A further argument against targets is the complexity of heritage buildings. For example, the so-called "palace" at the ruined Swahili town of Songo Mnara in Tanzania comprises more than 75 rooms. In this case it was not only the large number of targets required which made this approach impractical, but also the design of the building, with narrow passages and small connecting doors. This scenario can result in poor geometry for target-based transformations and in small angles of incidence for the signal, which leads to poor reflection and reduced resolution on targets.

For the Zamani project, ICP-type algorithms are therefore preferred. The original ICP approaches by Besl and McKay (1992) and Chen and Medioni (1992) minimize, in the least-squares sense, the distance between closest points/vertices on corresponding surfaces and the distance between vertex and destination surface, respectively. Numerous modifications of the ICP algorithm have been developed, including additional criteria such as correspondence of surface color or intensity of reflection. A useful comparison of some of these algorithms is provided by Rusinkiewicz and Levoy (2001).
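To make the surface-based option concrete, the core of a point-to-point ICP iteration in the spirit of Besl and McKay (1992) can be written in a few lines of Python. This is a minimal sketch using NumPy and SciPy, not the registration software used by the project, and it omits the sampling, weighting, and outlier-handling refinements compared by Rusinkiewicz and Levoy (2001).

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (SVD method)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def icp(source, target, iterations=30, max_pair_dist=0.05):
    """Point-to-point ICP: align the source scan to the target scan."""
    tree = cKDTree(target)
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        dist, idx = tree.query(src)               # closest-point correspondences
        keep = dist < max_pair_dist               # reject pairs that are too far apart
        if keep.sum() < 3:
            break
        R, t = best_rigid_transform(src[keep], target[idx[keep]])
        src = src @ R.T + t                       # apply the incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, src
```

Each iteration pairs every source point with its nearest neighbour in the target, discards distant pairs, and solves for the rigid transformation in closed form. Convergence to the correct alignment still presupposes a reasonable initial pose and sufficient, well-distributed overlap, which is what the field procedures described below are designed to provide.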
The policy adopted for the scanning of sites for the African heritage documentation project was therefore to rely on surface-based registration and to use only limited numbers of full-sphere targets for very large sites; these are placed in exposed positions where they can be viewed from a maximum number of scans. The coordinates of these targets are determined by post-processed GPS surveys wherever possible, and they serve to check and position the final model. For some sites, point clouds were registered with both target- and surface-based methods, but no significant difference between the accuracies of the resulting registered point clouds could be detected.

The project team adopted a field technique which significantly reduces the time required for registration, and especially surface-based registration. In this approach the scanner is leveled and aligned in the same direction, either visually or with the help of a compass, and scanner positions are surveyed by means of a total station or RTK-GPS (with appropriate coordinate transformation, as required) or estimated in a GIS (Figure 2). This way, scans introduced into the registration software are approximately pre-oriented and positioned, thus avoiding the need for a time-consuming manual pre-alignment, the prerequisite for the more precise alignment using the ICP algorithm.

Software and hardware limitations become obvious when registering large numbers of scans, which require long processing times and can cause system crashes. The Zamani team has adopted an approach in which a skeleton model of the entire site or structure is created by first registering a minimum number of individual, usually longer-distance, scans. This is followed by a global registration of the skeleton model, provided the data volume allows this. The skeleton model is then filled in with the bulk of short-range scans.
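The pre-orientation described above amounts to building a coarse rigid transformation for each scan from its surveyed position and its approximate heading. A hypothetical sketch follows; the rotation is only about the vertical axis because the scanner is levelled, and the sign and axis conventions would have to match the site coordinate system.

```python
import numpy as np

def initial_pose(easting, northing, height, heading_deg):
    """Coarse 4x4 transform taking a levelled scan into site coordinates."""
    h = np.radians(heading_deg)
    c, s = np.cos(h), np.sin(h)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0],
                 [s,  c, 0.0],
                 [0.0, 0.0, 1.0]]    # rotation about the vertical axis only
    T[:3, 3] = [easting, northing, height]
    return T

# Hypothetical station: surveyed coordinates plus a compass heading read in the field
T0 = initial_pose(261843.2, 8114077.5, 12.4, 135.0)
points_scanner = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.5]])   # points in the scanner frame
points_site = points_scanner @ T0[:3, :3].T + T0[:3, 3]         # rough alignment before ICP
```

Such a transform replaces the manual pre-alignment; the residual error is only what remains in the surveyed position and the compass reading, which is generally small enough for the subsequent ICP alignment to converge.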
Cleaning of Point Clouds
Cleaning of the point cloud, i.e., the removal of unwanted objects such as vegetation, people, animals, equipment, electrical cables, doors, and random objects, can be done either before, in parallel with, or after the registration.

A choice arises in the cleaning process: moving objects in the overlap areas should be removed before registration, as their changed position in different scans can lead to registration errors, which suggests pre-registration cleaning. A further argument for pre-registration cleaning is the often significantly reduced data volume for the registration process. This is especially relevant for densely vegetated sites (Figure 3). The argument against pre-registration cleaning is that a large variety of objects in a scan stabilizes the registration, which would suggest that as many unwanted objects as possible should be left in the point cloud and used for registration; this supports post-registration cleaning. Post-registration cleaning can be done on individual scans or on the combined point cloud. Operating on the combined point cloud has the obvious advantage that objects appearing in multiple scans can be removed in a single operation, thus reducing cleaning time. However, the huge data volume of a full point cloud can, and often does, cause system crashes during cleaning. For post-registration cleaning, one also has the choice of cleaning scans while still in point form or after meshing. Cleaning of scans is still a tedious and time-consuming manual task. A typical scanning project in the Zamani project comprises 500 to 1,000 scans, each of which typically requires 0.5 to 2 hours of cleaning by an experienced operator, while complex scenes can take up to a full day of cleaning. Assuming an average of one hour each for 750 scans leads to a requirement of nearly 100 person-days of cleaning.

Surface Reconstruction
In the next step of the pipeline, the single point cloud created in the registration process is converted into a continuous surface, represented by triangles, in a process referred to as surface reconstruction or, somewhat loosely, as meshing or modeling. Numerous algorithms have been developed for surface reconstruction, and the Zamani team has explored a number of these with software of varying transparency, but no single algorithm emerged as the perfect solution suitable for any type of surface. Best results were achieved with combinations of methods, where individual scans and the complete registered point cloud are meshed with different algorithms and then combined by weighted averaging. Each of the explored methods responds differently to different surface texture, complexity of detail, scan resolution, and presence of noise. Variations occur even within a single scan, and different algorithms may have to be used for the same scanner set-up.


Figure 3. (a) Uncleaned point cloud with 6.7 million points, and (b) cleaned point cloud with 4 million points.

Surfaces can be formed either by passing directly through the scanned points (Amenta et al., 2001; Dey et al., 2001; Bernardini et al., 1999) or as an approximation based on established best-fit algorithms (Alexa et al., 2001; Curless and Levoy, 1996; Kazhdan et al., 2006). The former approach usually employs the Voronoi diagram and the corresponding Delaunay triangulation to find point neighbors and create connections. The Poisson algorithm (Kazhdan et al., 2006) and the volumetric integration approach (Curless and Levoy, 1996; Callieri et al., 2003) are examples of the latter. Experience with both algorithms showed that the Delaunay algorithm creates surfaces with fewer or smaller holes in areas with sparse data, while the volumetric integration approach performs better when modeling detail. The Poisson method creates watertight models and is very effective in reducing noise, but loses detail and makes corners and edges appear less well defined. The smoothing effect of the Poisson method is demonstrated in the comparison of Figure 4a and Figure 4b. Figure 4a shows an example of wall ruins with surrounding ground in Songo Mnara, Tanzania, meshed entirely with the volumetric integration method. In Figure 4b the walls are meshed as in Figure 4a, but the ground and grass-covered areas are created with a Poisson algorithm. The hole-filling effect of the Poisson algorithm is obvious.

A problem area for all surface reconstruction is the choice of a suitable threshold which decides whether a point's neighbor is close enough to be considered part of the same surface. Setting this value too large will create artifacts and connect objects which should not be connected, or, if set too small, will create holes in areas even though the acquired data would suffice to describe the surface. Some surface-generating algorithms rely on the grid nature of scan data, which significantly simplifies the search for the nearest neighbor and the determination of vertex normals.

The surface reconstruction process is further complicated by noise and inaccuracies in the registration. A section through a surface defined by a point cloud should in theory be two-dimensional and infinitely thin. The unavoidable presence of noise will turn such a section into a three-dimensional slice of points. Connecting the closest neighbors in this slice without filtering can cause intersecting triangles. Thus, noise reduction is necessary before the triangulation process can be initiated, which in turn has the disadvantage that noise reduction eliminates points indiscriminately and can therefore result in the loss of important detail.
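The role of the grid structure and of the distance threshold can be illustrated with a minimal sketch that triangulates one scan stored as a range-image grid. The function and its parameters are hypothetical and stand in for the far more elaborate reconstruction tools discussed above.

```python
import numpy as np

def mesh_range_grid(points, max_edge=0.05):
    """Triangulate a single scan stored as a (rows, cols, 3) grid of 3D points.

    Grid cells containing NaN (no return) are skipped. Two neighbouring samples
    are only connected when every edge of their 2x2 cell is shorter than
    max_edge (metres); this threshold trades holes against false bridges.
    """
    rows, cols, _ = points.shape
    idx = np.arange(rows * cols).reshape(rows, cols)
    faces = []
    for i in range(rows - 1):
        for j in range(cols - 1):
            quad = points[i:i + 2, j:j + 2, :].reshape(4, 3)
            if np.isnan(quad).any():
                continue
            d = np.linalg.norm(quad[:, None, :] - quad[None, :, :], axis=-1)
            if d.max() > max_edge:     # cell spans a jump: probably two different surfaces
                continue
            a, b = idx[i, j], idx[i, j + 1]
            c, e = idx[i + 1, j], idx[i + 1, j + 1]
            faces.append((a, b, c))
            faces.append((b, e, c))
    return points.reshape(-1, 3), np.asarray(faces)
```

Set max_edge generously and grass, cables, and unrelated objects are bridged into one surface; set it too tightly and sparsely sampled ground falls apart into holes, which is exactly the trade-off described above.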


Figure 4. Wall ruins with surrounding ground in Songo Mnara, Tanzania, comparing the (a) volumetric integration algorithm, and (b) a combination with the Poisson method.


In overlapping areas noise can be enhanced due to registration errors. A noise-like phenomenon occurs when scanning grass or leaves, where each leaf or blade is represented by only a few points, on one surface side only, making it impossible to create realistic surfaces. Vertex-normal-based algorithms have an acceptable smoothing effect on grass, but unfortunately current implementations (Fiorin et al., 2007; Bolitho et al., 2007) tested by the project are not very robust when applied to such irregular surfaces and often crash.

Meshing the large models encountered in the Zamani project cannot be done with standard in-core software; 64-bit tools, out-of-core, and/or streaming techniques are essential. To reduce this problem one can split large datasets into small, manageable pieces, which also allows the processing of several sets in parallel. Reassembling these subsets after processing again requires specially optimized software and additional processing time. The Zamani project approach attempts to resolve the data volume problem by splitting the data into subsets for parallel processing with out-of-core tools. To avoid discontinuities and generate smooth connections between these subsets, the boundaries of subsets must be defined to overlap with their neighbors. The final model is usually extracted at a resolution of 2 cm.

Hole Filling or Surface Augmentation
Only in very exceptional cases is a laser scan model free of holes. Holes occur especially when scanning very detailed objects, or wherever else a surface is invisible to the scanner. Typical examples are ornamental building facades or upward-facing surfaces where no scan positions can be found above the surface, such as window ledges and roofs. The same problem arises when acquiring photography for texturing the model.

If no scan data is available, current software fills in small holes automatically and offers conventional modeling or cloning options for larger patches (Figure 5). This semi-automatic approach is time intensive, since it is heavily user-based.

Methods for automated hole-filling or surface augmentation have been developed (Sharf et al., 2004), but their use in heritage documentation is questionable. Surface augmentation algorithms can fill holes plausibly, which makes it difficult and even impossible to distinguish between real surface data and artificially introduced patches. This process is not acceptable for the scientific documentation of cultural heritage sites, where data might be used for research or restoration. On the other hand, hole-free surfaces are aesthetically more appealing and necessary to produce interactive 3D walkthroughs. Watertight models are also required when producing physical to-scale models with a 3D printer. It would seem desirable for the heritage scanning community if software performing hole-filling and model-viewing could make use of a standard display format clearly indicating augmented surface portions on request.
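One step that can be automated without introducing fabricated geometry is locating the holes themselves: in a triangle mesh, an edge shared by two triangles is interior, while an edge used by exactly one triangle lies on a boundary, and closed loops of boundary edges outline holes. A short illustrative sketch (not a tool used by the project):

```python
from collections import Counter

def boundary_edges(faces):
    """Return mesh edges that belong to exactly one triangle (hole or border edges)."""
    counts = Counter()
    for a, b, c in faces:
        for edge in ((a, b), (b, c), (c, a)):
            counts[tuple(sorted(edge))] += 1
    return [edge for edge, n in counts.items() if n == 1]

# Two triangles forming an open quad: the shared diagonal is interior,
# the remaining four edges outline the quad's boundary
print(boundary_edges([(0, 1, 2), (1, 3, 2)]))
```

Reporting such loops to the operator, or rendering them in a distinct color, would support exactly the kind of explicit marking of augmented surface portions suggested above.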
Texturing
Color can enhance a 3D model not only visually, but can also assist with interpretation and diagnosis. Photorealistic color of a surface can, for example, assist with the detection and monitoring of eroded, chemically changed, or restored surface areas. While textured models might be desirable but not essential for many applications, they are critical when modeling rock art shelters (Figure 6).

Laser-scanner manufacturers increasingly equip their instruments with photo or video cameras to achieve the colorization of the scanned surface. These built-in systems are convenient to use, but so far they do not seem to reach the quality of independent, external cameras.

Figure 5. 3D model of the Peace Memorial Museum in Zanzibar, Tanzania, showing hole-filling techniques.


Figure 6. 3D model screenshot of a textured rock art panel, Game Pass, South Africa.

Images tend to have a blurry or milky appearance. Some manufacturers thus offer adapters for external cameras which take a panoramic image from the position of the scanner. The biggest disadvantage of these approaches is the inconsistent lighting between scanning positions. A scanning campaign of a heritage site usually spans several days, with the scanner operating throughout the day and in some cases at night. This necessarily leads to scanning under very different light conditions, and if the images are taken concurrently with the scans, they can differ significantly throughout the model. This suggests that the optimal way to acquire texturing photography for a model is independent photography, taken over a minimal time period, or on different days with approximately the same lighting conditions.

To place an image onto a 3D model, the camera parameters, internal and external, have to be known. An accurate projection can only be accomplished with the correct position and orientation of the camera in space, the correct focal length, and distortion-free images. The best approach is to use images acquired with a calibrated camera and lens. But this method also has its disadvantage, as it requires a preset fixed focus, which might result in some of the captured images being blurred. Theoretically, the camera can be calibrated individually for each photo, but this is extremely time consuming, whatever technique one uses for the calibration. The internal as well as the external parameters can also be estimated or post-calibrated. If the parameters are not known at all, current software asks the user to find corresponding points on the model and on the image. But even if the number of corresponding points is reduced to a bare minimum, these points need to be chosen accurately, and thus covering a large structure completely in detail with hundreds or even thousands of images will take a significant amount of time. Full-dome panoramic images, converted to cube maps, can be used to reduce the number of images significantly. Per panorama, only one sub-image needs to be registered, while the remaining images differ only in their orientation. Panoramic images are nearly free of lens distortions, since the camera parameters are estimated when stitching the panorama image. Computational tools to refine initial camera parameter estimations and thus increase the accuracy of the image-model fit have been successfully demonstrated (Corsini et al., 2009).

Current texturing techniques encourage the user to minimize the number of photos to reduce processing time. However, advances in structure-from-motion algorithms (Snavely et al., 2006) suggest the acquisition and use of large numbers of images. These algorithms require large overlaps and small separations between camera positions and can almost automatically estimate all camera parameters required for texture mapping. However, software using these methods to color the model, whether commercial, freeware, or academic, is still rare.

If the task of finding the camera parameters is solved, it has to be decided how to project the images onto the surface. There are two ways: either projective texture mapping or assigning colors to the vertices. The first approach requires that each vertex (3D model coordinates) on the model is associated with a corresponding point (2D image coordinates) on one or more of the images. It is not trivial to handle these extremely large datasets of extensive models with hundreds of associated images.

It is much easier to assign a color directly to each vertex, and then interpolate the color on the surface between the vertices. This requires that the resolution of the model is high enough to represent the required detail. If this is not the case, one can artificially densify the mesh by subdividing the original triangles. In this case each new point needs to be located in the image set to obtain its corresponding "real" color. In practice, this requires tools which can load large models and then further increase their size. At present such tools appear elusive.
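For the projective variant, the mapping from vertex to pixel is the standard pinhole projection using the camera's interior and exterior parameters. The sketch below is a simplified illustration in Python (assumed parameter layout, nearest-pixel sampling, no occlusion handling), showing how per-vertex colors could be looked up once those parameters are known.

```python
import numpy as np

def color_vertices_from_image(vertices, K, R, t, image):
    """Project vertices into one calibrated, distortion-free image and sample colors.

    vertices: (N, 3) model coordinates; K: 3x3 interior orientation;
    R, t: world-to-camera rotation and translation; image: (H, W, 3) array.
    """
    cam = vertices @ R.T + t                        # model -> camera coordinates
    in_front = cam[:, 2] > 0                        # keep vertices in front of the camera
    uvw = cam @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]                   # perspective division -> pixel coordinates
    h, w = image.shape[:2]
    visible = (in_front & (uv[:, 0] >= 0) & (uv[:, 0] < w)
               & (uv[:, 1] >= 0) & (uv[:, 1] < h))
    colors = np.zeros((len(vertices), 3), dtype=image.dtype)
    px = uv[visible].astype(int)
    colors[visible] = image[px[:, 1], px[:, 0]]     # nearest-pixel sampling
    return colors, visible
```

A production tool additionally needs a visibility test, so that vertices hidden behind closer geometry are not colored from images that cannot see them, and a blending strategy where several images observe the same vertex, precisely the bookkeeping that makes these datasets hard to handle at full size.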
covering a large structure completely in detail with hun- The project, initiated in 2005 and still ongoing, is funded by dreds or even thousands of images will take a significant the Andrew W. Mellon Foundation and based at the Univer- amount of time. Full-dome panoramic images, converted to sity of Cape Town. The remainder of this paper briefly cube maps, can be used to reduce the amount of images describes the database and its concepts and reports on significantly. Per panorama, only one sub-image needs to be experiences gained. The African Cultural Heritage Sites and registered, while the remaining images differ only in their Landscapes Database project is dedicated to recording, and orientation. Panoramic images are nearly free of lens thus contributing to the protection of cultural heritage distortions, since the camera parameters are estimated when monuments and landscapes throughout the African conti- stitching the panorama image. Computational tools to refine nent. The project recognizes the urgent need to create initial camera parameter estimations and thus increase the metrically correct digital (3D) documentations of Heritage accuracy of the image-model-fit have been successfully Sites for future generations, provide spatial information for demonstrated (Corsini et al., 2009) conservation, restoration, education and research, promote Current texturing techniques encourage the user to awareness of architectural heritage (historical sites, build- minimize the number of photos to reduce processing time. ings, and towns) within Africa and worldwide, and provide However, advances in structure-from-motion algorithms management tools for site management at local and regional


The project uses state-of-the-art technologies, such as laser scanning, GIS, close-range and aerial photogrammetry, and satellite remote sensing, as well as virtual reality technology, to create an integrated database of important sites. The output of the project is made available by JStor, New York (http://JStor.org and http://Aluka.org). Full data sets are available for research, education, conservation, and restoration projects. A small subset of the data can be viewed on http://www.zamaniproject.org. The Zamani group has to date documented some 40 sites with close to 100 structures of standing architecture in , , Mozambique, Tanzania, , Ethiopia, , , Algeria, , , , and South Africa. The Valley of the Queens (Egypt) and the Laetoli footprints were documented for the Getty Conservation Institute, the Tassili n'Ajjer site (Algeria) for TARA, Nairobi, and Lalibela (Ethiopia) for the World Monuments Fund and UNESCO. All projects are executed by members of the unit, with the support of staff members of Antiquities or equivalent government departments. A total of approximately fifteen terabytes of data on African sites has been generated by the Zamani group over the past five years. These data are augmented by relevant contextual information, selected by JStor/Aluka.

Figure 7. 3D Model of Castle of Good Hope, Cape Town.

Figure 8. 3D Model of the Small Domed Mosque, Kilwa Kisiwani, Tanzania.


The outputs of the project are: an integrated database consisting of a Spatial/Geographic Information System (GIS) for each of the sites, 3D models of structures (Figures 7 and 8), parts of towns and landscapes, elevations, cross sections (Ruether et al., 2009), ground plans and roof plans, computer visualizations with walk-through and other inspection capabilities (where feasible), photographic panoramas, stereo photography of important features, contextual photographs, site-related documents, scientific papers, excavation reports, and similar material in digital form. During the process of creating the database, a methodology for the documentation of African heritage sites has been developed, and optimal ways are being explored in which the data can be used by African heritage authorities and museums and by conservators and researchers in Africa and worldwide.

Conclusions
Laser scanning has proved its relevance as a tool for the creation of spatial documentation for the management, monitoring, restoration, and conservation of sites as well as for research and education.

While laser scanning technology and software have significantly and rapidly improved, there is still a need for research and software development towards the effective and fast processing of huge datasets and for the improvement of existing algorithms, especially in modeling and texturing. Development is also required in the areas of registration, cleaning, hole filling, and display, with a single, easy-to-operate and largely automated scan-to-textured-model concept as the ultimate goal. Integration with well-established methods such as close-range photogrammetry and GIS needs to be further explored and developed. In the context of this paper, only heritage documentation applications of laser scanning are discussed, but similar problems are bound to exist in other applications. The African Heritage database, which relies heavily on laser scanning, has been able to acquire a significant collection of spatial data of African architectural and rock art sites. It would not have been possible to acquire such detailed information without this technology. The project is ongoing, and further documentation is planned, especially in West and Central Africa, where only a few sites could be covered to date.

Acknowledgments
This project has been made possible with the generous support of the Mellon Foundation, the assistance of Rahim Rajan from Aluka/JStor, and the invaluable advice of the team of experts on African history: Dr. George Abungu, Professor Susan Macintosh, and Professor Martin Hall. The authors thank Leica Geosystems and Optron Geomatics (Trimble), who have generously provided additional laser scanning equipment for field campaigns. Our gratitude goes to the Visual Computing Lab of the ISTI-CNR, Pisa, Italy, the Stanford Computer Graphics Laboratory, USA, and to InnovMetric Software, Inc., Canada.

References
Alexa, M., J. Behr, D. Cohen-Or, S. Fleishman, D. Levin, and C.T. Silva, 2001. Point set surfaces, Proceedings of the Conference on Visualization '01, 21-26 October, San Diego, California (IEEE Computer Society, Washington, D.C.), pp. 21–28.
Amenta, N., S. Choi, and R.K. Kolluri, 2001. The power crust, Proceedings of the 6th ACM Symposium on Solid Modeling and Applications, 04-08 June, Ann Arbor, Michigan (ACM, New York), pp. 249–260.
Bernardini, F., J. Mittleman, H. Rushmeier, C. Silva, and G. Taubin, 1999. The ball-pivoting algorithm for surface reconstruction, IEEE Transactions on Visualization and Computer Graphics, 5(4):349–359.
Besl, P.J., and M.D. McKay, 1992. A method for registration of 3D shapes, IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2):239–256.
Bolitho, M., M. Kazhdan, R. Burns, and H. Hoppe, 2007. Multilevel streaming for out-of-core surface reconstruction, SGP '07: Proceedings of the Fifth Eurographics Symposium on Geometry Processing, 04-06 July, Barcelona, Spain (Eurographics Association, Aire-la-Ville, Switzerland), pp. 69–78.
Callieri, M., P. Cignoni, F. Ganovelli, C. Montani, P. Pingi, and R. Scopigno, 2003. VCLab's tools for 3D range data processing, Proceedings of the 4th International Symposium on Virtual Reality, Archaeology and Intelligent Cultural Heritage (VAST 2003) and First EUROGRAPHICS Workshop on Graphics and Cultural Heritage, 05-07 November, Brighton, UK (Eurographics Association, Aire-la-Ville, Switzerland), pp. 13–22.
Chen, Y., and G. Medioni, 1992. Object modelling by registration of multiple range images, Image and Vision Computing, 10(3):145–155.
Cignoni, P., M. Callieri, M. Corsini, M. Dellepiane, F. Ganovelli, and G. Ranzuglia, 2008. MeshLab: An open-source mesh processing tool, Proceedings of the Sixth Eurographics Italian Chapter Conference, 02-04 July, Salerno, Italy (Eurographics Association, Aire-la-Ville, Switzerland), pp. 129–136.
Corsini, M., M. Dellepiane, F. Ponchio, and R. Scopigno, 2009. Image-to-geometry registration: A mutual information method exploiting illumination-related geometric properties, Computer Graphics Forum, 28(7):1755–1764.
Curless, B., and M. Levoy, 1996. A volumetric method for building complex models from range images, Proceedings of SIGGRAPH 96, 23rd International Conference on Computer Graphics and Interactive Techniques, 04-09 August, New Orleans, Louisiana (ACM, New York), pp. 303–312.
Dey, T.K., J. Giesen, and J. Hudson, 2001. Delaunay based shape reconstruction from large data, Proceedings of the IEEE 2001 Symposium on Parallel and Large-data Visualization and Graphics, 22-23 October, San Diego, California (IEEE Press, Piscataway, New Jersey), pp. 19–27.
Fiorin, V., P. Cignoni, and R. Scopigno, 2007. Out-of-core MLS reconstruction, Proceedings of the Ninth IASTED International Conference on Computer Graphics and Imaging - CGIM 2007, 13-15 February, Innsbruck, Austria (ACTA Press, Anaheim, California), pp. 27–34.
Fisher, C., S. Leisz, and G. Outlaw, 2011. LIDAR – A valuable tool uncovers an ancient city in Mexico, Photogrammetric Engineering & Remote Sensing, 77(10):962–967.
Kazhdan, M., M. Bolitho, and H. Hoppe, 2006. Poisson surface reconstruction, Proceedings of the 4th Eurographics Symposium on Geometry Processing, 26-28 June, Sardinia, Italy (Eurographics Association, Aire-la-Ville, Switzerland), pp. 61–70.
Ruether, H., M. Chazan, R. Schroeder, R. Neeser, C. Held, S.J. Walker, A. Matmon, and L.K. Horwitz, 2009. Laser scanning for conservation and research of African cultural heritage sites: The case study of Wonderwerk Cave, South Africa, Journal of Archaeological Science, 36(9):1847–1856.
Rusinkiewicz, S., and M. Levoy, 2001. Efficient variants of the ICP algorithm, Proceedings of the Third International Conference on 3-D Digital Imaging and Modeling, 28 May-01 June, Quebec City, Canada (IEEE Computer Society), pp. 145–152.
Sharf, A., M. Alexa, and D. Cohen-Or, 2004. Context-based surface completion, Proceedings of SIGGRAPH '04: ACM SIGGRAPH 2004, 08-12 August, Los Angeles, California (ACM, New York), pp. 878–887.
Snavely, N., S.M. Seitz, and R. Szeliski, 2006. Photo tourism: Exploring photo collections in 3D, Proceedings of SIGGRAPH '06: ACM SIGGRAPH 2006, 30 July-03 August, Boston, Massachusetts (ACM, New York), pp. 835–846.
