3D Digitization in Cultural Heritage Institutions Guidebook

Emma Cieslik

Digitization Research Intern, Summer 2020

Dr. Samuel D. Harris National Museum of Dentistry

Draft prepared on July 27, 2020


Table of Contents

Introduction

Chapter 1: Before Starting a Digitization Project

Chapter 2: Selecting the Best 3D Digitization Technology

Chapter 3: 2D+ Digitization Technologies

Chapter 4: Light Dependent Active Methods of 3D Digitization

Chapter 5: Light Dependent Passive Methods of 3D Digitization

Chapter 6: Light Independent Methods of 3D Digitization

Chapter 7: 3D Pipeline of Developing a 3D Model from Raw Data

Chapter 8: File Formats

Chapter 9: Preservation Metadata

Chapter 10: Current Evolution of Metadata Standards

Chapter 11: End Goals of Object Digitization

Conclusion

References

Appendix #1: Current Museum Digitization Projects, Digitization Labs, and Independent Digitization Projects

Appendix #2: Commercial Online Viewers for Cultural Heritage 3D Models

Appendix #3: Most Recent Digitization Standards Reference Guides

Author Biography


Introduction to 3D Digitization Guidebook

Once confined to the field of industrial or medical technologies, 3D digitization has become an integral part of 21st-century cultural heritage collections management. 3D content is defined as content that “provides a faithful, often photorealistic, representation of real-world objects; the accuracy with which reality is depicted is linked to the instruments used for capture and the processing algorithms” (Fernie, 2019: 6). The latter part of Fernie's definition is critical, given that much of the literature on 3D digitization concerns selecting the best digitization technology for the specific object, architectural structure, or archaeological site being digitized.

This guide specifically refers to objects, sites, or buildings that are digitized from a real-life counterpart, as few 3D models are born digital, i.e., created without a real-world counterpart (Flynn 2019b). This guide will primarily address 3D digitization technologies, including which techniques are used for different objects, sites, and settings, which methods are used to avoid or troubleshoot problems, and which techniques are best implemented in archives, libraries, and museums, the institutions this guide is written for. This guide will also contain references to advanced 2D+ technologies, which are augmented two-dimensional recording technologies that can be combined with 3D technologies to obtain a more accurate and informative image.

Most cultural heritage three-dimensional digitization involves three-dimensional depth sensing, where the distance, and therefore the spatial coordinates (x, y, z coordinates in space), of all points in a setting are determined from a point of reference with a light-capturing or recording device (Pavlidis and Royo, 2018). Before starting a digitization project, it is important to refer to the questions first proposed by the BCR’s CDP Digital Imaging Best Practices Working Group (2008), which are explored in the first chapter below.
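
To make the idea of depth sensing concrete, the sketch below (an illustration only, with simplified assumptions and hypothetical values, not the algorithm of any particular scanner) converts a single range measurement and the pan and tilt angles at which it was taken into x, y, z coordinates relative to the sensor:

```python
import math

def range_to_xyz(range_m, pan_deg, tilt_deg):
    """Convert one range + angle measurement into coordinates relative to the sensor.

    range_m:  measured distance from the device to the surface point (metres)
    pan_deg:  horizontal angle of the measurement direction
    tilt_deg: vertical angle above the horizontal plane
    """
    pan = math.radians(pan_deg)
    tilt = math.radians(tilt_deg)
    x = range_m * math.cos(tilt) * math.cos(pan)
    y = range_m * math.cos(tilt) * math.sin(pan)
    z = range_m * math.sin(tilt)
    return (x, y, z)

# Repeating this for millions of measured directions yields the point cloud
# that later chapters process into a mesh.
print(range_to_xyz(2.0, 30.0, 10.0))
```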


Chapter 1: Before Starting a Digitization Project

What is the purpose of digitization?

The digitization project, or survey, can be completed for documentation, analysis, dissemination, and virtual reality purposes (Donadio et al. 2020). It is important to consider the end product prior to digitizing the object in order to choose the best technology, processing software, and file format for the finalized mesh that will be produced. This topic is addressed more thoroughly in a section related to conservation, accessibility, and outreach of the final product.

Who is the audience? Who owns the object?

Are 3D objects subject to copyright? This is an important question considering that 3D digitization is the digital reproduction of an object with a real-life counterpart. Even museums that hold legal title to collection items cannot freely reproduce those items unless they have received authorization from the rights owners. In the case of 3D digitization, since the material is entirely digital, the standard gift agreements used by archives are not strictly needed: rather than an exclusive copyright transfer to the archive, a royalty-free, non-exclusive license to archive the 3D model is sufficient (Smith 2009). Smith discusses this approach in the context of the FACADE Project at MIT, which focuses on preserving architectural models. In many cases, copyright does not and should not be used to protect 3D scans, especially scans that were created to turn a physical object into a 3D digital representation. There is overall consensus that it is the person who conceives the plan, rather than the one who carries it out, who is the author for copyright purposes. Representational scans that work to transfer a physical thing into a digital medium are therefore not eligible for copyright protection (Weinberg 2016).

Even where scans fall outside the purview of copyright, copyright rules raise important issues related to the sharing and downloading of culturally sensitive files. Sketchfab, one of the most popular upload sites for 3D models, is Digital Millennium Copyright Act compliant and has a clear process for resolving copyright infringement claims. Institutions have the option, but not the obligation, to enable 3D models to be downloaded under several Creative Commons licenses (Flynn 2019b). While Sketchfab has some mechanisms in place, researchers have called attention to issues of control over digitized content that becomes available on the internet, specifically content related to culturally sensitive objects and to human and funerary remains that may be eligible for repatriation (Crouch 2010; Hirst et al. 2018).

Should cultural heritage institutions digitize culturally sensitive or repatriated objects?

In this section, the topic of digitizing culturally sensitive material and repatriation is discussed, given its crucial role in determining if and how culturally sensitive objects should be digitized and shared. Accessibility of and interaction with 3D models are discussed below in the Accessibility and Outreach section.

According to the Protocols for Native American Archival Materials, the following items may be culturally sensitive: images of human remains, religious or sacred objects, burials, sacred places, recordings or transcripts of songs, chants, healing songs, myths or folklore, cartographic materials of sacred sites, medical histories, psychological tests, and ethnobotanical materials. In general, when tutorials, tools, and training materials are developed for indigenous communities or for using indigenous content, they should be developed after consultation with relevant conduct policies, following the UN Declaration on the Rights of Indigenous Peoples, with awareness of and commitment to the free and informed consent protocols that will address intellectual property issues and copyright (Champion 2017). Digitization should involve active communication with stakeholder communities, including about the topic of copyright. For example, this model was applied to the digitization and reproduction of artifacts repatriated to the Tlingit community of southeast Alaska. A Killer Whale Hat brought to the Smithsonian was laser scanned and reproduced under the approval of tribal leaders, and all parts of the recreation process, including carving the replica hat from a block of seasoned alder wood from the region, were carried out in conjunction with the stakeholder group (Hollinger et al. 2013).

The Smithsonian’s National Museum of the American Indian created the “Fourth Museum” project, focused on digitizing the museum’s photographic archive and involving Native Americans in the design and content of the exhibits. The Fourth Museum project works to uncover the names of some of the anonymous photographic subjects. The Smithsonian still owns the rights to an image, but it now has less control over the life of that image after digitization than before (Crouch 2010). This raises key issues about sharing culturally sensitive content online, where people have an opportunity to download, print, and even modify the image or 3D mesh. Similarly, Hirst and her colleagues have questioned new and unlimited access to fragile human remains, as 3D Geometric Morphometrics (GMM) has expanded the field to the point where 3D digitization of archaeological material, including human remains, is common, since many universities own the equipment (Hirst et al. 2018). With the proliferation of 3D printers, the 3D reproduction of human remains becomes more of a possibility and culturally sensitive archaeological materials become more accessible, raising key ethical issues about sharing 2D and 3D content through shareable platforms.

This guide follows the model noting that future studies which digitize culturally sensitive remains, or remains in anticipation of repatriation, should require ethical approval from the organization or community to whom the remains will be repatriated (Hirst et al. 2018). Not all issues related to communication and collaboration with stakeholder groups have been resolved, including the addition of elements that had been lost from the original object, as in the case of the Killer Whale Hat at the Smithsonian. This situation, along with accessibility to repatriated objects and to objects and remains under consideration for repatriation, merits more research but should be an active consideration for all individuals digitizing artifacts under their collection care.

What are good ethical standards or models to follow?

While 3D digitization itself is still undergoing development of national and international standards for cultural heritage institutions, there are several models to follow when seeking out ethical examples of digitization. Under the National Library Act (1960) in Australia, the National Library of Australia notes that “digitisation is a key strategy in enabling the Library to provide culturally appropriate access to Indigenous collection material as well as to repatriate material to traditional knowledge holders and communities” (National Library of Australia, n.d.).


As part of a project 3D scanning ethnographic objects in the David T. Vernon Collection of Native American objects held in the museum collections of Grand Teton National Park, all objects were first vetted by the tribes affiliated with them through a series of meetings between National Park Service museum staff and tribal liaisons to select the objects best suited to the scanning process (Smith 2018). In the reproduction of another repatriated Native American pipe at the Smithsonian, the cultural representatives and the tribal association authorized the NMNH Repatriation Office to digitize and replicate the objects before digitization started (Hollinger et al. 2013). More digitization projects should follow this model but also recognize that some digitization efforts currently do not have standards related to the release of data; such standards would mean no creation of digital copies without the approval or vetting of the cultural group. High-resolution 3D digitization can also solve common museum problems, including maintaining specimens after destructive sampling or repatriation and creating virtual “back-ups” of objects as insurance against accidental loss (Mathys et al. 2013).

What are the physical characteristics of the object or collection that are to be digitized?

What factors does it depend on?

The choice of a 3D scanning method often depends on many factors related to the object, including reflectance, transmittance, and absorbance of light, shape, and dimension (Alliez et al. 2017; Donadio et al. 2020). These factors affect the type of radiation that is used: either penetrating radiation with x-ray devices, including CT or µCT scanning, or non-penetrating 3D scans, both light-dependent and light-independent, which may allow little penetration under the illuminated surface (Alliez et al. 2017). Cultural heritage objects vary greatly in size, shape, morphological complexity, and material, so no one technique will work for all objects, but Structure from Motion is becoming popular because it is able to tackle large digitization projects with little manual operation (Pavlidis and Royo 2018). Budak et al. (2019) worked to create an expert system based on different parameters of cultural heritage objects that would allow input of object parameters to determine the best method. The determining factors were reflectivity of materials, dimensions, geometric complexity, visual texture, accessibility, and portability of objects (Budak et al. 2019). This database is helpful for recommending the best technology for the object while also taking into consideration end-use requirements, including price, acquisition time, accuracy, texture, and resolution of the 3D model (Budak et al. 2019). Below, the guide lists specific categories of objects in cultural heritage sites and recommended technologies to use.
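
As a toy illustration of this kind of rule-based selection (a minimal sketch drawing only on the general recommendations repeated throughout this guide, not on Budak et al.'s actual parameters, thresholds, or rules), an expert-system style helper might look like this:

```python
def suggest_method(size_cm, reflective_or_transparent, well_textured, needs_internal_structure):
    """Very rough, illustrative mapping from object traits to a candidate technique."""
    if needs_internal_structure:
        return "CT or micro-CT scanning (light independent)"
    if reflective_or_transparent:
        # Optical methods struggle with glare; consider blue-light or spectral approaches.
        return "blue-light laser scanning, spectral photogrammetry, or CT"
    if size_cm > 600:
        return "time-of-flight / phase-shift terrestrial laser scanning"
    if well_textured:
        return "photogrammetry / Structure from Motion"
    return "structured light or triangulation laser scanning"

# Example: a small, matte, low-texture figurine with no need for internal data.
print(suggest_method(size_cm=20, reflective_or_transparent=False,
                     well_textured=False, needs_internal_structure=False))
```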


Chart describing the applicability of different 3D digitization methods related to the size of the object (Budak et al. 2019, Fig 3). Size of the object is key for determining 3D digitization method considering the field of view of the object.

Small to medium museum objects

Most literature surrounding digitization of small to medium museum objects focuses on photogrammetry and Structure from Motion (SfM), given its accessibility and cost effectiveness. For example, the 3D-ICONS project digitized objects at the Civic Archaeological Museum in Italy using automatic (digital) photogrammetry based on Structure from Motion (SfM) and dense image matching for small textured museum objects (Guidi et al. 2015). Other researchers argue that triangulated laser scanners should be used for small to medium-sized objects (typically those between 15 cm and 600 cm) (Hess and Baik 2018). Many museums contain items that are similar in size but different in surface complexity, such as human remains collections, which Hirst et al. (2018) note can be digitized by photogrammetry, structured light scanning, laser scanning, CT scanning, and magnetic resonance imaging. Important considerations for small to medium museum objects include reflectivity, because transparent materials and those with highly reflective surfaces may not be suitable for some 3D digitization methods since they cause blind spots from reflected light, as well as accessibility and mobility (Budak et al. 2019). The guide discusses all of these specific technologies below.

Chart describing the best 3D digitization technique based on the needed resolution of the object (Budak et al. 2019, Fig 5). Resolution is an important consideration for documentation for conservation and restorative purposes.

Architectural structures


The technologies used to digitize pre-modern and modern architecture are diverse, as CAD 3D modelling was one of the inspirations for 3D modelling in cultural heritage. From the literature, it appears that Structure from Motion and laser scanning are used for specific parts of a structure depending on its complexity. Active range sensing, based on Time of Flight (TOF) or phase shift imaging, has been used for architectural structures, given its ability to produce metric 3D output (Guidi et al. 2015). While ancient buildings have textured surfaces and can often be imaged with SfM, modern buildings have smooth surfaces, which are harder to align, so a triangulated laser scanner was used (Guidi et al. 2015). The characteristics under consideration should also include those of the space where digitization occurs, whether it is inside or outside (and therefore lit by natural or artificial light), and whether the object is movable (Donadio et al. 2020). For example, when digitizing a large site in the ancient Maya kingdom of Copán, Remondino et al. (2009) utilized different techniques for each part, including conventional photogrammetry for large flat walls, laser scanning for irregular or partially broken structures, and photogrammetric dense matching for small detailed decorations. For this reason, photogrammetry/Structure from Motion (SfM) is utilized mostly for pre-modern architecture, whereas laser scanning appears to be utilized for the smooth surfaces of modern architecture.

Problems when digitizing objects

This topic is also explored throughout the rest of the guide alongside the specific technologies used, but this section discusses overall difficulties encountered by 3D digitization technicians. Shao (2019) pointed out several problems in digitizing clothing collections using CAD software, including the need for more accurate dressing effects, conveying the properties of the materials, and building a more effective material model using the deformation and mechanical features of real materials, given that many fabrics, feathers, and furs easily deform with motion, making them difficult to digitize. Marble is light-colored and diffuse, composed of densely packed transparent crystals that cause it to exhibit subsurface scattering; although the material is very translucent, the statues digitized by Levoy et al. (2000) were unpolished and coated with dirt, reducing subsurface scattering. Polished marble statues also had more noise, meaning more difficulty aligning images and scans together (Levoy et al. 2000). Ivory poses similar problems in terms of being diffuse, along with enamel, which is a large component of a dental history museum collection and is addressed below.

Chart describing the best 3D digitization technique based on portability (Budak et al. 2019, Fig 2). Portability is useful when selecting the best 3D digitization method given the movability of the cultural heritage object itself.


What techniques can be used to digitize dental collections?

It is difficult to digitize a tooth because the surface is featureless, and therefore unsuitable for photogrammetric mapping, and also reflective, so past medical research attempting to 3D digitize a tooth involved painting the tooth with a weak watercolor solution, which may not be an option in cultural heritage collections (Mitchell and Chadwick 2008). While medical studies are useful, given that their goal of measuring loss of tooth surface over time is similar to tracking the volume of objects over time, the techniques used to digitize the tooth may be different. While automatic photogrammetry can overcome problems with the size of the object and with calibration, teeth are largely unsuitable for photogrammetric mapping without augmentation because tooth enamel is translucent and reflective, causing areas of glare from built-in illumination and noise similar to that seen on marble statues (Mitchell and Chadwick 2008).

Comparison between accuracy of non-smooth and smooth µCT models and UV photogrammetry at 365 nm (Mathys et al. 2019, Fig. 22).

Enamel is difficult to digitize because it is a crystalline reflective material composed mainly of hydroxyapatite, and, like plaster casts, it cannot be recorded in detail by white light photogrammetry (Mathys et al. 2019). Mathys et al. (2019) utilized spectral photogrammetry, capturing images at different wavelengths, to create 3D models of tooth enamel. These models were more accurate when the enamel was imaged at ultraviolet wavelengths than in the red and infrared spectrum (Mathys et al. 2019). Spectral photogrammetry involves full-spectrum conversion, where the infrared cut-off filter in front of the sensor on a DSLR camera is removed to allow ultraviolet, visible, and infrared radiation to pass, making the camera sensitive to ultraviolet light. For geometric surface reconstructions, results are consistent between human and animal teeth, an important detail for future faunal and human remains digitization (Mathys et al. 2019).


This image depicts enamel reconstruction with 365 nm UVR photogrammetry, along with added white light texture (Mathys et al. 2019, Fig. 34).

While photogrammetry can be used for enamel digitization, CT scanning (with a Siemens Sensation 64 scanner) was found to be best for recording internal structures and for digitizing reflective materials such as enamel (Mathys et al. 2013). µCT scanning provides the best results for those digitizing tooth enamel, but segmentation and surface extraction can be costly, which is why spectral photogrammetry with a modified DSLR may be a suitable and cost-effective option for dental history collections (Mathys et al. 2019).

How is the project being funded?

If funding is provided for the digitization project, it is important to consider the quality and file format specifications for the project, given the funding agency’s goals for distribution of the 3D scans. For example, the European 3D-ICONS project was funded under the European Commission’s ICT Policy Support Programme and builds on the CARARE and 3D-COFORM projects, working to contribute 3D models and 3D data to Europeana, an international digital repository of cultural heritage collections that is accessible online. All models created under this project are saved with image texture as Wavefront (OBJ) files, which is a widely used file type (Barsanti and Guidi 2013).
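
To show why OBJ is so widely supported, the sketch below (hypothetical file names and toy geometry, not the 3D-ICONS deliverable format in detail) writes a minimal textured mesh as plain-text Wavefront OBJ, with vertex positions, texture coordinates, and faces that reference both:

```python
# Write a single textured quad (two triangles) as a Wavefront OBJ file.
obj_lines = [
    "mtllib model.mtl",          # material file naming the texture image (hypothetical)
    "v 0.0 0.0 0.0",             # v: vertex positions (x y z)
    "v 1.0 0.0 0.0",
    "v 1.0 1.0 0.0",
    "v 0.0 1.0 0.0",
    "vt 0.0 0.0",                # vt: texture coordinates (u v)
    "vt 1.0 0.0",
    "vt 1.0 1.0",
    "vt 0.0 1.0",
    "usemtl scanned_texture",
    "f 1/1 2/2 3/3",             # f: faces as vertex/texture-coordinate indices
    "f 1/1 3/3 4/4",
]
with open("model.obj", "w") as f:
    f.write("\n".join(obj_lines) + "\n")
```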

Other areas related to funding include the equipment, training costs, and timeframe of the project itself (Hess 2015; Donadio et al. 2020). According to a 2019 survey conducted by the major 3D display platform Sketchfab, respondents identified the top three barriers to producing more 3D content as lack of time, lack of funding, and lack of trained staff (Flynn 2019a). Funding can provide the needed equipment and trained personnel for the project, but it is important to consider the timeframe of the project and the output of 3D models.


Chapter 2: Selecting the Best 3D Digitization Technology

What are the different types of 3D digitization technologies?

There are many different types of 3D digitization technologies available, and, as mentioned above, the decision of which technology to use is critical to the success of the project. Pavlidis and Royo (2018) subdivide 3D digitization into light-dependent and light-independent methods. Light-dependent systems sense light in order to assess the placement of the object in three spatial dimensions, such as a white-light scanner that emits patterned light onto the object to determine how the pattern is distorted by the object; light-independent methods do not directly sense light, relying instead on other principles, such as CT scanning, which uses penetrating radiation to model the internal structures of the object.

For light-dependent methods, the 3D digitization methodologies can be further subdivided into active methods (coded light is projected onto the surface to be measured and sensed from a known baseline, with distances recovered by triangulation) and passive methods (surface details must clearly contrast with the background) (Alliez et al. 2017; Pavlidis and Royo 2018). Active sensors use the triangulation principle to measure angles and distances, resulting in point clouds with millions of points, colored by an integrated RGB camera, that are processed into a 3D model; passive sensors, meanwhile, have become more attractive with the integration of image matching and SfM algorithms. In both cases, the final product is a dense 3D point cloud, which will be discussed later in the 3D pipeline chapter (Donadio et al. 2020).

The models produced by these technologies fall into three main categories: geometry, appearance, and scene information. Geometric models are among the most common, given that photogrammetry and laser scanning work to recreate object geometry in 3D space, and there are four methods for modelling 3D shapes. These are vertex-based wire-frame models (commonly triangular meshes), parametric surfaces described by curves and shapes (Non-Uniform Rational B-Splines, NURBS), geometric solids (Constructive Solid Geometry, CSG), and boundary representations (B-reps). The level of detail of a 3D model depends on the quality of the polygons present, but 3D models with large numbers of high-quality polygons can be difficult to upload to platforms and to use in exhibition spaces where models must be downloaded quickly (Trognitz et al. 2016b).
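
To make the polygon-count trade-off concrete, the back-of-the-envelope sketch below (an illustrative estimate with assumed values, not a rule from Trognitz et al.) estimates how many triangles and how much raw storage a triangular mesh needs for a given surface area and sampling density:

```python
def mesh_budget(surface_area_cm2, point_spacing_mm):
    """Rough triangle count and raw binary size for a uniformly sampled mesh."""
    spacing_cm = point_spacing_mm / 10.0
    vertices = int(surface_area_cm2 / (spacing_cm ** 2))   # ~one vertex per sample cell
    triangles = 2 * vertices                               # typical for closed triangulated surfaces
    # 3 floats (4 bytes) per vertex position + 3 ints (4 bytes) per triangle.
    megabytes = (vertices * 12 + triangles * 12) / 1e6
    return vertices, triangles, megabytes

# Example: a 30 x 30 cm relief sampled every 0.5 mm -> ~360,000 vertices,
# ~720,000 triangles, and roughly 13 MB of raw geometry before any texture.
print(mesh_budget(900, 0.5))
```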

Given that some 3D digitization technologies are more suitable than others for capturing certain types of objects, architecture, and archaeological sites, some 3D digitization methods are used more often than others. According to a 2019 survey conducted by the model-sharing platform Sketchfab, 64.0% of those surveyed use photogrammetry, followed by handheld or structured light scanning (20.5%) and LiDAR/laser scanning (15.6%) (Flynn 2019a). Keeping this in mind, this guide will provide information about the majority of 3D digitization technologies that are utilized in cultural heritage institutions, but it will focus primarily on photogrammetry and laser scanning, followed by handheld or structured light scanning, given that these methods are the most commonly used in cultural heritage settings.


Chapter 3: 2D+ Digitization Technologies

2D+ technologies are augmented imaging technologies that capture 2D images under different lighting conditions to generate a 2D image that imitates a 3D modelled surface. This approach is especially effective when used to read faint or damaged inscriptions that cannot be identified with the naked eye. The most commonly used 2D+ digitization technologies are focus stacking (UV and white light), reflectance transformation imaging (RTI), often used to increase the readability of inscriptions and engravings, and MiniDome. Most literature related to cultural heritage collections concerns reflectance transformation imaging, given its usefulness for inscriptions and engravings, particularly on coins.

Reflectance Transformation Imaging (RTI)

Reflectance Transformation Imaging is a computational photographic method that captures an object’s surface shape and color by re-lighting the subject from multiple directions. The enhancement of the surface’s shape and color can reveal information not visible to the naked eye. RTI involves creating hyper-realistic digital reproductions whose lighting can be controlled by the viewer of the image. RTI images are produced by synthesizing multiple digital images of a subject held in a fixed position, taken from a fixed camera position.

RTI was first developed by Tom Malzbender in 2001 as polynomial texture mapping (PTM).1 PTM is a reflectance transformation imaging technique that creates texture maps of objects; PTM files store color information and surface normal values that allow the original texture to be relit (Payne 2013). PTM provides accurate, repeatable representations of the object, independent of lighting conditions, and it is cheaper than laser scanning and therefore accessible to museums, for which laser scanning has value in 3D modelling but is sometimes too expensive (Payne 2013). RTI involves capturing multiple images with the subject and camera position fixed but the light source varying.
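
A sketch of how PTM relighting works appears below; it uses the standard biquadratic PTM formulation (six coefficients per pixel fit to the projected light direction), though the least-squares fitting shown here is only illustrative and not the exact pipeline of any particular RTI tool:

```python
import numpy as np

def fit_ptm(images, light_dirs):
    """Fit per-pixel biquadratic PTM coefficients.

    images:     array of shape (K, H, W) -- K luminance images
    light_dirs: array of shape (K, 2)    -- projected light directions (lu, lv)
    returns:    array of shape (H, W, 6) -- coefficients a0..a5 per pixel
    """
    K, H, W = images.shape
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    # Design matrix: one row per light direction.
    A = np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones(K)], axis=1)  # (K, 6)
    b = images.reshape(K, -1)                                          # (K, H*W)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)                     # (6, H*W)
    return coeffs.T.reshape(H, W, 6)

def relight(coeffs, lu, lv):
    """Synthesize a relit luminance image for a new light direction (lu, lv)."""
    basis = np.array([lu**2, lv**2, lu * lv, lu, lv, 1.0])
    return coeffs @ basis  # (H, W)
```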

Reflectance transformation imaging is thus a 2D+ digitization method that uses the same equipment as photogrammetry and can capture very small morphological detail that is difficult to capture using 3D digitization technologies (Mathys et al. 2013). RTI involves modelling the distribution of light reflected from an object surface as functions of space, angle, spectrum, and time (MacDonald 2018). For RTI images to be viewed remotely, the original image is processed with a tool that allows changing of several parameters, such as the zoom factor, the light direction, and the shading enhancement (Gianpaolo and Corsini 2010).

A newer development in RTI is Highlight-RTI, in which a glossy sphere is placed in the scene so that the direction of the incident illumination can later be inferred from the coordinates of the highlight in each image. The key feature is that the illumination source, such as a spotlight, can be moved freely to any position above the surface for each image, with no predetermined constraints and no need to record its positions (MacDonald 2018). The light direction for each image is instead recovered afterwards from the sphere highlights.

1 https://www.si.edu/MCIImagingStudio/RTI


Advantages and disadvantages

Although reflectance transformation imaging has been utilized extensively for research since the early 21st century, RTI is still being used and advanced today. Earl et al. (2010) acknowledge the need for an RTI viewer that provides a transition between RTI databases in three-dimensional space. Potential RTI enhancements include deriving three-dimensional data via photogrammetry (to measure lost volumes), registering and comparing maps, and automated calculation of a scale factor for each RTI (Earl et al. 2010). RTI can be used to enhance surface features impossible to discern with physical examination. The enhanced interplay of light and shadow allows RTI to reveal details about the subject’s 3D surface form. RTIs can also capture surface features of different materials, including jade or gold (Mudge et al. 2010).

Highlight RTI image capture is a technique for obtaining original digital data that can be used to produce reflectance transformation images. The Cultural Heritage Imaging group uses Canon digital SLRs, spheres that appear with a diameter of at least 250 pixels in the resulting photograph, a portable lighting source, and a computer to control the camera. The group recommends using a white, black, or gray neutral background and a neutral gray card or photographic reference card that includes a scale of neutral grays with known values for color balancing (Reflectance Transformation Imaging, 2013).

Images with normals, called RGBN images, are those that have color and a surface normal orientation for each pixel. These images are good for cultural heritage because they are easy to acquire (using low-cost, off-the-shelf devices), high resolution (higher-resolution color and normal maps than the 3D geometry in laser scans), easily extended for stylized rendering, and simple and efficient to work with. They can be captured with a photometric stereo process, where normals are computed from several images taken under different illumination, and the computed normals reveal more surface detail (Toler-Franklin and Rusinkiewicz 2010). RTI is also relatively future-proof because RTIs are constructed to contain links to their raw data and are bundled with the raw data when archived, allowing anyone to access the original image data and regenerate the RTIs (Mudge et al. 2010). Gianpaolo and Corsini (2010) offer two tools for visualizing and analyzing RTI images: a multi-platform viewer (RTIViewer, allowing new shading enhancement techniques) and a web application based on SpiderGL (a JavaScript 3D graphics library relying on WebGL to provide real-time renderings of RTI).
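
A minimal sketch of the photometric stereo idea behind RGBN capture follows (illustrative only; it assumes at least three grayscale images under known, distant light directions and a Lambertian surface, which is a simplification of the cited workflows):

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Estimate per-pixel surface normals and albedo.

    images:     (K, H, W) grayscale images under K different lights
    light_dirs: (K, 3) unit vectors pointing from the surface toward each light
    returns:    normals (H, W, 3), albedo (H, W)
    """
    K, H, W = images.shape
    I = images.reshape(K, -1)                                 # (K, H*W)
    # Lambertian model: I = L @ g, where g = albedo * normal.
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)        # (3, H*W)
    albedo = np.linalg.norm(g, axis=0)                        # (H*W,)
    normals = (g / np.maximum(albedo, 1e-8)).T                # (H*W, 3)
    return normals.reshape(H, W, 3), albedo.reshape(H, W)
```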

Examples

Coins

One example of RTI use in cultural heritage institutions is the digitization of coins from the collection of the San Matteo National Museum, where an advanced 2D image-based representation (RTI) was chosen to produce photo-realistic image renderings (Di Benedetto et al. 2014). Similarly, the Agora 3D project found that RTI can be used as a complementary method to photogrammetry to record fine detail (Mathys et al. 2013).

Inscriptions

RTI has been used to investigate the scribe lines in the preparatory layer used to lay out the single-point perspective of a painting. Objects with polished metal surfaces, which respond poorly to 3D scanning, are also good candidates for RTI; in one case, RTI imaging of a polished metal surface was improved with a velvet-lined snoot, allowing documentation of condition. RTI was also used to image a Lenape Indian bandolier leather bag with faded iron gall ink, which left a slight impression in the leather (Wachowiak and Keats Webb 2010).

Surface texture of furniture

Reflectance transformation imaging is a low-cost imaging tool built on open-source modeling techniques. RTI permitted interactive viewing of surface textures to show tool marks left by the processing of an early 19th-century French table centerpiece at Cooper Hewitt, Smithsonian Design Museum (Walthew et al. 2020). RTI was also used in a museum exhibit at Cooper Hewitt to teach visitors about conservation and imaging techniques.


Chapter 4: Light Dependent Active Methods of 3D Digitization

The two main types of 3D scanners used today for artifact preservation are structured light 3D scanners and laser pulse-based 3D scanners, which are the focus of this section of the guide (Groenendyk 2013).

Laser triangulation scanners

Laser scanners are desktop or hand-held scanning devices that capture object geometry, based on triangulation, as millions of points. This method is called triangulation because the laser dot (or line), the sensor, and the laser emitter form a triangle, and accurate point measurements can be made by calculating the reflection angle of the laser light. 3D laser scanning more broadly involves emitting a laser beam onto the surface and measuring the distance by time delay or phase-shift measurements; surfaces are therefore represented as a collection of measured points in space (Hess and Baik 2018). For triangulation, a laser beam is projected as a line, and its reflection is measured as a distance profile by a camera array. The system measures the two-dimensional projection of the laser line on an object, and the resulting data have a resolution of 0.1 mm with an accuracy of 0.02 mm (Hess and Baik 2018).
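
The triangulation geometry itself can be sketched in a few lines (a simplified illustration with hypothetical numbers, not the calibration model of any commercial scanner): the emitter, the camera, and the laser spot form a triangle whose baseline and two angles determine the range.

```python
import math

def triangulate_range(baseline_m, alpha_deg, beta_deg):
    """Perpendicular distance from the emitter-camera baseline to the laser spot.

    baseline_m: distance between laser emitter and camera (metres)
    alpha_deg:  angle of the projected laser beam, measured from the baseline
    beta_deg:   angle at which the camera sees the reflected spot, from the baseline
    """
    a = math.radians(alpha_deg)
    b = math.radians(beta_deg)
    # The law of sines gives the height of the emitter-camera-spot triangle.
    return baseline_m * math.sin(a) * math.sin(b) / math.sin(a + b)

# Example: a 15 cm baseline, beam at 80 degrees, spot seen at 70 degrees -> ~0.28 m
print(round(triangulate_range(0.15, 80.0, 70.0), 3))
```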

When this laser beam is projected, each scan of the artifact produces a range map, which encodes the geometry of the surface. Similar to other 3D pipelines, these range maps are then aligned with one another to produce a mesh that is edited, simplified, and taken through multiresolution and color mapping steps (Calleri et al. 2011). Raw scans can be imported into Rapidform XOR2 Redesign, an example of a modeling program, to align and merge them into a single 3D model (Adams et al. 2010).

Terrestrial laser scanning involves emitting a laser light from a fixed position to obtain a ranged measurement. Terrestrial scanners can be pulse-based, phase-based, or triangulated-based. The pulse-based scanner uses the emitted-reflected light time delay as the basis of their measurement. The phase-based scanners use the laser sinusoidal wave shift as the basis of their measurement. Triangulation-based scanners use the baseline between laser emitter and camera to triangulate a measurement (Weigert et al. 2019).

Systems in use: NextEngine HD Desktop 3D Scanner, triTOS-HE, and Lucida

NextEngine HD Desktop 3D Scanner2

Adams et al. (2010) used the NextEngine HD Desktop 3D Scanner and ScanStudio HD Pro software, which captured point cloud data and RGB color during scanning. After the scans are uploaded to a program that can align the scans and apply a color texture, the full-resolution 3D model is then stored in a Wavefront (OBJ) file format, which is able to retain color surface mapping and the underlying mesh. The current cost of the scanner is $2,995, and it captures fine detail to 100-micron precision.3

2 http://www.nextengine.com
3 http://www.nextengine.com


Breuckmann triTOS-HE4

The Smithsonian’s Museum Conservation Institute (MCI) selected a heritage scanner that is portable, noncontact, and captures color data: the triTOS-HE made by Breuckmann GmbH. The system has a 20-degree triangulation angle, which improves the scanner’s ability to capture data in areas of deep relief (Wachowiak and Karas 2009). This scanner was tailored to MCI’s needs through conversations with the company. Another similar model available on the market is the Breuckmann SmartScan, but this is a structured light scanner produced by AICON for industrial purposes.5

Lucida6

The Lucida 3D scanner is a close-range, non-contact laser recording system that captures high-resolution surface texture for low-relief surfaces, such as paintings or bas-reliefs. The Lucida hardware and software were developed by Factum Arte, and the scanner is supported by the Factum Foundation for recording the microstructure of paintings and relief surfaces. Lucida uses a stereo camera able to record surface texture at resolutions up to 100 microns, including challenging shiny, glossy, matte, or dark surfaces (Weigert et al. 2019).7

Advantages and disadvantages

Laser and white light scanners are the most popular because they are lower cost, accurate, and reliable, and both are good for surface recording and most types of color capture (Wachowiak and Karas 2009). Despite the portability of some hand-held laser scanners used in museums, including the NextEngine, which is one of the least expensive scanners on the market, laser scanners may struggle to capture the surfaces of objects that have deep undercuts, highly reflective surfaces, or significant surface scattering.

Problems related to laser scanning include color capture, sharp-edged objects, and reflective surfaces. Color sampling poses a problem because most 3D scanners are designed for geometric acquisition rather than color, sampling only the apparent color of the surface and not its reflectance properties (Calleri et al. 2011). The two methods of color storage are (1) per-vertex color (each vertex in the mesh has an associated RGB value8, which is good for the dense models produced by 3D scanning) and (2) texture mapping (the 3D object has a parameterization; this is more standard and is supported by all 3D software) (Calleri et al. 2009). Laser scanners also struggle to provide effective results on areas with sharp changes in direction, such as blades or thin surfaces, which makes them difficult to use for studies of use-wear on cultural heritage objects. This is because the laser scanner cannot capture angles finer than the angle between the laser source and the sensor recording the laser reflection on the modelled object (Molloy et al. 2016).

4 https://www.si.edu/MCIImagingStudio/3DScanning
5 https://www.aniwaa.com/product/3d-scanners/aicon-3d-systems-breuckmann-smartscan/
6 https://www.factum-arte.com/pag/1552/recording-with-the-lucida-3d-scanner
7 https://www.factum-arte.com/pag/682/Lucida-3D-Scanner
8 RGB values refer to the color of the object taken up by the camera. RGB is a device-dependent color model, since different devices are going to detect or reproduce a specific RGB value in different ways. This is because color elements in the camera respond to the individual levels of red, green, and blue (hence the name) differently depending on the manufacturer of the device.
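
The per-vertex option described above can be illustrated with a short sketch (toy geometry, hypothetical values) that writes an ASCII PLY file in which every vertex carries its own RGB value, in contrast to the texture-mapped OBJ shown earlier:

```python
# Write a tiny triangle mesh with per-vertex RGB color as ASCII PLY.
vertices = [  # x, y, z, r, g, b
    (0.0, 0.0, 0.0, 200, 180, 150),
    (1.0, 0.0, 0.0, 190, 170, 140),
    (0.5, 1.0, 0.0, 210, 190, 160),
]
faces = [(0, 1, 2)]

header = [
    "ply", "format ascii 1.0",
    f"element vertex {len(vertices)}",
    "property float x", "property float y", "property float z",
    "property uchar red", "property uchar green", "property uchar blue",
    f"element face {len(faces)}",
    "property list uchar int vertex_indices",
    "end_header",
]
with open("colored_mesh.ply", "w") as f:
    f.write("\n".join(header) + "\n")
    for v in vertices:
        f.write("{:.3f} {:.3f} {:.3f} {} {} {}\n".format(*v))
    for face in faces:
        f.write("3 {} {} {}\n".format(*face))
```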


Stray light on shiny surfaces poses a problem given that reflected light is used to determine object geometry. Extraneous reflections from highly reflective surfaces such as polished metal are an example of stray light that can interfere with good surface recording and accurate geometric reconstruction. Translucent or transparent materials such as glass and plastic are difficult to scan, along with objects that change shape when handled, including inflatable structures, costumes, feathers, fur, hair, and rubber (Wachowiak and Karas 2009). Shiny surfaces were at one time difficult to image with laser scanning, since reflections interfered with white-light laser scanners’ shape interpretation, but using blue-light laser scanners eliminates this problem, as they can scan reflective surfaces effectively without any temporary anti-glare surface coating (Walthew et al. 2020). This makes laser scanning much more effective when digitizing reflective objects, and a blue laser scanner could be used when digitizing difficult materials, including marble, ivory, and enamel.

Examples

Marble statues

Although photogrammetry, structured-light triangulation, time-of-flight ranging, and interferometry can all be used to digitize marble statues (which are light-colored and diffuse, with a consistent minimum feature size imposed by the strength of the material), the group chose laser-stripe triangulation because it offered the best combination of accuracy, working volume, robustness, and portability for the digitization of 10 statues by Michelangelo as part of the Digital Michelangelo Project (Levoy 2000).

Restoring ships

When comparing photogrammetry to reverse-engineering (RE) laser scanning with the Konica Minolta VI-9i system for scanning boats, the laser scanner was accurate for the purposes of preserving and restoring traditional ships, but it did pose problems: in the field, optical systems based on light projection are sensitive to environmental conditions (Martorelli et al. 2014).

Sarcophagus

Menna et al. (2016) utilized both close-range photogrammetry and a triangulation-based laser scanner to 3D model an Etruscan sarcophagus. The authors used a Konica Minolta Vivid 910 triangulation-based laser scanner, using a wide lens for the lower area and a medium lens for the human figures (Menna et al. 2016).

Archaeological artifacts

Tucci et al. (2011) utilized the NextEngine 3D Scanner HD, a triangulation-based laser scanner suited to smaller artifacts, whereas larger and heavier artifacts were scanned with the photogrammetric system Z-Scan by Menci Software. This scanner was used to digitize archaeological materials from Aegean and Cypriot antiquities collections in Tuscany. While the automatic storage of a color attribute in each vertex of the polygonal mesh carries the advantage of not requiring further texture mapping, the downside is the low picture quality (Tucci et al. 2011).

Furniture: 3D laser scanning (using a Faro Edge Blue Light laser scanner) of an early 19th century French table centerpiece at Cooper Hewitt, Smithsonian Design Museum allows comparison of two closely related figures, one of which is believed to be a surmoulage (replacement cast) of the other (Walthew et al. 2020).


Skeletal remains

Mathys et al. (2013) utilized the NextEngine to digitize a Neanderthal talus from Spy, Belgium, which comes from a collection of Neanderthal bones from two adults excavated at the site. The talus was 3D modelled using CT scanning, the NextEngine scanner, and photogrammetry (Mathys et al. 2013).

Archaeological artifacts

For the European 3D-ICONS project working to provide 3D models and 3D data to Europeana, laser scanning was used for the 3D survey of archaeological remains, including 472 objects at the Civic Archaeological Museum in Milan (Guidi et al. 2015). A standard triangulation device (laser and pattern projectors) was used to digitize the objects without creating artifacts. In another case, laser scanning was avoided for archaeological objects because their materials are less optically cooperative; when colored texture is needed, generating a textured mesh from the highly textured surfaces takes more time with laser scanning than capturing texture and shape together as with SfM (Barsanti and Guidi 2013).

Lithic objects

A computer algorithm was used to position the artifact in a way that enables the extraction of standard parameters (length, width, etc.), removing ambiguities affecting traditional manual measurement systems. The lithic objects were scanned with a 3D camera produced by Polygon Technology, operated with structured light projected on the artifact and recorded by two digital cameras. The scanning routine and the conversion of the data to a 3D model are carried out by the program QTSculptor. The lithic artifacts are made of flint, which is harder to scan as a result of surface reflectivity and dark color (Grosman et al. 2008).

Time-of-flight scanning (LiDAR, LADAR, Range Scanning)

The Time-of-Flight (TOF) approach involves sending a pulsed light signal toward the surface of the object, building, or archaeological site and measuring the time elapsed until the reflection of the same signal returns to the imaging device. One advantage is that this can scan an entire building because of the large working volume compared to laser triangulation, explained in the previous section; however, TOF is less accurate and has a lower sampling density (Calleri et al. 2011). TOF and phase-shift scanning are two methods in which points are measured directly at a metric scale. A laser scanner on a tribrach can be set up over survey control points on the ground, and geometric targets are placed around the scene (Hess and Baik 2018).
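
The underlying range equations are simple enough to sketch (simplified, illustrative forms that ignore calibration, atmospheric effects, and ambiguity resolution): pulse-based scanners halve the round-trip travel time of light, while phase-based scanners convert the measured phase shift of a modulated signal into distance.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def pulse_range(round_trip_time_s):
    """Pulse (time-of-flight): the signal travels out and back, so halve the path."""
    return C * round_trip_time_s / 2.0

def phase_range(phase_shift_rad, modulation_freq_hz):
    """Phase shift: distance within one ambiguity interval of the modulated signal."""
    wavelength = C / modulation_freq_hz
    return (phase_shift_rad / (2.0 * math.pi)) * wavelength / 2.0

# Example: a 0.4 microsecond round trip corresponds to roughly 60 m of range.
print(round(pulse_range(0.4e-6), 1))
```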

Systems in use: FARO Focus 709

The most common LiDAR/laser scanner reported in the Sketchfab survey is FARO, although the largest group of respondents do not use laser or LiDAR scanners at all (Flynn 2019a). Larger-scale scanners, including the Matterport and FARO 3D scanners, can even scan room interiors, large buildings, and city landscapes (Groenendyk 2013). One of the models currently on the market, the FARO Focus3D scanner, is the smallest 3D laser scanner ever built, weighing 5 kg. This scanner has many advantages, including distance accuracy up to 2 mm, a range from 0.6 m up to 120 m, and a measurement rate of 976,000 points per second.10 In one project, a terrestrial laser scanner, the FARO Focus 3D X 130 HDR, a device that uses phase shift technology, was used to obtain scans from 0.6 m to 130 m. It can capture millions of 3D measurements at up to 976,000 points per second, and after the tomb of Seti I was surveyed from 70 different positions, the point cloud obtained for the entire tomb was 2.132 million points.

9 https://www.faro.com/products/construction-bim/faro-focus/

Leica ScanStation C1011

The Leica ScanStation C10 is compact, has a full-dome field of view, and has a long range. Its faster scanning makes this system more cost effective for more scanning projects, including exteriors and interiors, and this scanner has both onboard and PC control systems, along with GPS and prism options.12 This scanning system was used for integrated laser scanning when digitizing the Calci Charterhouse in Pisa, Italy for conservation purposes (Croce et al. 2019).

Advantages and disadvantages

TOF is not restricted by the need for a fixed distance between the laser scanner and the sensor, so it is more portable and able to record larger objects, but less accurately (Payne 2013). For larger objects, such as buildings, bridges, or dams, Time of Flight (TOF) active range sensing is used. Systems based on the measurement of distance are called LiDAR (Light Detection And Ranging), even if the term is often used to indicate a specific category of laser scanner (Guidi and Remondino 2012).

LiDAR (also expanded as Laser Imaging, Detection, and Ranging) is very useful for accurately digitizing an object and is often used for topographic maps and land surveys (Wheeler 2017). LiDAR measures the exact distance between the laser and the point being imaged. The files produced through LiDAR are CAD (Computer Aided Design) files. LiDAR is useful because it can capture a large amount of information at a time with very accurate measurements, but it is expensive and requires moderate training to understand the equipment, capturing method, processing, and software (Woody 2016b).

Examples

Architecture

As part of a full 3D integrated scan conducted on the Casa de Vidro in São Paulo, Brazil in 2017, the geometric, morphometric, and diagnostic documentation was performed with 3D laser scanners based on time-of-flight technology (Leica P20 and P30 laser scanners) to obtain a high-accuracy 3D metric model. The overall point cloud produced can be queried with the Leica Cyclone 9.2.0 software (Balzani et al. 2019).

Altarpiece

A Baroque altarpiece from San Isidoro was scanned using a Leica Nova MS50 multistation, and the five stations were finally aligned by classical topographic methods, obtaining a point cloud with a resolution of 5 mm (García-León et al. 2019).

10 https://www.cadcam.org/products/faro-3d-scanners/
11 https://www.exactmetrology.com/metrology-equipment/leica-hds/leica-scan-station-c10
12 https://www.nsscanada.com/img/scanners/Leica_ScanStation_C10_BRO_en.pdf


Burial tomb

For digitizing the tomb of Seti I, Lowe (2017) utilized the Lucida laser scanner, a close-range scanning system that works at 8 cm from the surface of the wall. The scanning head is mounted onto a lightweight mast, and the horizontal and vertical movement (X and Y axes) is regulated by Computer Numerical Control (CNC). The scanner has a depth of field of 2.5 cm and records an area of 48 x 48 cm in one hour. The scanner was best utilized for providing medium-resolution geometry and color information for the entire space (Lowe 2017).

Architecture

The TOF method was used for digitizing the Muayede Hall of the Dolmabahçe Palace, the official residence of the Ottoman sultans in the 19th and early 20th centuries in Istanbul. Before terrestrial laser scanning, regularly distributed reflectors were attached to the visible facades, and their coordinates were measured for georeferencing the laser scans. Terrestrial laser scanning was performed with the TOF scanner LMS-Z420i from RIEGL with a mounted, calibrated Nikon D70s camera. Since the camera was calibrated and images were available from every viewpoint, an RGB color value was assigned automatically to every scanned 3D point (Yastikli 2007).

Structured-Light Scanning

Among 3D scanning systems, the most frequently used are active optical devices that shoot a controlled, structured light across the surface of the artifact and record its geometry (Calleri et al. 2011). Light can be “coherent,” a laser stripe swept across the object surface or “incoherent,” such as complex fringe patterns produced with video or slide projectors.
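
For the incoherent (fringe projection) case, the core phase calculation can be sketched in a few lines (a standard four-step phase-shifting formula, shown in simplified form; real scanners add phase unwrapping and calibration to turn phase into height):

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase of a projected sinusoidal fringe pattern.

    i1..i4: images captured with the fringe pattern shifted by 0, 90, 180, 270 degrees.
    The recovered phase encodes how the fringes are distorted by the surface,
    which (after unwrapping and calibration) maps to surface height.
    """
    i1, i2, i3, i4 = (np.asarray(x, dtype=float) for x in (i1, i2, i3, i4))
    return np.arctan2(i4 - i2, i1 - i3)
```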

Systems in use

Artec Space Spider13

The most common handheld structured light scanner reported in the Sketchfab survey is Artec, although the largest group of respondents do not use handheld or desktop 3D scanners at all (Flynn 2019a). The Artec Spider is a hand-held 3D scanner developed for CAD users to scan small items with complex surface structures (Kersten 2018).

Creaform Go!SCAN 3D14

The IUPUI University Library Center for Digital Scholarship purchased Creaform’s Go!SCAN 3D portable, hand-held white light scanner (Johnson et al. 2018). The scanner produces a point cloud that must be converted into a polygon mesh (.obj file); polygon meshes are compatible with most 3D software packages (Johnson et al. 2018).

PocketScan 3D15

The quality parameters for the PocketScan 3D indicate that self-calibration routines are useful and recommended because the instrument’s calibration was not accurately determined. Handling of such systems requires slow, homogeneous movements around the object, and all systems tested had high scanning speeds, requiring only a few minutes to scan the object (Kersten 2018).

13 https://www.delscan.com/artec-space-spider.html?utm_source=google&utm_medium=cpc&utm_campaign=artec-space-spider&utm_content=spider-mainpage&gclid=Cj0KCQjwoPL2BRDxARIsAEMm9y8U-ilvWtrP4KunCLXdQithqNNKF6SnlyusBF72-hcEeZSPoPYY90kaAvcmEALw_wcB
14 https://www.creaform3d.com/en/handheld-portable-3d-scanner-goscan-3d#gref
15 https://it3d.com/en/3d-scanners/pocketscan-3d/

Stonex F6 Handheld Scanner16

The accuracy is 0.1%-0.2% depending on scanning distance. It can scan in conditions ranging from complete darkness to daylight and has a depth of field of 0.5 to 4.5 m. No special equipment is required for calibration, and the system runs on an internal battery. This scanner produces PTS, ASCII, PLY, E57, and STL files. Patrucco et al. (2019) describe the use of this structured light handheld scanner for digitizing a wooden maquette of the temple of El-Hilla, one of the ancient Nubian temple models from the Museo Egizio di Torino (Patrucco et al. 2019).

Stonex X300 Terrestrial Laser Scanner17

This scanner was used to obtain high-resolution measurements in a short time period. It was used to digitize a statue from the ancient city of Hatra known as the Lady of Hatra, for which artificial targets were distributed over the statue and its surroundings to facilitate alignment of the point clouds. JRC 3D Reconstructor (Stonex native software) was used to process the data, including filtering, registration, 3D modelling, and texturing. The Stonex X300 provides large point clouds with high density, which results in very noisy and low-detail meshes for close-range acquisition, so Thamir and Abed (2020) integrated an external DSLR camera with the scanner to preserve details. Following the meshing process, the models are exported as PLY files (Thamir and Abed 2020).

Other systems

Wheeler (2017) also lists three scanning applications: Trnio, Scann3D, and Trimensional. Handheld infrared scanners include the Microsoft Kinect, the iSense scanner for the iPad 4 and iPhone 6, the XYZ handheld scanner, the RealSense Development Kit, and the Occipital Structure Sensor. Some of the best photogrammetry software includes Skanect, ARC3D (free), Autodesk ReCap 360, Autodesk ReMake, and RealityCapture. LiDAR scanners include the NextEngine scanner, Trimble TX8, HandySCAN 3D, and Leica ScanStation P40 (Wheeler 2017).

Advantages and disadvantages

Structured-light scanners offer many advantages, including portability and accuracy; handling the systems requires slow, homogeneous movements around the object, but scanning speeds are high, requiring only a few minutes to scan an object (Kersten 2018). Using a light tent means that scanned objects retain all of their color with no shadows, resulting in significantly less time needed for processing photos (Niven et al. 2009). Some projects remove color texture before uploading or exhibiting a model because it makes the geometry easier to see, but color is very significant to cultural heritage collections.

16 https://www.stonex.it/project/f6-laser-scanner/
17 https://www.stonex.it/project/x300-laser-scanner/

Examples

Statues

While conducting a 3D ultrasonic tomographic imaging test of the Egyptian statues of Amenmes and Reshpu, the survey also involved laser scanning, which was done with a handheld 3D laser scanner (FARO Scanner Freestyle 3D). This scanner uses structured light technology consisting of two infrared cameras that create a “stereo pair” of images looking at the structured light pattern; the laser sensor ensures measurement of the surface within a range of 0.5-3 m (3D point accuracy 1.5 mm), and an RGB camera provides color (Di Petra et al. 2017). Similarly, handheld scanners such as the Kinect and the FARO Freestyle used in this study of an Egyptian sculpture are useful for cultural heritage because they are portable, provide full-color point clouds, are easy and versatile for small-room surveys, and are cheaper (Donadio et al. 2020).

Large scenes

When digitizing the Longmen, Xiangtangshan, and Maijishan grottoes and the Nanyuewang museum, TOF scanners are used for large scenes, but structured light scanners based on triangulation offer more accuracy (Li et al. 2010). As mentioned above, TOF scanners are able to capture more data within a scene, but structured light scanners offer a more precise image.

Faunal collections

Niven et al. (2009) scanned modern animal skeletons with a Breuckmann triTOS-HE structured light scanner to generate a virtual comparative faunal collection. While objects ranging from 1 cm up to 90 cm can be scanned depending on the lens, the camera resolution is not adjustable; for larger objects, as the scanner’s field of view increases, the resolution decreases. To add color, a color image of the skeleton was required.

Small architectural models

When digitizing a collection of architectural models representing ancient Nubian temples from the Museo Egizio di Torino, structured light scanning, using a handheld F6 SR scanner, was found to reconstruct more complete geometry compared to photogrammetry, which showed more holes. The handheld F6 SR structured-light scanner is able to operate at a declared minimum distance of 250 mm with a certified accuracy of 90 µm. The device has an RGB camera and can be used to generate a photographic texture (Patrucco et al. 2019).

Chapter 5: Light Dependent Passive Methods of 3D Digitization

Photogrammetry

Photogrammetry is a metric imaging method that enables digital reconstruction of the form and geometry of a real object in three dimensions. The reconstruction is based on a set of photographic images covering all surfaces with enough overlap to enable identification of common details in each photo (Hess et al. 2009). Photogrammetry involves taking multiple photographs around an object and then generating data points. To do so, images must be taken at three or more elevation levels in a 360-degree rotation around the object, with a photograph taken at least every 20-degree mark. This will produce a minimum of 90 images that can be synthesized to create a point cloud of the 3D object and then a 3D digital representation (Woody 2018).
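
A small planning sketch follows (an illustrative heuristic with assumed numbers, not the Woody 2018 protocol): it estimates how many photographs an orbit-style capture needs from the camera's horizontal field of view, the desired overlap between neighbouring shots, and the number of elevation levels.

```python
import math

def images_per_orbit(horizontal_fov_deg, overlap_fraction):
    """Photos needed for one 360-degree orbit so neighbouring frames overlap."""
    # Each new photo adds only the non-overlapping part of its field of view.
    effective_step = horizontal_fov_deg * (1.0 - overlap_fraction)
    return math.ceil(360.0 / effective_step)

def capture_plan(horizontal_fov_deg=40.0, overlap_fraction=0.66, elevation_levels=3):
    return images_per_orbit(horizontal_fov_deg, overlap_fraction) * elevation_levels

# Example: a 40-degree lens with ~66% overlap needs ~27 photos per orbit,
# so roughly 81 photos over three elevation levels.
print(capture_plan())
```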

While the photogrammetric technique varies based on the object, an overlap of 60% was traditionally required for analytical photogrammetry, providing a strong base-to-height ratio; with automatic image correlation algorithms, an overlap of about 66% is used. While the method requires capturing good photographs with uniform and high contrast, final accuracy depends on image resolution, or ground sample distance, which results from the resolution of the camera, the lens, and the distance to the subject (Matthews and Noble 2010). According to the Collections Cubed survey, 75% of respondents indicated that photogrammetry is a method used to digitize collections, because it takes advantage of DSLR camera images (Urban 2016). According to a survey of members of the Europeana Aggregators’ Forum and EuropeanaTech community, most (77.8%) member institutions utilize photogrammetry, followed by 3D models optimized for online publication (Fernie 2019). These surveys confirm the popularity of this method and its usefulness for cultural heritage digitization.
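
Ground sample distance can be estimated from exactly those three quantities; a minimal sketch (simple pinhole approximation with hypothetical values) is shown below.

```python
def ground_sample_distance(pixel_pitch_mm, focal_length_mm, distance_mm):
    """Size of one image pixel projected onto the subject (pinhole approximation)."""
    return pixel_pitch_mm * distance_mm / focal_length_mm

# Example: ~0.004 mm pixel pitch, 50 mm lens, subject 500 mm away
# -> each pixel covers about 0.04 mm on the object.
print(round(ground_sample_distance(0.004, 50.0, 500.0), 3))
```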

Systems in use (equipment): Digital SLR camera. The equipment includes a digital SLR camera, light equipment, scale bars, and a color target for easy transport to museums and other sites (Hess et al. 2009). Algorithmic analysis of the photo pairs makes it possible to identify common details across images, which are then used to reconstruct the object’s geometry (Hess et al. 2009). Gaiani et al. (2020) created 3D models of small and medium-sized cultural heritage objects with a smartphone camera, LED lights, a color checker, and tripods. The authors utilized a Nikon 5200 SLR camera as a reference. Color accuracy was achieved via an application processing RAW photo formats; on smartphones, where RAW files are not available, an additional pre-processing stage calibrates the frames using the SHAFT application. The smartphone produced very similar results to the SLR camera (Gaiani et al. 2020). For artifact photography, the artifact must be placed on a black or white background, must include the artifact catalogue number, must have a color or white-grey-black scale, and must be positioned squarely. The photographs should be taken with a digital camera capable of capturing images at 300 pixels per inch in RAW or TIFF files (Brosseau et al. 2006).

Shining 3D FreeScan and Creaform HandySCAN. Compared to advanced 3D structured light scanners with speckle patterns, the two photogrammetric stereo systems Shining 3D FreeScan X7 and Creaform HandySCAN 700 obtained the best results, achieved by using signalized targets in object space and by sensor calibration completed before object scanning (Kersten 2018).

Advantages and disadvantages. Photogrammetry is very versatile, as it can be applied to underwater, terrestrial, aerial, or satellite imaging and with different relative positions (Guidi and Remondino 2012). Image-based modelling techniques (photogrammetry) are preferred for monuments or architectures with regular geometric shapes, low-budget projects, teams with good experience in the technique, and projects with time constraints (Guidi and Remondino 2012). When digitizing complex wooden structures, such as altarpieces and liturgical furniture, digital photogrammetry and orthophotography are recommended for this metric documentation (Ceballos 2017).


Barns (n.d.) notes that close-range photogrammetry can be used for pottery, osteological, or lithic pieces as well as archaeological structures and excavation sites. Close-range photogrammetry refers to the collection of photographs from a shorter distance than aerial photogrammetry. For small textured artifacts, Guidi et al. (2015) used an automatic photogrammetric approach because it avoids geometrical artifacts related to highly contrasted textures over the 3D shape. The SfM-based approach, once the images are acquired with an automatic protocol, will provide a textured 3D mesh that requires only a little residual editing (Guidi et al. 2010).

Photogrammetry instruments are cheap, portable, and easy to use, and they allow the object to be reconstructed from ordinary photographs. The expertise the technique requires is why many prefer 3D sensors, including laser scanners, which allow easy creation of 3D point clouds (Guidi and Remondino 2012). Structure from Motion and image-matching photogrammetry have been shown to be effective for digitization of cultural heritage collections, allowing for texturized models (Alliez et al. 2017). Because cultural heritage objects often have poor texture and aligning homologous points between images is difficult, stereo digital photogrammetry allows for 3D vectorization and digital surface model generation with a semi-automatic or automatic process. Stereo viewing is achieved using liquid crystal display eyewear containing an alternating shutter and an infrared emitter on top of the monitor in a digital photogrammetric system (Yastikli 2007).

Examples: Statue surface. Apollonio et al. (2017) used real-time rendering with very accurate color and shape to compare the surface of open-air bronze art before, during, and after cleaning treatment. The shape of the surface was reconstructed with a triangle mesh, with 0.2 mm accuracy. The semi-automation solves the problem of Lambertian reflection of patinas and specular reflections in cleaned bronze.

Sarcophagus. Menna et al. (2016) utilized both close-range photogrammetry and a triangulation-based laser scanner to 3D model an Etruscan sarcophagus. For close-range photogrammetry, the authors used a digital single-lens reflex Nikon D3X 24 megapixel camera. To avoid casting shadows visible in the images, lights should be positioned symmetrically with respect to the optical axis of the camera (Menna et al. 2016).

Statues. While conducting a 3D Ultrasonic Tomographic Imaging (UTI) test of the Egyptian statues of Amenmes and Reshpu, the survey involved photogrammetry, which was conducted with a Nikon D800E camera; the images were processed with a Structure from Motion technique to produce a 3D model with an accuracy of less than 5 mm. The photogrammetric cloud has minimal noise compared to the Freestyle cloud, but if photogrammetry is to be used for UTI, high-quality cameras and optics and a high level of expertise are needed in the acquisition and modelling stages to ensure the accuracy needed for UTI tests (Di Pietra et al. 2017).

Architectural part condition. Digital photogrammetry was used to digitize a mosaic floor at the Roman Villa del Casale at Piazza Armerina (Enna, Sicily, Italy) (Gabellone 2019). UAV-based photogrammetry was used to obtain information about the state of roofs, inaccessible courtyards, and the perimeter, complementing data that could not be acquired through terrestrial surveys when digitizing the Calci Charterhouse in Pisa, Italy for conservation purposes (Croce et al. 2019). For close-range photogrammetry, a Nikon D700 and a GoPro Hero 4 were used.

Archaeological objects. The 3D-ICONS project chose 472 of the most relevant items at the Civic Archaeological Museum in Milan, Italy and chose the best technology for each item: while photogrammetry was used first, given its efficiency, it was determined that some objects did not have textures readable enough to generate recognizable patterns to map onto the model (Guidi et al. 2015).

Point cloud of a small wooden maquette of the temple of El-Hilla, from the architectural models of ancient Nubian temples, produced from photogrammetric 3D imaging (Patrucco et al. 2019, Fig. 8).

Historical boats. Martorelli et al. (2014) used digital photogrammetry to analyze the shape of three historical boats. The photogrammetric method followed analysis of the shape of the object to be reconstructed in digital form and planning of the photographs to be taken, calibration of the camera, processing of the photos with software to generate a point cloud, and transferring it to CAD software to create a 3D CAD model. Photogrammetry, although less accurate than reverse engineering (RE) laser systems, is quicker and less expensive (Martorelli et al. 2014).

Natural history collections. From the Angora Project comparing 2D and 3D digitization methods in natural history museums, photogrammetry (18 Mpx Canon 600D camera with a macro 50 mm lens) was found to be a good option considering portability of the system, price of supplies, and quality. There was the least deviation between photogrammetry and structured light scanning (Mephisto Ex-Pro) (Mathys et al. 2013). When digitizing a Neanderthal talus from Spy, Belgium, results from Agisoft PhotoScan were better than medical CT, and the texture was better than that from the surface scanner. Moulding or µ-CT acquisition are more precise but do not record texture (Mathys et al. 2013).

Aerial photogrammetry example, archaeological site. Terrestrial photogrammetry was combined with aerial 3D imaging to 3D model the “Theaters Area” of Pompeii, Italy. Image-based open source tools (photogrammetry) provide easy access for non-expert users, but they frequently do not provide precise and reliable 3D reconstructions with an accurate metric context. Range scanners can provide dense and precise 3D clouds, so the two methods complement each other well (Sleri et al. 2013).

Building facade. Photogrammetry was used because of the 3D nature of the Fatih Mosque facades. Stereo pairs of photographs were taken in the normal case using Rolleiflex 6008 semi-metric film-based cameras, and the films were digitized at a resolution of 12 µm as RGB TIFF images. The 3D vectorization of the Fatih Mosque was performed by experienced human operators, with stereo viewing achieved by LCD eyewear containing an alternating shutter and an infrared emitter on top of the monitor. This produced a set of line drawings and CAD-based DXF and DGN files (Yastikli 2007).

Structure from Motion (SfM)

Systems in use (equipment). For SfM, the camera (a Sony NEX-6 mirrorless) was held fixed while the object, resting on a lazy Susan, was rotated 10-30 degrees for each shot, so between 60 and 120 images were taken for each model. Images were captured in RAW format and edited in Adobe Photoshop Elements 13 and Adobe Camera Raw 9. 3D models were generated using Agisoft PhotoScan and post-processed using MeshLab (Molloy et al. 2019).

Advantages and disadvantages: Focus stacking. Focus stacking, a technique which extends the depth of field of an image by combining different images with low depth of field, can be used with Structure from Motion (SfM) in order to create accurate 3D models of small artifacts (Mathys and Brecko 2018). Although none of the images has the object of interest fully in focus, collectively they contain all of the data required to generate an image that has all parts in focus (Mathys and Brecko 2018). Focus stacking has been combined with photogrammetry for 3D recording of sub-millimetric details of prehistoric petroglyphs and paintings (Plisson and Zotkina 2015) and for digitizing small artifacts, such as the Venus of Frasassi Paleolithic sculpture (Clini et al. 2016). SfM uses highly redundant bundle adjustment based on matching features in overlapping images for reconstruction (Westoby et al. 2012).
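As an illustration of the idea (an assumed, minimal workflow rather than the procedure used by Mathys and Brecko), a focus stack can be composited by keeping, for each pixel, the value from whichever aligned frame is locally sharpest, with sharpness measured by the Laplacian response. The Python sketch below uses OpenCV and NumPy; the file names are hypothetical.

import cv2
import numpy as np

def focus_stack(image_paths):
    frames = [cv2.imread(p) for p in image_paths]           # frames of the same object, already aligned
    sharpness = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F))        # high response where the frame is in focus
        sharpness.append(cv2.GaussianBlur(lap, (5, 5), 0))   # smooth to avoid isolated speckle
    best = np.argmax(np.stack(sharpness), axis=0)            # index of the sharpest frame per pixel
    stack = np.stack(frames)
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]                           # composite with everything in focus

# composite = focus_stack(["venus_f01.tif", "venus_f02.tif", "venus_f03.tif"])
# cv2.imwrite("venus_stacked.tif", composite)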

Examples: Archaeological objects. For the European “3D-ICONS” project providing 3D models and 3D data to Europeana, Structure from Motion was used for archaeological objects, including artifacts at the Archaeological Museum in Milan. Applying texture and shape at the same time with SfM is faster than generating a textured mesh with laser scanning. Images were acquired with a Canon 3D Mark II, a Canon 60D, and a Sony NEX-5. Due to limitations of the software used for SfM processing, which was only capable of opening JPG images (not RAW), images were captured as JPG. Barsanti and Guidi (2013) used DOFmaster to determine depth of field and Agisoft PhotoScan to select the polygon number of the model.

Ethnographic objects. A Structure from Motion approach was used to digitize high resolution images of ethnographic objects in the David T. Vernon Collection of Native American social and cultural objects. This was done by taking a series of overlapping stereo photographs of each item and post-processing them into a 3D image with a scale reference (Youngs 2017).

What is stereo matching and stereo imaging? Dense stereo matching builds on structure-from-motion techniques. Instead of obtaining a discrete depth map, dense stereo matching works to reconstruct a sub-pixel accurate, continuous depth map. Stereo matching is applied to each pixel of the images, considering all possible image pairs among them (Dellepiane 2010). Although it cannot be used for measurements, it can be used for archaeological excavations (Dellepiane 2010). The output of the Arc3D web service can be loaded and processed in MeshLab.

Multi-view stereo imaging refers to the reconstruction of a 3D object or scene from more than two source images. After interest points and line or area features are extracted, the retrieved correspondences are used in bundle adjustment to recover the image orientations. The Structure from Motion process can also provide the camera calibration in a multi-view stereo setup (Stentoumis 2018). When conducting multi-view stereo imaging, visibility models are important to define which points in the scene are visible and which ones are occluded in each image, as well as the geometric or photogrammetric attributes of the 3D model (Stentoumis 2018).
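The geometric relation underlying these matching approaches can be illustrated with the basic rectified-stereo case (a textbook relation, not the Arc3D or multi-view pipeline itself): once a pixel’s disparity between two images is known, its depth follows from the focal length and the baseline between the cameras. A small Python sketch with assumed example values:

import numpy as np

def disparity_to_depth(disparity_px, focal_length_px, baseline_m):
    # depth = f * B / d, with disparity d and focal length f in pixels, baseline B in metres
    disparity = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(disparity, np.inf)       # zero disparity means the point is at infinity
    valid = disparity > 0
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth

# Hypothetical stereo pair: 2400 px focal length, 12 cm baseline.
# A feature matched with a 48-pixel disparity lies at 2400 * 0.12 / 48 = 6 m.
print(disparity_to_depth([48.0], 2400, 0.12))      # -> [6.]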

Chapter 6: Light Independent Methods of 3D Digitization

Topographic methods, often combined with other methods

Westoby et al. (2012) discuss Structure-from-Motion photogrammetry as a method for high-resolution topographic reconstruction in low-budget projects and remote areas without direct power sources, as terrestrial laser scanning is complicated by the cost, weight, and power needs of the instruments (Westoby et al. 2012).

Examples: Architecture. A full 3D integrated scan was conducted on the Casa de Vidro in São Paulo, Brazil to generate a point cloud model, a detailed topographic survey of the homologous points, and a high-resolution photographic survey aimed at documenting the house’s conservation state (Balzani et al. 2019). This helped determine the conservation work needed on the structure.

Architecture. Croce et al. (2019) combined topographic survey with Global Navigation Satellite Systems (GNSS), terrestrial laser scanning, ground-based photogrammetry, and indoor and outdoor drone photography, along with traditional topographic techniques, to digitize the Calci Charterhouse in Pisa, Italy. A future goal is to create benchmarks characterized by stability and durability over time.

UAV-borne orthophotos of the Calci Charterhouse in Pisa, Italy with enlarged digitization of the roof structure (Croce et al. 2019). These photos were combined with terrestrial laser scanning and ground-based photogrammetry.

CT scanning: viewing internal structures

CT scanners have often been used to create a 3D model of internal structures and provide researchers with an idea of internal contents, such as recovered sarcophagi (Groenendyk 2013). 3D data can therefore be used in conjunction with x-radiography and CT scanning as a complement to these analytical tools to reveal structure and spatial relationships with higher resolution (Wachowiak and Karas 2009). Conserved objects should undergo an imaging study, that is, the physical examination of the internal structure of the sculpture by means of x-ray (one image per capture, in which planes are superimposed) or computed tomography (multiple images of the interior of the object in any plane of space without superimposition). CT provides more information than x-rays, not only about the inside of the sculpture but also about internal transformations or deterioration. Medical scanners may pose problems of size limitations and low imaging resolution compared to industrial scanners (Ceballos 2017).

Examples: Historical violins. Most digitization projects of historical violins were performed with CT scans in order to obtain the internal and external surfaces of the violin relative to one another (Dondi et al. 2017). Many studies have used CT scanning as a comparative standard for a good level of object geometry, but this method can often be expensive and time consuming.

Mummified remains. Museum collections contain animal and human mummified remains. CT scanning was used to image five animal mummies housed in the Egyptian collection of Iziko Museums of South Africa in Cape Town, utilizing a CT scanner located at Stellenbosch University. This technique is useful for identifying animal species, skeletal portions, and any fakes among the mummified collection objects (Cornelius et al. 2012).

µCT scanning

Micro-computed tomography (micro-CT) is a non-destructive imaging technique that allows for the rapid creation of high-resolution three-dimensional data. Based on x-ray imaging, it allows for a full virtual representation of both internal and external features of the scanned object. The resulting 3D models can be interactively manipulated on screen (rotation, zoom, virtual dissection, isolation of features or organs of interest), and 3D measurements can be performed (from simple length and volume measurements to density, porosity, thickness, and other material-related parameters) (Fraile et al. 2016).

Example: Natural history collections. The Hellenic Centre for Marine Research (HCMR) has led the way in using this technique for rapid digitization of natural history collections and creating tools to display and manipulate the 3D tomography. HCMR also created a handbook for micro-computed tomography of natural history specimens (Keklikoglou et al. 2019).

3D model of an Omorgus gigas beetle after quick segmentation, before (A) and after (B) manual removal of the pin using the Dragonfly software. This 3D model was created at the Royal Belgian Institute of Natural Sciences by Jonathan Brecko, whom the author contacted about 3D digitization techniques and standards (Keklikoglou et al. 2019, Fig. 21).


Chapter 7: 3D Pipeline of Developing a 3D Model from RAW Data

After 3D data has been captured through laser scanning, LiDAR, or photogrammetry, it must undergo several steps, from uploading the RAW camera data or laser scanning files to a computer to producing a point cloud, mesh, or finalized 3D model.

Modelling, data alignment, and texture mapping

Reality-based polygonal modelling involves a scene being modelled in 3D by capturing many points of its geometrical features with a digital instrument and connecting them by polygons to produce a 3D result similar to a polygonal CAD model (Guidi 2012). 3D data registration involves aligning 3D data in one coordinate system of XYZ points in 3D space, building the global frame for all feature points; neighboring points are then connected into polygons, which can be reduced using polygon decimation and finished with texture mapping (Li et al. 2010).
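A minimal sketch of what registration means mathematically is given below, assuming corresponding points between two scans are already paired (real pipelines such as ICP estimate those correspondences iteratively): the Kabsch solution finds the rotation and translation that place one scan in the other’s coordinate system. The array names are illustrative.

import numpy as np

def rigid_align(source_pts, target_pts):
    # Return R, t minimising || R @ source + t - target || for paired Nx3 arrays.
    src_c = source_pts - source_pts.mean(axis=0)
    tgt_c = target_pts - target_pts.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = target_pts.mean(axis=0) - R @ source_pts.mean(axis=0)
    return R, t

# Usage: bring every point of the source scan into the target scan's frame.
# R, t = rigid_align(source_sample, target_sample)
# aligned = (R @ source_scan.T).T + t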


Further processing may be required after the model has been constructed, including hole filling, noise reduction, UV unwrapping, and texture/color-per-vertex calculation. Unwrapping and normal map extraction can be completed with ZBrush, Cinema3D, 3DS Max, Maya, and 3D Coat to create a derivative image viewed on portable devices and virtual reality devices (Nieva de la Hidalga et al. 2019). It is important to consider that small holes may not be noticeable in the final model. For an image viewed at the near distance of distinct vision (approximately 250 mm), any subject detail recorded with an image circle of 0.2-0.3 mm diameter may not be perceived (Ray, 2002). This means that the human eye may not normally notice small holes in the picture or small noise around the object surface. For applying the 3D surface appearance, most materials apply an image or texture to the model by mapping each three-dimensional vertex to a corresponding point within a two-dimensional image, and most 3D file formats support this (McHenry and Bajcsy 2008).
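To make the vertex-to-image mapping concrete, the short sketch below (an illustration, not any particular package’s implementation) looks up a colour for each vertex from its UV coordinate in a texture image; the variable names are assumptions.

import numpy as np

def sample_texture(uv_coords, texture):
    # uv_coords: Nx2 array of UV values in [0, 1]; texture: HxWx3 image array.
    h, w = texture.shape[:2]
    u = np.clip(uv_coords[:, 0], 0.0, 1.0)
    v = np.clip(uv_coords[:, 1], 0.0, 1.0)
    cols = (u * (w - 1)).round().astype(int)
    rows = ((1.0 - v) * (h - 1)).round().astype(int)   # image origin is the top-left corner
    return texture[rows, cols]                          # one RGB colour per vertex

# vertex_colours = sample_texture(uv_per_vertex, texture_image)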

MeshLab and other 3D programs

MeshLab is an open source tool used to generate and process color information on a high-resolution 3D model (Calleri et al. 2009). MeshLab was created for managing and processing large unstructured triangular meshes and point clouds, and it works to check, measure, clean, and convert 3D meshes (Calleri et al. 2009). MeshLab is a popular system for finalizing 3D meshes. MeshLab can improve 3D models through mesh cleaning (editing and processing tools to remove unwanted data and artifacts), remeshing (geometry recreated by preserving geometric features and removing noise), and color transfer (transferring color information from the original to the model) (Dellepiane 2010; Donadio et al. 2020).

Many processing steps have been conducted using open-source processing packages, including MeshLab and Scanalyze, and commercial packages, including PolyWorks, RapidForm, Geomagic Studio, Cyclone, and 3DReshaper (Guidi and Remondino 2012). MeshLab has been released for Apple iOS and Android with a 3D viewer, which reads common file formats and supports interactive mesh rendering of up to 1 to 2 million faces (Di Benedetto et al. 2014).

One of the most important uses of MeshLab is simplification of 3D meshes so that they can be uploaded and shared through embedded 3D viewers and on website platforms, including Sketchfab. Tools for the simplification of triangulated surfaces include MeshLab as well as Nexus, a multiresolution visualization library supporting interactive rendering of large surface models by transforming triangles into small contiguous portions of mesh to reduce the number of per-element CPU operations (Di Benedetto et al. 2014).
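As a hedged sketch of this simplification step, the snippet below uses pymeshlab, the Python bindings for MeshLab; the quadric edge collapse filter name shown follows recent pymeshlab releases and may differ in older versions, and the file names and target face count are illustrative assumptions.

import pymeshlab

def simplify_for_web(input_path, output_path, target_faces=100_000):
    ms = pymeshlab.MeshSet()
    ms.load_new_mesh(input_path)                    # e.g. a dense photogrammetry mesh
    ms.apply_filter("meshing_decimation_quadric_edge_collapse",
                    targetfacenum=target_faces)     # reduce polygon count for online viewing
    ms.save_current_mesh(output_path)               # lighter mesh suitable for upload

# simplify_for_web("statue_full.ply", "statue_web.obj")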

While MeshLab is effective, other programs exist, including Blender, which was used in the case of UTI analysis of an Egyptian statue with a Freestyle scanner, chosen for its capability in re-meshing, UV unwrapping, and baking (Donadio et al. 2020). Blender was shown to be the most widely used software among Sketchfab users, according to a 2019 survey conducted by the 3D model upload service (Flynn 2019a). Behind Blender, Sketchfab users utilized ZBrush, Max, Maya, MeshLab, and Meshmixer to construct 3D files. Similarly, although MeshLab is well-suited for cleaning 3D models, the most common photogrammetry software program is PhotoScan/Metashape, followed by Reality Capture, Meshroom, and 3DF Zephyr (Flynn 2019a). Some software is specifically tailored to specific types of 3D digitization.

CAD or BIM approaches

CAD models, or computer-aided design models, are produced by software. These models are used by architects, engineers, and artists to create precision drawings and technical illustrations. CAD software is used to create three-dimensional models of buildings and other structures that are themselves archived for historical and educational purposes. In the last few years, the Architecture, Engineering and Construction domain has seen the spread of Building Information Modelling18 techniques based on the construction of parametric models enriched with levels of information. The Scan-to-BIM process indicates the shift from raw survey output (a point cloud) to parametric informative models of the heritage asset (Croce 2019). This is mirrored by increased 3D modelling for designing and referencing buildings.

Bidirectional reflectance distribution function (BRDF) involves measuring the amount of light that is scattered by some medium from one direction into another. The BRDF is a specific quality of the optical features of the object, as integrating it over specified incident and reflected solid angles defines the reflectance of the object. The topography of the material interface determines how radiation, whether visible, infrared, radar, or otherwise, is scattered when it hits the surface: smooth surfaces reflect almost entirely into the specular direction, while increasing roughness tends to scatter the light in all directions. Measuring and modelling these characteristics of the object can be used to represent the material of the object, but it is not efficient in terms of time and resources (Calleri et al. 2009).
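For intuition, the simplest possible BRDF is the ideal Lambertian (perfectly diffuse) surface, whose BRDF is a constant albedo divided by pi; the sketch below is a textbook illustration under assumed values, not the measurement setup described by Calleri et al.

import numpy as np

def lambertian_radiance(albedo, normal, light_dir, light_intensity=1.0):
    n = np.asarray(normal, dtype=float)
    light = np.asarray(light_dir, dtype=float)
    n /= np.linalg.norm(n)
    light /= np.linalg.norm(light)
    cos_theta = max(np.dot(n, light), 0.0)    # light behind the surface contributes nothing
    brdf = albedo / np.pi                     # constant BRDF of a perfectly diffuse surface
    return brdf * light_intensity * cos_theta

# A mid-grey patch lit 60 degrees off its surface normal:
angle = np.radians(60)
print(lambertian_radiance(0.5, [0, 0, 1], [0, np.sin(angle), np.cos(angle)]))   # about 0.08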

18 Building Information Modelling is a digital representation of the physical and functional characteristics of a facility, and a BIM therefore serves as a knowledge resource for information about the facility’s design. Building Information Modelling is a 3D model-based process used by architecture, engineering, and construction professionals to design and build buildings more efficiently.


Chapter 8: File Formats

3D models can be stored in many different file formats. Technical advancements in 3D digitization necessitate the development of standard formats to ensure data accessibility. High resolution models refer to sampled models counting from 5 million up to hundreds of millions of faces or points (Di Benedetto et al. 2014). According to the Collections Cubed survey in 2015, the PLY and OBJ file formats were tied for the most common, followed by STL (used for 3D printing), PDF, 3DS, and more (Urban 2016).

RAW file formats: greater manipulation

RAW images are those with unprocessed pixels that allow for the camera’s maximum metric potential, removing the limitations of the JPEG format, whose compression results in a loss of pixel intensity information (Stamatopoulos et al. 2012). Some projects explicitly capture RAW images in order to have a dynamic range greater than the 8 bits per channel allowed by the JPEG format (this despite JPEG being the only format that PhotoScan can process) (Guidi et al. 2015). RAW files allow greater manipulation of light and color during image processing and ensure higher resolution (Molloy et al. 2016).


Some projects also make the unprocessed RAW files accessible to researchers (Akhtar et al. 2017). For example, for post-processing Highlight RTI data, it is recommended to use tools that convert RAW files to DNG format, using RTIBuilder and RTIViewer (Reflectance Transformation Imaging, 2013). Similar to the above example, García-León et al. (2019) utilized RAW files to achieve reliable and consistent color representation for the whole set of photographed items, using an Xrite ColourChecker. Stamatopoulos et al. (2012) recommend using the open source program “dcraw” to gain access to proprietary RAW file formats. By using dcraw, it is possible to decrypt a RAW file and create a true-color image, changing the process of image creation by removing every step that can modify the RAW values. This creates full-color RGB images that can be used for photogrammetry. A preprocessing approach applied to RAW images produced from the photographs can lead to better photogrammetric output than JPEG images (Stamatopoulos et al. 2012).
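A hedged sketch of this kind of RAW development step is shown below using the rawpy and imageio Python libraries rather than dcraw itself; the white balance and bit-depth choices, and the file names, are illustrative assumptions.

import rawpy
import imageio

def develop_raw(raw_path, tiff_path):
    with rawpy.imread(raw_path) as raw:
        rgb = raw.postprocess(use_camera_wb=True, output_bps=16)   # demosaiced 16-bit RGB
    imageio.imwrite(tiff_path, rgb)                                 # lossless working copy for photogrammetry

# develop_raw("artifact_0001.NEF", "artifact_0001.tif")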

Issue of proprietary file formats

One problem with file formats is that many 3D scanners, including laser scanners, export the resulting point cloud in a proprietary file format. Often, the results of a 3D scanning campaign are delivered only as files encoded in a closed format, accessible only through software whose longevity is not guaranteed (Calleri et al. 2011). Similarly, some commercial model repositories, like Sketchfab, offer consistent formats for disseminating downloadable models, but the models are trapped inside a proprietary format that is designed to prevent flexible use, are expensive, prohibit modification of the file, or cannot have their accuracy verified before purchase (Champion 2017). This necessitates the standardization of 3D file formats so that they can be easily downloaded and shared across different platforms.

Need to standardize file formats

Standardization for 3D file types is currently low, as there are so many file formats (OBJ, PLY, DAE, STL, X3D, glTF, and DICOM). Some 3D models are specific to the software in which they were created, which makes them difficult to share, review, or embed on a website (Fernie 2019). A small number of file formats (X3D, X3DOM, and glTF) can be streamed directly to a webpage using WebGL, JavaScript, and HTML5. Final processing may involve conversion into a new format for 3D printing, online publication, or publication as a multi-resolution model, or conversion to an X3D, X3DOM, or glTF file (Fernie 2019). One CAD19 standard for engineers, the Standard for the Exchange of Product (STEP), supports parametric models. As part of the FACADE (Future-proofing architectural computer-aided design) project at MIT, the program recommends a format migration plan in which, for every 3D file acquired, a standards-based version of it, in either IFC or STEP, is created by exporting the file from the original software to the new standard format (Smith 2009).

Koller et al. (2009) also advocate for open repositories of scientifically authenticated 3D models, which would involve standard mechanisms for preservation, peer review, publication, updating, and dissemination, similar to the review of articles for scholarly journals (Koller et al. 2009). This could also include watermarking as a technique of authenticating 3D models. This is a useful goal to ensure the authenticity of 3D models produced, and it may not even be necessary to develop standardized file formats like IFC and STEP. While there are many proprietary and open standard formats (DWG/DXF/DWF, VRML/X3D, IGES, STEP, OpenNURBS, Collada, FBX, OpenFlight, JTOpen, 3Dxml, HMF/HFS, IFC20), 3DS and OBJ are older formats still used as file exchange formats, so the creation of specialized 3D formats is unnecessary (Boeykens and Bogani 2008).

19 Computer-Aided Design, used in architecture, engineering, archaeology, and conservation (Smith 2009)

Wavefront (OBJ) file format and X3D file format

The Wavefront (OBJ) file is a “text based, open file format developed by Wavefront technologies” (McHenry and Bajcsy, 2008:12). The OBJ format stores both geometry and textures and consists of an OBJ file (in ASCII or binary form) along with an MTL file (material and texture) and an image (the actual texture). This format is especially helpful for preservation of wireframe or textured models (Trognitz et al. 2016c). OBJ files do not store light sources or transformations, which is a limitation compared to GIF files (Trognitz et al. 2016d). As there is no single 3D format that is universally portable and accepted by all software manufacturers and researchers, leading researchers use OBJ or PLY files, which are the most commonly used file formats (Adams et al. 2010). Certain 3D formats, such as OBJ and PLY files, are more easily migrated than software-specific formats and differ in capability in terms of which elements can be stored (Johnston 2017).
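To show how those three pieces fit together, the sketch below writes a minimal, hypothetical OBJ/MTL pair in which the OBJ holds vertices, UV coordinates, and a face, the MTL defines the material, and the material points at the texture image; the file names and values are illustrative only.

obj_text = """mtllib artifact.mtl
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
vt 0.0 0.0
vt 1.0 0.0
vt 0.0 1.0
usemtl surface
f 1/1 2/2 3/3
"""

mtl_text = """newmtl surface
Kd 1.0 1.0 1.0
map_Kd artifact_texture.jpg
"""

with open("artifact.obj", "w") as f:
    f.write(obj_text)        # geometry and UV coordinates
with open("artifact.mtl", "w") as f:
    f.write(mtl_text)        # material definition referencing the texture image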

Most research projects that look at the usability of 3D file formats for virtual heritage appear to focus on OBJ, 3DS, and U3D21. The most promising file formats for archiving 3D files and for web-based viewing are the OBJ, X3D, and DAE file formats. For OBJ files, WebGL, a JavaScript Application Programming Interface (API), allows 3D interactive graphics to work inside any major browser without requiring a plugin, and it can load OBJ models without requiring advanced programming (Champion 2017). For example, after processing with Agisoft, 3D model files for the “3DIcons” project were saved with image texture in the OBJ format and imported into Polyworks to avoid excessive polygon density (Barsanti and Guidi 2013). Similarly, the digitization of ethnographic objects involves using a digital camera to generate output files that are compatible for display on websites or with other 3D modelling software packages, such as OBJ, 3DS, and DXF, which can also be used for 3D printing or augmented reality (Young 2017).

Wide adoption of a file format can give more confidence in a preservation strategy (Beagrie and Kilbride 2020). This is an additional strength of OBJ files. For photogrammetry, one of the most common 3D digitization methods, the resulting surface model and image texture may be in an OBJ or XYZRGB format (Matthews and Noble 2010). While many scanners save scans in file formats unique to each scanner, the author determined that the most commonly interchangeable file format was the Wavefront Object format (OBJ), along with the Computer Aided Design format (CAD), so all models were preserved in their native file format and then in the OBJ file format if possible for more accessible viewing (Groenendyk 2013).

20 The Industry Foundation Classes (IFC) format preserves the full geometric description of the model in 3D and transfers information about properties and parameters of each object (Boeykens and Bogani 2008).

21 This file type allows the 3D model to be embedded inside the PDF file format (Champion 2017).

Just as many projects and processes involve OBJ as an output, the cultural heritage community overall mainly produces OBJ files, and many viewers can import OBJ files for sharing. According to a survey of members of the Europeana Aggregators’ Forum and the EuropeanaTech community, the range of file formats used included PDF (rotation enabled), MTL, OBJ, JPG, FBX, E57, PTS, POD, and XYZ files, the polygon file format (PLY), GLB, and glTF. There are 264 items in OBJ, X3D, and FBX files from 2 organizations (Fernie 2019). According to a survey conducted by Sketchfab in 2019, the most common file format was Wavefront OBJ (61,054 files), followed by Filmbox FBX (16,284 files) and Polygon PLY files (5,154 files) (Flynn 2019a). Many viewers import OBJ files, including 3ds Max, Adobe 3D Reviewer, Acrobat Pro Extended, Blender, K-3D, and Wings 3D, and several 3D tools export OBJ files, including 3ds Max, Blender, Lightwave 3D Modeler, and Wings 3D. The following viewers open OBJ files: Adobe 3D Reviewer and CINEMA 4D (McHenry and Bajcsy 2008).

Other than OBJ files, X3D files offer another fairly standardized file format. Open source software solutions are best, and X3D has 11 open source and 12 commercial implementations. This file type has reliable, fixed terms of availability as a standard, and it provides greater software reusability, including backward compatibility (Havele 2014).

PLY file format

The PLY file format was designed to be a flexible and portable 3D file format, and the format allows for user-defined types, allowing it to be extended to the needs of future data, which could be key for 3D digitization preservation (McHenry and Bajcsy 2008). For many 3D models, the OBJ and PLY formats have the ability to preserve geometry and visual surface properties for a 3D object but are not good for preserving scenes with light sources, animation, or interactive files. For more complex 3D datasets, it is recommended to use the COLLADA and X3D formats (Trognitz et al. 2016e).

In one example of exporting PLY files, the creation of a virtual comparative anatomy collection, all final meshes were saved as PLY data stored in high resolution for archival purposes, converted to STL format at middle resolution for printing replicas, and converted to OBJ format at lower resolution for further conversion to PDF (Niven et al. 2009). PLY files, similar to OBJ, offer a useful intermediary file format for conversion purposes.
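The user-defined property mechanism mentioned above can be illustrated with a tiny hand-written ASCII PLY file; the Python sketch below adds a hypothetical per-vertex “confidence” property alongside position and colour, purely as an example of how the header declares custom data.

vertices = [
    (0.0, 0.0, 0.0, 255, 0, 0, 0.98),
    (1.0, 0.0, 0.0, 0, 255, 0, 0.87),
    (0.0, 1.0, 0.0, 0, 0, 255, 0.91),
]

header_lines = [
    "ply",
    "format ascii 1.0",
    f"element vertex {len(vertices)}",
    "property float x", "property float y", "property float z",
    "property uchar red", "property uchar green", "property uchar blue",
    "property float confidence",          # user-defined property
    "end_header",
]

with open("scan_fragment.ply", "w") as f:
    f.write("\n".join(header_lines) + "\n")
    for x, y, z, r, g, b, c in vertices:
        f.write(f"{x} {y} {z} {r} {g} {b} {c}\n")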

STL file formats and 3D printing formats

Stereolithography (STL) files are used to capture 3D models in a format that can be printed using commercially available 3D printers, which is useful when replacing a missing portion of an object or completely recreating the object. STL is the file format most closely associated with 3D printing, and the Dalhousie Libraries saved 3D models in this format so that they could be reproduced by Dalhousie Libraries Printing (Groenendyk 2013).
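As a hedged example of preparing a model for printing (an assumed workflow using the trimesh Python library, not a procedure from the cited sources), it is worth checking that the mesh is watertight before exporting the STL, since printers need a closed surface; the file names are illustrative.

import trimesh

mesh = trimesh.load("replica_part.obj", force="mesh")   # load the digitized part as a single mesh
if mesh.is_watertight:
    mesh.export("replica_part.stl")                     # closed surface, ready for slicing and printing
else:
    print("Mesh has holes; fill them before printing.")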


Despite their usefulness for 3D printing, STL files pose a problem because it is hard to edit information saved in these files. As minimum metadata, it was noted whether the 3D model was born digital or digitized, including the make and model of the 3D scanner or the modelling software used to create it (Groenendyk 2013).

PDF and XMP: Standard Adobe File Formats

3D PDF file formats are good for dissemination because they allow for integrated text, images, and links alongside the 3D data and can be viewed in the freely-downloadable Adobe Reader (Trognitz et al. 2016e). Champion (2017) argues that many models in standard forms are encased in proprietary PDF file formats that cannot be extended, altered, or otherwise removed from the PDF. Image files, including RAW, TIF, and JPG, can include embedded data in XMP format, an open standard developed by Adobe. Researchers should also embed provenance information into the XMP file, including who, where, when, and how the object was imaged (Ashley 2010).

Smith (2009) also recommends creating a presentation version, a 3D PDF, that will need to be replaced as often as Web formats evolve (Smith 2009). Similarly, CARARE recommends 3D-PDFs as the format suitable for publishing 3D models with contextual information, minimizing the plug-ins users are required to download. 3D-PDF files are accepted as a good presentation format that allows 3D models to be encapsulated and presented to users with contextual information. The PDF viewer is widely available and is often pre-installed in web browsers. This format has been adopted in CARARE, 3D-ICONS, and Protage (Fernie 2013). For example, after digitizing a Neanderthal talus from Spy, Belgium, creating models using computed tomography, laser scanning, and photogrammetry, all files were also made available in 3D PDF format to allow the reader to interact with and manipulate the files (Mathys et al. 2013).

TIFF vs. JPEG file formats: Master versus Manageable

TIFF is a file format that supports lossless data, holding a high-resolution image. JPEG is an example of a lossy image file format (lossy formats are those where data is compressed or thrown away as part of encoding) (Beagrie and Kilbride 2020).

Both open source and commercial formats are vulnerable to obsolescence. Open source file formats, such as JPEG2000, are popular as they are non-proprietary, but proprietary formats such as TIFF are seen as more robust (Beagrie and Kilbride 2020). For this reason, the Guidelines for the Creation of Digital Collections recommends using TIFF or JPEG2000 file formats for the master or archival image and JPEG file formats for the derivative image (CARTI 2020). TIFF files have been implemented for library still image digitization (FADGI 2017). Future work for the Federal Agencies Digital Guidelines Initiative (FADGI) will focus on implementation of JPEG2000 for access and for the web and on investigating JPEG2000 as a master format for some applications (FADGI 2017).


The Canadian Museum of Civilization Corporation uses TIFF files as the master file and archived version of the image, with JPEG files for circulation, consultation, and attachment to the catalogue record in the database (Brosseau et al. 2006). All master images are recommended to be saved as uncompressed TIFF files. The Image Capture Standards from the National Library of Australia contain image specifications for different types of objects and museum collection items. For object digitization, the National Library of Australia recommends a tonal resolution of RGB 8 bit (Image capture standards, 2020). The archival master image represents the information contained in the original in an uncompressed or lossless format. This file is unedited and serves as a long-term, sustainable resource, typically having a large file size and being stored in the TIFF file format (Rieger 2016).

Some advocate for an intermediary file format, the production master image, that can be modified. To avoid obsolescence, Smith (2009) recommends that several formats (original, standard, geometry, and presentation) are needed to ensure that 3D models are maintained. The production master image is produced from the archival master, also as an uncompressed or lossless file, to be edited for technical corrections, and is usually also stored in the TIFF format (Rieger 2016). Beyond this production master image, the final access image is used for general web access, generally fitting within the viewing area of the average monitor. It is a reasonable file size for fast download times and does not require a fast network connection, and it is usually stored in JPEG or JPEG2000 file format (Rieger 2016).
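A minimal sketch of this master/derivative pattern is shown below using the Pillow imaging library; the resize target, JPEG quality, and file paths are illustrative assumptions, and the output directories are assumed to exist.

from PIL import Image

with Image.open("capture_0001_developed.tif") as img:
    img.save("master/capture_0001.tif")                     # archival master, kept without lossy compression
    access = img.convert("RGB")
    access.thumbnail((2000, 2000))                          # fit within a typical monitor's viewing area
    access.save("access/capture_0001.jpg", quality=85)      # lossy derivative for web access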

In order to keep the preservation master file intact, preservation involves bit-level preservation of all digital objects, which means keeping original files intact, ensuring authenticity and provenance, ensuring the appropriate preservation information, understanding and reporting on risks to access, performing actions on sets of digital objects to ensure that objects continue to be accessible, and periodically reviewing preferred formats and digital metadata standards (Digital Preservation Policy 2013). Beagrie and Kilbride (2020) recommend choosing lossless formats for the creation and storage of “archival master” files; lossy formats should only be used for delivery or access purposes and not for archiving. The master model is “the highest resolution output from processing, this model is a definitive version from which other downstream derivatives can be produced” (Blundell et al., n.d.: 3).


Chapter 9: Preservation Metadata

Digital Asset Management (DAM) includes the creation, cataloguing, storing, retrieving, and backing up of digital assets, working to integrate best practices with workflows to improve access to resources and make them available for use and reuse (Digital Asset Management and Museums 2017). A DAM system therefore encompasses all of the operations involved in the collection of digital materials related to collection objects.

Two-dimensional digitization standards

Many current 3D digitization standards, of those that exist, are based on 2D digitization standards, including the Open Archival Information System Reference Model (OAIS). OAIS is a common framework for the environment, components, and content in a system preserving digital materials. This document states that “the repository shall specify minimum information requirements to enable the Designated Community to discover and identify material of interest” (Audit and Certification of Trustworthy Digital Repositories 2011: 4-23). The digital repository must test and ensure that the public can access the 3D digitized object, which may involve uploading it to Sketchfab or a collection depository. OAIS appears to be the most widely used reference model for digital preservation, defining an information model and the basic functional model of a digital archive (Doyle et al. 2009b; Doerr and Theodoridou 2011; Digital Preservation Policy 2013).

Need for standardized preservation metadata


According to a study conducted in 2014 by the Institute of Museum and Library Services, in which 1,714 institutional respondents represented an estimated population of 31,290 U.S. collecting institutions, nearly two-thirds (63%) of collecting institutions are involved in either digitizing their collections or preserving born-digital collections. Over half of the digital collections were images, with libraries accounting for 73% of all digital collections, although archives were the most likely institutions to participate in digital preservation. Libraries (54%) were the most likely to be involved in third-party digital curation and preservation networks. Planning for digital preservation remains a serious gap among the collecting institutions that preserve born-digital collections, with 73% indicating that they had neither a preservation plan nor an assessment of their digital collections and less than 10% having done both (The Institute of Museum and Library Sciences 2019). Preservation metadata is therefore a crucial step for archives, libraries, and museums working to invest in 3D digitization.

Preservation metadata is vital semantic data that supports the long-term preservation of digital object records (Doyle et al. 2009a; Doyle et al. 2009b; Doerr and Theodoridou 2011). The two greatest challenges for preservation metadata are developing a uniform framework of preservation metadata, including what semantic information should be included in the framework, and deciding in what format the objects should be modelled (Doerr and Theodoridou 2011). It is not sufficient to preserve solely the digital object and its software environment through emulation; preservation also requires metadata describing how to render and use the object, as well as storing raw data to support future use of this data (Doyle et al. 2009a). According to a 2017 survey on 3D content published by Europeana, respondents would like more specific information in the metadata attached to each 3D model, including metadata for the digitization process, technical information about the model, metrics, structure information, and details about the object (Ioannides et al. 2018). Different institutions are working to develop collaborative digitization standards. The Frances Loeb Library at Harvard University Graduate School of Design requested a grant to convene a group of stakeholders to create a national and international collaborative infrastructure for 3D CAD and 2D drawing files, Building Information Models (BIM), digital images, videos, documents, and more. 2D and 3D CAD software is problematic for libraries, museums, and archives because Computer Aided Design (CAD) relies on proprietary mathematical algorithms, and these models are packaged in proprietary, expensive software products that are digitally encrypted and obsolete within years (Harvard College President and Fellows 2017). While collections managers, librarians, and archivists are critical in this process, conservators should act as the major beneficiaries of 3D condition monitoring and are well placed to ensure that 3D data avoids a crisis of obsolescence (Kilbride 2017).

Preservation metadata standards should also be embedded into websites that share 3D models uploaded by cultural institutions. Despite advances in metadata standards, many cultural heritage institutions on Sketchfab do not even tag their Sketchfab models in a consistent manner to take advantage of the platform’s APIs, which can link 3D model data to their online collection databases to keep them updated and connected (Flynn 2019b). According to a survey of 142 Sketchfab users conducted by Sketchfab in 2019, 67% of respondents say that they collect metadata related to their 3D objects, including raw data files, capture or creation details (date of creation, author name, location), software project files, and processing, editing, and conversion workflow (Flynn 2019a).


Methods for preservation metadata

As specific software may be required for future post-processing of a model that has holes or color discrepancies, a detailed list of specifications related to the original modeling process (scanner, digital camera, calibration targets, etc.) is needed, as are standardized metadata (Doyle et al. 2009a). For the National Library of Australia, preservation intent is a large part of Pandora, Australia’s web archive, focusing on expectations for preservation of different content, who is responsible for preservation, the period over which content is preserved, and the required level of support for access over time (Digital Preservation Policy 2013).

Ashley (2010) discusses how to embed essential metadata inside files, recommending a two-pronged approach: maintaining accurate, current descriptive and technical metadata in the image files at the endpoint of image production, while also keeping a separate, external database of metadata (Ashley 2010). It is important to create copies of the “original data” before cleaning, noise reduction, or merging with other data streams. In contrast to preservation formats for 3D data (ASCII XYZ text files), other formats used for disseminating 3D data (X3D and U3D) change over time (Johnston 2017). This follows previous metadata standards for cultural heritage collections, including Dublin Core and Darwin Core.

The three types of preservation metadata for 3D models include catalogue metadata (following Dublin Core Metadata Initiative regulations), commentary metadata (elements of reconstruction and commentary on modeling design decisions), and bibliographic metadata (sources, published or unpublished, used in making the model) (Koller et al. 2009). According to the Collections Cubed survey in 2015, most respondents utilize the Darwin Core, MARC, Dublin Core, and VRA CORE metadata standards (Urban 2016). However, on top of these standards, 3D data and metadata preservation also requires additional information related to the creation of the 3D model. To enable access to a 3D model for repository search, metadata for the 3D model (number of vertices, reference textures, geometric primitive types used) should be included (Boeykens and Bogani 2008).
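A hedged sketch of what such a combined record could look like is given below: Dublin Core elements for the catalogue layer plus 3D-specific technical fields of the kind listed above. The field names in the “technical” block are hypothetical illustrations, not drawn from a published schema, and the values are invented examples.

import json

record = {
    "dc": {                                  # catalogue metadata (Dublin Core elements)
        "title": "Ceramic vessel, catalogue no. 1898.4.12",
        "creator": "Museum Digitization Lab",
        "date": "2020-07-27",
        "format": "model/obj",
        "rights": "CC BY-NC 4.0",
    },
    "technical": {                           # hypothetical 3D paradata fields
        "capture_method": "photogrammetry",
        "capture_device": "DSLR with 50 mm macro lens",
        "processing_software": "Agisoft Metashape",
        "vertex_count": 1245310,
        "texture_resolution": "8192x8192",
    },
}

with open("1898-4-12_metadata.json", "w") as f:
    json.dump(record, f, indent=2)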

To create a metadata standard for 3D content created with generic 3D and CAD software for architects and engineers, the EU project MACE (Metadata for Architectural Contents in Europe) utilized the Learning Object Metadata standard (LOM), extending LOM, which had paid no attention to 3D content, into the MACE Application Profile. Dublin Core, the standard for structured metadata, has too many ambiguities for 3D data, so this document specifies metadata for the learning object (LO), metadata about the media object (MO: information about file, size, location, and software), 3D objects (format, number of geometric objects, and scale), and the real-world object (RWO) (Boeykens and Bogani 2008). Similarly, the 3D Knowledge (3DK) project works to develop a knowledge network for the acquisition, representation, query, and analysis of 3D knowledge. The project produced improved processing and integration of knowledge from different sources, domains, and non-text media types; increased effectiveness of teams, organizations, classrooms, and communities; and pilot projects focused on characterization of archaeological artifacts, including bones, vessels, and lithics (Collins et al. 2004).


Migration versus emulation

Migration is the process of transferring 3D files from one file format or system to another, often for preservation or dissemination purposes. Migration often works to prevent losing valuable 3D data contained in a proprietary or older file format by converting it to a newer file format. During data migration, if parametric representation (curves and surfaces allowing scalability) is not supported by the target 3D file format, the model has to be converted to a wire-frame model, leading to the loss of information about surface structure (Trognitz et al. 2016b). For 3D object digitization, migration is not the best strategy, as it involves waiting until the software or hardware required to access the digital object becomes obsolete and then transferring the files to a new software application or hardware configuration, which is labor-intensive, time-consuming, expensive, and error prone (Doyle et al. 2009b). This is one option for avoiding data obsolescence, but it may not be the best one.

Emulation, by contrast, involves saving the program that created and displays the 3D model. This method utilizes emulators, which are filter systems that can be used to read the data. This technique can be updated regularly instead of waiting for a problem with the file and then starting data migration. Emulation serves as the most realistic approach, as it supports the digital document’s behavior as well as the original content, which is made possible with the use of an emulator that runs on an emulation virtual machine (Doyle et al. 2009b). In this case, the 3D object to be preserved is stored in an Archival Information Package (AIP). In order to preserve the behavior of the 3D object, the authors store an emulator and the application software necessary to run the object as separate AIPs. The metadata associated with the 3D digital object is also enclosed in an AIP (Doyle et al. 2009b).

XML-based text files

For long-term digital preservation, Doerr and Theodoridou (2011) recommend using the Semantic Web language OWL (Web Ontology Language) to encode metadata specifications, as it is an XML-based W3C standard, which is beneficial for long-term preservation. Protégé, an ontology editor, is a good application for presenting metadata as well as converting OWL code to HTML for display on website pages (Doerr and Theodoridou 2011). XML is just one of several different types of emulation, alongside other emulators including Line Printer, ASCII, CSV, Channel Skip, Database, and PDF, the last of which is commonly used as a shareable file format.

Given the value of XML files, many CARARE content providers have established repositories to provide metadata to CARARE and Europeana via OAI-PMH or XML exports (Fernie 2013). Standard “software-independent” formats, such as ASCII or XML text files, may work well in terms of cost and implementation, or for purely digital documents, but they may not reproduce the behavior, display, and look and feel of the digital object to be preserved (Doyle et al. 2009b). Therefore, XML emulators can be used, but this should be done with caution.

Examples of projects including metadata


Cultural heritage site. Project Anqa collects metadata following the CIDOC-CRM and ICOMOS CIPA standards, requiring metadata related to basic information about the site (site name, year built, architect/patron, location, etc.), width and room dimensions, a description of overall site condition, and history, as well as metadata related to data collected through laser-light scanning and photogrammetry (Akhtar et al. 2017).

Archaeological artifacts. For the digitization of archaeological items in the Civic Archaeological Museum in Milan, Italy for the 3D-ICONS project, technical and descriptive metadata was captured, including another set of data (paradata) representing the technical description of the process and what was used to digitize the object (type of scanner, resolution, bit/pixel, etc.). The set of descriptive and technical metadata is standardized by the rules and terms of the “Dublin Core” Initiative (Guidi et al. 2015).

Chapter 10: Current Evolution of Metadata Standards

Still Image Working Group: Federal Agencies Digital Guidelines Initiative

The Still Image Working Group, a subset of the Federal Agencies Digital Guidelines Initiative, produced a Digital Imaging Framework and a Digitization Activities: Project Planning and Management Outline. The Digital Imaging Framework contains specific information related to the creation of digital imaging files, including information related to sensitivity, tone and exposure, white balance and neutrality, and color encoding and rendering accuracy. This document contains material specific to visual resources, not to audio resources (Still Image Working Group, 2009a). The Digitization Activities: Project Planning and Management Outline contains general policy issues and information related to digitization techniques. This document also contains a comparison of approaches for digitization, including external and internal projects, along with the purposes for digitization, including classified records review, preservation reformatting, exhibitions, publications, or websites, reference requests, documentation for object inventory, and support of current business records (Still Image Working Group, 2009b). These two documents represent the starting point for 3D digitization guidelines, which are still being developed today.

The London Charter: General recommendations

The London Charter was one of the first models for computer-based visualizations of cultural heritage objects, sites, and buildings, but this document offers more general guidelines rather than specific metadata examples (Denard 2009). The London Charter for the Computer-based Visualization of Cultural Heritage was created in 2006 in order to give more recognition to computer-based visualization strategies in cultural heritage. The second version of the charter, The London Charter for the Computer-based Visualization of Cultural Heritage (2.1), covers computer-based visualization, including 2D, 3D, 4D, and computer-generated physical objects such as museum artifact replicas (Denard 2012). Using the recommendations in the London Charter, Trognitz et al. (2016a) developed the Archaeology Data Service / Digital Antiquity Guides to Good Practice, which focus on the elements and properties of 3D data for digital preservation. This guide represents one of the most up-to-date sources.

3D Icons Project, CARARE, and Metadata and Paradata Standards

The 3D-ICONS project works to establish a metadata schema to support the provenance and paradata required for quality assurance of 3D models, which is needed because the London Charter does not implement a metadata schema or prescribe a specific method (D’Andrea and Fernie 2013). Models for descriptive metadata include object-centric approaches (Dublin Core metadata) and event-centric approaches (describing the events in which the object was involved). The 3D-ICONS metadata scheme builds on CARARE, another network funded by the European Commission’s ICT Policy Support Programme, working to enhance the description of technical 3D capture and modelling (Guidi et al. 2015).

CARARE, a project establishing an aggregation service to integrate 3D and VR content into Europeana, created a common library format for the 3D models produced, but these were trapped inside the Adobe PDF format, so people could not modify them or develop their own content (D’Andrea and Fernie 2013). The goal of the Europeana task force is to collect details of 3D data content, file formats, viewers, and methods of 3D publishing online (Fernie 2019). Over three years (2010 to 2013), CARARE worked to define the CARARE metadata schema, which is compliant with the Europeana Data Model and is now used by the 3D-ICONS project, and to map the different metadata schemes in play (Fernie 2013). This need for specific metadata examples was addressed in later metadata standards, including the CARARE schema (D’Andrea and Fernie 2013), developed as part of the CIDOC-CRM project to create a metadata standard for 3D object and site models, and the Seville Charter by the International Forum of Virtual Archaeology (Lopez-Mechro and Grande 2011).

CARARE 2.0 provides a model, the CRMdig model, for the typical workflow used to create 3D models. This model records reliable information about the capture devices and instruments, the parameters used during data acquisition (geometry, light sources, obstacles, sources of noise or reflections), and those used during processing (registration, meshing, texturing, decimation, simplification, etc.) (D’Andrea and Fernie 2013). The CIDOC CRM provides formal descriptions focused on the integration of data from multiple sources, specifically supporting a shared understanding of cultural heritage information through a common and extensible semantic framework for evidence-based cultural heritage information integration (Le Bouef et al. 2018). This living standard is the work of the CIDOC CRM Special Interest Group, a committee of the International Council of Museums (Doerr 2020).
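
To make the kind of provenance information CRMdig is concerned with more concrete, the sketch below records acquisition and processing paradata for one model as a simple structured log. This is only an illustration: it is not the CRMdig RDF serialization itself, and every property name, identifier, and value is invented for readability.

```python
# Illustrative sketch only: recording acquisition and processing paradata for one model
# as a simple provenance log. This is not the CRMdig RDF serialization itself; property
# names and values here are invented for readability.
import json
from datetime import date

provenance = {
    "model_id": "NMD-1998.042-mesh-v1",            # fictional identifier
    "acquisition": {
        "device": "DSLR camera, 60 mm macro lens",  # capture instrument
        "technique": "photogrammetry",
        "light_sources": ["2x LED panel"],
        "known_issues": ["specular reflections on enamel"],
        "date": str(date(2020, 7, 15)),
    },
    "processing": [                                 # ordered workflow steps
        {"step": "registration", "software": "alignment tool", "images_used": 184},
        {"step": "meshing",      "triangles": 6_000_000},
        {"step": "decimation",   "triangles": 250_000},
        {"step": "texturing",    "texture_size": "8192x8192"},
    ],
}

print(json.dumps(provenance, indent=2))
```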

SYNTHESYS3 Project and Natural History Collections

For natural history collections, the SYNTHESYS3 project, running from 2013 to 2017, produced an online wiki with comprehensive, comparative information on diverse 2D+ and 3D imaging techniques used by different natural history institutions, called the “Handbook for best practice and standards for 3D imaging of NH specimens,” hosted by the Museum für Naturkunde in Berlin. This and other links to current digitization standards are listed below. A physical version of this text provides additional information related to 2D and 3D digitization techniques, challenging materials, and digitization workflows (Brecko et al. 2016). SYNTHESYS3 has created an accessible and integrated European resource for researchers in the natural sciences in Europe and globally, providing access to leading European natural history collections in museums. The SYNTHESYS3 Joint Research Activity has focused on extracting and enhancing data from digitized collections by (1) developing and delivering open source software (Inselect), (2) developing 3D techniques to digitize natural history collections, (3) reviewing and utilizing innovative methods of data capture, and (4) providing open access to major research datasets (Fraile et al. 2016). Brecko and Mathys (2020) later developed the “Handbook of best practice and standards for 2D+ and 3D imaging of natural history collections,” published in the European Journal of Taxonomy.

From analysis of different 3D models, four methods were found to be useful for the mass digitization of natural history collections: multi-lane photography, photogrammetry, structured light scanning, and laser scanning (Nieva de la Hidalga et al. 2019). Photogrammetry is the most future-proof: as long as the raw data are saved (as DNG, for example), the 3D geometry and color information can always be recreated (Nieva de la Hidalga et al. 2019). ZooSphere was developed by the Museum für Naturkunde (MfN) to create digital 3D spherical representations of pinned insects. To view the images at full resolution, a Java software component developed with the project is available as a free download. More than 110 specimens have been digitized using ZooSphere and placed online (SYNTHESYS3). In personal communication, Dr. Frederik Berger at the Museum für Naturkunde in Berlin noted that they focus on photogrammetry and are currently using Agisoft Metashape for 3D modelling. In the case of fossil collections, laser scanning has been used to scan the ichnospecies Eubrontes (?) glenrosensis, part of an excavated theropod track, using wide mode on the NextEngine HD Desktop 3D scanner, with a tent used to control the light exposure outdoors (Adams et al. 2010). The NextEngine scanner could not scan some biological objects, such as the insects that Dalhousie University needed to digitize (Groenendyk 2013). In order to expand the breadth of that project, the library would need to purchase a high-end 3D scanner in the range of $100,000 to $200,000, such as the $100,000 Minolta laser scanner at the Smithsonian Institution (Groenendyk 2013). Photogrammetry therefore appears to be the best option for digitizing small collections.

The Agora 3D project, funded by the Belgian Federal Science Policy Office, compared various 2D+ and 3D recording methods. When digitizing a human skull from the Royal Belgian Institute of Natural Sciences, the authors found that the structured light scanner (Mephisto EX-Pro) was best for recording external structures, whereas CT scanning (Siemens Sensation 64) was best for recording internal structures and for digitizing reflective materials such as enamel (Mathys et al. 2013). Photogrammetry, however, is a good option considering the portability of the system, the price of supplies, and the quantity of material to digitize (Mathys et al. 2013). CT scanning has also been used by the Natural History Museum in London to produce physical models with 3D printing, to establish whether this can replace moulding and casting (Payne 2013).

The Belgian Federal Agora 3D project works to develop guidelines for creating 3D models with the best quality, price, and time ratio for each natural history and cultural heritage collection, evaluating computed tomography, µ-computed tomography, laser surface scanning, structured light surface scanning, and photogrammetry (Mathys et al. 2013). The Collections Cubed survey found that research and documentation are the primary goals of library, archive, and museum digitization efforts; especially for natural science specimens, 3D representations allow remote researchers to study artifacts in ways not possible using physical surrogates (Urban 2016). This push for 3D object digitization also came from the Center for Biological Research Collections, whose paleontology and zoology collections contain 3D digitized specimens of bone and other three-dimensional artifacts. Unlike Sketchfab, which requires the 3D model to be uploaded to its service, Indiana University explored the Universal Viewer, which can load a 3D object (Wittenberg and Hardesty 2017). Natural history collections therefore require metadata and display standards.

CS3DP: Community Standards for 3D Data Preservation

CS3DP has built a community of researchers working to produce recommendations for 3D data preservation. This effort began in 2017, when the Institute of Museum and Library Services provided funding to Washington University in St. Louis, the University of Michigan, and the University of Iowa to outline the current status of 3D data preservation efforts and provide updates on project progress (Moore et al. 2019b). The CS3DP working groups are developing flexible and extensible standards for the preservation, documentation, and dissemination of 3D data. A community survey led to working groups organized around a five-part framework: (1) preservation best practices, (2) management and storage, (3) metadata, (4) copyright and ownership, and (5) access and discoverability. The CS3DP community is also working to develop 3D standards that include perspectives previously left out, including those of Indigenous and Native communities and of institutions with technology bandwidths different from those commonly found in North America and Europe (Moore et al. 2019a). According to personal correspondence with Adam Rountrey, the CS3DP group is currently producing an eBook, to be published by ACRL, on the collaborative standards developed by the working groups.

What new metadata and 3D digitization standards are on the horizon?

Many new developments are occurring in the area of 3D digitization due to the rapid expansion of 3D documentation. A conference held in December 2016 on how to preserve 3D data indicated that those working with 3D data and the technologies that create it must understand the challenges of handling metadata, which are likely to be more complex and important than those previously faced by archives and libraries, and that those creating 3D models must demonstrate the long-term value of these models (Kilbride 2017).

Just this year, a 2020 call for papers addressed the “digital transformation in cultural heritage institutions,” and 158 related papers were published in 2019 (Liao et al. 2020). The notion of digital transformation appears to have taken on a more organizational, human-capacity, and human-resource orientation, specifically with discussion of the workforce and of shaping design spaces for collaboration (Liao et al. 2020). According to a survey conducted by the CS3DP working group, 72% of the 100 respondents said that they were not using documented best practices or standards for preservation, 69% said that they did not use them because they were unaware of such standards, and 85% said that they would like collaboratively developed standards (Moore et al. 2019a).

Chapter 11: End Goals of 3D Object Digitization

Digitizing archaeological sites

3D recording has been used to digitize archaeological sites since 2003 (Pollefeys et al. 2003); this was revolutionary compared to non-image-based methods because the authors extracted surface texture directly from the images, leading to greater realism. 3D modelling can allow for creation of a 3D model of each excavated sector of each layer, allowing for greater documentation of stratigraphy (Pollefeys et al. 2003). Since then, 3D digital datasets have presented a number of key advantages for archaeological recording and artifact reconstruction, because 3D data can be easily scaled, rotated, and viewed from any direction and angle (Trognitz et al. 2016a).

Most digitized excavation sites are currently created using photogrammetric methods based on regular photographs, which are already a part of the excavation process. The 3D reconstruction of archaeological excavation sites is currently expensive, but Zollhöfer et al. (2015) present a low-cost pipeline using a handheld RGB-D sensor, the Microsoft Kinect, as an inexpensive and interactive alternative. A digital model of the excavation site is captured by walking around it with the handheld RGB-D sensor, and the user sees a live view of the current reconstruction, allowing immediate intervention and adaptation of the sensor path to the current scanning result, which prevents costly post-processing caused by a system error. The results of this technique were comparable to those of photogrammetry but still could not compete with high-resolution 3D scanners (Zollhöfer et al. 2015). High-resolution 3D scanners therefore offer the best option for scanning the site when accuracy is the priority.
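
To make the RGB-D idea concrete, the sketch below shows one way to fuse a short sequence of depth frames into a mesh with the open-source Open3D library. It is only an illustrative outline, not the pipeline of Zollhöfer et al. (2015): the file paths, frame count, and per-frame camera poses are assumed to exist on disk, and the intrinsics are nominal Kinect-class defaults.

```python
# Illustrative sketch (not Zollhöfer et al.'s pipeline): fusing RGB-D frames into a mesh
# with Open3D's TSDF volume. Assumes colour/depth images and camera poses already exist.
import numpy as np
import open3d as o3d

# Nominal intrinsics similar to a Kinect-class sensor (PrimeSense defaults).
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)

volume = o3d.pipelines.integration.ScalableTSDFVolume(
    voxel_length=0.01,          # 1 cm voxels
    sdf_trunc=0.04,             # truncation distance in metres
    color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)

for i in range(10):             # hypothetical frame count
    color = o3d.io.read_image(f"frames/color_{i:03d}.jpg")
    depth = o3d.io.read_image(f"frames/depth_{i:03d}.png")
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        color, depth, depth_trunc=4.0, convert_rgb_to_intensity=False)
    pose = np.loadtxt(f"frames/pose_{i:03d}.txt")   # 4x4 camera-to-world matrix
    volume.integrate(rgbd, intrinsic, np.linalg.inv(pose))

mesh = volume.extract_triangle_mesh()
mesh.compute_vertex_normals()
o3d.io.write_triangle_mesh("excavation_layer.ply", mesh)
```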

Conservation

Prevents damage to artifacts from handling or replication methods

3D digitization can help prevent damage that occurs to objects through extensive handling or through errors in storage and packaging. Rubber and silicone molds were traditionally used to replicate fossils, but they carry high costs in material and labor and risk damaging the specimen, so 3D digitization offers a low-cost, safer alternative (Adams et al. 2010). 3D data of artifacts are also being used for museum collection storage and package design (Wachowiak and Karas 2009).


One example of preventing damage involves taking measurements of historic violins. Dondi et al. (2017) used an RS3 Integrated Scanner (a linear laser scanner with an accuracy of 30 µm) mounted on a mobile arm (Romer Absolute Arm 7-Axis SI) to digitize delicate historical violins, as such laser scanners are affordable, require only one operator, and can be used in the museum without the need to move the instrument. Using the open-source CAD software FreeCAD to take virtual measurements on the violin, Dondi et al. (2017) found little difference (0.11 to 0.14 mm) compared to traditional caliper measurements, which risk damage to the instruments. Even where measurements taken from the 3D models are not accurate enough for violin making, they can still be used for comparisons (Dondi et al. 2017). Measurements taken on 3D models of specimens provide advantages over physical measurements, including repeatability and reduced risk of damage to the specimen (Nieva de la Hidalga et al. 2019). 3D digitization can therefore prevent further damage to artifacts.
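
As a small sketch of what such “virtual caliper” measurements look like in practice, the hypothetical Python snippet below loads a mesh with the trimesh library and measures the straight-line distance between two user-chosen vertices. The file name, vertex indices, unit assumption, and caliper reading are all placeholders.

```python
# Hypothetical virtual-measurement sketch using the trimesh library:
# measure the straight-line distance between two chosen points on a digitized object.
import numpy as np
import trimesh

mesh = trimesh.load("violin_body.ply")            # placeholder file name

# Indices of two vertices the user picked in a 3D viewer (placeholders).
idx_a, idx_b = 120, 4587
point_a = mesh.vertices[idx_a]
point_b = mesh.vertices[idx_b]

distance_mm = np.linalg.norm(point_a - point_b)   # assumes the mesh is in millimetres
print(f"Virtual caliper distance: {distance_mm:.2f} mm")

# A physical caliper reading could then be compared against this value, e.g.:
caliper_mm = 356.40                               # hypothetical reading
print(f"Difference vs. caliper: {abs(distance_mm - caliper_mm):.2f} mm")
```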

Potential risks when digitizing collections

While 3D digitization avoids the problem of frequent handling, the process still involves contact with the artifact. When conducting photogrammetry and other light-dependent methods, LED lights are preferable because they generate less heat than halogen lights, which is valuable for fragile materials (Mathys et al. 2019). Risks during digitization include light exposure (especially for 3D laser scanners), radiation, and handling (Payne 2013). For lighting sources for reflectance transformation imaging, this guide recommends using a camera-mountable flash unit along with a UV filter on the light to address conservation needs (Reflectance Transformation Imaging, 2013).

Assessment of object condition, structure, and damage over time

3D digitization can be used to assess and quantify erosion. The volume loss, calculated by modifying the produced model, can be output as a 3D mesh and used to construct physical 3D models of lost volumes to study erosion (Adams et al. 2010). For example, after scanning lithic artifacts with a 3D camera produced by Polygon Technology and converting the data into a 3D model with the QTSculptor program, the 3D information allows extraction of important parameters, including volume, surface area, the coordinates of the center of mass (CM), and the coordinates of the enclosing cube (CMEC), that are useful for measurement and conservation (Grosman et al. 2008).
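
A minimal sketch of extracting such parameters from a watertight mesh is shown below, again using the trimesh library; the file name is a placeholder, and the “enclosing cube” here is simply the mesh’s axis-aligned bounding box.

```python
# Minimal sketch: extracting geometric parameters from a (watertight) scanned artifact
# with trimesh. File name is a placeholder; the enclosing cube is taken as the
# axis-aligned bounding box of the mesh.
import trimesh

mesh = trimesh.load("lithic_artifact.ply")

print("Watertight:", mesh.is_watertight)        # volume is only meaningful if True
print("Surface area:", mesh.area)               # model units squared
print("Volume:", mesh.volume)                   # model units cubed
print("Center of mass:", mesh.center_mass)
print("Bounding box extents:", mesh.bounding_box.extents)
print("Bounding box corners:\n", mesh.bounding_box.vertices)
```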

Digital photogrammetry is able to record the current condition and damage on the surface of an artifact, offering visualization on the order of 50 microns (Hess et al. 2009). Using real-time rendering (RTR) with a semi-automated, photogrammetry-based solution for accurate color and shape capture, conservators can compare surfaces before, during, and after cleaning treatment; this approach was used for the ongoing restoration of the Neptune Fountain in Bologna (Apollonio et al. 2017). In another example, a 3D ultrasonic tomographic imaging test, combined with a multi-sensor survey, was used to evaluate the internal condition and material characteristics, such as porosity and degradation state, of an Egyptian sculpture in the Archaeological Museum in Bologna. The survey combined photogrammetry and laser scanning to generate the 3D model supporting the ultrasonic test (Di Pietra et al. 2017). It showed that photogrammetry and laser scanning can be used to investigate the internal damage of stone objects (Donadio et al. 2020).


Working with English Heritage and the Hampshire and Wight Trust for Maritime Archaeology, researchers used RTI datasets along with non-contact digitizing via a Minolta laser scanner to provide a conservation record for wooden artifacts. Comparisons between pre- and post-conservation RTI datasets identify clear transformations in the structure of wooden objects (Earl et al. 2010). When scanning blades, laser scanning can capture small variations in surface topography, revealing surface porosity, grinding, or polishing of surfaces, but traces of flashing and noise sometimes create an artificially coarse or rough surface. SfM is the most appropriate method for capturing metalwork wear data, with close-up images augmenting instances of damage, but it is still inferior to the still images produced by macro-photography or microscopy (Molloy et al. 2016). PTM is best for creating interactive, high-resolution pictures of objects to document surface features and conditions (Payne 2013).

3D scanning can also be used for environmental monitoring or for determining the impact of material decay processes, creating point-in-time 3D records of cultural heritage facing destruction (Kilbride 2017). For example, 3D digitization via photogrammetry was used to create 3D models of the Maqsura at the Mosque-Cathedral of Cordoba in order to observe small details on the surface and to analyze degradation over time by comparing these models against future 3D models of the site (Gómez-Moron et al. 2019). According to the Collections Cubed survey, conservation also motivates object digitization, as CT scans can allow researchers to non-destructively study internal structures (Urban 2016).
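
One common way to quantify degradation between two scanning campaigns is to compare point clouds sampled from the two epochs. The hedged sketch below computes nearest-neighbour distances between hypothetical 2019 and 2024 scans using trimesh and SciPy; file names, the sample count, and the change threshold are placeholders, and real workflows would first co-register the two models.

```python
# Hedged sketch: comparing two scanning epochs of the same surface to flag change.
# File names, sample count, and the change threshold are placeholders; real workflows
# would first co-register the two models (e.g. with an ICP alignment step).
import numpy as np
import trimesh
from scipy.spatial import cKDTree

mesh_old = trimesh.load("facade_2019.ply")
mesh_new = trimesh.load("facade_2024.ply")

# Sample points on each surface (assumes the meshes share the same coordinate frame).
points_old, _ = trimesh.sample.sample_surface(mesh_old, 200_000)
points_new, _ = trimesh.sample.sample_surface(mesh_new, 200_000)

# For every new point, distance to the nearest old point: a rough change map.
distances, _ = cKDTree(points_old).query(points_new)

threshold = 0.002   # 2 mm, assuming the model is in metres
changed = distances > threshold
print(f"Mean deviation: {distances.mean() * 1000:.2f} mm")
print(f"Points beyond {threshold * 1000:.0f} mm: {changed.mean() * 100:.1f}%")
```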

Preventative conservation: preservation of artifacts from future damage

Monitoring of object damage can also focus on man-made changes to sculpture, including vandalism. When the statue of Fouara in Setif, Algeria, was defaced with a hammer and chisel, removing facial features and more, Ali Khodja et al. (2019) used a prior laser scanner survey of the statue, stored in a cultural heritage monument database, for its restoration. The accuracy of that earlier laser scanning survey provided useful information for restoring the statue after this man-made change. This technique of creating laser scanning surveys as reference points in time is important in Algeria and at many other sites where cultural heritage objects face attacks as well as deterioration from the elements. Some researchers use 3D digitization to document cultural heritage sites in places of conflict, including Project Anqa, which combines digitization and storytelling at at-risk heritage sites in the Middle East and North Africa (Akhtar et al. 2017). Similar efforts focus on digitizing the Great Buddhas because they are vulnerable to fires and other natural disasters given their materials, wood or paper (Ikeuchi et al. 2007).

Preventative conservation is useful for avoiding future cultural heritage problems. The European HeritageCare project developed a standardized set of protocols for assessing the conservation status of cultural heritage over time, including information digitization. This involves using advanced geometric techniques to create high-resolution virtual replicas of buildings for mapping and identifying damage. These 3D models of the buildings were created within a Heritage Building Information Modelling environment, allowing easy sharing and updating of information about conservation status (Masciotta et al. 2019). 3D digitization can also be used to monitor environmental conditions. One interesting new technique for preventative conservation involved developing a 3D printed prosthesis implanted with sensors that monitor environmental conditions and transmit them to a remote receiver. This system was used for the Stone Sepulcher of Queen Mary of Castile, located in the Royal Monastery of the Holy Trinity of Valencia, Spain (Niquet et al. 2020). This preservation can extend beyond the physical stability of the object or building. Raw data captured by photogrammetry and laser scanning at a site can be combined with photographs, videos, interviews, on-site observation of rituals, and recordings of building specifics, linking material evidence to intangible personal histories and culture (Akhtar et al. 2017).

Preservation of artifacts in case of destructive sampling

Laser-induced breakdown spectroscopy (LIBS) is an atomic emission spectroscopy that uses a high-energy laser pulse to atomize, excite, and ionize a small amount of sample extracted from the source (less than 100 pg, a micro-destructive technique) to create a plasma. The light emitted by the plasma is analyzed with a spectrometer, and the emission lines are characteristic of each type of atom present in the plasma. LIBS reduces the need for sampling by selecting only a few areas and can provide topographic information about the distribution of substances on the surface. This technique can probe layers beyond the surface and has been applied in situ in studies of mural paintings and polychromy (Detalle 2018).

Virtual restoration using 3D digitization data

In conservation, there are two types of restoration: manual and virtual. The manual restoration process can involve human error that causes secondary harm to the artifact, but virtual restoration does not cause extra harm because it does not come into contact with the artifact (Thamir and Abed 2020). There are three types of virtual restoration: (1) when the missing parts can be found and reassembled after scanning, (2) when the missing parts cannot be found but can be estimated based on the existing part of the model, and (3) when restoration is applied based on photographs, archival documents, or other historical research (Li et al. 2010). 3DReshaper software offers tools for reconstructing missing and damaged parts, including symmetry, surface-to-surface registration, scaling, and more. 3DCoat software, often used in game-development workflows, offers tools to enhance the mesh and to sculpt, modify, smooth, or clean it (Thamir and Abed 2020).
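
Where an object is roughly bilaterally symmetric, one simple virtual-restoration trick of the kind such tools support is to mirror the surviving half across the symmetry plane to estimate the missing geometry. The sketch below shows this with trimesh; the file name, the chosen symmetry plane, and the assumption that the plane passes through the origin are all placeholders, and real cases would fit the plane to the data first.

```python
# Hedged sketch: estimating a missing half of a roughly symmetric object by mirroring
# the surviving half across an assumed symmetry plane (here the YZ plane through the
# origin). File name and plane are placeholders; real cases need the plane fitted first.
import trimesh

surviving_half = trimesh.load("statue_fragment.ply")

# Reflection across the plane with normal (1, 0, 0) passing through the origin.
mirror = trimesh.transformations.reflection_matrix(point=[0, 0, 0], normal=[1, 0, 0])

estimated_half = surviving_half.copy()
estimated_half.apply_transform(mirror)
estimated_half.invert()                 # reflection flips face winding; restore normals

# Combine the original and mirrored halves into one proposal for the restored shape.
proposal = trimesh.util.concatenate([surviving_half, estimated_half])
proposal.export("statue_restoration_proposal.ply")
```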

Virtual reconstruction is useful in many fields, including paleontology and archaeology. Santamaría et al. (2020) documented the use of structured light (LED and infrared) and convergent photogrammetry to geometrically document and reconstruct replicas of paleontological sites, including a reconstructed model of a dinosaur footprint. Precision for the model, depending on illumination, proper settings on the scanning equipment, and postprocessing, could reach the range of a tenth of a millimeter (Santamaría et al. 2020). One archaeological example is the virtual restoration and regeneration of the tomb of Seti I (Lowe 2017). The original replicator, Giovanni Battista Belzoni, used a technique applied directly to the paint that removed some of it. The Factum Foundation has used the Lucida scanner for multiple fabrication applications; its most notable project was the scan and facsimile of a section of the tomb of Seti I. Restoration encompasses any project that uses digital recording and fabrication to have a direct effect on the original element of concern, including repairs, replacements, and reconstructions (Weigert et al. 2019).

Accessibility of collections and outreach use of materials


Contextualizing objects

By engaging upcoming and diverse user groups and presenting collections in a more engaging way (3D objects, immersive environments, and 3D semantic structures), institutions give younger audiences more browsing options beyond specific searches. Game technology, including 3D environments and browsing tools, can create a context around collection objects (Cameron 2003). For Project Anqa, raw data captured by photogrammetry and laser scanning at the site is combined with photographs, videos, interviews, on-site observation of rituals, and recordings of building specifics, linking material evidence to intangible personal histories and culture (Akhtar et al. 2017).

Augmented reality, virtual reality, and game engines

The MayaArch3D project exemplifies using 3D digitization to digitally record three-dimensional sites for virtual exploration. Remondino et al. (2009) conducted reality-based 3D modelling of the East Court, with its Temple 33, inside the old Acropolis of the ancient Maya kingdom of Copán, Honduras. Richards-Rissetto et al. (2012) discuss using Microsoft’s Kinect to move through the archaeological site in order to create a sense of spatial awareness and embodiment, enhancing the user experience of the model by promoting natural rather than device-based interactions. Mankoff and Russo (2013) discuss using the Kinect, an input device designed for the Microsoft Xbox 360 game system, as a low-cost, high-resolution, short-range 3D/4D camera for earth scientists, producing data very similar to a terrestrial light detection and ranging (LiDAR) sensor. The cost is about $200, and the optimal operating range is 0.5 to 5 m. The Kinect detects the distance from itself to objects in its field of view by emitting a known pattern of infrared dots with a projector and recording it with an IR camera (Mankoff and Russo 2013).
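
As a concrete illustration of what a depth camera like the Kinect produces, the sketch below back-projects a depth image into a 3D point cloud using the pinhole camera model. The intrinsic parameters are only nominal, Kinect-like values, and the depth frame is synthetic so the snippet runs standalone.

```python
# Illustrative sketch: turning a depth image into a 3D point cloud with the pinhole
# camera model. Intrinsics below are only nominal Kinect-like values; the depth frame
# here is synthetic (a flat surface 2 m away) purely so the snippet runs standalone.
import numpy as np

# Nominal intrinsics (focal lengths and principal point, in pixels) for a 640x480 sensor.
fx, fy, cx, cy = 585.0, 585.0, 320.0, 240.0
height, width = 480, 640

depth_m = np.full((height, width), 2.0)      # synthetic depth frame, metres

# Pixel grid -> camera-space coordinates: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
u, v = np.meshgrid(np.arange(width), np.arange(height))
x = (u - cx) * depth_m / fx
y = (v - cy) * depth_m / fy
points = np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)

valid = points[:, 2] > 0                     # drop pixels with no depth return
print("Point cloud size:", valid.sum(), "points")
```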

Similar to the MayaArch3D project and Kinect applications, García-León et al. (2019) used gaming technologies to promote interaction with a Baroque altarpiece of Our Father Jesus Nazarene of Cartagena, with movable statues and objects, to better communicate religious symbolism and traditional folklore. The challenge was that the digitized mesh had 6 million triangular elements, which is not suitable for gaming programs that require low-poly models of a few thousand triangles (García-León et al. 2019). Gaming technology can therefore create interactive models that connect users to cultural heritage objects and sites they may never encounter in person.
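
Bridging that gap between dense survey meshes and game-ready assets is usually done with mesh decimation. The hedged sketch below reduces a dense mesh to a few thousand triangles using Open3D’s quadric decimation; the file names and target triangle count are placeholders, and production pipelines would typically also bake the lost detail into normal and texture maps.

```python
# Hedged sketch: decimating a dense survey mesh to a game-ready low-poly model with
# Open3D's quadric edge-collapse simplification. File names and target triangle count
# are placeholders; real pipelines usually also bake detail into normal/texture maps.
import open3d as o3d

dense = o3d.io.read_triangle_mesh("altarpiece_scan.ply")
print("Input triangles:", len(dense.triangles))

low_poly = dense.simplify_quadric_decimation(target_number_of_triangles=5000)
low_poly.remove_unreferenced_vertices()
low_poly.compute_vertex_normals()
print("Output triangles:", len(low_poly.triangles))

o3d.io.write_triangle_mesh("altarpiece_lowpoly.obj", low_poly)
```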

Although content creators can assist in content preservation by choosing archival formats, capturing metadata, and preparing their materials for archiving, libraries should take the lead in providing information about standards and best practices (Grayburn et al. 2019). Champion (2017) applies UNESCO’s definition of intangible heritage to virtual heritage, creating Virtual Heritage Environments that convey the situated meaning of cultural heritage to its stakeholders (oral history, etc.). Similar to combining 3D digitization with associated cultural meaning, 3D models created for conservation purposes within a Heritage Building Information Model environment can also be used for immersive and interactive reality purposes, often within a dedicated app for mobile devices (Masciotta et al. 2019).

HafenCity University Hamburg developed a virtual reality museum of an old Segeberg town house. Digital photogrammetry and CAD modelling were used to document the six different historic construction stages of the town house. This offered the public an interactive, computer-based tour for visitors to explore the exhibit and collect information, and allowed others to immerse themselves in 3D with the HTC Vive virtual reality system (Kersten 2017). Grayburn et al. (2019) argue that libraries and digital curators should treat academic outputs that use 3D/VR as scholarly products, build a 3D/VR scholarly community to support knowledge exchange across a range of stakeholder groups, and develop technical tools, training, and infrastructure to support a 3D/VR research ecosystem. Libraries offer expertise in data management and digital scholarship practices along with their ability to engage community members (Grayburn et al. 2019). While many museums aim to produce an online gallery or interactive model, these models have limited reuse potential and do not represent the raw data that an archive aims to preserve and document (Johnston 2017).

Easier downloads and use on mobile devices: WebGL standards

In the early 2000s, 3D interactive content was published as a Shockwave file, viewable by anyone with the latest version of the free Shockwave viewer plugin. Unfortunately, Shockwave 3D does not allow a simple navigational 3D experience to be constructed as easily as virtual reality markup language (Drake et al. 2003). Web and mobile devices are good ways to disseminate cultural heritage assets. By using the WebGL standard, modern web browsers can access the 3D hardware needed without plug-ins or extensions being installed on the computer (Di Benedetto et al. 2014).

Transmission and rendering of 3D models is possible on mobile devices, but there are still issues, including efficient transmission of complex data, efficient and interactive rendering, and the need for efficient and easy-to-use manipulation interfaces. Mobile access to 3D models could be useful for museum visitors and for restorers, as a platform supporting inspection and annotation (Di Benedetto et al. 2014). Survey respondents noted the important outcomes of a 3D digitization project, including easy viewing and navigation without specialized software, conducting accurate measurements on a 3D digital surrogate of the surface or volume, creating cross-sections (CT), and creating high-resolution output that ties into outreach activities (Hess 2015). According to a 2017 survey on 3D content published by Europeana, respondents would like the ability to interact with 3D content in Europeana and to download models for reuse (Ioannides et al. 2018).

Turco et al. (2019) discuss a workflow and methodology for the creation and publication of complex models. It is important to consider publishing museum collections on the semantic web, and the proposed methodology focuses on visualizing 3D object shapes and additional information by networking 3D models, often through annotation (Turco et al. 2019). For display of 3D files in a gallery exhibition, models have been uploaded to the ISTI-CNR Visual Computing Lab’s open source WebRTI viewer. To address slow file load rates, one option besides using low-resolution files is to use interactives hosted locally on Mac mini computers hidden under the touchscreens, allowing high-resolution images in a local web environment. To avoid lag, the application reloaded the files upon startup each morning. The exhibit utilized STL files, which do not carry color data (Walthew et al. 2020). For Web3D techniques based on HTML5, WebGL is a 3D drawing standard that allows JavaScript and OpenGL to be combined. WebGL can provide hardware-accelerated 3D rendering for the HTML5 canvas and can display 3D scenes and models more smoothly (Shao 2013).


Sharing results on Sketchfab

Use of Sketchfab and advantages

Many digitization projects share their results on Sketchfab (Akhtar et al. 2017; CARTI 2020). The IUPUI Center uses Sketchfab to provide access to objects and embeds the Sketchfab viewer into the digital object management system CONTENTdm (CARTI 2020). The final step is to publish the work by uploading the polygon mesh to a web 3D viewer accessible on the internet, such as Sketchfab or Google Poly (Johnson et al. 2018). According to a survey of members of the Europeana Aggregators’ Forum and EuropeanaTech community, 50% of respondents published their content with Sketchfab. The two main platforms for hosting 3D cultural heritage content are Sketchfab and Scan the World/MyMiniFactory (Fernie 2019).
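
As an illustration of what publishing to a web 3D viewer can look like programmatically, the hedged Python sketch below uploads a model archive through Sketchfab’s Data API v3 using the requests library. The endpoint and field names reflect that API as publicly documented at the time of writing, but they, along with the token, file name, and metadata values, should be treated as assumptions to verify against Sketchfab’s current documentation.

```python
# Hedged sketch: uploading a model to Sketchfab via its Data API v3 with the requests
# library. Endpoint and field names follow the public API documentation at the time of
# writing; the token, file path, and metadata are placeholders to replace and verify.
import requests

API_TOKEN = "YOUR_SKETCHFAB_API_TOKEN"          # placeholder
MODEL_ENDPOINT = "https://api.sketchfab.com/v3/models"

with open("denture_model.zip", "rb") as model_file:   # zipped OBJ/glTF + textures
    response = requests.post(
        MODEL_ENDPOINT,
        headers={"Authorization": f"Token {API_TOKEN}"},
        files={"modelFile": model_file},
        data={
            "name": "Ivory denture, lower jaw",
            "description": "Photogrammetric model created for collections access.",
            "tags": ["cultural-heritage", "dentistry"],
        },
    )

response.raise_for_status()
print("Upload accepted, model uid:", response.json().get("uid"))
```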

There are currently more than 650 museums using Sketchfab. Sketchfab is popular because it is a free, easy-to-use service with web browser support, embedded audio, and 3D annotations (the ability to add a clickable hotspot to the 3D model that displays a text or image pop-up and moves the viewer’s perspective to a user-defined angle, zoom, and pan), and it makes it possible to view any uploaded model on the website in VR and AR (Flynn 2019b). There was an increase in museum sign-ups in late 2015 due to the launch of Sketchfab’s cultural heritage program, which offers free PRO-tier subscriptions to cultural heritage institutions. The PRO version allows for larger file uploads, more annotations, private models, and other advantages. Another increase occurred in 2016, with the addition of support for WebVR, a new standard for browser-based VR content (Flynn 2019b).

The simplest way to put 3D models online is to add links to, or embed, models from the institution’s Sketchfab profile in official collection pages and exhibition sites. This also provides outreach data, as each model on Sketchfab has publicly visible statistics, including the number of times the model has been viewed anywhere online and the number of likes and comments from members of the Sketchfab community (Flynn 2019b). Sketchfab is also the only community where a viewer can leave a comment directly on an individual collection item, including 3D models, and Sketchfab allows any user to create their own collections of 3D models from other users (Flynn 2019b).

Smith (2018) used low-cost but cutting-edge 3D scanning technology to create high-resolution images of ethnographic objects in the David T. Vernon Collection of Native American social and cultural objects held in the museum collections of Grand Teton National Park. The team created 45 high-resolution 3D models and uploaded them to Sketchfab (Smith 2018). For sharing 3D models, Trognitz et al. (2016e) recommend Sketchfab, which supports the upload, publication, and visualization of 3D models on the web; 3DHOP, an academic open-source platform that is more flexible than Sketchfab and supports different interaction modes; and Aton, which focuses on scene-graph concepts.

Disadvantages of Sketchfab

Viewers that allow access to 3D content online offer an advantage in that they do not require software to be downloaded; these include service platforms like Sketchfab and self-hosted viewers like the Smithsonian’s Voyager (Fernie 2019). The problem with online public platforms like Sketchfab is that these services could be lost to corporate abandonment (Kilbride 2017).


Among those interviewed at biology, paleontology, and archaeology museums, 88% consider it important to share 3D models to communicate research, but only 36% were happy with the current infrastructure for storing and sharing these data (Wittenberg and Hardesty 2017).

3D printing: The varied uses of the replica

Stereolithography refers to the polymerization of photocurable resins to produce a part in a stepwise manner. The machine used is called a stereolithography apparatus, or SLA. SLA produces an object in an additive way. Software divides a 3D model into slices and assists in orienting the part during the build and in creating supports, which are removed by hand later (Wachowiak and Karas 2009). Using 3D scanning and fabrication, it is possible to recreate whole objects or portions of an object, including the restoration of missing parts too delicate to be molded (Wachowiak and Karas 2009). 3D scanning can also support the sale of object replicas (Kilbride 2017).
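
To illustrate the slicing step in the simplest possible terms, the sketch below cuts a mesh into horizontal cross-sections with trimesh. The layer height and file name are placeholders, and real print-preparation software additionally generates supports, infill, and machine-specific instructions.

```python
# Illustrative sketch of the slicing step: cutting a mesh into horizontal cross-sections
# with trimesh. Layer height and file name are placeholders; real slicers also generate
# supports, infill, and machine-specific instructions.
import numpy as np
import trimesh

mesh = trimesh.load("replica_part.stl")

layer_height = 0.1                                   # model units per layer (placeholder)
z_min, z_max = mesh.bounds[:, 2]
heights = np.arange(layer_height, z_max - z_min, layer_height)  # offsets above the base

# Intersect the mesh with a stack of horizontal planes; each entry is one layer outline.
sections = mesh.section_multiplane(
    plane_origin=mesh.bounds[0], plane_normal=[0, 0, 1], heights=heights)

layers = [s for s in sections if s is not None]      # planes that actually hit the mesh
print(f"{len(layers)} layers at {layer_height} units per layer")
```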

The experience of creating a full 360-degree scan, which can then produce a 3D print, allows a visitor to go deeper into their experience with the object. The different parts of 3D interaction, including scanning, designing, manipulating, printing, and sharing, increase a visitor’s interaction time with the object and can stimulate deeper engagement. To create a 3D model of an object, the visitor must photograph it from every angle, requiring close examination of the object’s form and thinking about angles, shadows, and physical details (Neely and Langer 2013). Once these meshes are published, websites such as TurboSquid, among others, can be used to print and reuse the point cloud and mesh data (Johnson et al. 2018).

In a digital fabrication survey, Scopigno et al. (2017) found that current and potential applications for fabrication in cultural heritage include the production of copies at any scale, support for visually impaired people, temporary or permanent replacement of originals, temporary loan of artworks for exhibitions, production of tailored packaging for shipping or displaying cultural objects, education and experimentation in museums, sensorized replicas in museums, and restoration (Scopigno et al. 2017).


Conclusion

There are many 3D digitization technologies available across different price ranges, expertise levels, and more. In some cases it is best to use a combination of technologies, specific to each type of item being digitized. Advances in technology encourage the fusion of different 3D technologies. For example, Morishima et al. (2017) discovered a large void above the Grand Gallery of the Great Pyramid using nuclear emulsion films installed in the Queen’s chamber. As part of the project, the team created an accurate 3D model of the pyramid by combining architectural drawings, photogrammetry, and laser scanning measurements from inside and outside the pyramid. This model was used in a real-time muography simulator (Morishima et al. 2017). The project used nuclear emulsion technology to identify voids in the structure after 3D digitization, which represents another innovation.

More researchers are looking to combine 3D surface digitization with 3D internal digitization. As Weber et al. (2019) note, 3D digitization involves capturing geometry, texture, and material properties; recently developed techniques from the Fraunhofer Innovations for Cultural Heritage process combine terahertz technology, confocal microscopy, and ultrasound tomography with photogrammetry, stripe light acquisition, and endoscopy to create an interior and exterior model of the object. While 3D surface analysis aids understanding of surface material deterioration over time, knowing the status of internal 3D structures helps ensure the stability of the statue, building, or other cultural heritage object over time.

For this reason, it is important to understand the advantages and disadvantages of each 3D object digitization technique for the object at hand. Moving forward, it is also important to identify useful, standardized file formats to ensure data usability over time. 3D digitization standards are already underway, and it is exciting to see what new technologies and updates will emerge over the next decade. For now, this guide offers a preliminary encounter with the literature on this topic, along with content from personal research and correspondence with leaders in the field.


References

Adams, Thomas L., Christopher Strganac, Michael J. Polcyn, and Louis L. Jacobs. "High resolution three-dimensional laser-scanning of the type specimen of Eubrontes (?) glenrosensis Shuler, 1935, from the Comanchean (Lower Cretaceous) of Texas: implications for digital archiving and preservation." Palaeontologia Electronica 13, no. 3 (2010): 1-12. http://palaeo-electronica.org/2010_3/226/index.html. Akhtar, Saima, Goze Akoglu, Stefan Simon, and Holly Rushmeier. “Project Anqa: Digitizing and Documenting Cultural Heritage in the Middle East.” International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences 42 (2017): 1-5. doi: 10.5194/isprs-archives-XLII-2-W5-1-2017. Alliez, Pierre, Laurent Bergerot, Jean-François Bernard, Clotilde Boust, George Bruseker, Nicola Carboni, Mehdi Chayani et al. "Digital 3D Objects in Art and Humanities: challenges of creation, interoperability and preservation. White paper." In Digital 3D Objects in Art and Humanities: challenges of creation, interoperability and preservation, 2017. hal- 01526713v2. Ali Khodja, N., H. Zeghlache, F. Benali, and O. Guani. "The Use of New Technologies in the Restoration and Conservation of Build Cultural Heritage/The Case of the Statue of Fouara, Setif, Algeria." International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences (2019). doi: 10.5194/isprs-archives-XLII-2-W11-43- 2019. Apollonio, F. I., M. Gaiani, W. Basilissi, and L. Rivaroli. "Photogrammetry driven tools to support the restoration of open-air bronze surfaces of sculptures: an integrated solution starting from the experience of the Neptune Fountain in Bologna." The International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences 42 (2017): 47-54. doi: 10.5194/isprs-archives-XLII-2-W3-47-2017. Ashley, Michael. “Digital Preservation Workflows for Museum Imaging Environments.” From "Principles and practices of robust photography-based digital imaging techniques for museums," edited by Mark Mudget et al. 2010. In VAST 2010: The 11th International Symposium on Virtual Reality, Archaeology and Cultural Heritage. edited by Artusi, A., Joly-Parvex, M., Lucet, G., Ribes, A., and Pitzalis, D. https://static1.squarespace.com/static/5827866be3df28280c942f49/t/5914d40744024311d 8eeac5b/1494537239177/ep_vast_2010_ma.pdf. Audit and Certification of Trustworthy Digital Repositories. Recommended Practice CCSDS 652.0-M-1. Magenta Book. The Consultative Committee for Space Data Systems, 2011. https://public.ccsds.org/pubs/652x0m1.pdf. Balzani, M., F. Maietti, and L. Rossato. "3D Data Processing Toward Maintenance and Conservation. The Integrated Digital Documentation of Casa de Vidro." In 8th International Workshop 3D-ARCH 3D Virtual Reconstruction and Visualization of Complex Architectures", vol. 42, pp. 65-72. Copernicus, 2019. http://hdl.handle.net/11392/2399436. Barns, Adam. Close Range-Photogrammetry: Guide to Good Practice. Section 1. Introduction.


1.1 Close-Range Digital Photogrammetry in an Archaeological Context. Archaeology Data Service / Digital Antiquity: Guides to Good Practice. https://guides.archaeologydataservice.ac.uk/g2gp/Photogram_1-1. Barsanti, S. Gonizzi, and G. Guidi. "3D digitization of museum content within the 3D-ICONS project." ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci, II-5 W 1 (2013): 151- 156. BCR’s CDP Digital Imaging Best Practices Working Group. BCR’s CDP Digital Imaging Best Practices Version 2.0. Bibliographic Center for Research, 2008. https://mwdl.org/docs/digital-imaging-bp_2.0.pdf. Beagrie, N., and Kilbride, W. File formats and standards, Digital Preservation Handbook, 2nd. Edition. Digital Preservation Coalition, 2020. https://www.dpconline.org/handbook/technical-solutions-and-tools/file-formats-and- standards. Blundell, Jon, Fuhrig, Lynda Schmitz, Little, Holly, Pilsk, Suzanne, Rossi, Vince, Snyder, Rebecca, Stern Beth, Sullivan, Ben, Tomerlin, Melinda Jane, and Webb, Keats. Smithsonian Institution 3D Metadata Overview, v0.6. Smithsonian’s Digitization Program Advisory Committee’s 3D Sub-Committee’s Metadata Working Group. Smithsonian 3D Digitization. https://dpo.si.edu/sites/default/files/resources/Smithsonian%20Institution%203D%20Met adata%20Model%20-%20Overview%20Document%20v0.6.pdf. Boeykens, Stefan, and Elena Bogani. "Metadata for 3D models. How to search in 3D model repositories?" ICERI 2008 Proceedings (2008): 11. Brecko, Jonathan, and Aurore Mathys. "Handbook of best practice and standards for 2D+ and 3D imaging of natural history collections." European Journal of Taxonomy 623 (2020). doi: https://doi.org/10.5852/ejt.2020.623. Brecko, Jonathan, Mathys, Aurore, VandenSpiegel, Didier, Semal, Patrick, etc. Handbook of best practice and standards for 3D imaging of Natural History Specimens. SYNTHESYS3, 2016. https://drive.google.com/file/d/0B2yIFo9B44xfWk1WSHgxVU51WVk/view. Brosseau, Kathleen, Choquette, Myléne and Louise Renaud. "Digitization Standards for the Canadian Museum of Civilization Corporation.” Library and Archives Canada Cataloguing in Publication. Canadian Museum of Civilization, 2006. https://www.canada.ca/content/dam/chin-rcip/documents/services/digitization/standards- canadian-museum-civilization/smcc_numerisation-cmcc_digitization- eng.pdf?WT.contentAuthority=4.4.10. Budak, Igor, Zeljko Santosi, Vesna Stojakovic, Daniela Korolija Crkvenjakov, Ratko Obradovic, Mijodrag Milosevic, and Mario Sokac. "Development of Expert System for the Selection of 3D Digitization Method in Tangible Cultural Heritage." Tehnički vjesnik 26, no. 3 (2019): 837-844. doi: 10.17559/TV-20180531120300. Callieri, Marco, Matteo Dellepiane, Paolo Cignoni, and Roberto Scopigno. “Processing sampled 3D data: reconstruction and visualization technologies,” in Digital Imaging for Cultural Heritage Preservation: Analysis, Restoration, and Reconstruction of Ancient Artworks, 2- 11, ed. by Filippo Stanco, Sebastiano Battiato, and Giovanni Gallo (CRC Press, 2011): 103-132. Callieri, Marco, Guido Ranzuglia, Matteo Dellepiane, Paolo Cignoni, and Roberto Scopigno.


"Meshlab as a complete open tool for the integration of photos and colour with high- resolution 3D geometry data." Computational Application of Quantitative Methods in Archaeology (2012): 406-16. Cameron, Fiona. "Digital Futures I: Museum collections, digital technologies, and the cultural construction of knowledge." Curator: The Museum Journal 46, no. 3 (2003): 325-340. doi: 10.1111/j.2151-6952.2003.tb00098.x. CARTI. Guidelines for the Creation for Digital Collections: Digitization Best Practices for Three-Dimensional Objects. Consortium of Academic and Research Libraries in Illinois, 2020. https://www.carli.illinois.edu/sites/files/digital_collections/documentation/guidelines_for _images.pdf. Ceballos, Laura. “The Coremans project: Conservation Criteria of polychrome Sculpture.” Ministerio de Educación, Cultura y Deporte. Secretaria General Técnica, 2017. Champion, Erik. "The role of 3D models in virtual heritage infrastructures." Cultural Heritage Infrastructures in Digital Humanities (2017): 15-35. Clini, P., Frapiccini, N., Mengoni, M., Nespeca, R., Ruggeri, L. “SfM Technique and Focus Stacking for Digital Documentation of Archaeological Artefacts.” Proc. 23rd ISPRS Congress, Prague, 229–36. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B5. ISPRS, 2016. Collins, Daniel, Capco, David, and Sethuraman Panchanathan. “KDI: 3D Knowledge: Acquisition, Representation, and Analysis in a Distributed Environment.” School of Human Revolution and Social Change, Arizona State University, 2004. https://shesc.asu.edu/research/research-topic/kdi-3d-knowledge-acquisition- representation-and-analysis-distributed. Cornelius, Izak, L. C. Swanepoel, Anton Du Plessis, and Ruhan Slabbert. "Looking inside votive creatures: Computed tomography (CT) scanning of ancient Egyptian mummified animals in Iziko Museums of South Africa: A preliminary report." Akroterion 57, no. 1 (2012): 129-148. https://hdl.handle.net/10520/EJC132288. Croce, Valeria, Gabriella Caroti, Andrea Piemonte, and MARCO GIORGIO Bevilacqua. "Geomatics for Cultural Heritage conservation: integrated survey and 3D modeling." In 2019 IMEKO TC-4 International Conference on Metrology for Archaeology and Cultural Heritage, pp. 271-276. 2019 IMEKO, 2019. Crouch, Michelle. "Digitization as Repatriation?." Journal of Information Ethics 19, no. 1 (2010): 45-56. doi: 10.3172/JIE.19.1.45. D'Andrea, Andrea, and Kate Fernie. "CARARE 2.0: a metadata schema for 3D Cultural Objects." In 2013 Digital Heritage International Congress (DigitalHeritage), vol. 2, pp. 137-143. IEEE, 2013. doi: 10.1109/DigitalHeritage.2013.6744745. Dellepiane, Matteo. 3D models from un-calibrated images and uses of MeshLab. In VAST 2010: The 11th International Symposium on Virtual Reality, Archaeology and Cultural Heritage (2010). Denard, Hugh. The London Charter for the Computer-Based Visualisation of Cultural Heritage, 2009. Accessed at http://www.londoncharter.org/. Denard, Hugh. "A new introduction to the London Charter," in Paradata and transparency in virtual heritage, ed. Anna Bentkowska-Kafel, Denard Hugh, and Drew Baker (Ashgate Publishing, Ltd., 2012): 57-72. Detalle, Vincent. “Laser-Induced Breakdown Spectroscopy (LIBS),” in Digital Techniques for


Documenting and Preserving Cultural Heritage, ed. Anna Bentkowska-Kafel and Lindsay MacDonald (ARC Humanities Press, 2018): 221-224. Di Benedetto, Marco, Federico Ponchio, Luigi Malomo, Marco Callieri, Matteo Dellepiane, Paolo Cignoni, and Roberto Scopigno. "Web and mobile visualization for cultural heritage." In 3D Research Challenges in Cultural Heritage, ed. Marinos Ioannides and Q. Ewald. (Berlin: Springer, 2014): 18-35. doi: 10.1007/978-3-662-44630-0_2. “Digital Asset Management and Museums - An Introduction.” Government of Canada, 2017. https://www.canada.ca/en/heritage-information- network/services/collections-management-systems/digital-asset-management- museums.html. “Digital Preservation Policy 4th Edition.” National Library of Australia, 2013. https://www.nla.gov.au/policy-and-planning/digital-preservation-policy. Di Pietra, V., E. Donadio, D. Picchi, L. Sambuelli, and A. Spanò. "Multi-source 3D models supporting ultrasonic test to investigate an egyptian sculpture of the archaeological museum in Bologna." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 42, no. 2/W3 (2017): 259-266. doi: 10.5194/isprs- archives-XLII-2-W3-259-2017. Doerr, Martin, Bruseker, George, Bekiari, Chryssoula, Ore, Christian Emil, Belios, Thanasis, and Stead, Stephen. Volume A: Definition of the CIDOC Conceptual Reference Model. Version 6.2.9. ICOM/CIDOC Documentation Standards Group, Continued by the CRM Special Interest Group, 2020. http://www.cidoc-crm.org/Version/version-6.2.9. Doerr, Martin, and Maria Theodoridou. "CRMdig: A Generic Digital Provenance Model for Scientific Observation." TaPP 11 (2011): 20-21. Donadio, Elisabetta, A. Spanò, L. Sambuelli, and Daniela Picchi. "Three-Dimensional (3D) modelling and optimization for multipurpose analysis and representation of ancient statues." Latest Developments in Reality-Based 3D Surveying and Modelling, ed. Fabio Remondino, Andreas Georgopoulos, Diego González-Aguilera, and Panagiotis Agrafiotis (MDPI AG-Multidisciplinary Digital Publishing Institute, 2018): 95-118. Dondi, Piercarlo, Luca Lombardi, Marco Malagodi, and Maurizio Licchelli. "3D modelling and measurements of historical violins." Acta IMEKO 6, no. 3 (2017): 29-34. Doyle, Julie, Herna Viktor, and Eric Paquet. "A metadata framework for long term digital preservation of 3D data." International Journal of Information Studies 1, no. 3 (2009a). Doyle, Julie, Herna Viktor, and Eric Paquet. "Long-term digital preservation: preserving authenticity and usability of 3-D data." International journal on digital libraries 10, no. 1 (2009b): 33-47. doi: 10.1007/s00799-009-0051-7. Drake, Karl-Magnus, Justrell, Borje, and Tammaro, Anna Maria. Good Practice Handbook Version 1.2. Minerva Working Group 6. Identification of good practices and competence centers, 2003. https://www.minervaeurope.org/structure/workinggroups/goodpract/document/bestpractic ehandbook1_2.pdf. Earl, G., K. Martinez, and H. Pagi. "Reflectance Transformation Imaging (RTI) System for Ancient Documentary Artefacts." (2010): 116-119. http://eprints.soton.ac.uk/id/eprint/271582. “FADGI Program: Impacts and Benefits.” FADGI, 2017. http://www.digitizationguidelines.gov/about/FADGI-impacts_20170126.pdf. Fernie, Kate. "CARARE: Connecting archaeology and architecture in Europeana." Uncommon


Culture (2012): 85-92. https://uncommonculture.org/ojs/index.php/UC/article/view/4753. Fernie, Kate. 3D content in Europeana task force. Europeana Network Association Members Council. Task Force Report, 2019. https://pro.europeana.eu/project/3d-content-in- europeana. Flynn, Thomas. “Cultural Heritage Users Survey 2019.” Sketchfab, 2019a. https://docs.google.com/presentation/d/1XNwdeKAZCOgkAi8UdrpKy3vxuZKQwO8yo QDMWQ2BVno/edit?usp=sharing. Flynn, Thomas. "What Happens When You Share 3D Models Online (In 3D)?” in 3D/VR in the Academic Library: Emerging Practices and Trends, ed. Jennifer Grayburn, Zack Lischer- Katz, Kristina Golubiewski-Davis, and Veronica Ikeshoji-Orlati (Council on Library and Information Resources, 2019b): 73-86. Fraile, Isabel Rey, Dorda, Beatriz Álvarez, Inés, Fernández Álvarez, Aguilar, Javier Fuertes, Haring, Elisabeth, Bouetel, Virginie, Fulcher, Tim K., von Rintelen, Thomas, Dekker, René, Mackenzie-Dodds, Jacqueline, Hollingsworth, Michelle, Vacek, Frantisek, Handsdörfer, Anna, Smitz, Nathalie, Smirnova, Larissa, and Virgilio Massimilano. “Protocols for DNA extraction: Protocols for DNA extraction made available.” SYNTHESYS 3 Synthesis of Systematic Resources, 2016. Gabellone, Francesco, Maria Chiffi, Davide Tanasi, and Michael Decker. "Integrated Technologies for Indirect Documentation, Conservation and Engagement of the Roman Mosaics of Piazza Armerina (Enna, Italy)." In International and Interdisciplinary Conference on Image and Imagination, ed. Enrico Cicaló (Springer, Cham, 2019): 1016- 1028. doi: 10.1007/978-3-030-41018-6_83. Gaiani, Marco, Fabrizio Ivan Apollonio, and Filippo Fantini. "A comprehensive smart methodology for museum collection digitization." EGA Expresión Gráfica Arquitectónica 25, no. 38 (2020): 170-181. doi: 10.4995/ega.2020.12281. Gianpaolo, P, and Corsini, M. Visualization of RTI images. In VAST 2010: The 11th International Symposium on Virtual Reality, Archaeology and Cultural Heritage (2010). García-León, Josefina, Paloma Sánchez-Allegue, Concepción Peña-Velasco, Luca Cipriani, and Filippo Fantini. "Interactive dissemination of the 3D model of a baroque altarpiece: a pipeline from digital survey to game engines." SCIRES-IT-SCIentific RESearch and Information Technology 8, no. 2 (2019): 59-76. doi: 10.2423/i22394303v8n2p59. Gómez-Moron, A., P. Ortiz, R. Ortiz, J. Becerra, R. Radvan, A. Chelmus, L. Ratoiu et al. "Non-destructive techniques applied to in situ study of Maqsura at Cordoba cathedral (Spain)." In Science and Digital Technology for Cultural Heritage-Interdisciplinary Approach to Diagnosis, Vulnerability, Risk Assessment and Graphic Information Models: Proceedings of the 4th International Congress Science and Technology for the Conservation of Cultural Heritage (TechnoHeritage 2019), March 26-30, 2019, Sevilla, Spain, edited by Pilar Ortiz Calderón, Francisco Pinto Puerto, Philip Verhagen, Andrés J. Prieto (CRC Press, 2019): p. 338-342. Grayburn, Jennifer, Lischer-Katz, Zach, Golubiewski-Davis, Kristina, and Ikeshoji-Orlati, Veronica. 3D/VR in the Academic Library: Emerging Practices and Trends. Arlington, Virginia: Council on Library and Information Resources, 2019. https://www.clir.org/wp- content/uploads/sites/6/2019/02/Pub-176.pdf. Grosman, Leore, Oded Smikt, and Uzy Smilansky. "On the application of 3-D scanning technology for the documentation and typology of lithic artifacts." Journal of Archaeological Science 35, no. 12 (2008): 3101-3110. doi: 10.1016/j.jas.2008.06.011.


Guidi, Gabriele, Sara Gonizzi Barsanti, Laura Loredana Micoli, and Michele Russo. "Massive 3D digitization of museum contents." In Built heritage: Monitoring conservation management, ed. Lucia Toniolo, Boriani Maurizio, and Gabriele Guidi (Springer, Cham, 2015): 335-346. doi: 10.1007/978-3-319-08533-3_28. Guidi, Gabriele, and Fabio Remondino. "3D Modelling from real data,” in Modeling and simulation in engineering, ed. Catalin Alexandru (Rijeka, Croatia: InTech, 2012): 69- 102. http://hdl.handle.net/11582/239425. Groenendyk, Michael. "A further investigation into 3D printing and 3D scanning at the Dalhousie University Libraries: A year long case study." Canadian Association of Research Libraries (2013). Harvard College President and Fellows. Building for Tomorrow: Collaborative Development of Sustainable Infrastructure for Architectural and Design Documentation abstract, President and Fellows of Harvard College, 2017. https://www.imls.gov/sites/default/files/grants/lg-73-17-0004-17/proposals/lg-73-17- 0004-17-full-proposal-documents.pdf. Havele, A. “Why are open standards important for 3D? Web3D Showcase.” Virginia Tech Research Center, 2014. https://www.web3d.org/sites/default/files/presentations/Web3D%20Emerging%20Techn ology%20Showcase/3D%20Modernization%20%26amp%3B%20Innovation%20ROI%2 0Using%20Open%20Standards/3D-Modernization-Innovation--ROI-Using-Open- Standards.pdf. He, Y., Y. H. Ma, and X. R. Zhang. “‘Digital Heritage’ Theory and Innovative Practice.” International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences 42 (2017): 335-342. doi: 10.5194/isprs-archives-XLII-2-W5-335-2017. Hess, Mona. "Online survey about current use of 3D imaging and its user requirements in cultural heritage institutions." 2015 Digital Heritage, vol. 2 (2015): 333-338. IEEE. doi: 10.1109/DigitalHeritage.2015.7419517. Hess, Mona. “3D laser scanning,” in Digital Techniques for Documenting and Preserving Cultural Heritage, ed. Anna Bentkowska-Kafel and Lindsay MacDonald (ARC Humanities Press, 2018): 199-206. Guery, Julien, Hess, Moa, and Aurore Mathys. “Photogrammetry,” in Digital Techniques for Documenting and Preserving Cultural Heritage, ed. Anna Bentkowska-Kafel and Lindsay MacDonald (ARC Humanities Press, 2018): 229-235. Hirst, Cara S., Suzanna White, and Sian E. Smith. "Standardisation in 3D Geometric Morphometrics: Ethics, Ownership, and Methods." Archaeologies 14, no. 2 (2018): 272- 298. doi: 10.1007/s11759-018-9349-7. Hollinger, R. Eric, Edwell John Jr, Harold Jacobs, Lora Moran-Collins, Carolyn Thome, Jonathan Zastrow, Adam Metallo, Günter Waibel, and Vince Rossi. "Tlingit-Smithsonian collaborations with 3D digitization of cultural objects." Museum Anthropology Review 7, no. 1-2 (2013): 201-253. Ikeuchi, Katsushi, Takeshi Oishi, Jun Takamatsu, Ryusuke Sagawa, Atsushi Nakazawa, Ryo Kurazume, Ko Nishino, Mawo Kamakura, and Yasuhide Okamoto. "The great buddha project: Digitally archiving, restoring, and analyzing cultural heritage objects." International Journal of Computer Vision 75, no. 1 (2007): 189-208. doi: 10.1007/s11263-007-0039-y. “Image capture standards.” National Library of Australia, 2020.


https://www.nla.gov.au/standards/image-capture. Ioannides, Marinos, Zarnic, Roko, Hagedorn-Saupe, Monika, Gordia, Sergiu, Pollé, Ad, Hazan, Susan, and Robert Davies. Final Report on Advanced documentation of 3D Digital Assets Task Force, Europeana Network Task Force, 2018. https://pro.europeana.eu/files/Europeana_Professional/Europeana_Network/Europeana_N etwork_Task_Forces/Final_reports/Europeana_Task_Force_on_Advanced_3D_Documen tation_Final%20report.pdf. Johnston, Lisa R. Curating Research Data Volume Two: A Handbook of Current Practice. Association of College & Research Libraries, 2017. Johnson, Jennifer, Miller, Derek, and Kristi Palmer. Advancing 3D Digitization for Libraries, Museums, and Archives. LYRASIS, 2018. https://www.lyrasis.org/Leadership/Documents/Advancing-3D-Digitization.pdf. Keklikoglou, Kleoniki, Sarah Faulwetter, Eva Chatzinikolaou, Patricia Wils, Jonathan Brecko, Jiří Kvaček, Brian Metscher, and Christos Arvanitidis. "Micro-computed tomography for natural history specimens: a handbook of best practice protocols." European Journal of Taxonomy 522 (2019): 1-55. doi: 10.5852/ejt.2019.522. Kersten, T. P., M. Lindstaedt, and D. Starosta. "Comparative Geometrical Accuracy Investigations of Hand-held 3D Scanning Systems-An Update." International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences 42, no. 2 (2018): 487-494. doi: 10.5194/isprs-archives-XLII-2-487-2018. Kilbride, William. "3D4Ever: why is it so hard to talk about the preservation of 3D data?." Journal of the Institute of Conservation 40, no. 2 (2017): 183-189. doi: 10.1080/19455224.2017.1317006. Koller, David, Bernard Frischer, and Greg Humphreys. "Research challenges for digital archives of 3D cultural heritage models." Journal on Computing and Cultural Heritage (JOCCH) 2, no. 3 (2010): 1-17. doi: 10.1145/1658346.1658347. Le Bouef, Patrick, Doerr, Martin, Emil Ore, Christian, and Stephen Stead. Definition of the CIDOC Conceptual Reference Model. ICOM/CIDOC CRM Special Interest Group, 2018. http://www.cidoc-crm.org/sites/default/files/2018-10- 26%23CIDOC%20CRM_v6.2.4_esIP.pdf. Levoy, Marc, Kari Pulli, Brian Curless, Szymon Rusinkiewicz, David Koller, Lucas Pereira, Matt Ginzton et al. "The digital Michelangelo project: 3D scanning of large statues." In Proceedings of the 27th annual conference on Computer graphics and interactive techniques, ed. Judith Brown (New York: ACM Press, 2000): 131-144. doi: 10.1145/344779.344849. Li, Renju, Tao Luo, and Hongbin Zha. "3D digitization and its applications in cultural heritage." In Digital Heritage EuropMed 2010. Lecture Notes in Computer Science 6436, ed. Marinos Ioannides, Dieter Fellner, Andreas Georgopoulos, and Diofantos G. Hadjimitsis (Berling: Springer, 2010): 381-388. doi: 10.1007/978-3-642-16873-4_29. Liao, Han-Teng, Man Zhao, and Si-Pan Sun. "A Literature Review of Museum and Heritage on Digitization, Digitalization, and Digital Transformation." In 6th International Conference on Humanities and Social Science Research, ed. Xeumei Du, Chunyan Huang, and Yulin Zhong (Atlantis Press, 2020) : 474-477. doi: 10.2991/assehr.k.200428.101. Lowe, Adam. Scanning Seti: The Re-Generation of a Pharaonic Tomb 200 Years in the Life

Cieslik 62

of a Tomb: Digital Recording in an Age of Mass Tourism and Anti-Ageing. Factum-Arte & Factum Foundation, 2017. https://www.factum- arte.com/resources/files/ff/articles/seti_basel_36.pdf. MacDonald, Lindsay. “Reflectance Transformation Imaging,” in Digital Techniques for Documenting and Preserving Cultural Heritage, ed. Anna Bentkowska-Kafel and Lindsay MacDonald (ARC Humanities Press, 2018): 237-242. Mankoff, Kenneth David, and Tess Alethea Russo. "The Kinect: a low‐cost, high‐resolution, short‐range 3D camera." Earth Surface Processes and Landforms 38, no. 9 (2013): 926- 936. doi: 10.1002/esp.3332. Martorelli, Massimo, Claudio Pensa, and Domenico Speranza. "Digital photogrammetry for documentation of maritime heritage." Journal of Maritime Archaeology 9, no. 1 (2014): 81-93. doi: 10.1007/s11457-014-9124-x. Masciotta, M. G., M. J. Morais, L. F. Ramos, D. V. Oliveira, L. J. Sánchez-Aparicio, and D. González-Aguilera. "A Digital-based Integrated Methodology for the Preventive Conservation of Cultural Heritage: The Experience of HeritageCare Project." International Journal of Architectural Heritage (2019): 1-20. doi: 10.1080/15583058.2019.1668985. Matthews, Neffra and Noble, Tommy. “Photogrammetric Principles, Examples, and Demonstration,” in VAST 2010: The 11th International Symposium on Virtual Reality, Archaeology and Cultural Heritage, ed. Alessandra Artusi (Eurographics Association, 2010). Mathys, Aurore and Jonathan Brecko. “Focus Stacking,” in Digital Techniques for Documenting and Preserving Cultural Heritage, ed. Anna Bentkowska-Kafel and Lindsay MacDonald (ARC Humanities Press, 2018): 213-215. Mathys, Aurore, Lemaitre, Serge, Brecko, Jonathan, & Semal, Patrick. “Agora 3D: evaluating 3D imaging technology for the research, conservation and display of museum collections.” Antiquity Journal 336 (2013). http://antiquity.ac.uk/projgall/mathys336/. Mathys, Aurore, Jonathan Brecko, and Patrick Semal. "Comparing 3D digitizing technologies: what are the differences?." In 2013 Digital Heritage International Congress (DigitalHeritage), vol. 1 (IEEE, 2013): 201-204. doi: 10.1109/DigitalHeritage.2013.6743733. Mathys, Aurore, Patrick Semal, Jonathan Brecko, and Didier Van den Spiegel. "Improving 3D photogrammetry models through spectral imaging: Tooth enamel as a case study." PloS one 14, no. 8 (2019): e0220949. doi: 10.1371/journal.pone.0220949. McHenry, Kenton, and Peter Bajcsy. "An overview of 3d data content, file formats and viewers." National Center for Supercomputing Applications 1205 (2008): 22. Menna, Fabio, Erica Nocerino, Fabio Remondino, M. Dellepiane, M. Callieri, and R. Scopigno. "3D Digitization of a Heritage Masterpiece - A Critical Analysis on Quality Assessment." International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences 41 (2016): 675-683. doi: 10.5194/isprsarchives-XLI-B5-675-2016. Mitchell, H. L., and R. G. Chadwick. "Challenges of photogrammetric intra-oral tooth measurement." The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Kyoto, Japan (2008): 779-782. Molloy, Barry, Mariusz Wiśniewski, Frank Lynam, Brendan O'Neill, Aidan O'Sullivan, and Alan

Cieslik 63

Peatfield. "Tracing edges: A consideration of the applications of 3D modelling for metalwork wear analysis on Bronze Age bladed artefacts." Journal of archaeological science 76 (2016): 79-87. doi: 10.1016/j.jas.2016.09.007. Moore, Jennifer, Adam Rountrey, and Hannah Scates Kettler. "CS3DP: Developing agreement for 3D standards and practices based on community needs and values," in 3D/VR in the Academic Library: Emerging Practices and Trends, ed. Jennifer Grayburn, Zack Lischer- Katz, Kristina Golubiewski-Davis, and Veronica Ikeshoji-Orlati, CLIR Report 176 (Council on Library and Information Resources, 2019a): 114-121. Moore, Jennifer, Rountrey, Adam, and Hannah Scates Kettler. Community Standards of 3D Data Preservation (CS3DP). Coalition for Networked Information, 2019b. https://www.cni.org/topics/digital-curation/community-standards-for-3d-data- preservation-cs3dp. Morishima, Kunihiro, Mitsuaki Kuno, Akira Nishio, Nobuko Kitagawa, Yuta Manabe, Masaki Moto, Fumihiko Takasaki et al. "Discovery of a big void in Khufu’s Pyramid by observation of cosmic-ray muons." Nature 552, no. 7685 (2017): 386-390. doi: 10.1038/nature24647. Mudge, Mark, Schroer, Carla., and Lum, Marlin. “Integrated Methods for the Generation of Multiple Scientifically Reliable Digital Representations for museums.” In VAST 2010: The 11th International Symposium on Virtual Reality, Archaeology and Cultural Heritage, ed. Alessandro Artusi (Eurographics Association, 2010). National Library of Australia. “Collection Digitisation Policy,” National Library of Australia. https://www.nla.gov.au/policy-and-planning/collection-digitisation-policy. Neely, Liz, & Langer, Miriam. Please feel the museum: the emergence of 3D printing and scanning. In Museums and the Web 2013, ed. N. Proctor & R. Cherry (Silver Spring, MD: Museums and the Web, 2013). https://mw2013.museumsandtheweb.com/paper/please-feel-the-museum-the-emergence- of-3d-printing-and-scanning/. Nieva de la Hidalga, A., Rosin, P., Sun, Z., van Walsum, M., & Wu, Z. (2019, September). Rapid 3D Capture Methods in Biological Collections and Related Fields. 2020 Framework of the European Union. Grant Agreement No 777483. Niquet, Nicolás-Didier, Miguel Sánchez-López, and Xavier Mas-Barberà. "Development of reversible intelligent prosthesis for the conservation of sculptures. A case study." Journal of Cultural Heritage 43 (2020): 227-234. doi: 10.1016/j.culher.2019.12.010. Niven, Laura, Teresa E. Steele, Hannes Finke, Tim Gernat, and Jean-Jacques Hublin. "Virtual skeletons: using a structured light scanner to create a 3D faunal comparative collection." Journal of Archaeological Science 36, no. 9 (2009): 2018-2023. doi: 10.1016/j.jas.2009.05.021. Patrucco, G., F. Rinaudo, and A. Spreafico. "A Handheld Scanner for 3D Survey of Small Artifacts: The Stonex F6." International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences (2019). doi: 10.5194/isprs-archives-XLII-2- W15-895-2019. Pavlidis, George, and Santiago Royo. “3D Depth Sensing,” in Digital Techniques for Documenting and Preserving Cultural Heritage, ed. Anna Bentkowska-Kafel and Lindsay MacDonald (ARC Humanities Press, 2018): 195-198. Payne, Emma Marie. "Imaging techniques in conservation." Journal of conservation and museum studies 10, no. 2 (2012): 17-29. doi: 10.5334/jcms.1021201.


Plisson, Hugues, and Lydia V. Zotkina. "From 2D to 3D at macro- and microscopic scale in rock art studies." Digital Applications in Archaeology and Cultural Heritage 2, no. 2-3 (2015): 102-119. doi: 10.1016/j.daach.2015.06.002.
Pollefeys, Marc, Luc Van Gool, Maarten Vergauwen, Kurt Cornelis, Frank Verbiest, and Jan Tops. "3D recording for archaeological fieldwork." IEEE Computer Graphics and Applications 23, no. 3 (2003): 20-27. doi: 10.1109/MCG.2003.1198259.
Ray, Sidney F. Applied Photographic Optics: Lenses and Optical Systems for Photography, Film, Video, Electronic and Digital Imaging. Focal Press, 2002.
Reflectance Transformation Imaging: Guide to Highlight Image Capture v2.0. Cultural Heritage Imaging, 2013. http://culturalheritageimaging.org/What_We_Offer/Downloads/RTI_Hlt_Capture_Guide_v2_0.pdf.
Remondino, Fabio, Armin Gruen, Jennifer von Schwerin, Henri Eisenbeiss, Alessandro Rizzi, M. Sauerbier, and H. Richards-Rissetto. "Multi-sensor 3D documentation of the Maya site of Copan." In Proceedings of 22nd CIPA Symposium, Kyoto, Japan (CIPA Heritage Documentation, 2009).
Richards-Rissetto, Heather, Fabio Remondino, Giorgio Agugiaro, Jennifer von Schwerin, Jim Robertsson, and Gabrio Girardi. "Kinect and 3D GIS in archaeology." In 2012 18th International Conference on Virtual Systems and Multimedia, ed. Gabriele Guidi (Piscataway, New Jersey: IEEE, 2012). doi: 10.1109/VSMM.2012.6365942.
Rieger, Thomas. Technical Guidelines for Digitizing Cultural Heritage Materials: Creation of Raster Image Files. Federal Agencies Digital Guidelines Initiative, 2016. http://www.digitizationguidelines.gov/guidelines/FADGI%20Federal%20%20Agencies%20Digital%20Guidelines%20Initiative-2016%20Final_rev1.pdf.
Saleri, Renato, Valeria Cappellini, Nicolas Nony, Livio De Luca, Marc Pierrot-Deseilligny, Emmanuel Bardiere, and Massimiliano Campi. "UAV photogrammetry for archaeological survey: The Theaters area of Pompeii." In 2013 Digital Heritage International Congress (DigitalHeritage), vol. 2 (IEEE, 2013): 497-502. doi: 10.1109/DigitalHeritage.2013.6744818.
Santamaria, N., Jacinto Santamaría Peña, J. M. Valle, and Félix Sanz Adán. "3D digitization of the archaeological and palaeontological heritage through non-contact low-cost scanners. Comparative analysis." In Advances in Design Engineering: Proceedings of the XXIX International Congress INGEGRAF, 20-21 June 2019, Logroño, Spain (Springer International Publishing AG, 2020): 586-596.
Scopigno, Roberto, Paolo Cignoni, Nico Pietroni, Marco Callieri, and Matteo Dellepiane. "Digital fabrication techniques for cultural heritage: A survey." In Computer Graphics Forum, vol. 36, no. 1 (2017): 6-21. doi: 10.1111/cgf.12781.
Smith, MacKenzie. "Curating architectural 3D CAD models." International Journal of Digital Curation 4, no. 1 (2009): 99-106. doi: 10.2218/ijdc.v4i1.81.
Smith, D. "3D Documentation for Museum Collections (2017-16)." National Center for Preservation Technology and Training, 2018. https://www.ncptt.nps.gov/blog/3d-documentation-for-museum-collections-2017-16/.
Stamatopoulos, C., C. S. Fraser, and S. Cronk. "Accuracy aspects of utilizing raw imagery in photogrammetric measurement." Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci 39 (2012): 387-392.
Stentoumis, Christos. "Multiple View Stereovision," in Digital Techniques for Documenting and Preserving Cultural Heritage, ed. Anna Bentkowska-Kafel and Lindsay MacDonald (ARC Humanities Press, 2018): 225-228.
Still Image Working Group. "Digital Imaging Framework." Federal Agencies Digital Guidelines Initiative, 2009a. http://www.digitizationguidelines.gov/guidelines/digitize-framework.html.
Still Image Working Group. "Digitization Activities: Project Planning and Management Outline. Version 1.0." Federal Agencies Digitization Guidelines Initiative, 2009b. http://www.digitizationguidelines.gov/guidelines/DigActivities-FADGI-v1-20091104.pdf.
Thamir, Zahraa S., and Fanar M. Abed. "How geometric reverse engineering techniques can conserve our heritage; a case study in Iraq using 3D laser scanning." MS&E 737, no. 1 (2020): 012231. doi: 10.1088/1757-899X/737/1/012231.
The Institute of Museum and Library Services. Protecting America's Collections: Results from the Heritage Health Information Survey. Washington, DC: The Institute, 2019. https://www.imls.gov/sites/default/files/publications/documents/imls-hhis-report.pdf.
Toler-Franklin, C., and S. Rusinkiewicz. "Visualizing and Re-Assembling Cultural Heritage Artifacts Using Images with Normals." In VAST 2010: The 11th International Symposium on Virtual Reality, Archaeology and Cultural Heritage, ed. Alessandro Artusi (Eurographics Association, 2010).
Trognitz, Martina, Kieron Niven, and Valentijn Gilissen. "Section 1. Aims and Objectives." Archaeology Data Service / Digital Antiquity: Guides to Good Practice, 2016a. https://guides.archaeologydataservice.ac.uk/g2gp/3d_1.
Trognitz, Martina, Kieron Niven, and Valentijn Gilissen. "Section 2. Creating 3D Data. 2.2 Sources and Types of 3D data." Archaeology Data Service / Digital Antiquity: Guides to Good Practice, 2016b. https://guides.archaeologydataservice.ac.uk/g2gp/3d_2-2.
Trognitz, Martina, Kieron Niven, and Valentijn Gilissen. "Section 2. Creating 3D Data. 2.3 File formats." Archaeology Data Service / Digital Antiquity: Guides to Good Practice, 2016c. https://guides.archaeologydataservice.ac.uk/g2gp/3d_2-3.
Trognitz, Martina, Kieron Niven, and Valentijn Gilissen. "Section 3. Archiving 3D data. 3.1 Significant Properties." Archaeology Data Service / Digital Antiquity: Guides to Good Practice, 2016d. https://guides.archaeologydataservice.ac.uk/g2gp/3d_3-1.
Trognitz, Martina, Kieron Niven, and Valentijn Gilissen. "Section 3. Archiving 3D data. 3.2 File types for Archiving and Dissemination." Archaeology Data Service / Digital Antiquity: Guides to Good Practice, 2016e. https://guides.archaeologydataservice.ac.uk/g2gp/3d_2-2.
Tucci, G., D. Cini, and A. Nobile. "Effective 3D digitization of archaeological artifacts for interactive virtual museum," in Proceedings of the 4th ISPRS International Workshop 3D-ARCH 2011, ed. F. Remondino and S. El-Hakim (International Society for Photogrammetry and Remote Sensing, 2011): 413-420.
Turco, M. Lo, M. Calvano, and E. C. Giovannini. "Data modeling for museum collections." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 42, no. 2/W9 (2019): 433-440. doi: 10.5194/isprs-archives-XLII-2-W9-433-2019.
Urban, Richard. "Collections Cubed: Into the Third Dimension." MW2016: Museums and the Web (2016): 6-9. https://mw2016.museumsandtheweb.com/paper/collections-cubed-into-the-third-dimension/.
Wachowiak, Melvin J., and Basiliki Vicky Karas. "3D scanning and replication for museum and cultural heritage applications." Journal of the American Institute for Conservation 48, no. 2 (2009): 141-158. doi: 10.1179/019713609804516992.
Wachowiak, Melvin, and Elizabeth Keats Webb. "Museum uses of RTI at the Smithsonian Institution." In VAST 2010: The 11th International Symposium on Virtual Reality, Archaeology and Cultural Heritage, ed. Alessandro Artusi (Eurographics Association, 2010).
Walthew, Jessica, Sarah Barack, Adam Quinn, and Nolan Hill. "Sharing Conservation Imaging Research with the Public." Journal of the American Institute for Conservation 59, no. 1 (2020): 18-26. doi: 10.1080/01971360.2019.1690908.
Weber, Peter, Erich Jelen, Fabian Friederich, Pedro Santos, Andreas Hoffmann, Alexander Hollaender, Oliver Schreer, Michael Maeder, and Johanna Leisner. "Fraunhofer Innovations for Cultural Heritage, Novel Methods for the Analysis of Materials and Damages in 3D." Geophysical Research Abstracts, vol. 21, 2019.
Weinberg, Michael. 3D Scanning: A World Without Copyright*. Shapeways, 2016.
Weigert, A., A. Dhanda, J. Cano, C. Bayod, Stephen Fai, and Mario Santana Quintero. "A review of recording technologies for digital fabrication in heritage conservation." Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci 42, no. 2 (2019): 773-778. doi: 10.5194/isprs-archives-XLII-2-W9-773-2019.
Westoby, Matthew J., James Brasington, Niel F. Glasser, Michael J. Hambrey, and Jennifer M. Reynolds. "'Structure-from-Motion' photogrammetry: A low-cost, effective tool for geoscience applications." Geomorphology 179 (2012): 300-314. doi: 10.1016/j.geomorph.2012.08.021.
Wheeler, Andrew. "3D Scanning: Understanding the Differences In LIDAR, Photogrammetry, and Infrared Techniques." Engineering.com, 2017. https://www.engineering.com/Hardware/ArticleID/14541/3D-Scanning-Understanding-the-Differences-In-LIDAR-Photogrammetry-and-Infrared-Techniques.aspx.
Wittenberg, Jamie, and Juliet Hardesty. "3D Two Ways: Researcher requirements and repository design for three-dimensional objects." Conference Presentation, DLF Forum, Pittsburgh, Pennsylvania, 2017. https://dlfforum2017.sched.com/event/4cadae62c3e0b55d99e17d75ac2223ba.
Woody, Rachael Cristine. "3D Digitization in the Museum, Part 1: Photogrammetry." Lucidea, 2018a. https://lucidea.com/blog/3d-digitization-in-the-museum-part-2/.
Woody, Rachael Cristine. "3D Digitization in the Museum, Part 2: LIDAR." Lucidea, 2018b. https://lucidea.com/blog/3d-digitization-in-the-museum-part-2/.
Yastikli, Naci. "Documentation of cultural heritage using digital photogrammetry and laser scanning." Journal of Cultural Heritage 8, no. 4 (2007): 423-427. doi: 10.1016/j.culher.2007.06.003.
Youngs, Yolanda. 3D Documentation and Visualization Techniques for Cultural Resources and Museum Collections, Grand Teton National Park, Intermountain Region. National Center for Preservation Technology and Training. Typescript, 2017.
Zollhöfer, Michael, Christian Siegl, Mark Vetter, Boris Dreyer, Marc Stamminger, Serdar Aybek, and Frank Bauer. "Low-cost real-time 3d reconstruction of large-scale excavation sites." Journal on Computing and Cultural Heritage (JOCCH) 9, no. 1 (2015): 1-20. doi: 10.1145/2770877.


Appendix #1: Current Museum Digitization Projects, Digitization Labs, and Independent Digitization Projects

As part of this guidebook, the author has created a list of current cultural heritage institutions that maintain 3D model collections of objects, archaeological sites, and architecture.

Each entry lists the institution and its location, a description of its 3D digitization work, and an example model.

Smithsonian Institution (Washington, D.C., United States). The Smithsonian Institution Digitization Office focuses on the development of 3D models to share through three-dimensional technology and analysis tools. The office partners with all Smithsonian museums. Some, but not all, of the 3D models on their website are open access.[22] The website also offers features to change the material of an object and to section it along the x, y, or z axis for an autopsy view.[23] These models are housed on the Smithsonian viewer. Example: Mammuthus primigenius (Blumbach), https://3d.si.edu/object/3d/mammuthus-primigenius-blumbach:341c96cd-f967-4540-8ed1-d3fc56d31f12 (embedded viewer).

Metropolitan Museum of Art (New York, New York, United States). The Met has worked to start digitizing 5 collections of objects. The Met MediaLab has been using photogrammetry and hand-held 3D scanners to render objects. In 2017, the Met launched the Open Access Initiative, making 375,000 images available under the Creative Commons Zero (CC0) code. All 3D scans have been uploaded to the MakerBot Thingiverse platform. The Met's public data set was also added to Google's BigQuery platform.[24] Example: Nydia, The Blind Flower Girl of Pompeii (1853-1854), by Randolph Rogers, https://www.thingiverse.com/thing:24089 (Thingiverse website).

The British Museum (London, England, United Kingdom). The British Museum has created 14 3D models of busts, statues, and sarcophagi from its collection that can be downloaded from Sketchfab and printed at home. Some of these models have embedded audio descriptions or can be viewed on a virtual reality device as well. The models also contain additional preservation metadata.[25] Example: Lewis Chess Set from Trondheim, Norway (1150-1200), https://sketchfab.com/3d-models/lewis-chess-set-eddbebab12424c8aa610a21b9b7e19e5 (Sketchfab website).

The British Library (London, England, United Kingdom). The British Library, with funding from the Polonsky Foundation, has digitized 1,275 Hebrew manuscripts under the Hebrew Manuscript Digitisation Project (2013-2016). The manuscripts were digitized with photogrammetry to produce 3D models, which were published to Sketchfab.[26] Example: Pentateuch with the Five Scrolls, Psalms, Job, and Haftarot, Italy, 1486, https://sketchfab.com/3d-models/pentateuch-add-ms-4709-d6ca51e05b9a4d52b6dc6092cb71093f (Sketchfab website).

[22] Open access means that the user can download, modify, and share the 3D model of the Smithsonian object for any purpose. This characteristic is designated by the Creative Commons Zero (CC0) code displayed by the model on the website.
[23] https://3d.si.edu
[24] https://www.thingiverse.com/met/about
[25] https://sketchfab.com/britishmuseum

Guimet Museum (Paris, France). Most of the digitized objects from the Guimet Museum are housed on MyMiniFactory and were scanned as part of the Scan the World project. On MyMiniFactory, each model contains a description of the object and its background, including the example listed here, which was made during the Song Dynasty.[27] Example: Bodhisattva statue of Avalokitesvara, https://www.myminifactory.com/object/3d-print-bodhisattva-avalokitesvara-potalaka-at-the-guimet-museum-paris-6336 (MyMiniFactory website).

Museum für Naturkunde (Berlin, Germany). This natural history museum has created a 3D scanner for digitizing its insect collection. The Darmstadt Insect Scanner (DISC3D) can digitally image the smallest insects as high-resolution 3D objects, opening access to insect collections for researchers across the world. By 2030, all 300 million objects in the museum will be digitally recorded. The collection is also housed on Sketchfab.[28] Example: Graphosoma lineatum insect, https://sketchfab.com/3d-models/graphosoma-lineatum-59fa16902b924d26bde9e626c76bb28d (Sketchfab website).

Illinois Library: Sousa Archives (Champaign, Illinois, United States). The Sousa Archives' Music Instrument and 3D Model collection contains instruments dated between 1810 and 1972. Each set of images includes the provenance of the instrument. The database contains high-resolution still images of the front, back, side, top, and bottom of each instrument as well as 3D digital models.[29] Example: Antoniophone, https://digital.library.illinois.edu/items/9836e220-ea50-0134-23c2-0050569601ca-8#?cv=0&r=0&xywh=-1337%2C-48%2C4929%2C4787 (embedded viewer).

University of Michigan Museum of Paleontology (Ann Arbor, Michigan, United States). The University of Michigan Online Specimen Repository of Fossils was established to increase accessibility to fossil specimens through online 3D and 2D representations, including 3D invertebrates and 3D vertebrates.[30] Example: UMMP VP 55037 (Dimetrodon composite skeleton), https://umorf.ummp.lsa.umich.edu/wp/wp-content/3d/viewer.html?name=1401&extension=ctm (embedded viewer).

[26] http://explore.bl.uk/primo_library/libweb/action/dlSearch.do?vid=BLVU1&institution=BL&search_scope=LSCOP-WEBSITE&query=any,contains,3D&tab=website_tab
[27] https://www.myminifactory.com/category/guimet-museum
[28] https://www.museumfuernaturkunde.berlin/en/press/press-releases/collection-opens-live-digitization-insects-exhibition
[29] https://digital.library.illinois.edu/collections/fe824900-6c8d-0134-1e34-0050569601ca-b

Royal Belgian Museum of Natural Sciences (Brussels, Belgium). This natural history museum houses 38 million specimens, including entomology, anthropology, paleontology, and geology collections. The museum uses structured light scanning (Mesh Scan) to digitize objects. For delivery of the finalized 3D models, a Sketchfab iFrame is embedded on a webpage of the virtual museum.[31] Example: Skull excavated from Spy Cave, Lohest (1885-1886), http://virtualcollections.naturalsciences.be/virtual-collections/anthropology-prehistory/anthropology/neandertals/spy/spy-1-skull (embedded Sketchfab viewer).

Barcelona Natural History Museum (Barcelona, Spain). The Barcelona Natural History Museum houses Bioexplora, an online catalogue of 3D models that includes specimens not often on display. This 3D portal also includes an explanation describing each organism's place in the collections and linking it to the natural environment. The 3D models are embedded into the museum website in a Sketchfab viewer.[32] Example: Crocuta crocuta (spotted hyena) skull, https://www.bioexplora.cat/en/osteologic-atlas/mzb_2002-0873/ (embedded Sketchfab viewer).

Harvard Museum of the Ancient Near East (Cambridge, Massachusetts, United States). The Harvard Museum of the Ancient Near East is home to Near Eastern archaeological artifacts, including cylinder seals, sculpture, coins, and cuneiform tablets. The museum has enabled users to engage with objects in the collection through 3D models on Sketchfab and on viewers embedded in its website.[33] Example: Coffin of Ankh-Khonsu (includes animation), https://sketchfab.com/3d-models/coffin-of-ankh-khonsu-6132b52aa5904b1dbdd631235fc52c66 (embedded Sketchfab viewer).

3D digitization laboratories

Virtual World Heritage Lab. This laboratory is based at the Indiana University School of Informatics and Computing.[34] The focus of this lab's investigations includes 3D scientific simulations of spaces and cultural heritage objects. The lab conducted the Uffizi Digitization Project, the Digital Hadrian's Villa Project, and the Digital Atzompa Project. Example: Digital reconstruction of Hadrian's Villa, http://projects.idialab.org/HVWebGL/ (embedded viewer).

[30] https://umorf.ummp.lsa.umich.edu/wp/
[31] https://www.naturalsciences.be/en/science/collections
[32] https://www.bioexplora.cat/en/3dmodels/
[33] https://hmane.harvard.edu/3d-models

Competence Center Cultural Heritage Digitization Lab. This laboratory is based at the Fraunhofer Institute for Computer Graphics Research IGD. The lab works to create true-to-original 3D representations with accuracy down to the micrometer level. The 3D data gained serves for quality control, reconstructions, or surface reconstructions. This lab is working on the first autonomous 3D scanning pipeline.[35] Example: Helmet of the Lord of Morken (around 600 AD), https://www.cultlab3d.de/index.php/helmet/ (embedded viewer).

Factum Foundation. This research center in London is focused on recording, processing, outputs, and experimenting with techniques. The lab actively works with photogrammetry, Lucida 3D scanning, LiDAR scanning, structured or white light scanning, and the Veronica Chorographic Scanner.[36] Example: Lamentation over the Dead Christ (1463), by Niccolò dell'Arca, https://www.factumfoundation.org/pag/1606/lamentation-over-the-dead-christ-by-niccolò-dellarca (descriptive video).

3D digitization projects

3D Icons Project. 3D Icons Ireland, part of the EU-funded 3D Icons project, has digitized 130 monuments and buildings from Ireland and provides this data online, including decorated high crosses, the island monastery of Skellig Michael, and more. Example: Clones High Cross, https://sketchfab.com/3d-models/clones-high-cross-190e52cb659f4cbb98dd672d2d18a210 (Sketchfab website).

Scan the World Project. This project, started in June 2014, is a community-based cultural heritage project to share 3D-printable sculptures and artifacts. The project has already scanned 16,173 objects in 790 places around the world. It addresses the need for museums to make 3D data of their artifacts accessible or downloadable to the public, and it also creates 3D-printable models for the partially sighted. The website's downloadable 3D models are housed on MyMiniFactory.[37] Example: David (1501-1504), by Michelangelo, https://www.myminifactory.com/object/3d-print-michelangelo-s-david-in-florence-italy-2052 (MyMiniFactory).

[34] http://www.vwhl.org
[35] https://www.cultlab3d.de/#home
[36] https://www.factumfoundation.org

MayaCityBuilder. This project uses procedural modelling to produce 3D visualizations of Ancient Maya cityscapes. The project's WebGL application also contains information and images related to the ceramic typologies that were used for the architecture, including the variety of specific types of ceramic and what each was used for. Example: MayaCityBuilder, http://mayacitybuilder.org (embedded viewer).

The Scottish Ten. This project, started in 2009, works to document Scotland's world heritage sites and other international heritage sites to create 3D data related to their conservation and management, interpretation, and visual access. The project was conducted by Historic Environment Scotland and the Glasgow School of Art's School of Simulation and Visualization with CyArk.[38] Example: Maeshowe Chambered Cairn, Orkney, Scotland, https://www.youtube.com/watch?v=_DX-OBFdUTE&feature=related&noredirect=1 (descriptive video).

The Virtual Amarna Project. This project involved the digitization of the Egyptian site of Amarna using a Konica Minolta Vivid 9i triangulation laser scanner, and these 3D scans are now held in the Virtual Amarna Museum. Some of the scanned objects were used as part of the LEAP II project and are part of the Archaeology Data Service archive. Example: Wooden Egyptian artifact, https://archaeologydataservice.ac.uk/archives/view/amarna_leap_2011/downloads.cfm?obj=yes&obj_id=5239&CFID=d3f0ff48-d5d4-4fda-a346-6827090a0c2e&CFTOKEN=0 (downloadable mesh).

The Uffizi Digitization Project. This project was conducted by the Virtual World Heritage Laboratory, the Politecnico di Milano, and the Uffizi Galleries to digitize the complete collection of Greek and Roman sculpture in the Uffizi, Pitti Palace, and Boboli Gardens. Example: Medici Venus (1 CE), by Kleomenes, son of Apollodorus, http://www.digitalsculpture.org/florence/main/model/31df125d63b747af841d8fca4eb7b8f8 (embedded Sketchfab viewer).

[37] https://www.myminifactory.com/scantheworld/
[38] CyArk is a non-profit organization founded in 2003 to digitally record, archive, and share cultural heritage sites across the world, specifically addressing the loss of cultural heritage sites as a result of climate change, urban development, natural disasters, and armed conflict. CyArk uses LiDAR or laser scanning.

Connecting Early Medieval European Collections Project. This is an EU-funded cooperation project, Connecting Early Medieval European Collections (CEMEC), focused on creating a collaborative network and cost-effective business model. Drawing on objects in the participating museums, the project curated "Crossroads," a travelling exhibition focused on connectivity and cultural exchange during the Early Middle Ages (300-1000) in Europe. Example: Box Brooch, https://sketchfab.com/3d-models/box-brooch-dobozfibula-dunapataj-86dc26bdff474e0bb90537509be0c23d (Sketchfab website).

New Zealand Digital Library Project. The 3D Digital Archive of the Future was a project created in collaboration between Victoria University of Wellington and the National Library of New Zealand, exploring the use of 3D media in libraries. The project focused on three goals related to engaging the public with WWI: (1) 3D animation of a WWI photograph, (2) 3D printing as a reference image for educational and contextual media, and (3) augmented reality through 3D modelled objects. Example: 3D printing exploring artist Paul Jenden, https://natlib.govt.nz/blog/posts/the-digital-archive-of-the-future (descriptive article).

Florence As It Was: the Digital Reconstruction of a Medieval City. This digital project works to reconstruct the city of Florence as it appeared in the 15th century. Through visualized models, visitors can inspect, tour, and visit the streets, palaces, churches, and offices of Florence during this time period. The platform also allows people to read the interpretations of these places written by people in the 17th, 18th, and 19th centuries. Example: Orsanmichele building, https://florenceasitwas.wlu.edu/sites/orsanmichele.html (embedded viewer).

Appendix #2: Commercial Online Viewers for Cultural Heritage 3D Models

As part of this guidebook, the author has created a list of current online viewers that house and share 3D models, including those that are open or closed access.

Thingiverse: This platform is dedicated to sharing user-created digital design files. All models on this website are free to download, and the platform is useful for 3D printing (Johnson et al. 2017).

Sketchfab: This platform supports the upload, publication, and visualization of 3D models on the web, including the embedding of 3D content in standard web pages and social media. It is the most commonly used viewer for cultural heritage content (a sketch of generating such an embed appears after this list).

MyMiniFactory: This platform houses 3D models that can be purchased. The website also hosts the models for the Scan the World project and asks all uploaders whether they are Scan the World affiliated.

3D Heritage Online Presenter (3DHOP): This platform is more flexible than Sketchfab in supporting more presentation and interaction modes. 3DHOP is based on the SpiderGL library, implementing plug-in-free 3D rendering in web browsers.

p3d.in: This platform allows models to live in the cloud, meaning that it runs in all modern web browsers without requiring a plug-in download. The platform allows 3D models to be viewed on tablets or mobile devices but only accepts OBJ uploads (a conversion sketch appears after this list).

Universal Viewer: This is an open-source platform for sharing 3D content. Its benefits are that it is zoomable, embeddable, extensible (supporting 3D), and securable, allowing content to be password protected.

Google Poly: This platform allows 3D objects to be uploaded and shared on the web. It also hosts content from Google's virtual reality products, Tilt Brush and Blocks. Many of the objects use a low-polygon style, which fits with Google's Daydream platform and lets them run more efficiently because the objects are less complex.
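Many of the institutions and platforms listed above deliver models to visitors through a viewer embedded in a standard web page. As a rough illustration of what that involves, the following sketch generates an iframe snippet for a Sketchfab model from its model UID. The embed URL pattern and the placeholder UID are assumptions made for illustration rather than details taken from Sketchfab's documentation, so the platform's own embed instructions should be checked before relying on this.

```python
# Minimal sketch: generate an HTML iframe snippet for embedding a Sketchfab model
# on a collection web page. The embed URL pattern and the placeholder UID below
# are assumptions for illustration only; consult Sketchfab's own embed
# documentation before use.

def sketchfab_embed(model_uid: str, width: int = 640, height: int = 480) -> str:
    """Return an iframe tag pointing at the assumed Sketchfab embed URL."""
    embed_url = f"https://sketchfab.com/models/{model_uid}/embed"
    return (
        f'<iframe src="{embed_url}" width="{width}" height="{height}" '
        f'frameborder="0" allow="autoplay; fullscreen; xr-spatial-tracking" '
        f'allowfullscreen></iframe>'
    )

if __name__ == "__main__":
    # Hypothetical UID; replace with the UID shown on the model's Sketchfab page.
    print(sketchfab_embed("0123456789abcdef0123456789abcdef"))
```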
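Because p3d.in, as noted above, only accepts OBJ uploads, a mesh produced in another format (for example, PLY output from a structured light scanner) may first need to be converted. The sketch below shows one possible way to do this with the open-source trimesh library; the file names are placeholders, and the export options should be verified against the library's documentation.

```python
# Minimal sketch: convert a mesh to OBJ before uploading to a viewer that only
# accepts OBJ files. File names are placeholders; requires the open-source
# `trimesh` package (pip install trimesh).
import trimesh

def convert_to_obj(input_path: str, output_path: str) -> None:
    """Load a mesh in any format trimesh understands and re-export it as OBJ."""
    mesh = trimesh.load(input_path, force="mesh")  # load as a single mesh, not a scene
    mesh.export(output_path, file_type="obj")      # write the OBJ file to disk

if __name__ == "__main__":
    convert_to_obj("scan_output.ply", "scan_output.obj")
```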

Appendix #3: Most Recent Digitization Standards Reference Guides

This appendix includes descriptions of and links to current 3D digitization guidebooks focusing on specific objects and collections, as well as on the creation of file formats and associated preservation metadata. All of these guides, including articles, websites, and printed documents, were referenced throughout this guide.


General three-dimensional objects

Barns, A. Close-Range Photogrammetry: Guide to Good Practice. Archaeology Data Service / Digital Antiquity: Guides to Good Practice. https://guides.archaeologydataservice.ac.uk/g2gp/Photogram_1-1. This website serves as a guide for producing 3D digital data from aerial survey, UV survey, laser scanning, close-range photogrammetry, and more. The guide also provides information about archiving data produced with these methods and offers guidance for selecting the best file format given content needs and end products. Along with the "Guides to Good Practice," the website also offers two other print guides: "Caring for Digital Data in Archaeology" and "Geophysical Data in Archaeology."

Bentkowska-Kafel, A., & MacDonald, L. (Eds.). (2018). Digital techniques for documenting and preserving cultural heritage. ISD LLC. https://scholarworks.wmich.edu/cgi/viewcontent.cgi?article=1000&context=mip_arc_cdh. This document includes case studies of specific techniques used for particular collections, including pottery and wall paintings, and also includes short introductions to different methods, including laser scanning, photogrammetry, RTI, Structure from Motion, structured light scanning, and x-ray fluorescence spectrometry. The guide introduces each 2D+ and 3D digitization technique and links it to the techniques utilized in the different digitization case studies. The present guide follows the 3D digitization hierarchy employed by Pavlidis and Royo (2018), a chapter in this volume, when organizing the different 3D methods available. This is a good guidebook for beginning to learn about digitization and the methods that are available for different collections.

CARLI. Guidelines for the Creation of Digital Collections: Digitization Best Practices for Three-Dimensional Objects. (February 2020). Consortium of Academic and Research Libraries in Illinois. https://www.carli.illinois.edu/sites/files/digital_collections/documentation/guidelines_for_images.pdf. This document produced by the Consortium of Academic and Research Libraries in Illinois provides guidance on how to process digital images of three-dimensional objects and make them accessible via the digital object management system CONTENTdm. The document provides specifications for digital SLR camera settings, including white balance, file settings, and file type. The creation of digital master files detailed in this guide follows specifications outlined in the Guidelines for Digitizing Cultural Heritage Materials: Creation of Raster Image Master Files.

Natural history collections


Brecko, J., & Mathys, A. (2020). Handbook of best practice and standards for 2D+ and 3D imaging of natural history collections. European Journal of Taxonomy, (623). https://europeanjournaloftaxonomy.eu/index.php/ejt/article/view/895. This document describes the 2D+ and 3D digitization methods that are available for digitizing natural history collections. The guide discusses the strengths and weaknesses of each technique and which technique should be used for digitizing certain collections and specimens. It also includes examples of specimens that have already been 3D digitized, similar to Appendix #1 of this guide. The authors also discuss managing and sharing the 3D digital models that are produced.

File formats

Beagrie, N., and Kilbride, W. (2020). File formats and standards, Digital Preservation Handbook, 2nd Edition. Digital Preservation Coalition. https://www.dpconline.org/handbook/technical-solutions-and-tools/file-formats-and-standards. This article is part of the second edition of the Digital Preservation Handbook. The handbook was first compiled by Neil Beagrie and Maggie Jones in 2001 and is now maintained and updated by the Digital Preservation Coalition. The second edition was compiled with input from 45 practitioners and experts in digital preservation under the direction of Neil Beagrie as managing editor and William Kilbride as chair of the Management and Advising Boards. This guide provides information about selecting file formats.

Preservation metadata

Blundell, J., Fuhrig, L., Little, H., Pilsk, S., Rossi, V., Snyder, R., Stern, B., Sullivan, B., Tomerlin, M., and Webb, K. Smithsonian Institution 3D Metadata Overview, v0.6. Smithsonian's Digitization Program Advisory Committee's 3D Sub-Committee's Metadata Working Group. Smithsonian 3D Digitization. https://dpo.si.edu/sites/default/files/resources/Smithsonian%20Institution%203D%20Metadata%20Model%20-%20Overview%20Document%20v0.6.pdf. This document was produced by the Smithsonian and provides guidance on the records associated with 3D models, including how to create a project record, subject record, item record, capture dataset rights record, capture dataset record, capture data element record, capture data file record, model record, UV map record, processing action record, capture device configuration record, capture device component record, actor record, physical scale bar record, and scale bar target pair record.

D'Andrea, A., & Fernie, K. (2013, October). CARARE 2.0: a metadata schema for 3D Cultural Objects. In 2013 Digital Heritage International Congress (DigitalHeritage) (Vol. 2, pp. 137-143). IEEE. CARARE was a three-year project establishing an aggregation service for archaeological and architectural sources in order to integrate 3D and VR content into Europeana. The schema provides reliable information about the capture instruments, the parameters used during data acquisition (geometry, light sources, obstacles, sources of noise or reflections), and those used during processing (registration, meshing, texturing, decimation, simplification, etc.); an illustrative sketch of such a record follows this entry.
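To make the categories of information described above more concrete, the sketch below shows a simplified, purely illustrative record structure for acquisition and processing metadata. It is not the CARARE 2.0 schema itself (which is an XML metadata schema); the class names, field names, and example values are assumptions chosen only to mirror the categories listed in this entry.

```python
# Illustrative sketch only: a simplified record of acquisition and processing
# information of the kind a CARARE-style metadata schema documents. These
# dataclasses and field names are assumptions for illustration and do not
# reproduce the actual CARARE 2.0 XML schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AcquisitionInfo:
    capture_device: str            # e.g. "structured light scanner"
    geometry: str                  # scanning geometry / capture setup
    light_sources: List[str]       # lighting conditions during capture
    known_obstacles: List[str]     # occlusions, reflective surfaces, noise sources

@dataclass
class ProcessingStep:
    name: str                      # e.g. "registration", "meshing", "decimation"
    software: str
    parameters: dict = field(default_factory=dict)

@dataclass
class ModelRecord:
    object_title: str
    acquisition: AcquisitionInfo
    processing: List[ProcessingStep] = field(default_factory=list)

# Example usage with hypothetical values:
record = ModelRecord(
    object_title="Example statue",
    acquisition=AcquisitionInfo(
        capture_device="structured light scanner",
        geometry="turntable, 24 stations",
        light_sources=["diffuse LED panels"],
        known_obstacles=["reflective gilding"],
    ),
    processing=[
        ProcessingStep(name="meshing", software="example software",
                       parameters={"target_faces": 500_000}),
    ],
)
```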

Moore, J., Rountrey, A., Scates Kettler, H. (April 2019). Community Standards of 3D Data Preservation (CS3DP). Coalition for Networked Information. https://www.cni.org/topics/digital-curation/community-standards-for-3d-data-preservation-cs3dp. The Community Standards for 3D Data Preservation (CS3DP) initiative has built up a community of practice that works to make recommendations for the long-term accessibility, usability, and interoperability of digital 3D objects, aspects of 3D data preservation that have not yet been widely acknowledged or discussed. The project was conducted with funding from the Institute of Museum and Library Services to Washington University in St. Louis, the University of Michigan, and the University of Iowa.

Digital Preservation Policy 4th Edition. (February 2013). National Library of Australia. https://www.nla.gov.au/policy-and-planning/digital-preservation-policy. This website was produced by the National Library of Australia and provides guidelines for how digital resources are created, selected, acquired, described, and assessed. The National Library has been active in working to collect, manage, preserve, and maintain digital collections. In order to develop good digital preservation standards, the policy also refers to the Open Archival Information System (OAIS) Reference Model and other standards related to digital repositories, including PREMIS and the Open Planets Foundation, mentioned earlier.

Fernie, K. (November 2019). 3D content in Europeana task force. Europeana Network Association Members Council. Task Force Report. https://pro.europeana.eu/project/3d-content-in-europeana. This document is the product of the Europeana task force, which worked to update publishing frameworks for cultural heritage 3D models and to provide guidance and good examples for cultural heritage institutions. The task force notes that the content available under the 3D label in Europeana is highly variable, and it works to ensure that 3D media is correctly labeled and to promote the availability of functional 3D content.

Author Biography

Emma Cieslik is a student at Ball State University studying public history and anthropology. Over the past three years, Emma has worked in museums of different sizes and subject matters, including the McHenry County History Society and Museum, the Midway Village Museum, the Field Museum, the David Owsley Museum of Art, and the Dr. Samuel D. Harris National Museum of Dentistry. Utilizing her previous training in photogrammetry at the Field Museum, Emma is eager to advance cultural heritage digitization and imaging. This guidebook highlights her efforts to make object digitization accessible to all cultural heritage professionals.
