Presented at the Electronic Media Group Session, AIC 40th Annual Meeting, May 8–11, 2012, Albuquerque, NM.

CONSERVATION IN COLLECTIONS OF DIGITAL WORKS OF ART

BEN FINO-RADIN

ABSTRACT
Rather than addressing entropy and deterioration in objects such as paintings, prints, or textiles, many collections must now contend with ensuring the conservation of works that are entirely born-digital. The constantly shifting compatibility of technologies and a looming wave of obsolescence dictate that this emerging practice is often preemptive, in some cases even leaning towards the archival perspective. Since 1999, Rhizome at the New Museum of Contemporary Art, New York, has maintained an online archive called the ArtBase, a collection containing works of art that employ materials such as software, code, websites, moving images, games, and browsers towards aesthetic and critical ends. This paper offers three case studies on the conservation of works from Rhizome's collection. These three works will be used specifically to illustrate conservation methods for:

• Works that exist as performative actions within commercial web services or social networking platforms
• Performative web-based works that employ real-time data
• The obsolescence of crucial client-side JavaScript in web-based works

Offered in conclusion is a discussion of metadata, from a collection management standpoint, to ensure the long-term sustainability of digital conservation measures.

INTRODUCTION
Rhizome is a non-profit institution (established in 1996) devoted to the creation, presentation, preservation, and critique of emerging artistic practices that engage technology. In 1999 the Rhizome ArtBase was founded as an online archive that sought to collect, preserve, and contextualize the creative practices that had emerged in the preceding decade, along with the advent of the world wide web. The ArtBase collects works that employ materials as diverse as software, code, websites, moving images, games, and web browsers towards aesthetic and critical ends. As the years have passed, shifting threats to the longevity of these works have emerged, along with a discourse and philosophy of the conservation of born-digital works of art. In 2011 Rhizome published a paper titled "Digital Preservation Practices and the Rhizome ArtBase" (Fino-Radin), which explored in detail the fundamental threats to its collection and extrapolated a theoretical framework for approaching the management of these risks.

This paper, in contrast, focuses explicitly on methods and techniques of digital conservation and preservation, through the documentation of three case studies. All methods described and documented herein make use of open source tools and common technologies and standards, yielding workflows that are reproducible in similar scenarios. Discussed first is Legendary Account (2008–2010) by American artist Joel Holmberg (b. 1982). This case study presents an approach for preserving works whose primary materials exist within a third-party commercial web platform controlled by neither the artist nor the collecting institution. The methodology provided is grounded in web archiving practices, and makes use of a Python tool for the batch production of archival screenshots of web resources. Second, we will consider i'm here and there .com (2011) by Dutch artist Jonas Lund (b. 1984), which poses the problem of how to collect and preserve a dynamic web-based artwork that includes a dimension of time and performance. The approach discussed offers a model of conservation that produces an archival solution without compromising the sensitive, real-time, and performative nature of the work. Finally, we will consider Pulse (1999) by American artist Mark Napier (b. 1961), a work that fell victim to the obsolescence of crucial client-side JavaScript. This section provides a qualitative analysis of the various emulation strategies that restored function to the work, as well as a consideration of the scalability of maintaining code-based works in collections and archives.

LEGENDARY ACCOUNT
As is often the case in the broad scope of new media and technology-based works of art, Holmberg's Legendary Account initially posed a non-technical challenge to conventions of curation, collection, and the ontology of artworks. In a sense, the work could be said to have no primary form or medium, existing originally as a disparate set of actions. Over the course of two years, the artist engaged users on Yahoo! Answers, asking questions (fig. 1) that range from unanswerable provocations (e.g., "How possible is it to convince people that you are an artist?" June 6, 2008, 4:24 PM) to the absurdly mundane (e.g., "where did you get that chair you are sitting in?" June 19, 2008, 2:28 AM). The work was an attempt to mildly subvert the platform's intended mode of interaction, exposing its inherent limitations, and to engage its users in a manner oddly personal.

Fig. 1. Holmberg asks: "What is the goal of online?"

The artist exhibited a manifestation of the work in the 2010 exhibition Free at the New Museum of Contemporary Art. For the exhibition, the artist painted a gallery wall to match the background color of the Yahoo! Answers website and mounted digital prints of the work to the wall. This manifestation was a temporary installation, its photographic documentation being the only surviving artifact. The artist had at some point in the work's history documented each question he had asked through his Yahoo! Answers account, as screenshots. These were saved in the JPG format with severe lossy compression, causing the images to display compression artifacts.

Surveying these manifestations, instantiations, and documents of the work, it emerges that they as a whole constitute the work, rather than any one discrete artifact. The goal, then, was not so much the conservation and preservation of the material as the artwork itself, but rather accurate and future-proof documentation of evidence of the work. The target of this documentation was the original collection of web pages containing the individual questions belonging to Holmberg's Yahoo! Answers account, along with his profile page, which contained links to each of these individual questions. It was desired to archive these pages such that the links between all of them were preserved, so that an offline version, or one hosted in a repository, would be browsable as it was originally within the Yahoo! Answers platform.

The primary requirements for selecting the appropriate tool were: (1) that the tool offer the ability to perform a precise, targeted crawl of a list of specific URLs, and (2) that it provide the ability to output the Web ARChive file format (WARC) (National Digital Stewardship Alliance 2011), as well as the retrieved materials in an uncompressed directory structure. In typical large-scale web archiving settings, where the intent is to perform broad crawls of entire websites, recursion is heavily employed: the first page hit by the crawler is parsed to identify hyperlinks, so that the linked pages may also be crawled, archived, and parsed for additional pages, the process continuing indefinitely depending on the level of recursion specified. Recursion was not an option in this case, as it would cause the crawler to extend far beyond scope, eventually downloading Yahoo! Answers in its entirety. Thus, it was crucial that the tool used allow for a list of targets to be specified.
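To make the distinction concrete, the two modes of crawling can be sketched in Wget's own terms as follows. This is an illustration only; example.com is a placeholder, and neither command is a record of the actual crawl.

    # Broad archival crawl: recursively follow links up to three levels deep.
    # In this case such a crawl would have extended far beyond scope,
    # eventually pulling in Yahoo! Answers in its entirety.
    wget --recursive --level=3 http://example.com/

    # Targeted crawl: retrieve only the URLs enumerated, one per line,
    # in a plaintext list, following no links beyond them.
    wget --input-file=url_list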
While WARC is widely considered to be the standard archival format for web resources, it requires additional software (such as the Internet Archive's Wayback software) (Internet Archive 2011) in order to render the archived materials. For this reason it was desired to output the crawled content not only as WARC, but also in a standard directory structure that preserves the relative paths of the original site. This form of output yields a collection of files that may be hosted and served from any web server, without special software. The GNU Wget software (GNU 2012) was selected as the tool of choice: it offers all of the aforementioned utility, and its large and diverse community of users provides a ready support system. Wget is quite adaptable to small-scale, highly specific, and customized crawls, and has recently added the option to output crawls in WARC format alongside a directory structure.
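A single invocation can produce both outputs at once. In the sketch below the WARC filename is an assumption, not the name used in the actual crawl:

    # Fetch the listed pages into an ordinary directory tree while also
    # writing a WARC record of the crawl to legendary_account.warc.gz.
    wget --input-file=url_list --warc-file=legendary_account

By default Wget gzip-compresses the WARC output; the plain directory tree is written alongside it as usual.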
The methodology and procedure used with Wget began with compiling the list of URLs to be included in the crawl. This list was saved as a plaintext file named "url_list," with each URL on a new line and no delimiting character. Wget was then invoked against this list with a handful of options. The first, "-e robots=off," instructs Wget to disregard the robots.txt exclusion standard, by which site owners express a preference for how crawlers treat their content. In this case our crawl was quite limited, and crucial components would have been omitted had robots.txt been respected. HTTP-based crawlers are only able to access that which is publicly accessible, so ignoring robots.txt would seem to pose no privacy threat. The second option, "--random-wait," must likewise be used with caution and care. Some servers are configured to detect patterns in multiple requests from the same user, for the purpose of blocking a crawler that makes requests on a predictable interval; "--random-wait" causes Wget to randomize the intervals at which it makes requests of the server, thus eluding detection (GNU 2013c). The third flag, "-p," tells Wget to retrieve all page "requisites." Without this flag, none of the images, video, sound, linked CSS (cascading style sheets), or JavaScript files (essentially any embedded content) would be retrieved. The fourth option, "-H," specifies that the crawl may span hosts. If the pages listed in "url_list" were at the http://answers.yahoo.com host, but all embedded images were hosted at http://l.yimg.com/, then without the "-H" flag these images would not be retrieved, even with the "-p" flag invoked.
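Assembling the list and the four options above, the whole procedure reduces to a short shell session along the following lines. This is a reconstruction from the description, not a quotation of the original workflow: the URLs are elided, the WARC filename is a placeholder, and the base "--wait" interval is an assumed value.

    # url_list: one target URL per line (the individual question pages
    # and the profile page), e.g.
    # http://answers.yahoo.com/question/index?qid=...

    # -e robots=off         disregard the robots.txt exclusion standard
    # --wait/--random-wait  randomize the pause between successive requests
    # -p                    fetch each page's requisites (images, CSS, JavaScript)
    # -H                    allow requisites hosted on other domains (e.g., l.yimg.com)
    # --warc-file           also write the crawl to legendary_account.warc.gz
    wget --input-file=url_list --warc-file=legendary_account \
         -e robots=off --wait=1 --random-wait -p -H

Note that "--random-wait" scales the base interval given by "--wait," so some base interval must be supplied for it to have an effect; one second is used here purely for illustration.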