Meeting the Challenges of Preserving the UK Web

Helen Hockx-Yu
British Library
96 Euston Road, London NW1 2DB, United Kingdom
[email protected]

Copyright Helen Hockx-Yu (2015), licensed under a Creative Commons 4.0 Attribution International Licence.

ABSTRACT

Collecting and providing continued access to the UK's digital heritage is a core purpose for the British Library. An important element of this is the World Wide Web. The British Library started web archiving in 2004, building from scratch the capability of eventually preserving the entire UK web domain. This is required by the non-print Legal Deposit Regulations which came into force in April 2013, charging the Legal Deposit Libraries with capturing, among a wide range of digital publications, the contents of every site carrying the .uk suffix (and more), preserving the material and making it accessible in the Legal Deposit Libraries' reading rooms.

The paper provides an overview of the key challenges related to archiving the UK web, and the approaches the British Library has taken to meet these challenges. Specific attention is given to issues such as the "right to be forgotten" and the treatment of social networks. The paper also discusses the access and scholarly use of web archives, using the Big UK Domain Data for the Arts and Humanities project as an example.

Keywords

Web Archiving, Non-print Legal Deposit, Digital Preservation, Big Data, Scholarly Use, Digital Humanities.

1. INTRODUCTION

Web archiving was initiated by the Internet Archive in the mid-1990s, followed by memory institutions including national libraries and archives around the world. Web archiving has now become a mainstream digital heritage activity. Many countries have expanded their existing mandatory deposit schemes to include digital publications and passed regulations enabling systematic collection of the national web domain. A recent survey identified 68 web archiving initiatives and estimated that 534 billion files (measuring 17PB) had been archived since 1996. [2]

The British Library started archiving UK websites in 2004, based on the consent of site owners. This resulted in the Open UK Web Archive,¹ a curated collection currently consisting of over 70,000 point-in-time snapshots of nearly 16,000 selected websites, archived by the British Library and partners.²

Non-Print Legal Deposit (NPLD) Regulations became effective in the UK in April 2013, applying to all digitally published and online work. NPLD is a joint responsibility of publishers and the Legal Deposit Libraries (LDLs). An important requirement is that access to NPLD content is restricted to premises controlled by the LDLs.

The British Library leads the implementation of NPLD for the UK web. While the many existing web archiving challenges described in detail by Hockx-Yu [7] remain valid, the significant increase in scale, from archiving hundreds of websites to millions, has brought about new and additional challenges. The key ones are discussed in this paper.

2. IMPLEMENTING NON-PRINT LEGAL DEPOSIT

NPLD of UK websites is mainly implemented through periodic crawling of the openly available UK web domain, following an automated harvesting process in which web crawlers request resources from web servers hosting in-scope content.

2.1 Collecting Strategy

With over 10 million registered domain names, .uk is one of the largest Top Level Domains (TLDs) in the world. A strategy to archive such a large web space requires striking a balance between comprehensive snapshots of the entire domain and adequate coverage of changes to important resources. Figure 1 outlines our current strategy, which is a mixed model allowing an annual crawl of the UK web in its entirety, augmented by prioritisation of the parts which are deemed important and receive greater curatorial attention.
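The mixed collecting model described above can be thought of as a tiered crawl schedule: one comprehensive annual pass over the whole domain, with more frequent visits to curated collections. A minimal sketch of such a scheduler follows; the tier names, intervals, seed URLs and function names are invented for illustration and do not represent the Library's actual configuration.

```python
from datetime import date, timedelta

# Illustrative crawl tiers loosely following the mixed model: one
# comprehensive annual domain crawl plus more frequent visits for
# curated collections. The intervals are assumptions for this sketch.
CRAWL_INTERVALS = {
    "domain": timedelta(days=365),    # annual .uk domain crawl
    "key_sites": timedelta(days=90),  # sites of general and enduring interest
    "news": timedelta(days=7),        # frequently updated news websites
    "events": timedelta(days=1),      # short-lived, event-driven collections
}

def seeds_due(seeds, today):
    """Return URLs whose tier interval has elapsed since the last crawl.

    `seeds` is a list of (url, tier, last_crawled) tuples.
    """
    due = []
    for url, tier, last_crawled in seeds:
        if today - last_crawled >= CRAWL_INTERVALS[tier]:
            due.append(url)
    return due

seeds = [
    ("http://example.co.uk/", "domain", date(2014, 6, 1)),
    ("http://news.example.co.uk/", "news", date(2015, 6, 20)),
    ("http://election.example.uk/", "events", date(2015, 6, 24)),
]
print(seeds_due(seeds, date(2015, 6, 25)))
# → ['http://example.co.uk/', 'http://election.example.uk/']
```

In a production crawler the schedule would of course be driven by curatorial decisions and harvester configuration rather than a fixed table, but the tiering logic is the same.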
When citing this paper please use the following: Hockx-Yu, H., "Meeting the Challenges of Preserving the UK Web", Digital Preservation for the Arts, Social Sciences and Humanities, Dublin, 25-26 June 2015. Dublin: The Digital Repository of Ireland.

¹ Open UK Web Archive, http://www.webarchive.org.uk.

² Another key collection is the UK Government Web Archive provided by The National Archives, containing government records on the web, http://www.nationalarchives.gov.uk/webarchive.

Figure 1. UK Web Archive collecting strategy

The domain crawl is intended to capture the UK domain as comprehensively as possible, providing the "big picture". The key sites represent UK organisations and individuals of general and enduring interest in a particular sector of the life of the UK and its constituent nations. News websites contain news published frequently on the web by journalistic organisations. The events-based collections are intended to capture political, cultural, social and economic events of national interest, as reflected on the web.

The key sites, news sites and events collections are maintained by curators across the LDLs and governed by a sub-group overseeing web archiving for NPLD. These are typically captured more than once a year.

2.2 UK Territoriality

The Regulations define an on line work as in scope if:

a) it is made available to the public from a website with a domain name which relates to the United Kingdom or to a place within the United Kingdom; or

b) it is made available to the public by a person and any of that person's activities relating to the creation or the publication of the work take place within the United Kingdom. [13]

Part a) is interpreted as including all .uk websites, plus websites in future geographic top level domains that relate to the UK such as .scotland, .wales or .london. This part of the UK territoriality criteria can be implemented using automated methods, by assembling various lists or directories, or through discovery crawls, which identify linked resources from an initial list and extract additional URLs recursively.

Part b) concerns websites using non-.uk domains. It is a statement about the location of the publisher or the publishing process, without defining explicitly what "takes place within the United Kingdom" constitutes. We use a mixture of automated and manual means to discover content relevant to this category. Manual checks include UK postal addresses, private communication, WHOIS records and professional judgement. A crawl-time process has also been developed to check non-.uk URLs against an external Geo-IP database and add UK-hosted content to the fetch chain. This helped us identify over 2 million non-.uk hosts during our 2014 domain crawl.

At a more detailed level, crawler configurations also determine the scope and boundary of a national web archive collection. Some key parameters of our current implementation are as follows:

Data volume limitation³ ⁴: A default per-host data cap of 512MB or 20 hops is applied to our domain crawls, with the exception of a few selected hosts. As soon as one of the pre-configured caps has been reached, the crawl of a given host terminates automatically.

Robots.txt policy: We obey robots.txt and META exclusions, except for home pages and content required to render a page (e.g. JavaScript, CSS).

Embedded resources: Resources which are essential to the coherent interpretation of a web page (e.g. JavaScript, CSS) are considered in-scope and collected, regardless of where they are hosted.

³ This is a common way to manage large-scale crawls, which otherwise could require significant machine resources or time to complete.

⁴ Each page below the level of the seed, i.e. the starting point of a crawl, is considered a hop.

3. "RIGHT TO BE FORGOTTEN" [6]

The "right to be forgotten" relates to the European Court of Justice (ECJ)'s ruling against Google, which was required to remove the index of, and thereby access to, a 16-year-old newspaper article concerning an individual's proceedings over social security debts. [10]

"Right to be forgotten" reflects the principle of an individual being able to remove traces of past events in life from the Internet or other records. When considering this, it is important not to lose sight of the purpose of NPLD. By keeping a historical record of the UK web for heritage purposes, it ensures the "right to be remembered". Websites archived for NPLD are only accessible within the LDLs' reading rooms and the content of the archive is not available to search engines. This significantly reduces the potential damage and impact to individuals, and the libraries' exposure to take-down requests.

There is at present no formal and general "right to be forgotten" in UK law by which a person may demand withdrawal of the lawfully archived copy of lawfully published material, just because they do not wish it to be available any longer. We apply the Data Protection Act 1998 when withdrawing material containing sensitive personal data from the NPLD collection. A notice and takedown policy is in place allowing withdrawal of public access or removal of deposited material under specific circumstances. [12] "Evidence of damage and distress to such individuals" is a key criterion used to review complaints.

4. ARCHIVING SOCIAL MEDIA [5]

A sampling approach was taken to archiving social media content prior to NPLD. The Open UK Web Archive contains a limited number of pages from Twitter, Facebook and YouTube. These are typically parts of "special collections", groups of websites about a particular theme or an event, usually archived for a fixed period of time. An example is the special collection on the UK General Election 2010, which includes Twitter pages belonging to the Prospective Parliamentary Candidates (PPCs). The decision not to systematically archive social media related to the selective nature of the archive itself, the difficulty in obtaining permissions, and resource constraints – even the exemplar content required highly skilled technical staff to develop customised solutions outside the standard workflow.
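Looking back at Section 2, the territoriality and crawl-parameter rules (.uk scope, a Geo-IP fallback for non-.uk hosts, and the per-host data and hop caps) can be sketched in a few lines. This is a simplified illustration only: the Geo-IP lookup is stubbed out, and all function and variable names are invented rather than taken from the Library's actual crawler configuration.

```python
from urllib.parse import urlparse

DATA_CAP_BYTES = 512 * 1024 * 1024  # default per-host cap in the domain crawl
HOP_LIMIT = 20                      # pages below the seed count as hops

# .uk plus the future geographic TLDs mentioned in Section 2.2.
UK_SUFFIXES = (".uk", ".scotland", ".wales", ".london")

def geoip_in_uk(host):
    # Placeholder for the external Geo-IP database consulted at crawl
    # time for non-.uk hosts; always False in this sketch.
    return False

def in_scope(url):
    """Part a) by domain name; non-.uk hosts fall back to Geo-IP."""
    host = urlparse(url).hostname or ""
    if host.endswith(UK_SUFFIXES):
        return True
    return geoip_in_uk(host)

def host_exhausted(bytes_fetched, hops):
    """True once either pre-configured per-host cap has been reached."""
    return bytes_fetched >= DATA_CAP_BYTES or hops > HOP_LIMIT

print(in_scope("http://www.example.ac.uk/page"))  # True: .uk host
print(in_scope("http://www.example.com/"))        # False unless Geo-IP says UK
print(host_exhausted(600 * 1024 * 1024, 3))       # True: data cap exceeded
```

A real implementation (e.g. in a Heritrix-style crawler) would express these as configurable decide rules rather than hard-coded functions, but the decision logic is the same.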