
Link Rot, Reference Rot, and Link Resolvers

Justin M. White

From the earliest days of the web, users have been aware of the fickleness of linking to content. In some ways, 1998 was a simpler time for the Internet. In other ways, like basic website design principles, everything old is new again. Jakob Nielsen, writing "Fighting Linkrot" in 1998, reported on a then-recent survey suggesting that 6% of links on the web were broken. The advice then hasn't changed: run a link validator on your site regularly, update or remove broken links, and set up redirects for links that do change. Nielsen's mantra was "you are not allowed to break any old links."1

Several years later, partly in response to Nielsen, John S. Rhodes wrote a very interesting piece called "Web Sites That Heal." Rhodes was interested in the causes of link rot and listed several technological and habitual causes. These included the growing use of Content Management Systems (CMSs), which relied on back-end databases and server-side scripting that generated unreliable URLs, and the growing complexity of websites, which was leading to sloppy information architecture. On the behavioral side, website owners were satisfied to tell their users to "update their bookmarks," websites were not tested for usability, content was seen as temporary, and many website owners were simply apathetic about link rot. Rhodes also noted the issue of government censorship and filtering, though he did not foresee the major way in which government would obfuscate old web pages, which will be discussed below. Rhodes made a pitch for a web server tool that would rely on the Semantic Web and allow websites to talk to each other automatically to resolve broken links on their own.2 Although that approach hasn't taken off, other solutions to the problem of link rot are gaining traction.

What is link rot?

Link rot is the process by which hyperlinks stop pointing to the most current or available version of a web page. This isn't the only problem facing users, however: content on web pages isn't static. As authors and editors update and edit web pages over time, the original URL may stay the same while the page at that URL comes to be about something very different. This evolution of a page's function has been termed "content drift," and when combined with link rot, it creates "reference rot." In general, when linking to a resource as a reference, the author faces a twofold problem: the link may no longer work, and even if it does, the material being referenced may no longer exist in the same context.3 Luckily, the technological solution for reference rot is already at hand.
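As an illustration only (not drawn from Nielsen's article), the periodic link-validation pass he recommended, and that most link checkers still perform, might be sketched as a short Python script. It assumes Python 3 with the third-party requests library; the list of links and the report format are hypothetical.

# Minimal link-validation sketch; illustrative only, not any particular tool.
# Assumes Python 3 with the third-party "requests" library installed.
import requests

# Hypothetical list of outbound links harvested from a site or guide.
links = [
    "https://example.org/old-report.html",
    "https://example.com/moved-page",
]

for url in links:
    try:
        # HEAD keeps the check lightweight; following redirects surfaces moved pages.
        resp = requests.head(url, allow_redirects=True, timeout=10)
        if resp.status_code >= 400:
            print(f"BROKEN ({resp.status_code}): {url}")
        elif resp.url != url:
            # A redirect means the link still works but should be updated at the source.
            print(f"REDIRECTED: {url} -> {resp.url}")
    except requests.RequestException as exc:
        print(f"UNREACHABLE: {url} ({exc})")

A scheduled run of a script like this, followed by manual cleanup of whatever it flags, is essentially the "scan for issues, change broken links" routine discussed below.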
Rather than relying on link checkers, website owners can use link resolvers and decentralized web preservation through software like the Amber project (http://amberlink.org). The Amber project, out of the Berkman Klein Center for Internet and Society, works from a simple enough premise. When a web page is published, the software goes through it, takes a snapshot of each linked page, and saves it locally or to a centralized web archiving platform such as the Internet Archive or Perma.cc. When it detects that a link is broken or misbehaving, Amber suggests the archived version to the user. Amber emphasizes decentralized web archiving as a philosophical commitment: responsibility for preserving the web should not rest with only a few organizations. Some link checkers have also begun to integrate web archiving into their workflows, but they still tend to operate in the same "scan for issues, change broken links" paradigm that Nielsen described back in 1998.

The scope of the problem is extremely wide. The average lifespan of a URL is 44 days, according to the Internet Archive's Brewster Kahle.4 This number is hard to estimate and will vary widely depending on who is asked and the context in which the links exist. As it stands, there is too much dependence on platforms that have no mandate to do the work of preservation. Consider the third-party vendors that libraries and their institutions rely on for data management. Where will they be in five years? And if they find data to be objectionable in some way, what is to stop them from deleting it? This is part of the problem the Amber project responded to by creating independent snapshots of web pages rather than relying solely on the Internet Archive and Perma.cc. Clifford Lynch gave a speech in 2016 about the shift from print news media to broadcast and web news, a shift in which the preservation systems previously put in place began to break down. Now that news organizations rely on links to underlying evidence rather than extensive summaries, their context depends on information they do not control.5 It is easy to imagine a situation in which a website owner realizes their work has been linked to in a way they disapprove of and changes the context as a "response" to the linked work.

Link and reference rot have a particular history in scholarly communication. In 2014, a group of researchers found that one in five articles suffers from reference rot.6 A 2016 study found that three out of four URI references led to content that had changed since the citing article referenced it, raising the possibility of malicious changes intended to undermine a citation (particularly in legal decisions). Most preservation effort is concerned with the long-term preservation of journal articles themselves, not the content referenced in them.7 Much as in the news world, there is a reliance on the publishers of data to preserve information, rather than on libraries or other "faithful guardians of immortal works."8 The larger trend of citing web sources means that scholarly communication will have to shift its priorities toward broader web preservation.9 In 2017, the Coalition for Networked Information's Clifford Lynch gave an opening plenary on "Resilience and Engagement in an Era of Uncertainty." Lynch emphasized that the crisis was not in the preservation of scholarly literature itself, but of the information that scholars will use in the future.
Lynch also covered difficulties in our current preservation assumptions and questioned whether the government is a reliable steward of research data.10

Legal scholars have been particularly prominent in the discussion of reference rot, particularly as it affects the citations in legal decisions. The most prominent paper is that by Zittrain, Albert, and Lessig in 2014, but a year before their landmark paper, Raizel Liebler and June Liebert had surveyed the lifespan of web links in United States Supreme Court decisions from 1996 to 2010. They found that 29% of websites cited in these decisions were no longer working, with no discernible pattern of which links were most likely to rot.11 Zittrain, Albert, and Lessig looked at the legal implications of link rot and found reasons for alarm. They determined that approximately 50% of the URLs in Supreme Court opinions no longer linked to the original information, and that a selection of articles in legal journals, including the Harvard Law Review, published between 1999 and 2011 had a link rot rate of 70%. The authors of the 2014 study suggest that libraries be involved in the publishing process and take on the "distributed, long-term preservation of link contents" [emphasis added].12

How Link Rot and Web Archiving Apply to Libraries

Apart from the calls above for libraries to take part in preserving scholarly and legal literature, what does link rot mean for libraries? Many libraries are not involved in academic or legal publishing but rely extensively on web resources for their users. From an educational standpoint, a 2003 study found that link rot seriously limited the usefulness of web-based educational materials in biochemistry and molecular biology.13 It is not much of a stretch to imagine that the issue extends beyond the biological sciences.

The Chesapeake Project is a collaborative digital preservation initiative undertaken to preserve legal references. In exploring the materials preserved by the Project, Sarah Rhodes measured rates of link rot over a three-year period. Rhodes found that links most libraries would consider stable (government and state websites) degraded at an increasing rate over time.14 With the Whitehouse.gov reset at the beginning of the Trump administration, approximately 1,935 links on Wikipedia broke at the flip of a switch, as detailed in The Outline.15 Librarians who maintain LibGuides and pathfinders to government information know the value of link checkers to their guides, as any government shakeup can mean that many of their resources now live somewhere else, even within another government agency, with no overriding link routing system.
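As a closing illustration, the archive-and-fall-back idea described above for Amber can be sketched in code. The example below is hypothetical, not Amber's implementation or any particular library's workflow: it assumes Python 3 with the third-party requests library, it uses the Internet Archive's public Wayback Machine availability endpoint (https://archive.org/wayback/available) to suggest an archived copy when a guide link appears broken, and the list of guide links is invented.

# Hypothetical sketch of an archive-aware link check for a resource guide.
# Not Amber's code; it only illustrates falling back to an archived copy.
# Assumes Python 3 with the third-party "requests" library installed.
import requests

WAYBACK_API = "https://archive.org/wayback/available"  # public snapshot lookup

def archived_copy(url):
    """Ask the Wayback Machine for the closest archived snapshot of a URL."""
    resp = requests.get(WAYBACK_API, params={"url": url}, timeout=10)
    snapshot = resp.json().get("archived_snapshots", {}).get("closest")
    return snapshot["url"] if snapshot else None

def check_guide_links(urls):
    """Report broken links and suggest archived versions where available."""
    for url in urls:
        try:
            live = requests.head(url, allow_redirects=True, timeout=10)
            broken = live.status_code >= 400
        except requests.RequestException:
            broken = True
        if broken:
            fallback = archived_copy(url)
            if fallback:
                print(f"{url} is down; suggest archived copy: {fallback}")
            else:
                print(f"{url} is down and no archived copy was found")

# Example run with an invented list of guide links.
check_guide_links(["https://example.gov/report-2016"])

Amber automates a version of this pattern: it takes its snapshots when a page is published, suggests the archived version when it detects that a link is broken, and, in keeping with its emphasis on decentralization, can store those snapshots itself rather than depending entirely on a central archive.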