Surveying the Landscape: Use and Usability Assessment of Digital

Libraries

Prepared by the Digital Library Federation Assessment Interest Group User Studies Working Group

December 2015

Acknowledgements:

White Paper Writing
Joyce Chapman, Duke University
Elizabeth Joan Kelly, Loyola University New Orleans
Liz Woolcott, Utah State University (Leader)
Tao Zhang, Purdue University

White Paper Review
Megan Hurst, Athenaeum21 Consulting
Caroline Muglia, University of Southern California
Genya O’Gara, Virtual Library of Virginia
Ayla Stein, University of Illinois at Urbana-Champaign
Santi Thompson, University of Houston (Leader)

Tag Analysis & Content Synthesis
Elizabeth Joan Kelly, Loyola University New Orleans
Martha Kyrillidou, Association of Research Libraries
Caroline Muglia, University of Southern California
Genya O’Gara, Virtual Library of Virginia
Ayla Stein, University of Illinois at Urbana-Champaign
Santi Thompson, University of Houston (Leader)
Liz Woolcott, Utah State University

Literature Review
Joyce Chapman, Duke University
Jody DeRidder, University of Alabama
Elizabeth Joan Kelly, Loyola University New Orleans
Martha Kyrillidou, Association of Research Libraries
Caroline Muglia, University of Southern California
Genya O’Gara, Virtual Library of Virginia
Ayla Stein, University of Illinois at Urbana-Champaign
Santi Thompson, University of Houston (Leader)
Rachel Trent, State Library of North Carolina
Liz Woolcott, Utah State University

Bibliography Development
Elizabeth Joan Kelly, Loyola University New Orleans
Santi Thompson, University of Houston (Leader)
Rachel Trent, George Washington University
Liz Woolcott, Utah State University
Tao Zhang, Purdue University

Table of Contents

Executive Summary
Introduction
    Background
    Goal and Scope of the Project
Methodology
    Bibliography Subgroup
    Literature Review Subgroup
        Tagging Process
        Generating Reports
        Data Analysis
    White Paper Subgroup
Results
    User and Usability Studies
        Question 1: What research strengths exist in this area of research?
        Question 2: What gaps exist in this area of research?
        Question 3: What are the next steps?
    Return On Investment (ROI)
        Question 1: What strengths exist in this area of research?
        Question 2: What gaps exist in this area of research?
        Question 3: What are the next steps?
    Content Reuse
        Question 1: What strengths exist in this area of research?
        Question 2: What gaps exist in this area of research?
        Question 3: What are the next steps?
Conclusion
    Table 1: Summary of strengths, research gaps, and next steps
Bibliography
Appendices
    Appendix A: Research Bibliography
    Appendix B: Tagging Articles Spreadsheet
    Appendix C: Tagging Dictionary
    Appendix D: Tagging Results
    Appendix E: User and Usability Studies Summary Grid


Executive Summary

Despite an increased focus on providing online access to digitized collections, archival materials, and research outputs, there are no standard methods for assessing digital libraries in terms of user and usability studies, return on investment, and content reuse. Over the past year, the Digital Library Federation Assessment Interest Group (DLF AIG) has initiated an effort to bridge this gap by engaging the community in the development of best practices for different areas of digital library assessment.

This white paper analyzes the current landscape of use and usability studies conducted for digital libraries, focusing on three core areas identified at the 2014 Digital Library Federation forum:

● Making usability studies more accessible to librarians (“usability studies”)
● Tracking the return on investment for digital libraries (“return on investment”)
● Understanding the reuse of digital library materials (“content reuse”)

This paper identifies the strengths and gaps in the current literature in each of these three areas and makes recommendations for next steps toward the development of best practices.

This paper was authored by the DLF AIG User Studies Working Group, one of four DLF AIG working groups formed following the 2014 DLF forum to develop best practices centered around assessment of digital libraries. The other working groups are focused on analytics, cost, and citations.1

Methodology

The User Studies Working Group began its research into best practices by surveying current user and usability literature related to digital libraries. The group compiled a bibliography of literature published from January 2010 to December 2014 focused on the three areas: Usability Studies, Return on Investment, and Content Reuse as related to digital libraries.2 A comprehensive literature review was conducted, which included developing standardized terms to “tag” each article, summarizing the works identified, and generating reports using the different tags. The reviewers also briefly summarized answers to the following questions for each of the three focus areas:

1 The DLF Assessment Interest Group wiki can be found at http://wiki.diglib.org/Assessment. As of the DLF 2015 annual meeting, the citations and analytics working groups have also produced white papers, and the cost assessment working group has defined digitization processes for data collection and created a digitization cost calculator using data contributed by the community. See each group’s individual wiki page for links and details.

2 The group’s analysis included one research article from 2015: Harriet E. Green and Angela Courtney, “Beyond the Scanned Image: A Needs Assessment of Scholarly Users of Digital Collections,” College and Research Libraries (2015). Retrieved from http://crl.acrl.org/gca?allch=citmgr&submit=Go&gca=crl;crl14-612v1.

● What are the research strengths in each area of focus?
● What gaps exist in the research?
● What are potential next steps for research and development?

The summaries, notes, and analysis generated during this process were compiled into this white paper, which provides a comprehensive review and analysis of current research, its critical gaps and strengths, and possible future actions for the digital library community. Presentation of results, an invitation for public comment, and a collaborative discussion of next steps occurred at the October 2015 DLF forum. Community feedback was incorporated into the final version of the white paper.

User and Usability Studies

In the area of User and Usability Studies, an identified strength within the digital library community is its wide acceptance of user-centered design and assessment approaches, which include a strong focus on developing design strategies through a better understanding of user search behaviors. Beyond the need to make usability studies more accessible to librarians (as established at the 2014 Digital Library Federation Forum), critical gaps identified in the methodology and data analysis include: a lack of behavioral observation and examination of users' task context (e.g., known-item search vs. exploratory search, familiarity with the search topic, individual research vs. group work); an over-reliance on standard testing tasks and user feedback rather than behavioral evidence; and a lack of studies on user interactions with digital libraries and institutional repositories. The group recommends that future user studies further examine users' research needs within digital libraries and determine how to actively engage users in the system development and content contribution process, particularly for institutional repositories. Cross-institutional collaboration to normalize practices and share study findings is also essential to establishing best practices and consistent methodologies.

Return on Investment (ROI)

The working group found only eleven scholarly articles published since 2010 on the subject of ROI assessment of digital libraries, indicating a need for greater exploration of this topic. Strengths of the available research include measurements of time and cost for processes associated with digital libraries (such as digitization or image capture equipment), and the theoretical application of ROI to the landscape of library project management. The group identified several research gaps, including a lack of data on the benefit side of cost/benefit analysis, a limited corpus of cost data, no standardized methodology for implementing cost/benefit analyses for online resources, and a lack of measurable indicators for the economic sustainability of digital libraries. The group recommends that more ROI studies -- especially those related to methodology and evidence-based results -- be undertaken, that more data on both cost and benefit be collected, and that measurement tools be developed.

Content Reuse

In the area of Content Reuse, there is a greater understanding of user behavior and requirements for digital repositories because the literature on how specific disciplines -- particularly the humanities and the arts -- reuse digital library materials is relatively abundant. In particular, the reuse of digital images has received much attention, and there is a significant enough foundation of research to begin developing best practices and methodologies. Another strength in the area of Content Reuse is the research on the mapping of the digitized resource life-cycle. Because there has been significant analysis in this area, the measurement of use, value, and impact is already possible. One of the most prominent gaps in the literature involves isolating and understanding patterns of reuse. While tracking reuse through hyperlinks is a relative strength, there is a dearth of studies examining other mechanisms for tracking reuse of digital objects (such as using embedded identifiers), exacerbated by a lack of consistent citation metrics for digital cultural heritage objects. A gap also exists in the use of web log analysis as a method for predicting user needs. Without the ability to track individual objects via persistency in URLs and identifiers, such analysis may lead to biased perceptions of needs. Additionally, these methods do not allow for tracking non-digital reuse of once-digital material (e.g., physical exhibits, publications). Obstacles also exist for researchers attempting to analyze digital object reuse for materials residing on un-indexed platforms. Next steps in the study of digital library content reuse should focus on less explored topics, including the identification and understanding of user groups, particularly in the science and social science fields; more research on measuring how the functionality of a digital repository interface impacts reuse; the development of a reuse assessment framework; and more evidence-based practice in the area of tracking individual objects in web log analysis.


Introduction

Discussions at the 2014 Digital Library Federation Forum identified three core areas that the profession should focus on to better understand the current landscape of user and usability studies:

● Making usability studies more accessible to librarians (“usability studies”)
● Tracking the return on investment (ROI) for digital libraries (“return on investment”)
● Developing a better understanding of the reuse of digital library materials (“content reuse”).

This paper responds directly to the DLF call by identifying the strengths and gaps in the current literature in each of these three areas and making further recommendations for the development of best practices.

This paper was authored by the DLF AIG User Studies working group, one of four DLF AIG groups formed following the 2014 DLF forum. These groups were tasked with developing best practices centered on the assessment of digital libraries. The other three working groups focused their attention on analytics, cost, and citations.3

A digital library may be defined in a variety of ways based on specific factors, such as economic, audience, or institutional considerations. For example, an economic approach may distinguish between subscription, purchased, donated, or locally created digital resources. The audience of a digital library may vary between undergraduate students, faculty/researchers, or disciplinary users. The materials made accessible by digital libraries may vary based on an institution’s needs. These repositories could contain digitized special collections, institutional content, born-digital materials, data sets, or specialized databases. Today the term “digital library” is used to describe radically different services. Google Books, HathiTrust, WorldCat, and the Digital Public Library of America (DPLA) can all be described as “digital libraries,” as can aggregated journal and ebook packages such as Elsevier’s ScienceDirect, or even library applications such as Overdrive that allow users to access local library holdings. Hybrid collections (those such as ARTstor’s Shared Shelf, which allows for the management and discovery of locally created collections alongside licensed content) may also be considered digital libraries.

Given the complexity of this landscape, for the purposes of this investigation, the DLF AIG has adopted Matusiak’s definition of a digital library as “collections of digitized or digitally born items that are stored, managed, serviced, and preserved by libraries or cultural heritage institutions, excluding the digital content purchased from publishers.”4

3 The DLF Assessment Interest Group wiki can be found at http://wiki.diglib.org/Assessment. As of the DLF 2015 annual meeting, the citations and analytics working groups have also produced white papers, and the cost assessment working group has defined digitization processes for data collection and created a digitization cost calculator using data contributed by the community. See each group’s individual wiki page for links and details.

This definition breaks down in situations where the library itself is the publisher and sells or licenses content to a for-profit publisher. It may also be difficult to apply when investigating “hybrid” collections, or those digital libraries that host purchased content alongside non-purchased content on one platform for users. However, for the purpose of this investigation, as long as the content is stored, managed, serviced, and preserved by libraries or cultural heritage institutions, we will consider the material within scope. Within the constraints of this investigation's definition, this means that the term “digital library” often refers to archival or special collections content, institutional and disciplinary repository content, and their associated websites.

Background

The goals of the DLF AIG are to develop documentation, tools, and suggested best practices for evaluating various assessment methods for digital libraries. These goals are intended not only to assist institutions that are less familiar with assessment, but also to provide baselines that can be used cross-institutionally, thus promoting the collection of interoperable metrics for comparative purposes. In its first year, the DLF AIG has formed working groups in four areas:

● User Studies
● Costs
● Analytics
● Citations

These groups are not meant to cover the breadth of all possible digital library assessment; they were merely the areas in which the most community interest was present.

The DLF AIG was born5 at the 2013 DLF Forum6 when a working session called “Hunting for Best Practices in Library Assessment”7 garnered over 50 volunteers to continue the discussion after the conference, and a second working session on altmetrics8 was also met with huge interest. A Digital Library Assessment Google Group9 was established to provide a space for practitioners to discuss assessment efforts. At the following 2014 DLF Forum, an assessment panel10 discussed a new NISO initiative to develop standards for altmetrics,11 a new web-based cost estimation tool for digitization,12 and both qualitative13 and quantitative14 results from digital library user studies. The panel, like the working session the year before, was followed by a lively discussion about how to further the development of best practices for digital library assessment. Again, many community members volunteered to continue the discussion, and so the four existing working groups formed in November 2014. DLF established a wiki site15 that is used by the AIG working groups to document resources and best practices as they are developed; the initial goal is to make significant progress to report at the 2015 DLF Forum.16

4 Krystyna K. Matusiak, “Perceptions of usability and usefulness of digital libraries,” Journal of Humanities & Arts Computing: A Journal of Digital Humanities 6(1/2) (2012): 133-147, doi:10.3366/ijhac.2012.0044.

5 Joyce Chapman, “Introducing the New DLF Assessment Interest Group,” Digital Library Federation Blog, May 12, 2014, http://www.diglib.org/archives/5901/.

6 “2013 DLF Forum: Austin, Texas,” Digital Library Federation, http://www.diglib.org/forums/2013forum/.

7 Jody DeRidder et al., “Hunting for Best Practices in Library Assessment” (presentation, Digital Library Federation Forum, Austin, TX, November 4, 2013). http://www.diglib.org/forums/2013forum/schedule/30-2/.

Coordinated by Santi Thompson, the User Studies Working Group’s first aim was to produce this white paper detailing the current state of research by identifying strengths, gaps, and next steps. The group’s work began during the assessment breakout session at the 2014 DLF Forum, where three core areas of focus were identified: making usability studies more accessible to staff, tracking the return on investment, and understanding the reuse of materials.

8 “Hunting for Best Practices in Library Assessment” (collaborative Google document, Digital Library Federation Forum, Austin, TX, November 4, 2013). http://bit.ly/dlf-assessment.

9 David Scherer, Stacy Konkiel, and Michelle Dalmau, “Determining Assessment Strategies for Digital Libraries and Institutional Repositories Using Usage Statistics and Altmetrics” (presentation, Digital Library Federation Forum, Austin, TX, November 5, 2013). http://www.diglib.org/forums/2013forum/schedule/21-2/.

10 Jody DeRidder et al., “Moving Forward with Digital Library Assessment” (presentation, Digital Library Federation Forum, Atlanta, GA, October 29, 2014). http://www.diglib.org/forums/2014forum/program/32z/.

11 National Information Standards Organization, “NISO Alternative Metrics (Altmetrics) Initiative,” http://www.niso.org/topics/tl/altmetrics_initiative/.

12 Joyce Chapman, “Library Digitization Cost Calculator,” 2014, http://statelibrarync.org/plstats/digitization_calculator.php.

13 Jody DeRidder, “Did We Get the Cart Before the Horse? (Faculty Researcher Feedback)” (Digital Library Federation Forum, Atlanta, GA, October 29, 2014). http://www.diglib.org/wp-content/uploads/2014/11/DeRidder.pdf.

14 Ho Jung Yoo and SuHui Ho, “Do-It-Yourself Usability Testing: A Case Study from the UC San Diego Digital Collections” (Digital Library Federation Forum, Atlanta, GA, October 29, 2014). http://www.diglib.org/wp-content/uploads/2014/11/Yoo_DLF2014_UsabilityTesting.pdf.

15 http://wiki.diglib.org/Assessment

16 “2015 DLF Forum: Vancouver,” Digital Library Federation, http://www.diglib.org/forums/2015forum/.


Goal and Scope of the Project

Articulating the value of digital libraries -- from their impact on the development of scholarship and popular culture to their day-to-day costs -- remains a challenge for information professionals. To expand our understanding of how to measure and present the impact of a digital library, the working group analyzed the existing literature on the topic in order to identify themes, strengths, and weaknesses in how digital libraries are assessing value. To this end, the User Studies Working Group developed a plan to review the current literature (published from 2010 to 2014) by compiling and annotating a bibliography of resources and then analyzing the content of the scholarship.

Three specific areas of digital library assessment were established to frame the scholarship analysis:

● User and Usability Studies: The resources in this section investigate users’ expectations, information needs and preferences; digital library usability; and interface design issues of various digital library applications, including library websites, discovery tools, digital repositories, e-books, and other electronic resources.
● Return on Investment (ROI): The resources in this section investigate the impact of digital libraries, how to determine the cost and benefit of digital library-related activities, and how to measure the quality of digital library content.
● Content Reuse: The resources in this section investigate how users reuse the content they access in digital repositories. Scholarship assesses the impact of digitized and digitally born materials on humanities and sciences research, as well as "non-scholarly" use of digital repository content.

Within these three areas, the group approached the literature by asking three questions:

1. What research strengths exist in the areas of User and Usability Studies, ROI, and Content Reuse assessment in digital libraries?
2. What gaps exist in these areas of focus?
3. What are possible next steps for the community to address?

Through this work, the group reaffirms the popular notion that the digital library community needs to conduct substantial research in all phases of assessing digital libraries. This white paper responds to the three questions posed above, synthesizes the results, and develops recommendations for future research in the areas of User and Usability Studies, ROI, and Content Reuse assessment.


Methodology

The User Studies Working Group split into subgroups in order to complete the work needed to create this white paper. Members of the working group volunteered to serve on the Bibliography, Literature Review, and White Paper subgroups. The following sections are explanations of each subgroup’s work scope.

Bibliography Subgroup

The Bibliography subgroup conducted a comprehensive literature search across multiple databases covering research topics on digital libraries, discovery layers, library websites, institutional and research repositories, user studies, usability testing, and interface design. Three topical groups were identified as potential areas of interest: user and usability studies, return on investment, and content reuse.

To ensure the timeliness of publications for further review, articles included in the bibliography were primarily published between 2010 and 2014, with two exceptions: the group included a small number of articles published before 2010 because of their seminal results or theoretical contributions; additionally, it included one article17 that was published in 2015 but made available in preprint form in 2014. Only published articles and conference proceedings are included -- the bibliography does not explore conference presentations or other unpublished works.

Literature Review Subgroup

The Literature Review subgroup used this bibliography to develop a literature review, devising a hybrid approach to the task. The group implemented an article tagging process to identify themes in the works, and authored a Perl script to generate a series of reports from the data accumulated during the tagging process. The group drew upon the results to answer the three questions regarding strengths, gaps, and next steps.

Tagging Process

The Literature Review subgroup decided that identifying important themes in the literature would assist in understanding broad trends and gaps around assessing digital library use. To do this, the subgroup created a dictionary of tags18 that were applied to each article reviewed. Some terms had been previously identified as “subcategories” by the Bibliography subgroup; others were added as their prevalence in the literature became obvious. The dictionary included terms and definitions for 24 concepts, such as “analytics,” “measuring impact and value,” and “user perception.” Additionally, the dictionary included terms and definitions for the type of article being reviewed. Examples included “research study/article,” “literature review,” and “case study.”

17 The group’s analysis included one research article from 2015: Harriet E. Green and Angela Courtney, “Beyond the Scanned Image: A Needs Assessment of Scholarly Users of Digital Collections,” College and Research Libraries (2015). Retrieved from http://crl.acrl.org/gca?allch=citmgr&submit=Go&gca=crl;crl14-612v1.

To review the articles, the subgroup divided into three teams: User and Usability Studies, ROI, and Content Reuse. Each team read articles related to its theme, using the titles and abstracts to determine relevance. During the reading process, team members assigned tags to each article using a Tagging Articles spreadsheet.19 Additionally, participants wrote a brief synopsis of each work and identified any limitations of the research described in the article. Both were included in the Tagging Articles spreadsheet. Team members also identified 73 articles from the original bibliography that were, upon closer reading, deemed irrelevant to the scope of this project, and thus removed those articles from the results list. Articles were predominantly excluded because they failed to address user studies, ROI, reuse, or, more broadly, the parameters of digital libraries as defined by this study. Upon completion, the subgroup developed a data set that contained tags, limitations, and synopses for 74 articles published between 2010 and 2014,20 related to assessing digital library user and usability studies, ROI, and content reuse.

Generating Reports

The Literature Review subgroup produced a series of reports based on the results of the literature review:

● groupings.xls grouped the citations under the tags that were applied to them. If a citation had multiple tags, it appeared under each tag heading.
● synopsis.xls grouped the synopses under the tags that were applied to their citations, similarly to the above.
● myTags.xls simply listed the citations, with all the tags that were applied to each one in a semicolon-separated list following each citation.
● tagCount.xls contained a list of tags sorted by how many times they were applied, helping to suggest gaps in the literature.
● gaps.xls simply contained a list of the information in the gaps field, with a count of how many times each entry appeared.
● cotags.xls listed each tag and, next to it, the names of all other tags that were used on the same item where the first tag was used -- and the number of times that happened. This shows which tags were most used with which other tags.
● tagCountGrouped.xls listed the grouping in the first column, the tags used in the second column, and the number of times used within that grouping in the third column.

18 https://docs.google.com/spreadsheets/d/14DMEKjBWBQelRPtf8DrlDRA7aXEtUGsME3ujBJPtXlg/

19 Full URLs for the Google spreadsheets are available in Appendix D.

20 The group’s analysis included one research article from 2015: Harriet E. Green and Angela Courtney, “Beyond the Scanned Image: A Needs Assessment of Scholarly Users of Digital Collections,” College and Research Libraries (2015). Retrieved from http://crl.acrl.org/gca?allch=citmgr&submit=Go&gca=crl;crl14-612v1.

The group generated reports by downloading the Tagging Articles spreadsheet and running a Perl script to analyze the tags. The Perl script associated each column or entry with the appropriate tag name, grouped and ordered the results, and printed tab-delimited files. The group uploaded the results as spreadsheets in Google Drive.
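To make the transformation concrete, the following is a minimal illustrative sketch (in Python, not the working group's actual Perl script) of how tag counts and co-tag counts of the kind reported in tagCount.xls and cotags.xls could be derived from a tab-delimited export. The column name "Tags" and the semicolon-separated tag format are assumptions made for the example.

# Illustrative sketch (not the working group's Perl script): count tag
# frequencies and tag co-occurrences from a tab-delimited export of the
# Tagging Articles spreadsheet. The "Tags" column name and the
# semicolon-separated tag format are assumptions for this example.
import csv
from collections import Counter
from itertools import combinations

tag_counts = Counter()          # basis for a tagCount-style report
cotag_counts = Counter()        # basis for a cotags-style report

with open("tagging_articles.tsv", newline="", encoding="utf-8") as infile:
    for row in csv.DictReader(infile, delimiter="\t"):
        tags = [t.strip() for t in row["Tags"].split(";") if t.strip()]
        tag_counts.update(tags)
        # every unordered pair of tags applied to the same article
        for a, b in combinations(sorted(set(tags)), 2):
            cotag_counts[(a, b)] += 1

with open("tagCount.tsv", "w", newline="", encoding="utf-8") as out:
    writer = csv.writer(out, delimiter="\t")
    writer.writerow(["tag", "times applied"])
    for tag, n in tag_counts.most_common():
        writer.writerow([tag, n])

with open("cotags.tsv", "w", newline="", encoding="utf-8") as out:
    writer = csv.writer(out, delimiter="\t")
    writer.writerow(["tag", "co-tag", "times together"])
    for (a, b), n in cotag_counts.most_common():
        writer.writerow([a, b, n])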

Data Analysis

To analyze reports generated from the data set, the Literature Review subgroup again divided into three teams: User and Usability Studies, ROI, and Reuse.

The User and Usability Studies team analyzed the synopsis of each article, as well as the identified gaps, to determine overall themes. To do this, they relied primarily on the synopsis.xls and gaps.xls reports. The tagging reports (i.e., tagCount.xls, myTags.xls, cotags.xls, and tagCountGrouped.xls) showed expected groupings, with “user behavior,” “user perception,” and “measuring usability” being highly ranked in this section. Some unexpected lower rankings, particularly for “identifying users” and “content access,” helped identify clear gaps in the literature and future research areas.

The ROI team looked at all reports and reread abstracts to formulate responses to the three questions. Since the amount of literature available for examination was relatively small compared to the other areas, the conclusion about gaps in the literature was obvious: more research is needed. Due to the dearth of literature, establishing strengths in the current body of research was challenging.

The Reuse team used several reports to identify gaps, research strengths, and next steps. They investigated the cotags.xls and tagCountGrouped.xls reports to identify themes in the literature that frequently overlap with articles tagged “reuse.” The team determined gaps in the literature by using several reports and metadata fields. They identified some gaps based on missing and minimally used co-tags, as well as qualitative analysis of gaps identified in the individual articles. The group found others by analyzing text in the “gaps identified” and “synopsis” fields of the Tagging Articles spreadsheet. They also examined articles tagged as reuse that fell outside the Reuse team’s formal purview for additional gaps in the literature. Finally, they used these gaps to highlight potential next steps needed to enrich the reuse literature.


White Paper Subgroup

Members of the White Paper subgroup revised a draft written by the Literature Review subgroup into a white paper. The White Paper subgroup divided into two teams: Writing and Reviewing. The two teams also incorporated comments from the public before releasing the final draft.


Results

User and Usability Studies

Question 1: What research strengths exist in this area of research?

The user and usability studies articles reviewed primarily focus on the usability of digital library software and/or websites, particularly testing the ability of the user to interact with, search, and find specific items in the digital library. Although there are inconsistencies in methodology and data analysis (discussed in the next question), studies typically involve users performing a set of tasks, measuring users’ task performance and feedback, and developing design recommendations based on the results of user tests. The team found that user-centered design and assessment was the most widely used approach within the digital library community for engaging users and improving the user experience.

Studies put a significant focus on how users search for information in digital libraries using discovery layers or custom search interfaces. User tests of both exploratory and known-item searches were conducted in a number of studies. Those studies identify user behavior patterns to develop design recommendations, including the following:

● visualization of information should be as important as the organization of the information
● search interface design is critical to effective discovery of new information and locating specific information
● it is important for a digital library to show the scope of its collection and access options

Another prominent topic of interest is the relationship of digital libraries to archival finding aids. The Literature Review subgroup’s analysis of the articles identifies clear benefits when digital library content is either directly embedded into archival finding aids or linked to from the finding aid. Additionally, the more granular the descriptive content in the finding aid is, the more the digital content is used. The layout of the finding aid is also important for users’ visual search of important information and subsequent use of digital content. These findings clearly indicate a strong relationship between the archival finding aid's depth, structure, and links to digital content, and the usability of digital libraries.

A less prominent topic involves the metrics of digital libraries. Several articles cover information on individual institutions’ recommended criteria for evaluating digital library interfaces, functionality, and user satisfaction. A few articles briefly touch on usability testing for discovery layers that include the digital library/special collections content, but for the most part this is ignored by discovery layer tests.21 For a summary of strengths in user and usability studies, see Table 1.

Question 2: What gaps exist in this area of research?

There are a considerable number of usability studies on discovery layers at individual libraries. Those studies are helpful in validating system implementations and offer some insights into how users interact with discovery layers, as opposed to traditional library catalogs or individual database search interfaces. However, there are a variety of gaps in the research.

One significant gap concerns the methodologies employed to analyze user behavior and determine user profiles. Studies focusing on users' information-seeking behavior on library websites and discovery layers often rely on small convenience samples drawn from students (and less often from faculty and researchers), which may not represent the broader library user population. The majority of studies were in-lab interviews or tests conducted outside users' normal working environment, which may elicit unnatural user behavior. Contextual inquiry, a method popular in industry for eliciting user struggles and requirements through observation in natural settings, has not been widely adopted by those conducting digital library user studies. Relatedly, most studies ask participants to perform testing tasks designed by the researchers rather than letting participants conduct their own typical search tasks. Those designed tasks may fall outside participants' normal information-seeking context, and the resulting datasets may be incomplete, unnatural, or hard to generalize to other situations.

In the analysis of observations and user feedback, some studies rely on user preferences (e.g., location of certain interface elements, layout of advanced search options, or terminology). Without any behavioral evidence to support those preferences (e.g., data from A/B testing22), the design recommendations from those studies may simply reflect the researchers’ interpretations of what participants said (or did not say) during the test.
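As an illustration of the kind of behavioral evidence such studies could report, the sketch below compares hypothetical task-completion counts for two interface variants using a simple two-proportion z-test. The numbers are invented for the example and do not come from any study reviewed here.

# Minimal sketch: compare task-success rates between two interface variants
# with a two-proportion z-test. The counts below are hypothetical, purely to
# illustrate what behavioral evidence could look like alongside preference data.
from math import sqrt, erf

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

# Variant A: 42 of 60 participants completed the task; Variant B: 29 of 58.
p_a, p_b, z, p = two_proportion_ztest(42, 60, 29, 58)
print(f"A: {p_a:.0%}  B: {p_b:.0%}  z = {z:.2f}  p = {p:.3f}")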

User studies of library websites and discovery layers largely focus on the search interface rather than the discovery process for users. The informal site- and context-specific usability tests in those studies may limit the ability to uncover user goals, intents, motivations, and workflows that could inform better overall system and service designs.

Furthermore, most user studies largely ignore digital libraries. Instead, they tend to favor studying the usability of a library website or a discovery layer, which do not often include digital library platforms or searches for digital library content. The small number of existing studies on digital libraries focus on implementation issues and user outreach. There is a lack of studies concentrating on the usability of research repositories for end users, particularly content contributors. These usability issues may negatively affect potential users’ acceptance of research repositories as part of their research workflow.

21 Articles that focused exclusively on discovery layers, with no mention of digital libraries, were considered out of scope and removed from consideration.

22 For more information, see Paras Chopra, “The Ultimate Guide to A/B Testing,” Smashing Magazine (2010), http://www.smashingmagazine.com/2010/06/the-ultimate-guide-to-a-b-testing/.

For a summary of the gaps in user and usability studies, please see Table 1. For a more detailed summary of specific research gaps in user and usability studies, please see Appendix E. Those gaps emerged as distinctive patterns during our analysis of the relevant literature, but they are by no means a comprehensive summary of the limitations of previous studies. Possible approaches to address those gaps in future studies are also listed in Appendix E.

Question 3: What are the next steps?

Future research should address the assessment of institutional repository software and collections, particularly the usability of software for common user tasks (e.g., finding information and contributing information), the integration with discovery layers and scholarly search tools like Google Scholar, and users’ perception and awareness of the repository as a scholarly resource.

User studies of digital libraries need to be more focused on understanding user behavior and research needs through contextual inquiry. This type of user study will better inform the design of all aspects of digital libraries, including search, content access, integration with other resources and tools, and user content contribution. Furthermore, future research should develop data-based user profiles to help communicate user requirements to various stakeholders throughout the design and implementation process.

For usability studies, researchers should discuss and follow a set of best practices when developing study plans. Those practices should include:

● clear identification of test goals (functionality, interfaces, integration with external referrers, factors for access and reuse, etc.)
● early determination of the testing mechanism (remote vs. in-lab test)
● appropriate balance between users’ own task scenarios and test tasks designed by the researchers
● response measures and assessment tools guided by the test goals
● data analysis and interpretation methods that account for users’ demographics, cultural background, and all possible contingencies of the test

Results also suggest a need for standardization across usability studies. Several articles gave recommendations on criteria for their own testing, but higher-level collaborative standards should be developed to help normalize practices across the board to provide for increased comparability of results.

Additionally, very few of the articles mention long-term efforts to work with vendors or software developers to integrate findings into existing digital library software in ways that would extend beyond the scope of an individual institution. Creating a method for reporting evidence-based efficiencies or improvement ideas to vendors or open-source platform developers23 would prevent the inefficiency of multiple institutions running the same usability tests. The community could then focus its efforts on sharing research findings and developing effective new methodologies for assessing user interaction with digital materials.

For a summary of next steps in user and usability studies, see Table 1.

Return On Investment (ROI)

Question 1: What strengths exist in this area of research?

As first conceptualized by Chowdhury, the major benefit of conducting ROI studies is the ability to connect digital library work to economic, societal, and sustainability development models.24 Only 10 articles published from 2010 to 2014 were identified as pertaining to the assessment of ROI of digital libraries. Some of the articles describe specialized case studies, while others discuss ROI at a very abstract level. The only thematic tag assigned to more than a single article in the collection was “measuring cost,” which was assigned to 40% (4) of the 10 articles. Examples of other thematic tags related to ROI include “measuring impact or value,” “measuring effectiveness,” and “measuring learnability.” Relatively little published material exists on the topic of ROI of digital libraries, despite a sizeable interest in the ROI of academic libraries and higher education in the last 5 years.25

Central themes in the scant literature involve measurement of time and cost for processes associated with digital libraries (e.g., digitization, image capture equipment, or overhead) and the theoretical application of ROI -- traditionally used in private industry and government -- to the landscape of library project management. Several studies discuss one-off surveys or theoretical frameworks developed to assist in analysis of ROI for digital libraries, none of which appear to have gained traction in the field.

23 Some open source developers have wikis where anyone can suggest a feature or change and provide use cases or evidence supporting the idea. There is a need, especially on the “free and open-source software” (FOSS) side, to educate implementers about these resources as well.

24 Gobinda Chowdhury, “Sustainability of Digital Libraries: A Conceptual Model and a Research Framework,” International Journal on Digital Libraries 14(3/4) (2014): 181–195.

25 “LibValue: Value, Outcomes, and Return on Investment of Academic Libraries,” http://libvalue.cci.utk.edu/biblio.

For a summary of strengths in ROI, see Table 1.

Question 2: What gaps exist in this area of research?

Measuring the costs of digital libraries is the most prevalent type of research uncovered in the area of ROI. However, the concept of ROI requires both an investment (a cost) and a return (a benefit). While some efforts are being made to identify costs, the literature shows less evidence of methodologies to measure benefits of digital libraries or to weigh those against costs. Both can be seen as gaps in the existing literature: the field needs more data on the costs of various aspects of digital libraries, as well as more data on the benefits.
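To make the two sides of the equation explicit, the sketch below computes ROI from a hypothetical annual cost and a hypothetical monetized benefit. The figures are placeholders, and arriving at a defensible benefit figure is precisely the unresolved problem the literature identifies.

# Illustrative only: ROI needs both sides of the equation. The cost figure and
# the monetized "benefit" below are hypothetical; in practice the benefit side
# is exactly what the literature has not yet defined or measured.
annual_cost = 85_000.00      # e.g., staffing, storage, platform maintenance
annual_benefit = 110_000.00  # requires an agreed operational definition of value

roi = (annual_benefit - annual_cost) / annual_cost
print(f"ROI = (benefit - cost) / cost = {roi:.1%}")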

The team identified several gaps in the literature, including the fact that standards have yet to be developed for implementing cost/benefit analyses of Web-accessible databases and other online resources,26 and that additional research is needed to identify and develop different measurable indicators for the economic sustainability of digital libraries.27 For a summary of gaps in ROI, see Table 1.

Question 3: What are the next steps?

Future research should conduct more ROI studies to test various assumptions and build different models that will enhance the comfort level of the digital library practitioner community with these types of methods and evidence. Digital libraries also lack a body of data around both cost and benefit, though some work is currently being performed in this area. The Cost Assessment Working Group of the DLF AIG is currently working to standardize definitions and collection methods for data associated with various steps of the digitization process, and building a digitization cost calculator28 that combines openly available cost data to help practitioners estimate and better understand the costs of digitization projects. However, this group, and the larger profession, has yet to define or measure the benefit side of the ROI equation.
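For illustration only, the following sketch shows the general shape of such a per-project cost estimate. The process steps, minutes-per-item figures, and hourly rates are hypothetical placeholders rather than the calculator's community-contributed data or the working group's process definitions.

# Sketch of a per-item digitization cost estimate of the kind such a calculator
# produces. The process steps, minutes-per-item figures, and hourly rates are
# hypothetical placeholders, not the DLF calculator's definitions or data.
minutes_per_item = {
    "scanning": 4.0,
    "quality control": 2.0,
    "descriptive metadata": 6.0,
}
hourly_rate = {
    "scanning": 15.00,
    "quality control": 15.00,
    "descriptive metadata": 22.00,
}

def estimate_cost(item_count):
    total = 0.0
    for step, minutes in minutes_per_item.items():
        step_cost = item_count * (minutes / 60.0) * hourly_rate[step]
        print(f"{step:>22}: ${step_cost:,.2f}")
        total += step_cost
    return total

print(f"{'estimated total':>22}: ${estimate_cost(5000):,.2f}")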

26 Sam Byrd et al., “Cost/benefit Analysis for Digital Library Projects: The Virginia Historical Inventory Project (VHI),” The Bottom Line 14, no. 2 (2001): 65-75.

27 Chowdhury, “Sustainability of Digital Libraries.”

28 See the beta calculator at http://statelibrarync.org/plstats/digitization_calculator.php, the definitions of processes at https://docs.google.com/document/d/17jTJmCzKsa83BMdlgKj239Shqcbglq7I4EfxJIWWDQo/edit, and the call for data contributions at https://docs.google.com/document/d/1s1bHzkB3SSyufaoZS0JZhbVQ1AZfRgUsZ0iOcKZ3Q0k/edit.



While the most common interpretation of “benefit” in an ROI model is represented by sales, libraries and other cultural heritage institutions do not typically quantify value as such. Digital libraries must therefore work to articulate their own operational definitions of value29 before any significant progress can be made in this area. This will be a challenge, as a digital library’s value will not be the same across institutions, due to the varying missions and purposes of higher education and cultural heritage institutions.

For a summary of next steps in ROI, see Table 1.

Content Reuse

Question 1: What strengths exist in this area of research?

The group found that the professional literature associates reuse with both the aims of users and the behavior they exhibit to identify information. It is apparent from the literature that evaluating reuse is heavily interconnected with a variety of points in the lifecycle of a digital object,30 including:

● discoverability of materials
● usability of both the digital objects found and the digital repositories in which they reside
● users’ motivation for seeking cultural heritage resources
● what users end up doing with the digital content

The understanding and mapping of this lifecycle makes the measurement of use, value, and impact possible. As discussed in the User and Usability section, analyzing user behavior when it comes to discoverability is the most widely represented portion of the assessment literature.

While many areas of reuse need more robust research and focus, researchers are beginning to address key topics in this area. For example, there are a growing number of case studies regarding how specific disciplines, mainly those in the humanities31 and arts, are reusing digital collections materials. Due to this focus, there is a substantial understanding of user behavior and requirements for humanities-focused digital repositories. The reuse of digital images, in particular, receives a great deal of attention, and work on these topics shows promise for future development. In fact, there appears to be a sufficient amount of research to warrant a review of findings and a synthesized recommendation based on the results.

29 For an interesting discussion of the development of such operational definitions of value in the field of bibliographic control, see Erin Stalberg and Christopher Cronin, “Assessing the Cost and Value of Bibliographic Control,” Library Resources & Technical Services 55, no. 3 (2011): 124-137. https://journals.ala.org/lrts/article/view/5501/6757.

30 For more information on digital object lifecycles, see the Digital Curation Centre, “DCC Curation Lifecycle Model,” http://www.dcc.ac.uk/resources/curation-lifecycle-model.

As libraries and cultural heritage institutions produce online content, they are now branching out to evaluate whether that content is useful and to analyze the ways in which patrons might need to reuse digital content. The most successful studies in the literature are able to track digital content reuse, particularly of images, through Google image searches or by tracking URL hyperlinking. However, these means of tracking digital content can be thwarted by inconsistencies, such as changing reference URLs, or by images and digital objects hidden in larger, possibly un-indexed resource collections. The tracking methods explored in these studies point to new avenues for libraries to pursue and highlight the need for more consistency in the field.
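As a rough illustration of what hyperlink- or identifier-based tracking can involve, the sketch below checks a list of candidate web pages for references to known object URLs or embedded persistent identifiers. The URLs and identifiers are placeholders; in practice, candidate pages would come from referrer logs, search results, or similar sources.

# Minimal sketch of hyperlink/identifier-based reuse checking. It assumes a
# list of candidate web pages (e.g., gathered from referrer logs or search
# results) and a set of stable object URLs or embedded persistent identifiers
# to look for. The URLs below are placeholders, not real collections.
import requests

object_identifiers = {
    "https://digital.example.edu/items/12345",   # stable item URL (hypothetical)
    "ark:/99999/fk4abc123",                      # embedded identifier (hypothetical)
}
candidate_pages = [
    "https://example.org/blog/civil-war-photos",
    "https://example.net/syllabus/hist401.html",
]

for page in candidate_pages:
    try:
        html = requests.get(page, timeout=10).text
    except requests.RequestException as err:
        print(f"{page}: could not fetch ({err})")
        continue
    found = [ident for ident in object_identifiers if ident in html]
    if found:
        print(f"{page}: references {', '.join(found)}")
    else:
        print(f"{page}: no tracked identifiers found")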

For a summary of strengths in Content Reuse, see Table 1.

Question 2: What gaps exist in this area of research?

Isolating and understanding patterns of content reuse in digital repositories is a difficult task, and remains one of the most prominent gaps in assessment literature overall. The unexplored gaps in the study of reuse include:

● Who are the current and potential users of digital library content?
● What do patterns of reuse look like?
● What formats should digital repository objects take in order to be most effectively reused by user communities?
● What tools, technology, or standards need to be developed to help track usage?
● What role do copyright or other access restrictions play in reuse?
● To what extent do the usability of digital asset management systems and library sharing policies enable or limit users’ ability to reuse content?
● Does promotion of digital library material increase reuse, and what types of promotion are most effective?

This paper describes the methods employed for determining user groups and studying their patterns and wishes (see the User and Usability section in this white paper). Few of these studies are able to answer the question, “What do users want?” One of the most effective tactics employed is the simple interview: asking users how they utilize digital content. However, this appears to be the most labor-intensive method and is dependent on participant recruitment. This recruitment requires researchers to identify a prominent, sufficiently large user base for digital objects, and then to contact and interview users within that group. When it comes to different audience groups (e.g., general or scholarly users), it is unclear how their academic backgrounds, aims, and purposes determine how and what they reuse. What users want, what they want to do with digital objects, and what delivery format best suits their needs are questions that need to be investigated in order to develop methods for tracking digital content reuse.

31 The articles reviewed self-identified the disciplines included in the humanities; these disciplines varied from article to article.

Several of the studies surveyed track the reuse of digital content through image searches and the analysis of hyperlinking, which is a promising step forward. These techniques, while useful, exclude non-digital reuse of once-digital material (e.g., physical exhibits, publications), as well as digital objects kept in un-indexed systems. The lack of a consistent way to cite digital cultural heritage content also presents a barrier to tracking reuse. Some methods, such as the use of embedded identifiers, are noticeably absent from the current research. There are no comprehensive studies comparing disparate tracking methods to determine effectiveness. The lack of best practices and standardization for consistency and persistency in tracking methods appears to be pervasive across the field. Barring the ability to track individual digital object reuse, predicting how users reuse digitized objects based on web analytics may lead to a biased perception of user needs. There is room for further exploratory studies and field-wide development of standards for developing persistency in URLs and identifiers and for interpreting web analytics using evidence-based data.

How users interact with digital repository interfaces when accessing content also plays into the reuse of digital content. While there are a number of usability studies that look at digital library interfaces, they focus mostly on the discoverability of pre-selected content. Studies looking at how the simplicity or difficulty of a digital library interface influences the ability of patrons to reuse digital objects are a missing component of the usability testing research. For instance, does the interface make downloading or sharing digital items simple and straightforward? Do content management systems restrict users to thumbnails only? Do blanket restrictive permission settings and the fear of commercial reuse on the part of libraries and cultural heritage institutions prevent acceptable reuse?

Some case studies mention the topic of digital content promotion in passing, but do not delve into how libraries and cultural heritage institutions could influence the reuse of the content they put online through advertisement. Many of the studies assume a “digitize it and they will come” mentality, taking for granted that their content is known simply because it is available.

For a summary of gaps in Content Reuse, see Table 1.


Question 3: What are the next steps?

As the literature about reuse is relatively young, the most important next step for the digital library community is to strategically explore reuse patterns. From the literature review, the working group identified six areas where digital library and cultural heritage communities could expand the scholarship in this arena.

First, it is imperative that future studies better identify and understand current and potential user groups, especially in the sciences and social sciences. Mandates for federally funded grants to make their outputs open, discoverable, and reusable increase the importance of content reuse for the sciences and social sciences. Understanding not only what user groups are searching for, but also how they plan to use the digital content, will help inform best practices for content development, storage, delivery, and the permissions that govern digital objects. It will also directly inform any assessment of reuse and whether it is meeting the needs of identified stakeholders.

Second, the digital repository community should concentrate on developing studies that measure how the functionality of a digital repository interface affects reuse. More studies should analyze the various platforms that foster and record the reuse of content, including web analytics, open access academic publishing platforms, specialized and discipline-based web environments, and software tools that record reuse among users. More open conversations among the digital library community about needs and next steps will allow for the development of best practices.

Third, the working group recommends exploration into the efficacy of current methods employed to track reuse. Comparing methods such as image searching, hyperlinking, or embedded identifiers, or potentially developing and analyzing new means of tracking content, would be the first step. From there, new best practices and recommendations for tracking reuse of specific object types could be developed.

Fourth, the profession should better articulate the terms and policies under which people can reuse content. Libraries and cultural heritage institutions need to examine the permissions they are extending for reuse of their digital content, taking into account potential impact on users. This is particularly important if reuse becomes a standard form of reporting data for ROI and funding opportunities.

Fifth, there needs to be a standardized format or best practice established for evaluating and discussing metrics about reuse. This will allow institutions to ensure proper financial support for appropriate digital collections projects. To this end, the working group recommends the development of a reuse assessment framework and an accompanying toolkit or best practices to help unify future studies and discussions of reuse in the digital library field.


Sixth, there needs to be a synthesis of the existing studies regarding digital image reuse, particularly for users in humanities and fine arts disciplines. From this synthesis, best practices for delivering digital image files for specific disciplinary use can be distilled and shared by the digital library community.

For a summary of next steps in Content Reuse, see Table 1.

Conclusion

The DLF AIG User Studies Working Group reviewed and synthesized a comprehensive corpus of current research related to the assessment of user and usability studies, ROI, and content reuse; outlined critical gaps in the literature; and identified future steps needed to advance the practice of digital library assessment (see Table 1 for a brief summary). This work supports the digital library community’s efforts to curate more relevant content, utilize information on costs and benefits in decision-making, build and implement better systems, reinforce connections to traditional user groups, and encourage development of new connections with increasingly diverse user groups.

The group observed a number of gaps in the methodology and data analysis of user and usability studies in digital libraries, including the lack of contextual inquiry and examination of users' task context, an over-reliance on standard testing tasks (e.g., canned searches) and user feedback rather than behavioral evidence, and a lack of studies on user interactions with digital libraries and institutional repositories. Despite these gaps, most studies involve users performing a set of tasks, measuring task performance, collecting user feedback, and developing design recommendations for future improvements. These activities highlight the wide acceptance of user-centered design and assessment within the digital library community. While previous studies have improved the profession’s understanding of how users search for information in digital libraries, few research projects have examined the research needs that precede search. A better understanding of these user needs and motivations could help digital libraries not only improve the search experience, but also build digital library collections, tools, and platforms. It is also important to consider how to more actively engage users in the content contribution process, particularly for institutional and subject repositories. Future usability studies should establish and follow a set of best practices to ensure consistency in methodology and reliability of findings. On the same note, collaborations across institutions need to be promoted in order to normalize practice and share research findings.

Very little published research exists on the topic of ROI for digital libraries. This is surprising given the significant interest in ROI in the literature of higher education in general over the past decade, as well as in the library literature within the past five years. The most common theme in the existing literature on ROI for digital libraries is analysis of “cost” in the form of both time and money. However, there are two sides to the ROI coin: cost and benefit. While some efforts are being made to identify costs, the literature shows very little evidence of methodologies for measuring the benefits of digital libraries, or for weighing benefits against costs. Whereas the sale of goods and services represents the most common interpretation of “benefit” in an ROI model, libraries and other cultural heritage institutions do not typically quantify value in such terms. Digital libraries must therefore work to articulate their own operational definitions of value before any significant progress can be made in this area.
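To make the cost-and-benefit distinction concrete, the hypothetical sketch below works through the basic ROI arithmetic. Every figure, and the "avoided cost per use" benefit proxy, is invented for illustration and is not drawn from the literature reviewed here; the point is that the calculation is only as meaningful as the operational definition of benefit behind it.

```python
# Hypothetical illustration of basic ROI arithmetic for a digitization project.
# All figures are invented; the benefit proxy is one contestable choice among many.

# Cost side: the part existing studies tend to capture (staff time plus expenses).
staff_hours = 1200
hourly_rate = 28.00              # loaded hourly staff cost (hypothetical)
equipment_and_hosting = 9500.00
total_cost = staff_hours * hourly_rate + equipment_and_hosting

# Benefit side: requires an operational definition of value. One possible proxy:
# each online use avoids the cost of an equivalent reading-room retrieval or
# reproduction request.
annual_uses = 15000
avoided_cost_per_use = 4.50      # hypothetical estimate
total_benefit = annual_uses * avoided_cost_per_use

roi = (total_benefit - total_cost) / total_cost
print(f"Cost: ${total_cost:,.2f}  Benefit: ${total_benefit:,.2f}  ROI: {roi:.0%}")
```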


Reuse of digital objects is closely tied to both user studies and ROI research. It is influenced by user needs and expectations for digital materials, as well as by the discoverability and usability of digital repository interfaces. Yet studies specifically related to reuse are in their infancy. Future research should address a variety of areas to improve the profession’s understanding of content reuse and the role it plays in digital library assessment. New studies should identify user groups and incorporate needs and expectations from the science and social science disciplines, which have not received much study to date. While most usability studies have examined the discoverability of digital objects, future studies should conduct more research to determine how interface design impacts reuse potential. This could include a wider examination of the impacts of technical and intellectual restrictions placed on reuse as well. Further, the profession should develop tools to standardize analytics and track reuse more effectively. Finally, researchers should analyze how the current permissiveness of libraries and cultural heritage institutions influences reuse. The digital library field as a whole will benefit from standardization in all of these areas.

Over the course of the research process, the authors of this white paper recognized that the literature in all three areas of focus (user and usability studies, ROI, and content reuse) grapples with the idea of creating value for library materials and demonstrating impact from that value. As mentioned in the discussion of ROI, value is particularly difficult to measure without first defining it in operational terms, and this problem pertains to all areas of libraries, not just to digital libraries. In their formative 2011 article, Stalberg and Cronin argue that libraries’ “ability to make evidence-based decisions has been hindered by a lack of both operational definitions of value and methods for assessing cost and value within larger institutional constructs...The development of cost/benefit analyses for libraries may be difficult, but faced with limited resources and an array of directions in which to move forward, libraries find that articulating the varied cost/value propositions in measured and concrete ways is increasingly necessary.”32 The authors’ research has shown this to be true, and while work has been performed in recent years to articulate the value of the academic library at large,33 such terrain is, for the most part, uncharted territory in digital libraries. This working group recommends that more effort be made to develop operational definitions of value for various aspects of digital libraries as a first step toward articulating digital library value.

32 Erin Stalberg and Christopher Cronin, “Assessing the Cost and Value of Bibliographic Control,” Library Resources & Technical Services 55, no. 3 (2011): 124-137, https://journals.ala.org/lrts/article/view/5501/6757.

33 For more information, see the 2010 ACRL publication “The Value of Academic Libraries,” http://www.ala.org/acrl/sites/ala.org.acrl/files/content/issues/value/val_report.pdf.

Table 1: Summary of strengths, research gaps, and next steps

Subgroup: User and Usability Studies
Strengths:
● Acceptance of user-centered design and assessment methodology
● Design strategies through user search behavior
Research Gaps:
● Behavioral observations and examination of users’ task context
● Over-reliance on standard testing tasks and subjective user feedback
● Lack of studies on user interactions with digital library content
Next Steps:
● Identify users’ research needs related to digital libraries
● Understand users’ role in system development
● Cross-institutional collaboration to normalize usability methods

Subgroup: Return on Investment
Strengths:
● Measurement of time and cost for processing
● Theoretical application of ROI
Research Gaps:
● Benefits side of cost/benefit analysis
● Limited corpus of cost data
● No standard methodology for implementation
● Lack of measurable indicators for economic sustainability
Next Steps:
● Conduct more studies
● Compile more data
● Develop more tools for library project management

Subgroup: Content Reuse
Strengths:
● Reuse among humanities-focused digital repositories
Research Gaps:
● Patterns of reuse
● Web log analysis
● Non-digital reuse
Next Steps:
● Understand other user groups (science and social science)
● Strengthen the connection between digital repository interface and reuse
● Develop an assessment framework
● Expand evidence-based practice in tracking objects through web log analysis


The working group’s purpose in authoring this white paper is to jump-start a conversation on the development of community norms and guidelines for user studies in digital libraries. There is still more work to be done, as the working group turns its attention in the coming year to possibilities such as the development of best practices or the creation of toolkits for practitioners. Assessment of digital libraries has become mission-critical as libraries are increasingly asked to articulate their value to administrators and funders making difficult fiscal decisions, which in turn requires an in-depth understanding of users and their needs. “Accountability will continue to influence the landscape of higher education,”34 and digital libraries must be prepared to respond with evidence that illuminates the critical contributions of digital libraries to research and general use across disciplines and user communities.

34 Megan Oakleaf, “The Value of Academic Libraries: A Comprehensive Research Review and Report,” Association of College and Research Libraries (2010), http://www.ala.org/acrl/sites/ala.org.acrl/files/content/issues/value/val_report.pdf.


Bibliography

“2013 DLF Forum: Austin, Texas.” Digital Library Federation. http://www.diglib.org/forums/2013forum/.

“2015 DLF Forum: Vancouver.” Digital Library Federation. http://www.diglib.org/forums/2015forum/.

Byrd, Sam, Glenn Courson, Elizabeth Roderick, and Jean Marie Taylor. “Cost/benefit Analysis for Digital Library Projects: The Virginia Historical Inventory Project (VHI).” The Bottom Line 14, no. 2 (2001): 65-75.

Chapman, Joyce. “Introducing the New DLF Assessment Interest Group.” Digital Library Federation Blog, May 12, 2014. http://www.diglib.org/archives/5901/.

Chapman, Joyce. “Library Digitization Cost Calculator,” 2014. http://statelibrarync.org/plstats/digitization_calculator.php.

Chopra, Paras. “The Ultimate Guide to A/B Testing.” Smashing Magazine (2010). http://www.smashingmagazine.com/2010/06/the-ultimate-guide-to-a-b-testing/.

Chowdhury, Gobinda. “Sustainability of Digital Libraries: A Conceptual Model and a Research Framework.” International Journal on Digital Libraries 14, no. 3/4 (2014): 181-195.

DeRidder, Jody. “Did We Get the Cart Before the Horse? (Faculty Researcher Feedback)” (presentation, Digital Library Federation Forum, Atlanta, GA, October 29, 2014). http://www.diglib.org/wp-content/uploads/2014/11/DeRidder.pdf.

DeRidder, Jody, Joyce Chapman, Nettie Lagace, and Ho Jung Yoo. “Moving Forward with Digital Library Assessment” (presentation, Digital Library Federation Forum, Atlanta, GA, October 29, 2014). http://www.diglib.org/forums/2014forum/program/32z/.

DeRidder, Jody, Sherri Berger, Joyce Chapman, Cristela Garcia-Spitz, and Lauren Menges. “Hunting for Best Practices in Library Assessment” (presentation, Digital Library Federation Forum, Austin, TX, November 4, 2013). http://www.diglib.org/forums/2013forum/schedule/30-2/.

Digital Curation Center. “DCC Curation Lifecycle Model” (2004). http://www.dcc.ac.uk/resources/curation-lifecycle-model.

“Hunting for Best Practices in Library Assessment” (collaborative Google document, Digital Library Federation Forum, Austin, TX, November 4, 2013). http://bit.ly/dlf-assessment.


“Library Services for People with Disabilities Policy.” Association of Specialized and Cooperative Library Agencies (ASCLA) website. http://www.ala.org/ascla/asclaissues/libraryservices.

“LibValue: Value, Outcomes, and Return on Investment of Academic Libraries.” http://libvalue.cci.utk.edu/biblio.

Matusiak, Krystyna K. “Perceptions of Usability and Usefulness of Digital Libraries.” Journal of Humanities & Arts Computing: A Journal of Digital Humanities 6, no. 1/2 (2012): 133-147. doi:10.3366/ijhac.2012.0044.

National Information Standards Organization. “NISO Alternative Metrics (Altmetrics) Initiative.” http://www.niso.org/topics/tl/altmetrics_initiative/.

Oakleaf, Megan. “The Value of Academic Libraries: A Comprehensive Research Review and Report.” Association of College and Research Libraries (2010). http://www.ala.org/acrl/sites/ala.org.acrl/files/content/issues/value/val_report.pdf.

Scherer, David, Stacy Konkiel, and Michelle Dalmau. “Determining Assessment Strategies for Digital Libraries and Institutional Repositories Using Usage Statistics and Altmetrics” (presentation, Digital Library Federation Forum, Austin, TX, November 5, 2013). http://www.diglib.org/forums/2013forum/schedule/21-2/.

Stalberg, Erin, and Christopher Cronin. “Assessing the Cost and Value of Bibliographic Control.” Library Resources & Technical Services 55, no. 3 (2011): 124-137. https://journals.ala.org/lrts/article/view/5501/6757.

Yoo, Ho Jung, and SuHui Ho. “Do-It-Yourself Usability Testing: A Case Study from the UC San Diego Digital Collections” (presentation, Digital Library Federation Forum, Atlanta, GA, October 29, 2014). http://www.diglib.org/wp-content/uploads/2014/11/Yoo_DLF2014_UsabilityTesting.pdf.


Appendices

Appendix A: Research Bibliography
https://docs.google.com/document/d/1AVlT3QlmhbcUmoRtZ0hpvEhpbD9_Eld5bHsa-JNHX24/

Appendix B: Tagging Articles Spreadsheet
https://drive.google.com/open?id=1P-OXY9KvEg1lZHNW8lUzhVezWG1C6hkmlmvtpYLkIgc

Appendix C: Tagging Dictionary
https://docs.google.com/spreadsheets/d/14DMEKjBWBQelRPtf8DrlDRA7aXEtUGsME3ujBJPtXlg/

Appendix D: Tagging Results
● Cotags: https://docs.google.com/spreadsheets/d/1e7W5K0UoigpiAuFCj2DOYP05QfPlhuzy6vGDApLOkVE/

● Gaps: https://docs.google.com/spreadsheets/d/111Idyi6DI4vYWAb9_CAl2QDzrbFSXxrUlIdLMXSZy9E/

● Groupings: https://docs.google.com/spreadsheets/d/1nMbSrHwzDziP7rEhd0XQktxlF58vWhf0lq1PtsWSRXw/

● MyTags: https://docs.google.com/spreadsheets/d/1US_EGAwWSXRH13ba9FoWatZvrPnHke2fZa6T6rc_um0/

● Synopsis: https://docs.google.com/spreadsheets/d/1wdDbcNbfw2V7TNqdq9TEksBsWkcfwWy-Qa5e1xd3xzw/


● TagCount: https://docs.google.com/spreadsheets/d/1JTOUSeoJ9vULoa9Lv1OJ61e9HTTZWs0wwMlxuzmGenk/

● TagCountGrouped: https://docs.google.com/spreadsheets/d/12t6gLCt1E1qbcSLJtsZyWDMuyLagBxBjzcgTl1RVj70/

Appendix E: User and Usability Studies Summary Grid

Table 2. Summary of research gaps and recommendations in user and usability studies of digital libraries.

Area: Depth and breadth of author’s scope

Gap: Lackluster literature on usability and digital libraries
Recommendations: Review studies directly relevant to research goals; discuss gaps not addressed in previous studies and address those gaps in research questions.

Gap: Incomplete conclusions without data on specific user behavior
Recommendations: Collect user behavior data from server logs, Google Analytics, and behavioral observations before identifying user requirements, usability issues, and design recommendations.

Gap: Exclusion of relevant digital library information that would have benefitted the study, including institutional repositories and digital objects
Recommendations: Conduct studies on user interaction with institutional repositories and digital collections. Evaluate digital content using the lens of critical and other theoretical frameworks.

Area: Questionable methodology in user studies

Gap: Reliance on other user studies with different parameters of purpose and intent than the research at hand
Recommendations: Researchers should work closely with stakeholders to develop research goals and questions from the specific study context.

Gap: Not recognizing the patterns and perceptions among different user groups
Recommendations: Collect demographic variables and measures of relevant experiences of user groups, and relate those measures to data collected in user studies. Create personas (typical user profiles) based on distinct user requirements and behavior to better communicate research findings.

Gap: Incomplete analysis of how the structure of the study influences the results
Recommendations: Identify the specific task context for user studies and discuss how that context could affect users’ behavior and responses. Follow user study best practices, including avoiding leading questions, offering help only when necessary, and observing more and guiding less.

Area: Questionable methodology in usability testing

Gap: Use of one-dimensional testing in the field where other methods could have been useful
Recommendations: For example, (1) study user perceptions of digital libraries while integrating actual usage patterns; (2) integrate log analysis results and web analytics data with usability test findings.

Gap: Lack of accessibility coverage35
Recommendations: Designers and developers should follow established web accessibility guidelines and standards. Work with disability resources offices at institutions to identify and follow their guidelines and best practices. Include users with special needs in user studies and usability tests.

Gap: Inappropriate data analysis procedures
Recommendations: Identify research questions, develop hypotheses, and design the study in ways that can directly validate those hypotheses. Data analysis should follow established guidelines for behavioral data. Avoid establishing causal relationships based on simple correlations without additional supporting data.

Area: Communication with stakeholders

Gap: Lack of focus on identifying appropriate stakeholders prior to studying user behavior
Recommendations: Researchers should work closely with stakeholders to develop a study plan before conducting any research. Communicate research findings with stakeholders and use their feedback to guide subsequent studies. Avoid using technical terms when communicating findings to audiences who may not be familiar with user experience research.

35 In this case, accessibility refers to the opportunity for everyone to access information regardless of disability. Because this is an investigation of usability, it is vital that all users have full ability to find and retrieve information. A fuller definition can be found through the Association of Specialized and Cooperative Library Agencies: http://www.ala.org/ascla/asclaissues/libraryservices.