
DEVELOPING SUSTAINABLE, SCALABLE DIGITAL LIBRARY COLLECTIONS

STRATEGIES FOR DIGITIZATION

by Abby Smith

Questions for review

Introduction
  1. Defining Sustainability
  2. Methodology

I. Identification, evaluation, and selection

1. Policies, Guidelines, and Best Practices
2. Rationale for Digitization
   A. Preservation
      a. Surrogates
      b. Replacements
   B. Access
      a. Outreach
      b. Collection-driven
         1) Special Collections
         2) Text-based Collections
      c. User-driven

II. Cataloging

III. Data management and access policies

1. Preservation
2. Copyright
3. Access Onsite and Off

IV. User support

Summary and Conclusions

1. Costs
2. Benefits


QUESTIONS FOR REVIEW

Questions for readers of Abby Smith's paper on the development of digitized collections:

1) Since the approach aims to get at core issues of sustainability and scalability, are the definitions of these two concepts given in the introduction acceptable? Do you agree with them?

2) Is the argument for typologies of selection persuasive?

3) Is the decision to avoid reporting on specific activities in favor of identifying good/best practices and discussing troublesome issues a good one? Will it be helpful to our intended audiences?

4) The three authors wrote to a common outline, but have substantially different content to include. Do the sections in this paper on cataloging, access management, etc. have merit? How can they be improved?

5) Are there important trends and developments missing?

6) What type of bibliography would be most helpful? What appendices would be helpful?

INTRODUCTION

Research libraries have been digitizing collections for a decade or more. The collective experiences of these libraries, from their successes to their near misses and even occasional failures, have produced a depth of technical expertise and a set of reliable practices that are widely shared among the most active digital library staff and well-reported in a number of meetings and publications. A decade into this ongoing experiment with representing research collections online, it is reasonable to expect that not only technical practices but also selection policies have begun to be codified, that decisions to convert have been rationalized within institutions, and that a look at existing practices and policies will reveal certain trends and best practices emerging. This paper will review existing practices among the most active libraries engaged in conversion programs, identify policies and best practices where they exist, and discuss the long-term implications of the opportunities and constraints that shape digital conversion programs. It will not be a systematic review of what the major institutions are doing, but rather an analysis of what they have accomplished in order to identify good practices and benchmarks for success.

For the purposes of analysis, a representative subset of the most active libraries has been chosen for close investigation. The libraries are all large and share a mission to create deep and rich collections of primary and secondary resources that serve scholarship and teaching. Most of them are members of the Digital Library Federation (DLF), a group of 25 institutions that have joined forces in an effort to develop common infrastructures and share experience and expertise in developing digital libraries. Among the institutions studied there are great differences in governance and funding, with some libraries within public universities, some within private, and some that are independent of an academic institution. These differences are fully reflected in the various approaches they take to selecting what to digitize, how to do so, and for which audiences.

All digitization activities start in the project mode. However, while the great majority of research libraries have undertaken digitization projects of one type or another, a few have reached the point where they are moving forward to developing full-scale digitization programs, rather than just a series of projects. How have they conceptualized the place of digitized collections in the provision of collections and services to their core constituencies? What have they done, or determined must be done, in order to move from project-based conversion to full-blown programs? How are they developing programs that are sustainable over the long term and scaled from project to program?

Defining sustainability

This report works from the assumption that to be sustainable, digitization programs should have certain intrinsic features:
* They should be fully integrated into the fabric of library services.
* They should be focused primarily on achieving mission-related objectives.
* Funding for digitization, as for other core library activities, must be based chiefly on internal, not external, sources.
* The long-term maintenance of digital assets must be planned for in a way similar to the planning for the preservation of and access to other library collection items.

Methodology

A sustainable digitization program, then, would be fully integrated into traditional collection development strategies. Therefore, the assessment of what libraries have achieved so far is based on a close examination of two key factors common to sustainable collection development, be it of analog, digitized, or born-digital materials:
* a strategic view of the role of collections in the service of research and teaching, and
* life-cycle planning for the collections, beginning with acquisition and including cataloging, preservation, and the provision of reference services.

Strategic vision can be determined by looking at the decision-making process - who decides what to convert to serve what ends. Are the decisions made primarily by subject specialists based on existing collection strengths, or is the selection process shaped by the shorter-term needs of the faculty? If the latter, then by what process are they involved and how are research tools developed to meet their needs?

Sustainability is indicated by how the planning for the life cycle is done. How does the library budget for not only the creation of the digital scans, but also for the metadata, storage capacity, preservation tools (e.g., refreshing, migration), and user support, the sorts of things that are routinely budgeted for book acquisitions? How much of the program is supported by grant funding and how much by base funding? If presently grant-supported, what plans exist to make the program self-sustaining?

The study was not confined to DLF members, but included other “first-generation” digital libraries as well. Research was conducted by studying the Web sites of all DLF members, as well as many other libraries and research institutions engaged in putting collections online. More important were the site visits made to selected libraries - the University of Michigan, Cornell University, the University of Virginia, New York Public Library, New York University, and the New-York Historical Society. Each library was given a set of questions that comprised the framework for investigation (see Appendix I), and each institution organized its responses individually. (The Library of Congress, also included in this study, answered the questions in writing and no site visit was made.) The questions begin with the selection process and proceed through the creation of metadata, decisions about access policies, and consideration of user support systems that might need to be developed.

As a result of this focus on strategic vision and life-cycle management, the report proposes a typology for digitized collection development, and the discussion is ordered around this typology rather than specific institutional practices. Readers will find that many institutions undertaking significant work do not make formal appearances in the report. Examples of specific institutions are introduced only when context is needed to ground the argument. It is hoped that the typology offered is a mirror true enough that libraries looking to see where they are in the wide spectrum of digitization activities will find their own experiences reflected in the report.

In the case of selecting for digitization, in contrast to purchasing or licensing born-digital materials, the rationale for expending resources to essentially re-select items already in the library’s possession is necessarily more complex. In theory, a library would only choose to digitize existing collection items if it could identify the value that is added by digitization and determine that the benefits outweigh the costs. But in practice, over the past decade the research library community has boldly gone forward with digitization projects without knowing how to measure either cost or benefit. Because the technology is constantly changing, and with it the costs, budgeting models that compare even similar libraries can be at times meaningless or downright misleading. The only way for many libraries to get at the issue of cost is to undertake projects for their own sake in the expectation that documentation of expenditures will yield some meaningful data. Libraries that have been able to secure funding for projects, carefully document their activities and expenditures, and share that information with their colleagues have emerged as the undoubted leaders of the community. Their experiences are necessarily more relevant for this report than those of others who have embarked on fewer projects or have failed to document and share their knowledge.

Besides cost, the other unknown factor in this first decade has been the benefit - the potential of this technology to enhance teaching, research, lifelong learning, or any number of possible goals that digitization is intended to achieve. How could we know in advance how this technology would be adapted by users other than ourselves? How could we conceptualize use of these digitally reborn collections except by extrapolating what we know from the analog realm? Regrettably, most academic institutions, despite their clearly stated goals of improving or at least enhancing research and teaching, have done less than they might have to gather meaningful data about the uses of digitized collections. While this report will address issues of costs and benefits, it should be remembered that as a community, we still have insufficient data on which to draw firm conclusions and derive recommended practices.

The report aims to synthesize experiences in order to identify trends, accomplishments, and problems common to all libraries and indeed many cultural institutions when they represent their collections online. To that end, issues that seem especially complex and problematical are marked for separate consideration. Clearly none of the issues that merit this special attention are to be seen as isolated from the contexts from which they arose. But extracting them from the context of daily operations and setting them apart rhetorically should help library managers see how systemic these issues - funding, selecting by opportunism, copyright - are among libraries large and small.

I. Identification, evaluation, and selection

POLICIES, GUIDELINES, AND BEST PRACTICES

There has been a great deal of thoughtful literature written on both the subject of selection for digitization and the management of conversion projects.[citations here] Much of this literature is published on the Web and has become de facto “best practice” to the extent that many institutions applying for digitization grant funds use these documents to plan their projects and develop selection criteria. In addition to these guidelines, there are a number of reports about selection for digitization that range from project management handbooks and technical guides to imaging, to broad, non-technical articles aimed at those outside the library community who fund such programs. [our imaging guides, NEDCC, Smith, Gertz ]

In contrast to these numerous and widely available documents, very few libraries have formal written policies for conversion criteria. Those that have written documents tend to refer to them as guidelines, and they tend to be focused on technical aspects of selection and even more on project planning. When asked why an institution does not have a policy, the response is either that it is too early to formulate policies, that they have not gotten around to formulating them, or that the institution does not have written collection development policies and so is unlikely to write them for digitized collection development.

The focus of these documents is almost invariably not on the rationale for digitization, but on the planning of digital projects or various elements of a larger program. The University of Michigan, for example, does have a written policy and it clearly aims to fit digitization into the context of traditional collection development. [citation] It states that “Core questions underlying digitization should be familiar to any research library collection specialist.”

“Is the content original and of substantial intellectual quality? Is it useful in the short term for instruction and in the long term for research? Does it match campus programmatic priorities and library collecting interests? Is the cost in line with anticipated value? Does the format match the research styles of anticipated users? Does it advance the development of a meaningful organic collection?”

These are fundamental collection development criteria that assert the importance of the research value of source materials over technical considerations, but they are quite general. The rest of the policy focuses not on how to select which items from among the millions in the collection have priority for conversion, but rather how anticipated use of the digital surrogates should affect decisions about technical aspects of the conversion, mark-up, and presentation online.

The selection criteria developed for Harvard offer far more detailed considerations.[citation] In common with the Michigan criteria, though, they focus largely on questions that come after the larger “why bother to put this in digital form rather than that” issues that have already been answered. Creation of digital surrogates for preservation purposes is cited as one legitimate reason for selection; so are a number of considerations aimed not at preservation, but solely at increasing access. (Sometimes, of course, digitization does both at once, as in the case of rare books or manuscripts.)

The Harvard guidelines have proven to be useful to many beyond Harvard in planning a conversion project because they present a matrix of decisions that face selectors. [citation to Indiana article] They begin with the issue of copyright - whether or not the library has the right to reformat items and distribute them either in limited or unlimited forms. They then ask a series of questions derived from essentially two points of departure:

Source material: Does it have sufficient intellectual value to warrant the costs? Can it withstand the scanning process? Would digitization be likely to increase use? Would the potential to link to other digitized sources create a deeper intellectual resource? Would the materials be easier to use?

Audience: Who is the potential audience? How are they likely to use the surrogates? What metadata should be created to enhance use?

The answers to these questions should guide nearly all of the technical questions related to scanning technique, navigational tools and networking potential, preservation strategy, and user support.

The primary non-technical criterion - research value - is at heart a subjective one and relies on many contingencies for interpretation. What does it mean for something to have intrinsic research value? How many items are collected by research libraries that do not? Should we give priority to items that have research value today, or those that will (probably) have it tomorrow? What relationship does current demand have to intrinsic value? Because these are essentially and unavoidably subjective judgments, the only things excluded under these selection criteria are things that do not fit under the camera, like architectural drawings, or things that are very boring or out of intellectual fashion. Interestingly, foreign language materials are nearly always excluded from consideration, even if they are of high research value, because they are deemed not cost-effective to convert.

This high-level criterion of research value is also an intrinsic part of non-digital collection development policies. The difference in how the two play out in most libraries is that the acquisition of monographs, to take an example, fits into a long-standing activity that has been well defined by prior practice. And it governs the acquisition of new materials, those that are not already held by the library. (The issue of how many copies is secondary to the decision to acquire the title.) Selection for digitization is re-selection, and so the criteria for digitization, or repurposing, will be different in the end. The meaning of research value will also differ, as the methods of research, the types of materials that are mined, and how they are mined are fundamentally different from those in the analog realm. As discussed in section I.2.B, the most successful digitization programs today are grounded in the belief that it is the nature of research itself that is “repurposed” by this technology, and the source material that yields the greatest return when digitized often surprises.

As one librarian said, the guidelines that exist and are used routinely, whether they are official or not, are all “project-oriented,” and it would be a mistake to confuse what libraries are doing now with what libraries should and would be doing if “we understood what higher purpose digitization serves.” Guidelines for technical matters such as image capture and legal rights management are, on the contrary, extremely useful and should be codified, in this manager’s view. Formal collection development policies are still a way off.

RATIONALE FOR DIGITIZATION

Libraries identify two reasons for digitization: to preserve analog collections and to extend the reach of those collections. Most projects and programs, while perhaps giving priority to one over the other, end up serving a mix of both higher purposes. As libraries learned during the period when they tackled the brittle book problem through deacidification and reformatting, it is difficult and often pointless to pick apart preservation and access. Indeed, when a library is seeking outside funding for digital conversion, applicants tend to cite as many benefits from conversion as possible, and so preservation and access are usually mentioned in the same breath. Nonetheless, because it has been generally conceded that digital conversion is not as reliable for preservation purposes as microfilm reformatting, it is worthwhile to stop and see what institutions are doing and saying that they are doing in terms of preservation per se.

Preservation

Preventive preservation

The use of scans made of rare, fragile, and unique materials, from print and photographs to recorded sound and moving image, is universally acclaimed as an effective tool of preventive preservation. For materials that cannot withstand frequent handling or pose security problems, digitization has proved to be a boon.

Reformatting for replacement

For paper-based items, there is a consensus among librarians that digital scans serve as the preferred type of preservation surrogate. They are widely embraced by scholars and preferred to microfilm. However, with a few exceptions, librarians will also assert that scanning in lieu of filming does not serve preservation purposes, because the certainty that we can migrate those scans into the future is simply not as great as our certainty that we can manage preservation microfilm over decades. There is the general hope, if not certain expectation, that the problem of digital longevity will soon be resolved. In anticipation of that day, most libraries are creating what they refer to as preservation-quality digital masters, a scan rich enough to use for several different purposes and created to obviate the need to scan the original again.

Only one institution - the University of Michigan - has a policy to scan brittle books and use the scans as replacements, not surrogates. They have created a policy for the selection and treatment of these books, and they explicitly talk of digital replacements as a crucial strategy for collection management. [See attached policy at Appendix II: http://www.umdl.umich.edu/policies/digitpolicyfinal.html.] This policy is premised on their view that books printed on acid paper have a limited life span and that, for those with insignificant artifactual value, they are not only rescuing the imperilled information, but also making it vastly more accessible by scanning in lieu of filming. (The preservation staff continue to microfilm items identified by selectors for filming, as well as deacidifying volumes that are at risk but not yet embrittled.) The focus of Michigan’s program is the printed record, not special collections, and digitization is a key collection management tool for those holdings. At Cornell, which also has incorporated digitization into collection management (that is, it is not for access alone), but not as systematically as Michigan, there is a preference for digital replacements of brittle materials with backup to COM (computer output microfilm). The Library of Congress has also begun implementing preservation strategies based on digitization in the House and Garden project [citation]. Most libraries, though, elide the issue of replacing brittle materials with digital images by scanning chiefly special collections items that will be retained in the original.

For audiovisual materials, digital replacements appear to be inevitable. Because the recording media used for sound and moving image demand regular, frequent, and ultimately destructive reformatting, migrating onto digital media for preservation as well as access is acknowledged to be the only course to pursue for long-term maintenance. The Library of Congress, one institution deeply engaged in audiovisual preservation, is looking to digitization for long-term access to analog as well as digital materials. This does not mean that the institution will dispose of the original analog source materials, only that the preservation strategy for these items will not be based on that analog source material.

Issue: Disposition of source materials

The disposition of scanned materials is a challenging subject that most libraries are just beginning to grapple with in a serious way. When the time comes that digitization is considered an acceptable if not superior alternative to microfilm for preservation reformatting, and those items that can be networked are, what criteria will libraries use to decide what to keep and what to discard? For materials that are rare or unique that question should not arise. But what about back journals that will be available through a database like JSTOR, or American imprints that the University of Michigan has scanned and made available without restrictions on its Making of America site? The library community never reached consensus about this issue for microfilming. But twenty years from now, when many scholars will prefer remote access to these materials to seeking them out from a library, will the library community have developed a collective strategy for preserving a limited number of originals and reducing the redundancy of print collections?

For certain media, such as acetate sound discs or nitrate film, the original or master should never be used for access purposes due to the extreme fragility of the carrier. Service of those collections should always be done on reformatted media. However, that is an expensive proposition for any library, and there is great resistance to pushing the costs of preservation transfer onto the user. Regrettably, a great number of recorded sound and moving image resources are played back using the original. Until digitization is an affordable option for access to these media, their preservation will remain at very high risk.

Access

In nearly all research libraries, digitization is viewed as service of collections in another guise, one that provides enhanced functionality, convenience, certain preservation considerations, aggregation of collections that are physically dispersed, and greatly expanded reach. Among all the various strands of digitization activities at major research institutions, there are essentially three models of collection development based on access: one that serves as outreach to various communities; one that is collection-driven; and one user-driven. All libraries engage in the first kind of access to one degree or another. Where one sees significant strategic differences is in their approach to the choice between mounting large bodies of materials in the expectation of use versus collaborating closely with identified users to facilitate their data creation.

Access for outreach and community goals

There are and will continue to be times when libraries create digital surrogates of their analog holdings for reasons that are important to the home institution yet not directly related to teaching and research. Libraries are and will continue to be part of a larger community that looks to them for purposes that transcend the educational mission of the library per se. As custodians of invaluable institutional intellectual and cultural assets, libraries will always play crucial roles in fund-raising, cultivating alumni allegiance, and public relations.

Occasions for selective digitization projects include exhibitions, anniversaries (when archives or annual reports get into the queue), a funding appeal (usually as a quid pro quo for donation), and efforts to build institutional identity. Careful consideration needs to be given to what goes online for whatever purpose because, once a collection is online, it becomes de facto part of the institutional identity. Image building is a critical and often undervalued part of ensuring the survival of the library and its host institution. As custodians of the intellectual and cultural treasures of a university, libraries have an obligation to share that public good to the advantage of the institution.

Issue: Diversion of resources

Some library staff voice their anxieties that institutional concerns, such as fund-raising, public relations, and special projects, divert too many resources from more academically defensible projects. Most library administrators show an acceptance of this role and some use it to the library’s distinct advantage. Even a “vanity project,” if managed properly, will bring money into the library for digitization and provide the kind of training and hands-on experience that is necessary to develop digital library infrastructure and expertise. The key to building on such a project is to be sure that all the library’s costs, not only scanning but also creating metadata, migrating files, and so forth, are covered. Such projects, done willingly and well, usually enhance the status of the library within the community and seldom do long-term harm.

Collection-driven access

Special collections

The first large-scale digitization program was American Memory, inaugurated by the Library of Congress officially in 1995 but based on earlier digitization of discrete special collections. The goal of American Memory from the beginning has been to make widely accessible the Americana collections of the library, those being for all intents and purposes special collections - every sort of format and genre collected by the library except books and periodicals. This approach, focused as it has been on access to rich cultural heritage materials held in trust for the American people, made sense for many overlapping reasons. Ironically, the experience of an agency directly subordinate to the Legislative Branch of the Federal government has become a model for many academic libraries, even those whose mission, audience, and collections have virtually nothing in common with the largest public library in the world. In retrospect, one can see that virtually all digital projects that convert archival and non-print materials are modeled in one degree or another on American Memory. This is due chiefly to the extensive documentation that the library has mounted on its Web site, and to the well-publicized redistribution grants that it gave under its LC/Ameritech funding. The requirements for that grant were based on Library of Congress experience, and the requirements of other funding agencies, including the Institute for Museum and Library Services (IMLS), have been heavily influenced by them. The only other library that has similar collecting policies and a similar governance and funding structure is the New York Public Library, and the digital program it has scheduled to implement over the next few years bears remarkable resemblances to the Library of Congress’s in its ambitious time frame, its focus on special collections, and its stated goal of making access for the general public as high a priority as service to scholars. They share, in other words, the same strategic view of digitization, one well in line with the realities of their audience, collection strengths, and governance.

Indiana University reports that, in the early stages of its digitization program, it used the LC/Ameritech Competition proposal outline to assess the merits of collections for digitization. This led to a canvass of its libraries for “their most significant collections, preferably ones in the public domain or with Indiana University-held copyrights.” Then, with (special collection) candidates in hand, they examined them for what they identified as the basic criteria: “the copyright status of the collection; its size; its popularity; its use; its physical condition; [and] the formats included in the collection... and the existence of electronic finding aids.” [Brancolini, Library Trends, Spring 2000, v. 48]

While there are many reasons that academic libraries decide to digitize special collections, the rationales of the two public institutions merit special consideration. The two libraries base their selection decisions on their understanding that they are not part of a larger academic community, with faculty and students to set priorities. Rather, they serve a very broad and often faceless community - the general public - and so wish to make available things that both scholars and a broader audience would find interesting. Because their primary audience is not academic, they have no curricular or educational demands to meet. They can focus directly and exclusively on their mission as cultural institutions. Moreover, as libraries that have rich cultural heritage collections held in the public trust, they feel obligated to make those unique, rare, or fragile materials that do not circulate available to patrons not able to come to their reading rooms. Their strategic goal, then, is cultural enrichment. None of the research libraries with comparable collections, such as the Harvard and Yale University libraries, claim that as a goal.

For whatever reasons public, private, and academic libraries choose to mount special collections, they encounter many of the same difficulties in developing coherent digital collections because of the unique problems that special collections present. Whether one digitizes archival holdings or photographic collections for a broad public or for use by faculty in teaching and research, one still begins with the question of where to begin, where to end, and how to contextualize materials outside the reading room in a cost-effective manner. The following are a number of issues that invariably arise in the process of selection.

Issue: Physical condition

The problems with special collections actually begin with their sometimes parlous physical condition. Except for small digital exhibitions, special collections are usually digitized in quantity. Their intellectual value is often premised on volume, or comprehensiveness, which usually is the same thing, so curators and preservation staff often have to devote more time to the physical preparation of these items than to periodicals, for example, raising the price of putting them online. Inevitably there are cases in which items in a collection must be rejected because they are too frail, and cases in which an entire collection is deemed unscannable because the proportion of items that would have to be left out compromises the integrity of the collection for its intended purpose.

As noted above, at least one library is digitizing books and destroying them in the process. Their reasoning for this is justified in their official policy: the materials are not unique; they are not of artifactual value; and they will self-destruct in time. No library that holds unique or even very rare materials, or those that are of artifactual value, would risk damage to an item for access purposes, though, and until there are widely available cameras that can photograph a rare or tightly bound volume without distortion, this category of source material will not be scanned.

Issue: Intellectual control

The scarcity of cataloging or description that can be quickly and cheaply converted into metadata is often a decisive factor in excluding a collection from digitization. Given that creating metadata is usually a more expensive activity than the actual scanning, there is the need to take advantage of existing metadata - better known as cataloging. Often money to digitize has come with a promise by the library director to put up several thousand - even several million - images, a daunting pledge. To mount five million images in five years, as the Library of Congress pledged to do, has necessitated giving priority to large collections that already have extensive bibliographical controls. New York Public is likewise giving selection preference to special collections that already have some form of cataloging that can be converted into metadata in order to meet production goals. In this way, expedience can theoretically be happily married to previous institutional investments. These libraries have put enormous resources in past decades into creating descriptions, exhibitions, finding aids, and published catalogs of prized institutional holdings. One can assume that a collection that has been exhibited or made the subject of a published illustrated catalog has demonstrated research and cultural value.

Some collections that are supported by endowments can also make the transition to digital access more easily than others, because funds may be available for this within the terms of the gift. The Wallach collection of arts and prints, for example, at the New York Public, has been put online as the Digital Wallach Gallery. There are a number of grant applications that not only build the cost of metadata creation into the digitization project, but also appear to be driven in part by a long-standing desire on the part of a library to get certain special collections finally under bibliographical control.

It can be quite difficult, though, to harmonize the descriptive practices that were prevalent 40 years ago with what is required today. The expansive bibliographical essays that once were standard for describing special collections need quite a bit of editing to make them into useful metadata. It is not simply a question of standards, which have always been problematic in special collections anyway. It is the fact that people research and read differently on the Web than when sitting with an illustrated catalog or finding aid at a reading desk. For better or for worse, descriptive practices need to be reconceptualized for presenting these types of materials online. This rethinking is several years off, as we have as yet no long-term understanding of how people use special collections online.

Issue: Funder mandates

Many librarians ruefully admit that, so far, many of the projects they have developed were undertaken with a calculated effort to appeal to a funder, be it a wealthy alumnus or a Federal agency. While one can imagine all sorts of cases in which one collection was given priority for digitization over another because someone felt it would be easier to secure funds for, the “problem” is probably not as problematic as it is reported. All libraries answer to some authority or other for their budgets. The issue only bodes ill when libraries deliberately seek funding for things that are not core to institutional mission. As argued above, outreach can properly be considered part of mission work.

For the Library of Congress, for example, online distribution of collections is the only way to provide access to Americans in all Congressional districts. For the primary funders and governors, Members of Congress, who have built and sustained this library on behalf of their constituents, this rationale is compelling. New York Public also has to answer to a jealous state government that, while it does not financially support the library in full, places high demands on the library to fulfill a public mandate. The public institutions, under special pressures to meet the needs of many differing publics, wrestle routinely with what they see as mission-driven activities versus what is simply urgent for public relations reasons. This is true for academic libraries that get any state money.

Where this is most disconcerting is the issue of the so-called K-12 community. There is much (largely unsubstantiated) talk of how access to primary source materials held in research institutions will transform education in the K-12 community. This is a hypothesis that needs to be tested. But there is no doubt that public institutions are seen as holding a promise to improve the quality of our civic life if they provide greater access to their richest holdings. In one sense, it is a measure of the high esteem in which our society holds libraries that New York Public and the Library of Congress have been extremely successful in finding public-spirited citizens willing to make extraordinary financial donations in order to “get the treasures out.” This level of philanthropy, numbering in the tens of millions of dollars for each library, is simply unthinkable in any other country. While these libraries may be accused of pandering to donors on occasion, or are blamed for not paying enough attention to the academic community by digitizing materials that are not in demand first and foremost by scholars (cite NAS report), the fact is that public libraries, like the libraries in state universities, are not designed to serve exclusively, or even primarily, the scholarly community. This does not skew selection for digitization as drastically as some seem to assume. Donors may express an interest in a particular type of material, but they end up choosing from a set of candidate collections that have been proposed by curatorial divisions and vetted by preservation and digital library staff for technical fitness. In terms both of process and result, they differ little from their private academic counterparts.

It is worth noting that in both public and academic libraries, some curators who are active in special collection development advocate for digitization because they see it as a way to induce further donations. For them, the promise of access is a useful collection development tool because digital access advertises what the library collects and demonstrates a commitment to access.

General collections

To date, far fewer libraries have digitized significant series of books and periodicals than special collections. There are a few reasons that are commonly given, and others that can be inferred. For starters, there is a general reluctance on the part of libraries to undertake digitization of materials that have or even suggest a whisper of any copyright issue attached to them. This means sticking to the nineteenth century, by and large, and secondary literature (the bulk of imprints) from that era is often judged to be less valuable. It is only when one decides that the general collections of the nineteenth century are the special collections of the twenty-first that one can see their intellectual value rising.

Another reason is that, following the example of the first large-scale digitization program, American Memory, there has been a prejudice toward choosing rare or uncommon materials over those more commonly held. This may change over time, as the matter of convenience grows in importance and librarians see students and faculty alike preferring the expedience of remote access over the quality of the resource.

Furthermore, with materials that are unpublished or not commonly held, the chances are that a library can help to build institutional identity by mounting its special collections. This can help in encouraging alumni loyalty or in recruiting students. This assumption is actually belied by the success of the Making of America (MOA) projects at Michigan and Cornell, which have achieved considerable renown for their collections of monographs and periodicals.

But, as the two MOA projects highlight, one of the challenges faced by libraries mounting print publications is that of how much is too little and how much perhaps too much. The sense that textual items need to exist in a critical mass online stems in part from the fact that these books and magazines do not have quite the same cultural frisson as Jefferson holographs or Brady daguerreotypes, perhaps.

Issue: Critical mass

Indeed, a problem even more intellectually challenging than how to retool descriptive practices for the creation of metadata is how to represent a special collection or archives online. “Critical mass” is one criterion for selection that shows up in nearly all the written guidelines for selection and is commonly noted in conversation. [cite example] Librarians will refer to achieving a critical mass of materials online that, once achieved, will make it worthwhile to add items incrementally, either from within one’s own institution or, less commonly, by linking related but physically disparate holdings, creating a virtual collocation. The magic of critical mass, in theory, is that if you get enough related items up in a commonly searchable database, then you have created a collection that is richer in its digital instantiation than in analog. This is premised on the notion that the technology has a transformative power, that it can not only re-create a collection online, but give it new functionality, allow for new purposes, and ultimately create new audiences that put to it novel queries. It does this by, for example, turning static pages of text or numbers into a database. Monographs are no longer limited by the linear layout of the bound volume (or microfilm reader). By transforming text into hypertext, librarians can create whole new resources for their patrons from old ones and even make items that have had little or no use into something that gets a lot of hits.

Critical mass is also a term used frequently in analog collection development, and has proven a meaningful concept for acquiring and describing primary and secondary sources. (The original metaphor could be said to come either from the Hegelian theory of history, in which it refers to a mass of historical phenomena sufficient to transmute quantity into quality, or from nuclear physics, in which case we are speaking of a density of matter so great that it sets in motion a chain reaction. In both cases, the idea is that a number can become so great that it wreaks a transformation in the nature of matter itself. And in both cases, it is fundamentally a mystical concept.)

How much is enough? A critical mass is enough to allow meaningful queries through curious juxtapositions and comparisons of phenomena, be it the occurrence of the word “chemise” or the census returns from 1900. A large and comprehensive collection is valuable because it provides a context for interpretation. But in the digital realm, it turns out that we really mean something else by this term, critical mass, something ill-defined and quite new. The most salient example of this new phenomenon is the Making of America (MOA) at the University of Michigan, a database of thousands of brittle nineteenth-century imprints. While the books themselves were seldom called from the stacks, the MOA database is heavily used, though not primarily by students and teachers at Michigan. (Its largest user is the Oxford University Press, which mines the database for etymological and lexical research.) Is this database heavily used because it is easily searched, and the books were not? Because one can get access to it from any computer in any time zone, while the books were available only to a small number of credentialed users? Were the books, as they languished in remote storage, not of research value, whereas now, as a database, they are?

“Critical mass” could more accurately be thought of as “contextual mass,” a (variable) quantity of materials that provides a context for evaluation and interpretation. Whereas in the analog realm searching within a so-called critical mass has always been very labor-intensive, requiring great human effort to reveal the relationships in and among items in that collection, once those items are online in a form that is word-searchable, one has a mass that is accessible to machine searching rather than the more arduous human researching.
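As a concrete, if simplified, illustration of this shift from arduous human searching to machine searching, the sketch below is a hypothetical Python example, not drawn from the report; the page identifiers and page texts are invented placeholders. It builds a minimal inverted index over OCR'd page text so that a word such as "chemise" can be located across an entire digitized corpus in an instant.

```python
# Hypothetical sketch: a minimal inverted index over OCR'd page text,
# illustrating how word-searchability lets machines query a "contextual mass"
# that would take enormous human effort to search in the analog realm.
# Page identifiers and contents below are invented placeholders.
import re
from collections import defaultdict

pages = {
    "vol12/p043": "The chemise and other garments of the period are described ...",
    "vol07/p188": "Census returns from 1900 show a marked increase in ...",
    "vol31/p002": "Patterns for a lady's chemise, with illustrations ...",
}

def build_index(pages: dict) -> dict:
    """Map each word to the set of page identifiers on which it occurs."""
    index = defaultdict(set)
    for page_id, text in pages.items():
        for word in re.findall(r"[a-z']+", text.lower()):
            index[word].add(page_id)
    return index

index = build_index(pages)
print(sorted(index["chemise"]))  # -> ['vol12/p043', 'vol31/p002']
```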

But for special collections, which are not necessarily text-based, and usually under cruder bibliographical control than published works, the amount of material needed to get a critical mass almost defies the imagination. If a collection is very large, too large to digitize, the staff may choose to digitize a portion that represents the strengths of the collection. But what is that? How much is enough? These are subjective decisions, and they are answered differently by different libraries. In the public libraries, with no faculty to provide advice, the decisions have been made by the curatorial staff. The Library of Congress has had outside scholarly consultants and educational experts from time to time to aid in selection decisions, but the actual selection decisions are always made by curatorial staff, within limits indicated by scanning and preservation experts. New York Public relies on a curatorial staff that is expert in a number of fields and, like most cultural heritage institutions, has long corporate experience in selecting for exhibitions.

But many curators see doing anything less than a complete collection as “cherry-picking” that lacks intrinsic value to the research mission of the institution. Others are less severe and cheerfully admit that for most researchers, a little bit is better than nothing at all and very few researchers mine any single collection to the depth that we are talking about. Those who do, they think, would end up coming to see the collection on site at some point, in any event. These judgments are based on current experience with researchers, not on objectively gathered data. When asked, as an analogy to the MOA case, for example, about how research techniques in special collections have been affected by digitization, some librarians asserted that research will be pursued by radically different strategies inside of a decade. Others think that research strategies for special collections materials would not change, even with the technology. The important thing, in their view, is not to get the resources online, but to make tools for searching what is available in libraries readily accessible on the Web - tools such as finding aids. New York Public has secured money to do long-term studies of (digitized) special collections users in order to gather information about use and test assumptions about users.

The California Digital Library has inaugurated a project, called California Cultures, designed to make accessible “a ‘critical mass’ of source materials to support research and teaching. Much of this documentation will reflect the social life, culture, and commerce of ethnic groups in California.” [citation] The collection will comprise about 18,000 images.

The role of scholars in selecting a defined set of contextually meaningful sources often works well in certain disciplines for published items. Agriculture and mathematics are examples where scholars have been able to come up with a list of so-called core literature that is amenable to comprehensive digitization. By way of contrast, curators as a rule do a far better job than scholars in selecting from special collections. These are the sorts of materials that usually only curatorial staff are familiar enough with to make fine assessments.

Issue: Coordinated collection development

Another trend that is evident is that, for all the talk of building federated collections that will aggregate into a digital library with depth and breadth - that is, critical mass - the principle of “states’ rights” is nonetheless the standard. Each institution decides on its own what to digitize, and usually does so with little or no consultation with other libraries. There are funding sources that require collaboration in some circumstances - the Library of Congress’s Ameritech grant is an example - but the extent of collaboration usually has to do with using the same standards for scanning and, at least sometimes, description. Selection is not truly collaborative; it could more properly be characterized as “harmonized thematically.” Institutions usually make decisions based on particular institutional needs rather than on consensus community priorities.

One example of coordinated collection development is the California Digital Library (CDL), which sees collaboration as a key element in sustainability. Because of funding and governance issues, the CDL believes that it must foster a sense of ownership and responsibility for these collections among creators statewide, locality by locality. Its access policies are derived from this view to the extent that it has built a single place where everyone can see what aggregated collections can be. This, the CDL believes, will help local collection development. [citation]

By way of contrast, scholars as data-creators tend to have a different concept of the term. Projects such as the Valley of the Shadow at the Institute for Advanced Technology in the Humanities (IATH) at Virginia are built with the achievement of a critical mass for teaching and research in mind. [citation] Librarians might tend to see such a project as publishing because its scale is so small and it is not built to be searchable with other such databases. But not all libraries are trying to build large-scale collections of digital surrogates online; some instead have a strategy that starts with the end-user and works from there.

User-driven access

Some libraries have decided that they will not digitize collections for general access purposes, but rather only in response to what their local academic community has indicated it wants. The University of Virginia (UVA) is the most vivid example of this approach, though the depth and breadth of their activities in this area are far from typical. (As a state-supported institution, they also have access projects that serve state and regional needs.) UVA has several digital conversion initiatives, both in the library and elsewhere on campus. In the Institute for Advanced Technology in the Humanities (IATH), an academic center that is not in the library, scholars develop deep, deeply interpreted and edited digital objects that are, by any other name, publications. Examples include the projects on the writers Blake, Rossetti, and Twain, as well as the Valley of the Shadow mentioned above. Within the library there is the Electronic Text Center, where the staff choose to encode humanities texts that they put up without the interpretive apparatus of the IATH objects. They are more analogous to traditional library materials that are made available for others to interpret. Except, of course, that encoded text is far more complicated a creature than the OCR’d [Optical Character Recognition] text that other libraries are creating. To some extent, this latter center is, if anything, technology-driven, in that it seeks to pursue the potential of various encoding schemes as part of its explicit agenda.

To some extent, scholars and librarians at Virginia have been working in essentially parallel tracks, and they are now ready to grapple with the long-term consequences of their collection-building activities in concert. The Andrew W. Mellon Foundation has granted IATH and the library funds to develop an architecture that will allow the library to provide long-term care of the digital publications that the scholars are creating.

At other libraries, not yet as far advanced in building digital collections from their analog holdings as Virginia is, there is a growing skepticism about the great collections approach and its usefulness for their campus. At New York University, the focus is on the user largely because there is neither a pressing need for outreach at their campus, nor the sense that the collections are truly world-class and of surpassing cultural significance. That is, the library does not see cultural enrichment as part of its mission; the collections had been developed for teaching and specific types of research only. So this library has decided to concentrate on working with faculty and graduate students to develop digital objects designed to enhance teaching and research. An even more pressing need is the development of an infrastructure to deal with born-digital materials. There will be grants given to faculty for development of teaching and research tools, similar to what Virginia has created, but the effort of the library staff is going into preparing for that time, seen to be in the immediate future, when the demands of born-digital materials will obviate any initiative to create collections of digital surrogates.

Harvard University libraries are taking the same approach, concentrating on building an infrastructure to support born-digital materials first and foremost. While the holdings of the more than one hundred repositories in the university certainly comprise a rich collection of cultural heritage, Harvard will attempt to serve the Harvard community, not a larger community beyond its campus. [citation to Flecker article] “While in many instances the digital conversion of retrospective materials already in the University’s collections can increase accessibility and add functionality and value to existing scholarly resources, it is strategically much more important that the library begin to deal with the increasing flood of materials created and delivered solely in digital format.” Although $5M of the $12M allocated is for content development, so far the majority of content development comprises conversion for access purposes. Slated for review is the collection of collections that have been mounted so far. “One specific issue being discussed is the randomness of the areas covered by the content projects. Since these depend upon the initiative of individuals, it is no surprise that the inventory of projects undertaken is spotty, and that there are notable gaps....It is also possible that specific projects will be commissioned to address strategic topics.” However, the gaps Flecker refers to are not content per se - specific subjects that would complement one another - but content that demands different types of digital format - encoded text, video, sound recordings, etc. This is a technical criterion, of course, independent of collection development, and fully concordant with the purposes that Flecker identifies the initiative is to serve.

Many of the scholar-driven projects may be coherent digital objects in themselves. But they would, by library standards, fail the test of comprehensiveness. Indeed, one could say that the value added by the scholar lies precisely in the selection. Those projects that are driven by scholar selection, such as the “Fantastic” collection of witchcraft at Cornell, do not claim to have comprehensiveness, but serve rather as pointers to the collection by presenting a representative sampling of it.

Issue: Overbuilding

Cornell has tried both collection-driven and user-driven approaches to selection. In several instances, staff have begun with the expressed interests of faculty, say for teaching, and have developed digital collections based on those interests. In each case, though, Cornell has expanded its brief and augmented the faculty’s choices with related materials. A faculty member’s interests are usually fairly circumscribed, and librarians will select a good deal of additional material on a topic, such as Renaissance art, to provide depth to the collection. As a result, the digital collection begins to take on the character of a database. A scholar will start the project by asking for certain resources to be digitized for a specific and usually limited purpose, and the library staff responds by developing from this modest request a plan to build something that makes for rich searching. Research librarians are used to thinking of collections as being useful to the extent that they assure some measure of comprehensiveness or depth. Scholars, on the other hand, will take such comprehensiveness for granted and concentrate instead on making choices and discriminations among collection items in order to build a case for an interpretation. While these two views of collections are complementary, when it comes to selection for digitization they create the most difficult choices facing libraries. Selection is an “either/or” proposition; it does not tolerate “both/and” solutions. Historians who are working on Gutenberg-e projects are now beginning to encounter the limitations that librarians live with every day. When faced with the opportunity not only to write their text for electronic distribution but also to represent their source materials, they find themselves facing dilemmas familiar to digital collection builders everywhere: how much of the source material is enough to represent the base from which an argument was built, and why is digitization of even a few core files so expensive?

II. Cataloging

The creation of metadata is recognized by all to be one of the major cost factors in digitizing primary and secondary sources. For monographs and serials, the genres for which the MARC record was originally devised and for which it is a well-understood standard, retooling catalog records need not be complicated or expensive. For those materials that are published but not primarily text-based, such as photographs, posters, recorded speech, or musical interpretations, the MARC record has noted limitations, and those tend to be accentuated in the online environment. Unpublished materials share this dichotomy of descriptive practice between the textual and the non-textual. For those institutions that have chosen to put their special collections online - items that often lack uniform description - tough decisions must be made about how much information can be created in the most cost-effective way. In some cases, re-keying or OCR can be used to produce a searchable text in lieu of creating subject access.
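
By way of illustration, the following is a minimal sketch of the OCR route to a searchable transcript, not a description of any particular library’s workflow. It assumes page images stored as TIFF files and uses the Pillow and pytesseract packages as stand-ins for whatever scanning and OCR tools an institution actually employs; the directory and file names are hypothetical.

from pathlib import Path

from PIL import Image
import pytesseract

def ocr_directory(image_dir, out_file):
    """Run OCR over every TIFF page image and write one combined, searchable text file."""
    pages = sorted(Path(image_dir).glob("*.tif"))
    with open(out_file, "w", encoding="utf-8") as out:
        for page in pages:
            # Uncorrected plain-text OCR of a single page image.
            text = pytesseract.image_to_string(Image.open(page))
            out.write(f"--- {page.name} ---\n{text}\n")

# Example: build an uncorrected transcript for one digitized item.
ocr_directory("scans/item_0001", "scans/item_0001/fulltext.txt")

The point of such a transcript is not description but discovery: full-text searching can substitute, imperfectly, for subject headings that would be too expensive to create item by item.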

Many libraries have looked for existing material created for other purposes, such as exhibition captions, that can be refashioned into descriptions. But that is only a start. As an expert from the Library of Congress wrote, “Even when contextualizing or interpretive material is available for repurposing, preparation for the online environment demands substantial effort of curatorial staff beyond that needed for traditional processing.” This is particularly true for items that, when consulted in the original in a reading room, can be researched relatively well through finding aids and collection-level descriptions that may be idiosyncratic in nature.

So far, however, the online environment is one in which the context for interpretation needs to be far more explicit than in the analog realm. It is interesting to ask why that is so. Are librarians creating too much descriptive material for the online presentation of collections that have been served successfully in reading rooms with no such level of description? Do librarians assume that the online user has far less sophistication or patience than the onsite researcher? There does seem to be a general operating principle that an online patron will not use a source, no matter how valuable, if it is accompanied by only minimal-level description. This may be a well-founded principle, and it is certainly true that the deeper and more structured the description, the likelier it is that the item will be found through the search protocols most in use.

As mentioned above, one of the factors determining how suitable an item or collection is for digital representation is how well and how economically it can be described. NOTE: IS THERE ANYTHING TO BE ADDED HERE? DOES THIS SECTION HAVE A REASON FOR BEING?

III. Data management and access policies

In contrast to selection criteria, there is much less written about how to plan for the access and preservation of digitally reformatted collections over time. This is in part because we know little about maintaining digital assets for the long haul. We have already learned a great deal through failed or deeply flawed efforts - those of the “we’ll never do that again!” variety told of various CD projects, for example - but such lessons tend to be communicated only informally, for understandable reasons. One exception is the University of Michigan, a library that has a clear view of the role digitization plays - collection management and preservation - and so has developed and published policies that support those goals. The California Digital Library is another, perhaps because, as a central repository, it must establish standards and best practices for its contributors to follow; doing so is paramount to building confidence as well as collections.

Preservation

Nearly every library declares its intention to preserve the digital surrogates that it creates, and the Library of Congress has also pledged to preserve those surrogates created by other libraries under the auspices of the National Digital Library Program. (Citation: the LC preservation strategy described in some detail in RLG DigiNews.) In reality, however, many libraries have created digital surrogates for access purposes and may have no strategic interest in maintaining those surrogates with the care they would take had they created the files to serve as replacements. Libraries are nonetheless uncomfortable at this point saying outright that, should push come to shove, their commitment to many of their surrogates may be limited. Or they may, as in the case of the Library of Congress or the New York Public Library, which create surrogates for access purposes alone, still declare an interest in maintaining those surrogates as long as they can, because the original investment in creating the digital files has produced something of enormous value to their patrons.

The mechanism for long-term management of digital surrogates is in theory no different from that for born-digital assets. While refreshment and migration of digital collections have occurred in many libraries, the protocols and policies for preservation are clearly still under development. Many libraries have been sensitized to the fact that loss can be both simple and catastrophic, whether it begins with the wrong choice of (proprietary) hardware, software, or storage medium or with negligent management of metadata. The Y2K threat that libraries faced during 1999 has led to systemic improvements in many cases. Not only did institutions become aware of how deleterious it is to allow different software packages to proliferate, but they also developed disaster preparedness plans and often were given funds for infrastructure upgrades that might have been postponed or left unfunded without the general sense of urgency that the looming crisis provided.
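
One routine safeguard that such protocols typically include is a fixity check: recomputing checksums for master files and comparing them against a stored manifest so that silent corruption is caught before or after a migration. The following is a minimal sketch of that idea, not a description of any particular library’s system; the manifest format (one “checksum path” pair per line) and the file names are hypothetical.

import hashlib
from pathlib import Path

def sha256(path):
    """Return the SHA-256 digest of a file, read in chunks to spare memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(manifest, root):
    """Return the files whose current checksum no longer matches the stored manifest."""
    failures = []
    for line in manifest.read_text().splitlines():
        if not line.strip():
            continue
        recorded, name = line.split(maxsplit=1)  # "checksum  relative/path" per line
        if sha256(root / name) != recorded:
            failures.append(name)
    return failures

bad = verify(Path("manifest-sha256.txt"), Path("masters"))
print("all files verified" if not bad else f"fixity failures: {bad}")

Run periodically, or before and after each refreshment or migration, such a check turns negligent management into a detectable event rather than a silent loss.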

Issue: Odd objects

Libraries are anticipating the day when they must develop strategies for handling digital objects created by faculty outside the purview of the library. These are the often elaborate constructions built by individual scholars or groups of collaborators that the library hears about only after the critical choices of hardware, software, and metadata have been made, often by people wholly unaware of the problems of long-term access to digital media.

An increasing number of library managers express concern about materials created by faculty that are “more than a Web site” yet less, often far less, than what the library would choose to accession and preserve. While libraries acknowledge that this is a growing problem, none has yet been forced to do much about it, and thinking about how to deal with faculty projects is only now evolving. Predictably, those libraries that are collection-driven in approach are working to build a system for selecting what the library wishes to accession into its permanent collections. Cornell is developing criteria that individuals must work to if they expect the library to provide “perpetual care.” CDL already has such guidelines in place. Michigan has a well-articulated preservation policy, one that is detailed enough to support its vision of digital reformatting as a reliable long-term solution to the brittle-book problem. [See the following policies: ...]

Copyright

At the few institutions that are digitizing items not presumed to be in the public domain, copyright is best described as a risk management issue. At the Mann Agricultural Library at Cornell, librarians are digitizing a core literature that by its nature will include some items still under copyright. They have decided that running copyright checks is part of the digitization process, and while they recognize that it is very time-consuming, they have settled on a few ground rules and assumptions that streamline the work and reduce the risk of infringement. Based on research indicating that only about 5 to 7 percent of copyright owners renew their copyright, they limit their attempts to trace owners and wait to be contacted in the event of inadvertent infringement.

Most institutions that mount visual images that are under copyright, or likely to be restricted in some way, serve thumbnails only. At the Library of Congress, “Recognizing that rights status is often indeterminate (particularly for unpublished works), a risk management approach is taken. Some items that are digitized for internal purposes or for preservation are limited to onsite access.”

Access onsite and off

The ease of finding digitized items on library Web sites varies greatly. A few sites are constructed in a way that makes finding digitized collections almost impossible for people who do not already know they exist. Others have integrated the surrogates into the online catalog and into OCLC and/or RLIN. Some DLF members, those whose primary purpose in digitization is to increase access to special collections and rare items, have expressed willingness to expose the metadata for these collections to harvesters using the technical framework established by the Open Archives Initiative.
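
To make concrete what exposing metadata to a harvester involves, the following is a minimal sketch of an OAI-PMH harvest of Dublin Core records. The verb ListRecords and the metadataPrefix oai_dc come from the OAI-PMH specification; the repository URL is hypothetical, and a production harvester would also handle resumption tokens for large result sets.

import urllib.request
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"   # OAI-PMH namespace
DC = "{http://purl.org/dc/elements/1.1/}"        # Dublin Core element namespace

endpoint = "https://library.example.edu/oai"     # hypothetical repository base URL
url = endpoint + "?verb=ListRecords&metadataPrefix=oai_dc"

with urllib.request.urlopen(url) as response:
    tree = ET.parse(response)

# Print the identifier and title of each harvested record.
for record in tree.iter(OAI + "record"):
    identifier = record.findtext(OAI + "header/" + OAI + "identifier")
    title = record.findtext(".//" + DC + "title")
    print(identifier, "-", title or "[no title]")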

NOTE: DO I NEED THIS LAST SECTION?

IV. User support

In terms of user preferences and support, there is also little understanding of how research library patrons use what has been created for them. Most libraries recognize that the collections they now offer online require different types of support than they have traditionally given readers in the reading room. In many cases, user support has been developed for “digital collections” or “digital resources,” terms that almost invariably denote born-digital (licensed) materials. The Library of Congress, which specifically targets a K-12 audience, has three reference librarians for its National Digital Library Program Learning Center. Libraries as a rule have not been reallocating staff to deal specifically with digitized collections. Hit rates and analysis of Web transactions have yielded a great deal of quantitative data about access to digital surrogates, and that data has been mined for any number of internal purposes, from “demonstrating” how popular sites are to making gross generalizations about where users are dialing in from. Qualitative analysis is harder to derive from these raw data, and, as a rule, few in-depth studies have attempted to look into how patrons are reacting to the added functionality and convenience of materials now online. Libraries have been keeping careful track of gate counts, for example, but when those counts go up or down, what conclusions are we to draw about the effect of online resources on the use of onsite resources?
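
The quantitative data in question are usually nothing more elaborate than Web server logs. The following is a minimal sketch of the kind of hit-count analysis many libraries run, not a description of any particular institution’s reporting; the Apache-style log format and the “/digital/” path prefix are hypothetical, and a real analysis would also filter out robots and proxy caches.

import re
from collections import Counter

REQUEST = re.compile(r'"(?:GET|HEAD)\s+(\S+)\s+HTTP')  # request line in a common-format log

def hits_per_item(log_path, prefix="/digital/"):
    """Count requests for paths under the digitized-collections prefix."""
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = REQUEST.search(line)
            if match and match.group(1).startswith(prefix):
                counts[match.group(1)] += 1
    return counts

# Report the ten most requested digitized items.
for path, hits in hits_per_item("access.log").most_common(10):
    print(f"{hits:6d}  {path}")

Counts of this kind can demonstrate popularity, but they say nothing about who the users are or what they did with what they found, which is why qualitative study remains the harder problem.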

One of the exceptions to this rule is not a library, but the journal archiving service, JSTOR, which rigorously tracks the use of its resources. It analyzes its users’ behavior because it needs to recover costs and hence must stay closely attuned to demand, within the constraints of copyright issues. Looking ahead, such close analysis of the ways in which researchers use specific online resources, and especially how they do or do not contribute to the productivity of faculty and students, will be a prime interest for DLF member libraries.

Most libraries report having classes and other instructional options available for both students and faculty. Some librarians report that instruction is not really necessary for undergraduates, who are quite used to looking online first, but that general orientation to library collections is needed more than ever.

NOTE: WHAT ELSE SHOULD BE SAID IN THIS CONTEXT?

SUMMARY AND CONCLUSIONS

Costs

A useful way of looking at the strategic value of digitized collections is to ask what would happen if these programs had to be self-supporting. Would the money for creating digital surrogates come from the acquisitions budget? From preservation? If the program were supported through a separate budget line, from which pocket of money would those funds be reallocated?

Michigan, which says that it moved beyond project-based digitization three years ago, funds programs from its base budget or from fee-based services, though it still seeks and receives large grants for conversion. Managers say that conversion costs must come from outside sources, at least in part; base funding is for new content and for the infrastructure and technical support needed to sustain digital collections, whether born-digital or “re-born.” Harvard’s Library Digital Initiative (LDI) is funded internally, though of course the purpose of LDI is not collection development at all but building infrastructure.

As it is, digitization costs at most libraries are borne by external funds, and the projects developed are tailored to appeal to the intended source of funding, be it a federal agency with stringent and inflexible grant conditions, a private foundation with a heuristic interest in projects, or donors and alumni who usually contribute for eleemosynary purposes and often out of dedication to the institution and its mission per se. When asked about priorities for selection, many respondents remarked wryly that they digitize what they can get money to digitize, implying and sometimes stating directly that their choices were skewed by funding considerations and did not serve pure scholarship. It is also the case, however, that what selectors, curators, and bibliographers judge to be of highest value often differs from what administrators do, because the two groups hold differing views of where scholarship is heading, how sophisticated the users are, and what is of lasting import.

Some librarians expressed great concern that, as long as libraries are competing for outside funds to digitize, they will be stuck in an entrepreneurial phase in which collection development is driven by strong personalities - those who are willing to compete for funds - and some parts of the library’s collections will go untapped simply because the subject specialist in that area is not “the entrepreneurial type.” Others express more serious concerns about the fate of non-English-language materials, and even greater anxiety about the neglect of non-Roman-script collections.

Concerns about the changing role of library staff, above all of bibliographers, come up with increasing frequency. Bibliographers are increasingly diverted from traditional collection development duties to spend more time selecting for digitization - what might be called reselection - something that is bound to have some effect on current collection development of traditional materials. A topic far more widely discussed is where to find the skill sets needed for digital library development. If libraries cannot afford to hire those skills - and increasingly they cannot - how are they to grow them within the organization without robbing Peter to pay Paul?

NOTE: HAVEN’T GOTTEN THIS FAR. WHAT IS USEFUL HERE?

Benefits

Oxford reports [citation] that curators learned an enormous amount about their own collections and about those of other colleges.

Staff training and expertise

CONCLUSION

What is clear by now is that digitization must be an integral part of the core mission of the library if it is to be a sustainable activity over time. While the majority of research libraries engaged in digitization have been able to raise external funds for conversion, they all recognize the hazards of relying on those funds for long. There is no such thing as a free building: even were a donor to pay for every aspect of construction, from land acquisition to furnishing, at some point the ongoing maintenance costs become the responsibility of the home institution, and the building must meet minimum criteria for support.

The same holds true of digitized collections. Over the next few years we will see some libraries that have done digital projects essentially phase them out or reduce the activity to the exception rather than the rule. Others, committed to large-scale digital projects either as a part of collection management or as a means of extending access, will continue, and they will begin to address the tough questions of finding internal funds or developing fee-based services to support conversion, maintenance, and service.

APPENDIX I. FRAMEWORK QUESTIONS

I. Identification, evaluation, and selection

Do (written) guidelines exist; and if they do, how were they generated?
Who initiates the reformatting project?
Who selects the material? According to what criteria?
- sampling of core collections
- signature items
- most frequently demanded (by whom? faculty? students? publishing office? commercial entities?)
- preservation surrogates
- added functionality or ease of use
- funder mandates
- public domain
- high research value (and who decides that and how?)
- distance learning demands
- course reserves
- faculty request
- financial considerations (and are those considerations given higher or lower priority than others?)
- rarity
- Neat Stuff
- My Stuff
Who is consulted in the process – legal counsel, preservation specialists, catalogers, extramural experts or constituents, development office, etc.?
Who is the intended audience – researchers vs. students; specialists vs. general public; secondary vs. primary source users? How much additional curatorial work is necessary to make the objects accessible to the intended audiences? Are these raw materials or cooked? Published or unpublished?
What is the intended purpose – pedagogical, access, preservation surrogate, commercial use, etc.?
Describe in some detail the various advantages and disadvantages of using a digital surrogate for access, teaching, research, etc. in order to clarify what criteria should be considered when deciding to digitize an item or collection of items – functionality, post-scanning enhancements, forensics possible and impossible with digital, size and medium constraints of the source materials, etc.
Is any repurposing envisioned?
Who decides the technical specifications for conversion, and how?

II. Cataloging

Who creates the metadata? What criteria are used for deciding what gets described and to what level of detail?

III. Data management and access policies

[preservation and dissemination/service]

Given the stated (or implicit, or not even considered) purpose of the surrogate, what provisions have been made for access to the digital versions – to whom and under what circumstances? How is copyright handled, if it applies? How is use authorized, if it is? What provisions are made for long-term maintenance and access? Are the surrogates searchable and accessible throughout the internal system and beyond?

IV. User support [reference/service]

How much support is given to the creator (say, a professor or curator), to the intended user, and to users not originally foreseen at the time of conversion?

BIBLIOGRAPHY

[OAC: www.cdlib.org/libstaff/sharedcoll/ then projects then OAC]
