VLACRL Spring 2011: Clay Shirky, Fantasy Football, and the Future of Library Collections

Speaker: Greg Raschke

Raschke started off with several assumptions about the future of library collections. These should not be a surprise to anyone who's been paying attention: the economics of our collections are not sustainable; costs and spending have gone up over the years, but there is a ceiling on funding, so we need to lower the costs of the entire system. We're at a tipping point where "just in case" collecting no longer delivers at the point of need. We must change the way we collect, and it will be hard, but not impossible.

The old system of supply-side collection development assumes that we’re working with limited resources (i.e. print materials), so we have to buy everything just in case someone needs it 10 years down the road when the book/journal/whatever is out of print. As a result, we judge the quality of a collection by its size, rather than by its relevance to the users. All of this contributes to an inelastic demand for journals and speculative buying.

The new system of demand-driven collections views them as drivers of research and teaching. It's not really a new concept so much as a new workflow. There's less tolerance for investing in a low-use collection, so use data becomes more important, as does modifying what we collect based on that data. The risks of failing to evolve and innovate can be seen in the fate of the newspapers, many of which held onto the old systems for too long and are dying or becoming irrelevant as a result.

Demand-driven collection development can create a tension between the philosophy of librarians as custodians of scholarship and librarians as enablers of a digital environment for scholars. Some think that this type of collection development may result in lower unit costs, but the reality is that unless the traditions of tenure and promotion change, the costs of publishing scholarly works will not go down. One of the difficult aspects of demand-driven collection development is that we won't be getting new funds to do it; we must free funds from other areas in order to invest in these new methods (e.g. local digital production and patron-driven acquisitions).

The rewards of adapting are well worth it. The more our constituencies use the library and its resources, the more vital we become. Look at your data, and then bet on the numbers. Put resources into enabling a digital environment for your scholars.

Demand-driven collection development is not just patron-driven acquisitions! It’s about becoming an advanced analyst and increasing the precision in collection development. For NCSU‘s journal review, they look at downloads, impact factors, publications by NCSU authors, publications that cite NCSU authors, and gather feedback from the community. These bibliometrics are processed through a variety of formulas to standardize them for comparison and to identify outliers.
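[Raschke didn't spell out the formulas themselves. As a rough sketch of what standardizing disparate metrics might look like, the Python below converts a few invented journal metrics to z-scores and flags titles whose composite score is unusually high or low; the titles, numbers, and threshold are all hypothetical, not NCSU's actual method.]

```python
import statistics

# Illustrative journal metrics (invented, not NCSU data): downloads, impact
# factor, and local publication counts for a handful of titles.
journals = {
    "Journal A": {"downloads": 5200, "impact_factor": 3.1, "local_pubs": 14},
    "Journal B": {"downloads": 310,  "impact_factor": 1.2, "local_pubs": 1},
    "Journal C": {"downloads": 980,  "impact_factor": 7.8, "local_pubs": 3},
    "Journal D": {"downloads": 45,   "impact_factor": 0.4, "local_pubs": 0},
}

def z_scores(values):
    """Standardize raw numbers to z-scores (mean 0, standard deviation 1)."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values) or 1  # avoid division by zero
    return [(v - mean) / stdev for v in values]

metrics = ["downloads", "impact_factor", "local_pubs"]
titles = list(journals)

# Standardize each metric across titles so unlike measures can be compared and combined.
standardized = {
    metric: dict(zip(titles, z_scores([journals[t][metric] for t in titles])))
    for metric in metrics
}

for title in titles:
    composite = sum(standardized[m][title] for m in metrics) / len(metrics)
    flag = "outlier?" if abs(composite) > 1.0 else ""  # arbitrary illustrative threshold
    print(f"{title}: composite z = {composite:+.2f} {flag}")
```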

For print resources, they pulled circulation and bibliographic information out of their ILS and dropped it into SAS to assess the use of these materials over time. It was eye-opening to see which subject areas saw more than one circulation within 10 years of being added to the collection and which saw no circulations at all. As a result, they were able to identify funds that could go towards supporting other areas of the collection, and they modified the scopes of their approval profiles. [A stacked graph showing the use of their collection, such as print circulation, ejournal/ebook downloads, reserves, and ILL, has been one of their most popular promotional tools.]
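[Their analysis was done in SAS; purely as a sketch of the same idea, with invented column names and data, here is how an ILS export might be summarized in Python/pandas.]

```python
import pandas as pd

# Hypothetical export from the ILS: one row per title, with an LC class or fund
# code as the "subject", the year it was added, and its circulations since then.
# A fuller analysis would also window circulations to the 10 years after adding.
records = pd.DataFrame([
    {"title": "Book 1", "subject": "QA", "year_added": 2000, "circulations": 4},
    {"title": "Book 2", "subject": "QA", "year_added": 2001, "circulations": 0},
    {"title": "Book 3", "subject": "PR", "year_added": 1999, "circulations": 0},
    {"title": "Book 4", "subject": "PR", "year_added": 2002, "circulations": 1},
])

# Count, per subject, the titles that circulated more than once and the titles
# that never circulated at all.
summary = records.assign(
    circ_gt_one=records["circulations"] > 1,
    never_circed=records["circulations"] == 0,
).groupby("subject")[["circ_gt_one", "never_circed"]].sum()

print(summary)
```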

As we shift to a demand-driven collection development approach, we will be better able to provide content at the point of need. This includes incorporating more than just our local collections (e.g. adding HathiTrust and other free resources to our catalog). Look to fund patron-driven acquisitions that occur both in the ebook purchasing models and through ILL requests. Integrate electronic profiling with your approval plans so that you are not just looking at purchasing print. Consider ebook packages to lower the unit costs, and use short-term loans for ebooks as an alternative to ILL. Get content to users in the mode they want to consume it. Do less speculative buying, and move money into new areas. It is imperative that libraries and librarians collaborate with each other on digital curation, digital collections, and collective bargaining for purchases.

There are challenges, of course. You will encounter the CAVE people. Data-driven and user-driven approaches can punish niche areas, disciplinary variation, and resources without data. The applications and devices we use to interact with digital content are highly personalized, which is a challenge for standardizing access.

I asked Raschke to explain how he evaluates resources that don't have use data, and he said he's more likely to stop buying them. For some resources, he can look at proxy logs and whether they are being cited by authors at his institution, but otherwise there isn't enough data beyond user feedback.

NASIG 2010: Integrating Usage Statistics into Collection Development Decisions

Presenters: Dani Roach and Linda Hulbert, University of St. Thomas

As with most libraries, they are faced with needing to downsize their purchases in order to fit within reduced budgets, so good tools must be employed to determine which resources to remove or acquire.

Impact factor statistics mean little to librarians, since the "best" journals may not be appropriate for the programs the library supports. Quantitative data like cost per use, historical trends, and ILL data are more useful for libraries. Combine these with reviews, availability, features, user feedback, and the dust layer on the materials, and then you have some useful information for making decisions.

Usage statistics are just one component that we can use to analyze the value of resources. There are variables other than cost and methods other than cost per use, but these are what we most often apply.
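[For anyone who hasn't run the numbers before, cost per use is simply the annual cost divided by a use count for the same period. A tiny worked example with invented figures:]

```python
# Cost per use: annual subscription cost divided by the use count reported for
# the same period (e.g. JR1 full-text downloads). Figures below are invented.
annual_cost = 2400.00   # hypothetical subscription price
downloads = 600         # hypothetical downloads for the year

cost_per_use = annual_cost / downloads
print(f"Cost per use: ${cost_per_use:.2f}")  # -> $4.00
```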

Other variables can include funds/subjects, format, and identifiers like ISSN. Cost needs to be defined locally, as libraries handle it differently for annual subscriptions, multiple payments/funds, one-time archive fees, hosting fees, and single-title databases or ebooks. Use is also tricky: a PDF download in a JR1 report is different from a session count in a DB1 report, which is different from a reshelve count for a bound journal. Local consistency with documentation is the best practice for sorting this out.
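[One way to keep that local documentation straight, sketched here with hypothetical labels and figures, is to tag every use count with how it was counted rather than silently adding unlike numbers together.]

```python
from dataclasses import dataclass

@dataclass
class UseCount:
    """One usage figure, tagged with how it was counted so unlike counts
    aren't added together by accident."""
    resource: str
    count: int
    source: str   # e.g. "JR1 full-text downloads", "DB1 sessions", "print reshelves"
    period: str   # e.g. "2009"

# Hypothetical figures for one resource from three different counting methods.
uses = [
    UseCount("Journal X", 412, "JR1 full-text downloads", "2009"),
    UseCount("Journal X", 150, "DB1 sessions", "2009"),
    UseCount("Journal X", 23,  "print reshelves", "2009"),
]

# Report each count alongside its definition instead of collapsing them into one number.
for u in uses:
    print(f"{u.resource} ({u.period}): {u.count} {u.source}")
```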

A library-wide SharePoint service allows them to drop documents with subscription and analysis information into one location for liaisons to use. [We have a shared network folder that I use for some of this; I wonder if SharePoint would be better at managing all of the files?]

For print statistics, they track bound volume use and new issue use separately, scanning barcodes into their ILS to keep a count. [I'm impressed that they have enough print journal use to do that rather than making hash marks on a sheet of paper. We had 350 items reshelved last year, including ILL use, if I remember correctly.]

Once they have the data, they use what they call a "fairness factor" formula to normalize the various subject areas and determine whether materials budgets are fairly allocated across all disciplines and programs. Applying it to existing allocations all at once would likely shock budgets, so they decided to distribute new money using the fairness factor, and underfunded areas are gradually being brought into balance without penalizing overfunded areas.
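[They didn't share the formula itself, so the following is purely a hypothetical sketch of the general approach: weight each subject by how underfunded it is and divide only the new money according to those weights, leaving existing allocations untouched.]

```python
# Hypothetical "fairness factor" allocation: divide only the NEW money in
# proportion to each subject's weight, leaving current allocations alone.
# The subjects, weights, and dollar amounts are invented for illustration.
current_allocations = {"Biology": 40000, "History": 15000, "Nursing": 22000}
fairness_weights = {"Biology": 0.2, "History": 0.5, "Nursing": 0.3}  # higher = more underfunded
new_money = 10000

total_weight = sum(fairness_weights.values())
for subject, weight in fairness_weights.items():
    increase = new_money * weight / total_weight
    new_total = current_allocations[subject] + increase
    print(f"{subject}: {current_allocations[subject]} + {increase:.0f} = {new_total:.0f}")
```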

They have stopped trying to achieve a balance between books and periodicals. They’ve left that up to the liaisons to determine what is best for their disciplines and programs.

They don’t hide their cancellation list, and if any of the user community wants to keep something, they’ve been willing to retain it. However, they get few requests to retain content, and they think it is in part because the user community can see the cost, use, and other factors that indicate the value of the resource for the local community.

They have determined that it costs them around $52 a title to manage a print subscription, and over $200 a title to manage an online subscription, mainly because of the level of expertise involved. So, there really are no “free” subscriptions, and if you want to get into the cost of binding/reshelving, you need to factor in the managerial costs of electronic titles, as well.
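[Taking those per-title management figures at face value, a rough way to see the effect is to fold them into cost per use; the subscription prices and use counts below are invented.]

```python
# Fold per-title management overhead into cost per use.
# The $52 and $200 figures come from the talk; prices and use counts are invented.
print_mgmt_cost = 52     # annual cost to manage a print title, per the presenters
online_mgmt_cost = 200   # annual cost to manage an online title ("over $200")

online_price, downloads = 1800, 500   # hypothetical subscription price and downloads
print_price, reshelves = 1800, 40     # hypothetical subscription price and reshelve count

print(f"Online cost per use: ${(online_price + online_mgmt_cost) / downloads:.2f}")
print(f"Print cost per use:  ${(print_price + print_mgmt_cost) / reshelves:.2f}")
```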

Future trends and issues: more granularity, more integration of print and online usage, interoperability and migration options for data and systems, continued standards development, and continued development of tools and systems.

Anything worth doing is worth overdoing. You can gather Ulrich’s reports, Eigen factors, relative price indexes, and so much more, but at some point, you have to decide if the return is worth the investment of time and resources.
