ER&L 2010: Where are we headed? Tools & Technologies for the future

Speakers: Ross Singer & Andrew Nagy

Software as a service saves the institution time and money because the infrastructure is hosted and maintained by someone else. Computing has gone from centralized, mainframe processing, to an even mix of personal computers on a networked enterprise, to once again a very centralized environment with cloud applications and thin clients.

Library resource discovery is, to a certain extent, already in the cloud. We use online databases and open web search, WorldCat, and next gen catalog interfaces. The next gen catalog places the focus on the institution’s resources, but it’s not the complete solution. (People see a search box and they want to run queries on it – doesn’t matter where it is or what it is.) The next gen catalog is only providing access to local resources, and while it looks like a modern interface, the back end is still old-school library indexing that doesn’t work well with keyword searching.

Web-scale discovery is a one-stop shop that provides increased access, enhances research, and provides an increased ROI for the library. Our users don’t use Google because it’s Google, they use it because it’s simple, easy, and fast.

How do we make our data relevant when administration doesn’t think what we do is as important anymore? Linked data might be one solution. Unfortunately, we don’t do that very well. We are really good at identifying things but bad at linking them.

If every component of a record is given identifiers, it’s possible to generate all sorts of combinations and displays and search results via linking the identifiers together. RDF provides a framework for this.
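To make the idea concrete (my own sketch, not something from the talk), here is roughly what giving record components identifiers and linking them looks like with Python’s rdflib; the URIs and the Dublin Core terms are placeholder choices, not a prescribed vocabulary:

```python
# A minimal sketch (not from the talk) of describing a bibliographic record as
# linked data with rdflib. The identifiers and vocabulary choices are illustrative.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import RDF, DCTERMS

g = Graph()

# Hypothetical identifiers: a local record URI pointing at a shared author URI
# instead of repeating a text string for the name.
record = URIRef("http://example.edu/catalog/record/12345")
author = URIRef("http://viaf.org/viaf/0000000")  # placeholder identifier

g.add((record, RDF.type, DCTERMS.BibliographicResource))
g.add((record, DCTERMS.title, Literal("Example Title")))
g.add((record, DCTERMS.creator, author))

# Because the author is a shared identifier, triples from other sources that
# use the same URI can be merged in to enrich the display.
print(g.serialize(format="turtle"))
```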

Also, once we start using common identifiers, then we can pull in data from other sources to increase the richness of our metadata. Mashups FTW!

ER&L 2010: E-book Management – It Sounds Serial!

Speakers: Dani L. Roach & Carolyn DeLuca

How do you define an ebook? How is it different from a print book? From another online resource? Is it like pornography – you know it when you see it? “An electronic equivalent of a distinct print title.” What about regularly updated ebooks? For the purposes of this presentation, an ebook is defined by its content, format, delivery, and fund designation.

Purchase impacts delivery and delivery impacts purchase – we need to know the platform, the publisher, the simultaneous user level, bundle options, pricing options (more than cost – includes release dates, platforms, and licensing), funding options, content, and vendor options (dealing more one-on-one with publishers). We now have multiple purchasing pots and need to budget annually for ebooks – sounds like a serial. Purchasing decisions impact collection development, including selection decisions, duplicate copies, weeding, preferences/impressions, and virtual content that requires new methods of tracking.

After you purchase an ebook bundle, then you have to figure out what you actually have. The publisher doesn’t always know, the license doesn’t always reflect reality, and your ERMS/link resolver may not have the right information, either. Also, the publisher doesn’t always remove the older editions promptly, so you have to ask them to “weed.”

Do you use vendor-supplied MARC records or purchase OCLC record sets? Do you get vendor-neutral records, or multiple records for each source (in which case you will have duplicates)?

Who does what? Is your binding person managing the archival process? Is circulation downloading the ebooks to readers? Is your acquisitions person ordering ebooks, or does your license manager now need to do that? How many times do library staff touch a printed book after it is cataloged and shelved? How about ebooks?

Users are already used to jumping from platform to platform – don’t let that excuse get in the way of purchasing decisions.

Ebooks that are static monographs that are one-time purchases are pretty much like print books. When ebooks become hybrids that incorporate aspects of ejournals and subscription databases, it gets complicated.

Why would a library buy an ebook rather than purchase it in a consortial setting? With print books, you can share them, so shouldn’t we want to do that with ebooks? Yes, but ebooks are still so relatively new that we haven’t quite figured out how to do this effectively, and consortial purchases are often too slow for title-by-title purchasing.

ER&L 2010: Developing a methodology for evaluating the cost-effectiveness of journal packages

Speaker: Nisa Bakkalbasi

Journal packages offer capped price increases and access to non-subscribed content, and they are easier to manage than title-by-title subscriptions. But the economic downturn has resulted in even the price caps not being enough to sustain the packages.

Her library only seriously considers COUNTER reports, which is handy, since most package publishers provide them. They add to that the publisher’s title-by-title list price, as well as some subject categories and fund codes. Their analysis includes quantitative and qualitative variables using pivot tables.

In addition, they look at the pricing/sales model for the package: base value, subscribed/non-subscribed titles, cancellation allowance, price cap/increase, deep discount for print rate, perpetual/post-cancellation access rights, duration of the contract, transfer titles, and third-party titles.

So, the essential question is: would we pay more for the package than we would for specific titles (perhaps fewer than we currently have) if we dissolved the journal package?

She takes the usage reports for at least the past three years in order to look at trends, excludes titles that are based on separate pricing models, and also excludes backfile usage if that was a separate purchase (COUNTER JR1a subtracted from JR1 – and you will need to know which years the publisher considers the backfile). Then she adds list prices for all titles (subscribed & non-subscribed), calculates the cost-per-use of each title, and uses the ILL cost (per the ILL department) as a threshold for possible renewals or cancellations.
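As a rough sketch of that arithmetic (my own illustration – the titles, numbers, and the ILL cost figure are all invented, not from the talk):

```python
# Sketch of the cost-per-use screen described above (illustrative numbers only).
# Usage is summed over three years of JR1 reports with backfile use (JR1a)
# subtracted out; the ILL cost threshold is a placeholder value.

ILL_COST_PER_REQUEST = 25.00  # hypothetical figure from the ILL department

# title -> (three-year JR1 total, three-year JR1a backfile total, list price)
titles = {
    "Journal A": (900, 150, 1200.00),
    "Journal B": (40, 10, 800.00),
}

for title, (jr1_total, jr1a_backfile, list_price) in titles.items():
    current_use = jr1_total - jr1a_backfile  # exclude separately purchased backfile
    cost_per_use = list_price / current_use if current_use else float("inf")
    flag = "keep" if cost_per_use <= ILL_COST_PER_REQUEST else "candidate for cancellation"
    print(f"{title}: {cost_per_use:.2f} per use -> {flag}")
```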

The final decision depends on the base value paid by the library, the collection budget increase/decrease, price cap, and the quality/consistency of ILL service (money is not everything). This method is only about the costs, and it does not address the value of the resources to the users beyond what they may have looked at. There may be other factors that contributed to non-use.

ER&L 2010: Comparison Complexities – the challenges of automating cost-per-use data management

Speakers: Jesse Koennecke & Bill Kara

We have the use reports, but it’s harder to pull in the acquisitions information because of the systems it lives in and the different subscription/purchase models. Cornell had a cut in staffing and an immediate need to assess their resources, so they began to triage statistics cost/use requests. They are not doing systematic or comprehensive reviews of all usage and cost per use.

In the past, they have tried doing manual preparation of reports (merging files, adding data), but that’s time-consuming. They’ve had to set up processes to consistently record data from year to year. Some vendor solutions have been partially successful, and they are looking at emerging options as well. Non-publisher data such as link resolver use data and proxy logs might be sufficient for some resources, or for adding a layer to the COUNTER information to possibly explain some use. All of this has required certain skill sets (databases, spreadsheets, etc.).
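For what it’s worth, the kind of merge being described looks something like this in pandas – the file names and column names here are my own assumptions, not Cornell’s actual exports:

```python
# Sketch of merging usage and payment data on a shared identifier (ISSN here).
# File names and column names are assumptions, not the speakers' actual data.
import pandas as pd

usage = pd.read_csv("counter_jr1_2009.csv")      # columns: ISSN, Title, Total_Use
payments = pd.read_csv("acq_payments_2009.csv")  # columns: ISSN, Fund, Amount_Paid

merged = usage.merge(payments, on="ISSN", how="left")
merged["Cost_Per_Use"] = merged["Amount_Paid"] / merged["Total_Use"]

# Titles with payment data but no matching usage (or vice versa) still need
# manual follow-up, which is part of what makes the process time-consuming.
print(merged.sort_values("Cost_Per_Use", ascending=False).head())
```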

Currently, they are working on managing expectations. They need to define the product that their users (selectors, administrators) can expect on a regular basis, what they can handle on request, and what might need a cost/benefit decision. In order to get accurate time estimates for the work, they looked at 17 of their larger publisher-based accounts (not aggregated collections) to get an idea of patterns and unique issues. As an unfortunate side effect, every time they look at something, they get an idea of even more projects they will need to do.

The matrix they use includes: paid titles v. total titles, differences among publishers/accounts, license period, cancellations/swaps allowed, frontfile/backfile, payment data location (package, title, membership), and use data location and standard. Some of the challenges with usage data include non-COUNTER compliance or no data at all, multiple platforms for the same title, combined subscriptions and/or title changes, titles transferred between publishers, and subscribed content v. purchased content. Cost data depends on the nature of the account and the nature of the package.

For packages, you can divide the single line item by the total use, but that doesn’t help the selectors assess the individual subset of titles relevant to their areas/budgets. This gets more complicated when you have packages and individual titles from a single publisher.
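One possible workaround (my own sketch, not something the speakers proposed) is to allocate the single invoice across titles by each title’s share of total list price, which at least gives selectors a per-title figure to compare; all the numbers below are invented:

```python
# Sketch of one possible allocation (not the speakers' method): split a single
# package invoice across titles by their share of total list price, then compute
# an approximate per-title cost per use. All figures are invented.
package_cost = 50000.00

# title -> (list price, annual use)
titles = {
    "Journal A": (2000.00, 4000),
    "Journal B": (1500.00, 900),
    "Journal C": (500.00, 100),
}

total_list = sum(price for price, _ in titles.values())
for title, (list_price, use) in titles.items():
    allocated = package_cost * list_price / total_list
    cost_per_use = allocated / use if use else float("inf")
    print(f"{title}: allocated ${allocated:,.2f}, ${cost_per_use:.2f} per use")
```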

Future possibilities: better automated matching of cost and use data, with some useful data elements such as multiple cost or price points, and formulas for various subscription models. They would also like to consolidate accounts within a single publisher to reduce confusion. Also, they need more documentation so that it’s not just in the minds of long-term staff. 

ER&L 2010: Step Right Up! Planning, Pitfalls, and Performance of an E-Resources Fair

Speakers: Noelle Marie Egan & Nancy G. Eagan

This got started because they had some vendors come in to demonstrate their resources. Elsevier offered to do a demo for students with food. The library saw that several good resources were being under-used, so they decided to try to put together an eresources demo with Elsevier and others. It was also a good opportunity to get usability feedback about the new website.

They decided to have ten tables total, representing the whole fair. They polled the reference librarians to get suggestions for who to invite, and they ended up with resources that crossed most of the major disciplines at the school. The fair was held in a high-traffic location of the library (so that they could get walk-in participation) and publicized in the student paper and on the library blog, and the librarians shared it on Facebook with their student and faculty friends.

They had a raffle to gather information about the participants, and in the end, they had 64 undergraduates, 19 graduates, 6 faculty, 5 staff, and 2 alumni attend the fair over the four hours. By having the users fill out the raffle information, they were able to interact with library staff in a different way that wasn’t just about them coming for information or help.

After the fair, they looked at the sessions and searches of the resources that were represented at the fair and compared them to the monthly stats from the previous year. However, there is no way to determine whether the fair had a direct impact on the increases (and the few decreases).

In and of itself, the event created publicity for the library. And, because it was free (minus staff time), they don’t really need hard evidence of the event’s success (or failure).

Some of the vendors didn’t take it seriously and showed up late. They thought that it was a waste of their time to talk about only the resources the library already purchases, rather than pushing new sales, and it’s doubtful those vendors will be invited back. It may be better to try to schedule it around the time of your state library conference, if that happens nearby, so the vendors may already be close and not making a special trip.