#ERcamp13 at George Washington University

“The law of two feet” by Deb Schultz

This is going to be long and not my usual style of conference notetaking. Because this was an unconference, there really wasn’t much in the way of prepared presentations, except for the lightning talks in the morning. What follows below the jump is what I captured from the conversations, often simply questions posed that were left open for anyone to answer, or at least consider.

One of the good aspects of the unconference style was the free-form nature of the discussions. We generally stayed on topic, but even when we didn’t, it was a relevant or important thing that led to the tangent, so there were still plenty of takeaways. However, this format also requires someone present who is prepared to seed the conversation if it lulls or dies and no one steps in to start a new topic.

Also, if a session is designed to be a conversation around a topic, it will fall flat if it becomes all about one person or the quirks of their own institution. I had to work pretty hard on that one during the session I led, particularly when it seemed that the problem I was hoping to discuss wasn’t an issue for several of the folks present because of how they handle the workflow.

Some of the best conversations I had were during the gathering/breakfast time as well as lunch, lending even more to the unconference ethos of learning from each other as peers.

Anyway, here are my notes.


NASIG 2012: A Model for Electronic Resources Assessment

Presenter: Sarah Sutton, Texas A&M University-Corpus Christi

The model begins with a trigger event — a resource comes up for renewal. From there, she looked at what information is needed to make the renewal decision.

For A&I databases, the primary data pieces are the searches and sessions from the COUNTER release 3 reports. For full-text resources, the primary data pieces are the full-text downloads, also from the COUNTER reports. In addition to COUNTER and other publisher-supplied usage data, she looks at local data points. Link-outs from the a-to-z list of databases tell her which resources her users are consciously choosing, rather than something they arrive at via a discovery service or Google. She’s able to pull this from the content management system they use.

Once the data has been collected, it can be compared to the baseline. She created a spreadsheet listing all of the resources, with a column each for searches, sessions, downloads, and link-outs. The baseline set of core resources was based on a combination of high link-outs and high usage, grouped by similar usage numbers and resource type. Next, she calculated the cost per use for each of the four use types, as well as the percentage of change in use over time.
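
To make the arithmetic concrete, here is a minimal sketch of the two calculations feeding that spreadsheet: cost per use for each of the four use types, and the percentage change in total use over time. The resource names, numbers, and field layout below are invented for illustration; they are not Sutton’s actual data.

```python
# Invented sample data: annual cost, the four use types from the talk,
# and a prior-year use total for the change-over-time column.
resources = [
    {"name": "Database A", "cost": 12000, "searches": 8500,
     "sessions": 4200, "downloads": 0, "linkouts": 1900, "prior_use": 13500},
    {"name": "Package B", "cost": 45000, "searches": 3100,
     "sessions": 2600, "downloads": 22000, "linkouts": 450, "prior_use": 25000},
]

USE_TYPES = ("searches", "sessions", "downloads", "linkouts")

for r in resources:
    # Cost per use, computed separately for each use type that has any use
    for use_type in USE_TYPES:
        if r[use_type]:
            print(f'{r["name"]}: {use_type} cost/use = ${r["cost"] / r[use_type]:.2f}')
    # Percentage change in total use versus the prior year
    current = sum(r[t] for t in USE_TYPES)
    change = (current - r["prior_use"]) / r["prior_use"] * 100
    print(f'{r["name"]}: change in total use = {change:+.1f}%')
```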

After the baseline is established, she compares the renewing resource to that baseline. This isn’t always a yes-or-no answer; it’s more of a yes-or-maybe answer, and if it’s tending toward no, more analysis is often needed. That additional data may include an overlap analysis (which titles are unique to your library’s collection), citation lists (compare the unique titles against a list of highly-cited journals at your institution, faculty requests, or a core title list), journal-level usage of the unique titles, and impact factors of the unique titles.
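
As one concrete illustration of that follow-up analysis, checking the unique titles against locally valued lists can be as simple as set intersection. The title lists below are invented examples, not part of the presented model.

```python
# Hypothetical follow-up analysis: of the titles unique to a renewing
# package, how many appear on lists that local users demonstrably value?
unique_titles = {"Journal of A", "Journal of B", "Journal of C", "Journal of D"}
highly_cited_locally = {"Journal of B", "Journal of X"}
faculty_requests = {"Journal of D"}
core_title_list = {"Journal of A", "Journal of Z"}

valued = unique_titles & (highly_cited_locally | faculty_requests | core_title_list)
print(f"{len(valued)} of {len(unique_titles)} unique titles are locally valued")
print(sorted(valued))  # ['Journal of A', 'Journal of B', 'Journal of D']
```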

Audience question: What about qualitative data? Talk to your users. She does not have a suggestion for how to incorporate that into the model without lengthening the review process.

Audience question: How much staff time does this take? Most of the work is in setting up the baseline. The rest depends on how much additional investigation is needed.

[I had several conversations with folks after this session who expressed concern with the method used for determining the baseline. Namely, that it excludes A&I resources and assumes that usage data is accurate. I would caution anyone against wholesale adopting this as the only method of determining renewals. Without conversations and relationships with faculty/departments, we may not truly understand what the numbers are telling us.]

ER&L 2010: Comparison Complexities – the challenges of automating cost-per-use data management

Speakers: Jesse Koennecke & Bill Kara

We have the use reports, but it’s harder to pull in the acquisitions information because of the systems it lives in and the different subscription/purchase models. Cornell had a cut in staffing and an immediate need to assess their resources, so they began to triage requests for cost/use statistics. They are not doing systematic or comprehensive reviews of all usage and cost per use.

In the past, they have tried manual preparation of reports (merging files, adding data), but that’s time-consuming. They’ve had to set up processes to consistently record data from year to year. Some vendor solutions have been partially successful, and they are looking at emerging options as well. Non-publisher data, such as link resolver use data and proxy logs, might be sufficient for some resources, or can add a layer to the COUNTER information that may explain some use. All of this has required certain skill sets (databases, spreadsheets, etc.).
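
For a sense of what automating that manual report preparation could look like, here is a sketch that joins a COUNTER-style usage export to local payment data with pandas. The file names and column layouts are assumptions for illustration, not Cornell’s actual workflow.

```python
# Sketch of automating the manual merge described above. The CSV names
# and columns are hypothetical.
import pandas as pd

usage = pd.read_csv("jr1_usage.csv")   # columns: issn, title, ft_downloads
costs = pd.read_csv("payments.csv")    # columns: issn, fund, amount_paid

report = usage.merge(costs, on="issn", how="outer", indicator=True)
report["cost_per_use"] = report["amount_paid"] / report["ft_downloads"]

# Rows present in only one file surface the consistency problems noted
# above: title changes, publisher transfers, missing or wrong ISSNs.
unmatched = report[report["_merge"] != "both"]
print(f"{len(unmatched)} rows need manual review")
print(report.sort_values("cost_per_use", ascending=False).head())
```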

Currently, they are working on managing expectations. They need to define the product that their users (selectors, administrators) can expect on a regular basis, what they can handle on request, and what might need a cost/benefit decision. To produce accurate time estimates for the work, they looked at 17 of their larger publisher-based accounts (not aggregated collections) for an idea of patterns and unique issues. As an unfortunate side effect, every time they look at something, they get an idea of even more projects they will need to do.

The matrix they use includes: paid titles v. total titles, differences among publishers/accounts, license period, cancellations/swaps allowed, frontfile/backfile, payment data location (package, title, membership), and use data location and standard. Some of the challenges with usage data include non-COUNTER compliance or no data at all, multiple platforms for the same title, combined subscriptions and/or title changes, titles transferred between publishers, and subscribed content v. purchased content. Cost data depends on the nature of the account and the nature of the package.

For packages, you can divide the single line item by the total use, but that doesn’t help the selectors assess the individual subset of titles relevant to their areas/budgets. This gets more complicated when you have both packages and individual titles from a single publisher.
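
A toy example with invented numbers shows the division described above, and why the package-level figure leaves selectors stuck. The prorating step at the end is one possible (and debatable) workaround, not something proposed in the session.

```python
# One invoice line covers the whole package, so only an aggregate
# cost-per-use figure is defensible (all numbers are invented).
package_cost = 80000      # single line item on the invoice
total_downloads = 64000   # summed across every title in the package
print(f"package cost/use: ${package_cost / total_downloads:.2f}")  # $1.25

# A selector funding only the chemistry subset has no real per-title
# cost. Prorating the invoice by the subset's share of use is one
# rough workaround:
chem_downloads = 9600
chem_cost = package_cost * chem_downloads / total_downloads
print(f"chemistry's prorated share: ${chem_cost:,.2f}")  # $12,000.00
```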

Future possibilities: better automated matching of cost and use data, with some useful data elements such as multiple cost or price points, and formulas for various subscription models. They would also like to consolidate accounts within a single publisher to reduce confusion. Also, they need more documentation so that it’s not just in the minds of long-term staff. 

ER&L 2010: We’ve Got Data – Now What Do We Do With It? Applying Standards to Assess Information Resources

Speakers: Mary Feeney, Ping Situ, and Jim Martin

They had a budget cut (surprise, surprise), so they had to assess what to cut using the data they had. Complicating this were a change in organizational structure and their adoption of the BYU project management model. They also had to sort out a common approach to assessment across all of the disciplines/resources.

They used their ILS to gather stats about print resource use. They used ScholarlyStats to gather their online resource stats, and for publishers/vendors not covered by ScholarlyStats, they gathered data directly from the vendors/publishers. Their process involved creating spreadsheets of resources by type and then dividing up the work of filling in the info. Potential cancellations were then provided to interested parties for feedback.

Quality standards:

  • 60% of monographs need to show at least one use in the last four years – this was used to apply cuts to the firm-orders book budget, which impacts the flexibility of making one-time purchases with remaining funds; the book money was shifted to serial/subscription lines
  • 95% of individual journal titles need to show use in the last three years (both in-house use and full-text downloads) – LJUR data was used to supplement the data collected about print titles
  • dual-format subscriptions required a hybrid approach, and they compared the costs with the online-only model – one might think that switching to online-only would be a no-brainer, but licensing issues complicate the matter
  • cost per use of ejournal packages will not exceed twice the cost of ILL articles (a rough sketch applying these standards follows this list)
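
Here is the promised sketch of how two of those thresholds might be checked mechanically. The thresholds come from the talk; the function names, sample values, and the local ILL article cost below are invented.

```python
ILL_ARTICLE_COST = 17.50  # assumed local cost per ILL article (invented)

def monograph_budget_ok(monos_used: int, total_monos: int) -> bool:
    # 60% of monographs need at least one use in the last four years.
    return monos_used / total_monos >= 0.60

def package_cpu_ok(annual_cost: float, annual_downloads: int) -> bool:
    # Package cost per use must not exceed twice the ILL article cost.
    return annual_cost / annual_downloads <= 2 * ILL_ARTICLE_COST

print(monograph_budget_ok(5400, 10000))                          # False: 54% < 60%
print(package_cpu_ok(annual_cost=40000, annual_downloads=1000))  # False: $40.00 > $35.00
print(package_cpu_ok(annual_cost=40000, annual_downloads=2500))  # True: $16.00 <= $35.00
```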

One problem with their approach was that existing procedures did not capture data about all print journals. They also need to include local document delivery requests in future analyses. And they need to better integrate the assessment of the use of materials in aggregator databases, particularly since users are inherently lazy and will take the easiest route to the content.

Aggregator databases are difficult to compare: the ISSN lists are often incomplete, and title-by-title holdings-coverage comparisons are hard to do. That kind of comparison is useful for long-term analysis, but not for this immediate project. Other problems with aggregator databases include duplication, embargoes, and completeness of coverage of a title. They used SerSol’s overlap analysis tool to get an idea of the duplication. It’s a time-consuming project, so they don’t plan to continue with it for all of their resources.
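
At its core, that overlap analysis is set arithmetic on title lists. Here is a toy version with fake ISSNs; it also hints at why incomplete ISSN lists undermine the result, since a missing ISSN silently understates the overlap.

```python
# Toy overlap analysis between two aggregator databases. The ISSNs
# are fake; any title missing an ISSN in either list will be counted
# as "unique" even if both databases actually cover it.
db_a = {"0000-0001", "0000-0002", "0000-0003", "0000-0004"}
db_b = {"0000-0003", "0000-0004", "0000-0005"}

print(f"duplicated: {len(db_a & db_b)}")      # 2
print(f"unique to A: {sorted(db_a - db_b)}")  # ['0000-0001', '0000-0002']
print(f"unique to B: {sorted(db_b - db_a)}")  # ['0000-0005']
```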

What if you don’t have any data, or the data you have doesn’t have a quality standard? They relied on subject specialists and other members of the campus community to assess the value of those resources.