ER&L 2016: Trying Something New: Examining Usage on the Macro and Micro Levels in the Sciences

“Cheaper by the yard” by Bill Smith

Speakers: Krystie (Klahn) Wilfon, Columbia University; Laura Schimming and Elsa Anderson, Icahn School of Medicine at Mount Sinai

Columbia has reduced their print collection in part due to size, but more because their users prefer electronic collections. Wilfon has employed a systematic collection of cost and usage data over time, a series of analysis templates based on item type and data source, and an organized system for distributing the end product. [She uses metrics similar to those in my own reports, but her approach is far more data-driven and detailed. She’s only been doing this for two years, so I’m not sure how sustainable it is. I know how much time my own reports take each month, and I don’t think I would have the capacity to add more data to them.]

Mount Sinai went through a lot of changes in 2013 that reshaped their collection development practices. They wanted to assess the resources they have, but found that traditional metrics were problematic: citation counts don’t account for resources that are used but not cited, journal impact factors have their own issues, and so on. They wanted to include altmetrics in the assessment as well, and they ended up using Altmetric Explorer.

Rather than looking at cost per use (CPU) for the journal package as a whole, she broke it up by journal title and also looked at the number of articles published per title as a percentage of the whole. This is only one picture, though. Using Altmetric Explorer, they found that the newsletter in the package, while expensive on a cost-per-use basis, had a much higher median Altmetric score than the main peer-reviewed journal in the package (score divided by the number of articles published in that year). So, for a traditional journal, citations, impact factor, and COUNTER usage are important, but for a newsletter-type publication, altmetrics may matter more. Also, within a single package there will be different types of journals, and you need to figure out how to evaluate them without measuring them all with the same stick.
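To make the arithmetic concrete, here is a rough sketch of that per-title breakdown. Everything in it is invented for illustration: the titles, package cost, download counts, and Altmetric scores, as well as the choice to allocate the package cost by each title’s share of articles published. It is not the presenters’ actual method, just one way the numbers could be put together.

```python
from statistics import median

# Hypothetical data for a two-title package; all numbers are made up.
package_cost = 12000.00
titles = {
    "Main Peer-Reviewed Journal": {
        "downloads": 4800,                    # COUNTER full-text use
        "altmetric_scores": [2, 5, 1, 0, 3],  # one score per article published this year
    },
    "Newsletter": {
        "downloads": 150,
        "altmetric_scores": [40, 65, 12],
    },
}

total_articles = sum(len(t["altmetric_scores"]) for t in titles.values())

for name, t in titles.items():
    articles = len(t["altmetric_scores"])
    share_of_articles = articles / total_articles
    # Allocate the package cost by each title's share of articles published,
    # then compute a per-title cost per use and a median Altmetric score.
    allocated_cost = package_cost * share_of_articles
    cost_per_use = allocated_cost / t["downloads"]
    median_score = median(t["altmetric_scores"])
    print(f"{name}: {share_of_articles:.0%} of articles, "
          f"CPU ${cost_per_use:.2f}, median Altmetric score {median_score}")
```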

NASIG 2012: A Model for Electronic Resources Assessment

Presenter: Sarah Sutton, Texas A&M University-Corpus Christi

The model begins with a trigger event: a resource comes up for renewal. From there, she looked at what information is needed to make the decision.

For A&I databases, the primary data points are the searches and sessions from the COUNTER release 3 reports. For full-text resources, the primary data points are the full-text downloads, also from the COUNTER reports. In addition to COUNTER and other publisher-supplied usage data, she looks at local data points. Link-outs from the A-to-Z list of databases tell her what resources her users are consciously choosing to use, rather than something they arrive at via a discovery service or Google. She’s able to pull this from the content management system they use.

Once the data has been collected, it can be compared to the baseline. She created a spreadsheet listing all of the resources, with a column each for searches, sessions, downloads, and link-outs. The baseline set of core resources was based on a combination of high link-outs and high usage. These were grouped by similar numbers/type of resource. Next, she calculated the cost/use for each of the four use types, as well as the percentage of change in use over time.
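Here is a minimal sketch of those two calculations (cost per use by use type, and percentage change year over year), with made-up numbers. The field names simply mirror the spreadsheet columns described above, not anything from Sutton’s actual workbook.

```python
# Hypothetical resource record mirroring the spreadsheet columns described above.
resource = {
    "name": "Example Database",
    "annual_cost": 8500.00,
    "use": {  # current-year counts by use type
        "searches": 12400,
        "sessions": 5300,
        "downloads": 2100,
        "link_outs": 640,
    },
    "prior_year_use": {
        "searches": 11000,
        "sessions": 5600,
        "downloads": 1800,
        "link_outs": 590,
    },
}

for use_type, count in resource["use"].items():
    cost_per_use = resource["annual_cost"] / count
    prior = resource["prior_year_use"][use_type]
    pct_change = (count - prior) / prior * 100
    print(f"{use_type}: cost/use ${cost_per_use:.2f}, "
          f"{pct_change:+.1f}% change from prior year")
```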

After the baseline is established, she compares the renewing resource to that baseline. This isn’t always a yes or no answer; it’s more of a yes or maybe. If the answer is tending toward no, more analysis is often needed. That additional data may include an overlap analysis (to identify titles unique to your library’s collection), citation lists for the unique titles (compared against highly-cited journals at your institution, faculty requests, or core title lists), journal-level usage of the unique titles, and impact factors of the unique titles.

Audience question: What about qualitative data? Talk to your users. She doesn’t have a suggestion for how to incorporate that into the model without lengthening the review process.

Audience question: How much staff time does this take? Most of the work is in setting up the baseline. The rest depends on how much additional investigation is needed.

[I had several conversations with folks after this session who expressed concern with the method used for determining the baseline. Namely, that it excludes A&I resources and assumes that usage data is accurate. I would caution anyone against wholesale adopting this as the only method of determining renewals. Without conversations and relationships with faculty and departments, we may not truly understand what the numbers are telling us.]

ER&L 2012: Electronic Resources Workflow Analysis & Process Improvement

“workflow at the most basic level,” illustration by wlef70

Speakers: Ros Raeford & Beverly Dowdy

Users were unhappy with eresource management, due in part to the library’s ad hoc approach and its reliance on users to report access issues. A heavy reliance on email and memory means things slip through the cracks. They were not a train wreck waiting to happen; they were a train wreck that had already occurred.

They needed to develop a deeper understanding of their workflows and processes to identify areas for improvement. Earlier attempts had failed because not all the right people were at the table; every stage of the lifecycle needs to be represented.

Oliver Pesch’s 2009 presentation on “ERMS and the E-Resources Lifecycle” provided the framework they used. They created a staff responsibility matrix to determine exactly what they did, and then did interviews to get at how they did it. The narrative was translated to a workflow diagram for each kind of resource (ebooks, ejournals, etc.).

Even though some of the subject librarians were good about checking for duplicates before requesting things, acquisitions still had to repeat the process because they didn’t know whether it had been done. This is just one example of the duplication of effort they discovered in their workflow review.

They found the ebook package process so unclear that they couldn’t even diagram it. It’s also very linear, when a number of its steps could be happening in parallel.

The slides are full of great ideas for quality control and user interface improvements, but the presenters don’t highlight any of them. I’ll have to look at the slides later.

One thing they mentioned was identifying essential tasks that are done by only one staff member. They then cross-trained so that if that person is out for the day, someone else can do the work.

Surprisingly, they were not using EDI for firm orders, nor had they implemented tools like PromptCat.

Applications that make things work for them:

JTacq — using this for the acquisition/collections workflow. I’ve never heard of it, but will investigate.

ImageNow — not an ERM — a document management tool. Enterprise content management, and being used by many university departments but not many libraries.

They used SharePoint as a meeting space for the teams.

peer-to-peer sharing — the legal kind

I’ve been watching with interest to see what comes out of TERMS (Techniques for Electronic Resources Management), for obvious reasons. Jill Emery and Graham Stone envision this as a concise listing of the six major stages of electronic resources management, as well as a place to share tips and workflows relating to each. As they publish each section, I’ve marveled at how concise and clear they are. If you do anything with electronic resources management, you need to be following this thing.

Evaluation of resources has been a subject near and dear to my heart for many years, and increasingly so as we’ve needed to justify why we continue to pay for one resource when we would like to purchase another equally desired resource. And in relation to that, visualization of data and telling data stories are also professional interests of mine.

renewal decision report example

When the section on annual review was published last month, it included an appendix with an example of usage and cost data for a resource, delivered as both flat numbers and a graph. While this is still a rather technical presentation, it included several elements I had not considered before: cost as a percentage of the budget line, cost per student, use per student, and a mean use for each year. I decided this method of delivering statistical information about our electronic resources might be more useful to our subject specialists than my straight-up numbers approach. So, I’ve now incorporated it into the annual review checklist that I send out to the subject specialists in advance of renewal deadlines.
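As a rough illustration of the arithmetic behind those elements, here is a sketch with invented figures only; nothing below comes from the TERMS appendix or from any real budget, and the real report presents this as a table and graph rather than code.

```python
# Invented figures for one resource over three years, for illustration only.
budget_line = 250_000.00   # total budget line for this fund (hypothetical)
student_fte = 4_200        # student FTE used for per-student figures (hypothetical)

years = {
    2009: {"cost": 9_200.00, "use": 3_100},
    2010: {"cost": 9_600.00, "use": 3_550},
    2011: {"cost": 10_050.00, "use": 3_300},
}

for year, d in years.items():
    print(f"{year}: {d['cost'] / budget_line:.1%} of the budget line, "
          f"${d['cost'] / student_fte:.2f} cost per student, "
          f"{d['use'] / student_fte:.2f} uses per student")

# The appendix also includes a mean use figure; here it is simply the mean
# annual use across the period shown.
mean_use = sum(d["use"] for d in years.values()) / len(years)
print(f"Mean annual use: {mean_use:.0f}")
```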

I’m not going to lie: this isn’t a fast report to create from scratch. However, it has made a few folks take a hard look at some resources and their patterns of use, and as far as I’m concerned, that makes it worth my time and effort. Repeat use will be much faster, since I’ll only need to add one year’s worth of data.

NASIG 2011: Using Assessment to Make Collection Development Decisions

Speakers: Mary Ann Trail & Kerry Chang FitzGibbon

Faculty have little incentive to cut journal titles, because asking for a cut may be perceived as an admission that the title isn’t needed. And when collection decisions rely on faculty input, the collection can become skewed toward the interests of the most vocal faculty.

When a new director arrived in 2000, they began to use more data to make decisions. And, the increase in aggregator databases and ejournals changed what was being collected. In addition to electronic publishing, electronic communication has changed the platform and audience for faculty communicating with each other and administrators, which can be both good and bad for library budgets.

In 2005, after some revision of collection methods, cancellations, and reallocation, they went to a periodicals allocation formula. This didn’t work out as well as expected, and was abandoned in 2008.

As a part of their assessment projects in 2008, they looked at the overlap between print and electronic titles to see if they could justify canceling the print in order to address the budget deficit. Most importantly, they wanted to proactively calm the faculty, who were already upset about past cancellations, with assurances that they would not lose access to the titles.

They used their ERMS to generate an overlap analysis report, and after some unnecessary and complicated exporting and sorting, she was able to identify overlaps with their print collection. She then identified the current subscriptions before going to the databases to verify that access was correct and to note any embargo information. This was then combined with budget line, costs, and three years of usage (both print and electronic for non-aggregator access).
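For what it’s worth, the overlap step itself boils down to set matching. Here is a toy sketch using invented ISSNs as the match point; a real ERMS export would need cleanup (print vs. online ISSNs, missing or malformed identifiers) that this glosses over.

```python
# Invented ISSNs standing in for ERMS title lists; none of these are real holdings.
print_subscriptions = {"1234-5678", "2345-6789", "3456-7890"}
electronic_access = {"2345-6789", "3456-7890", "4567-8901"}

# Titles held in both formats are candidates for print cancellation,
# pending verification of access and embargo terms.
overlap = print_subscriptions & electronic_access
print_only = print_subscriptions - electronic_access

print("Overlap (verify electronic access and embargoes):", sorted(overlap))
print("Print-only (no electronic equivalent):", sorted(print_only))
```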

They met their budget target by canceling the print journals, and they used the term “format change” instead of cancel when they communicated with faculty. Faculty showed more support for this approach, and were more willing to advocate for library funds.

Did they consider publications that have color illustrations or other materials that are better in print? Yes, and most of them were retained in print.

Did they look at acquiring other databases to replace additional print cancellations? No, not with their funding situation.

What was the contingency plan for titles removed from the aggregator? Would resubscribe if the faculty asked for it, but funds would likely come from the monograph budget.