a values conundrum

Scales
photo by Charles Thompson (CC BY 2.0)

‘Tis the season when I spend a lot of time gathering and consolidating usage reports for the previous calendar year (though next year not as many if my SUSHI experiment goes well). Today, as I was checking and organizing some of the reports I had retrieved last week, I noticed a journal that had very little use in the 2017 YOP (or 2016, for that matter), so I decided to look into it a bit more.

The title has a one-year embargo, after which the articles are open access. Our usage is very low (an average of 3.6 downloads per year), and most of it, according to the JR5 (with the JR1 GOA for confirmation), is coming from the open access portion, not the paywalled content we pay for.

The values conundrum I have is multifaceted. This is a small society publisher, and we have only the one title from them. They are making the content open access after one year, and I don’t think they are making authors pay for this, though I could be wrong. These are market choices I want to support. And yet….

How do I demonstrate fiscal responsibility when we are paying ~$300/download? Has the research and teaching shifted such that this title is no longer needed and that’s why usage is so low? Is this such a seminal title we would keep it regardless of whether it’s being used?
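The ~$300/download figure is simple arithmetic, but it's worth seeing how quickly it falls out of the numbers. The subscription cost below is assumed for illustration (the post doesn't state it); only the ~3.6 downloads/year average and the ~$300/download result come from the text above.

```python
# Back-of-envelope cost-per-download check. The subscription cost is an
# assumption for illustration; the downloads figure is the average cited above.
subscription_cost = 1080.00   # assumed annual subscription cost (USD)
downloads_per_year = 3.6      # average annual downloads for this title

cost_per_download = subscription_cost / downloads_per_year
print(f"${cost_per_download:.2f} per download")  # → $300.00 per download
```

Any title with a handful of downloads a year will produce an alarming number like this, which is exactly why the per-download metric collides with values-based collection decisions.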

Collection development decisions are not easy when there are conflicting values.

giving SUSHI another try

(It's just) Kate's sushi! photo by Cindi Blyberg

I’m going to give SUSHI another try this year. I had set it up for some of our resources a few years back with mixed results, so I removed it and have been continuing to manually retrieve and load reports into our consolidation tool. I’m still doing that for the 2017 reports, because the SUSHI harvesting tool I have won’t let me retrieve past months; it only harvests monthly going forward.

I’ve spent a lot of time making sure titles in reports matched up with our ERMS so that consolidation would work (it’s matching on title, ugh), and despite my efforts, any reports generated still need cleanup. What is the value of my effort there? Not much anymore, especially since ingesting cost data for journals and books is not a simple process to maintain, either. So, if all that effort matters little to none, I might as well take whatever junk is passed along in the SUSHI feed and save myself some time for other work in 2019.
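For anyone curious what harvesting looks like under COUNTER Release 5, the COUNTER_SUSHI API is RESTful (the older SUSHI protocol was SOAP-based), so a report request is just a GET with credentials and a date range. This is an illustrative sketch only: the base URL and credentials below are made up, and real values come from each vendor.

```python
from urllib.parse import urlencode

def build_sushi_url(base_url, report, **params):
    """Build a COUNTER_SUSHI (Release 5) request URL for a report such as 'tr'."""
    return f"{base_url.rstrip('/')}/reports/{report}?{urlencode(params)}"

# Hypothetical endpoint and credentials -- substitute each vendor's real ones.
url = build_sushi_url(
    "https://sushi.example-vendor.com/counter/r5",
    "tr",                      # Title Master Report
    customer_id="12345",
    requestor_id="abcde",
    begin_date="2018-01",
    end_date="2018-12",
)

# Fetching the JSON report (commented out -- requires a live endpoint):
# import json
# from urllib.request import urlopen
# with urlopen(url) as resp:
#     report = json.load(resp)
```

The appeal of the "take whatever the feed sends" approach is that a scheduled script like this replaces the monthly click-through-every-vendor-portal routine, even if the data still needs cleanup afterward.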

Charleston 2016: COUNTER Release 5 — Consistency, Clarity, Simplification and Continuous Maintenance

Speakers: Lorraine Estelle (Project COUNTER), Anne Osterman (VIVA – The Virtual Library of Virginia), Oliver Pesch (EBSCO Information Services)

COUNTER had minimal updates over the years; it wasn’t until release 4 that things really exploded with report types and additional useful data. Release 5 attempts to reduce that complexity so that all publishers and content providers are able to achieve compliance.

They are seeking consistency in report layout, between formats, and in vocabulary, as well as clarity in metric types and their qualifying actions, processing rules, and formatting expectations.

The standard reports will be fewer, but more flexible. The expanded reports will introduce more data, but with flexibility.

A transaction will have different attributes recorded depending on the item type. They are also trying to get at intent — items investigated (abstract) vs. items requested (full-text). Searches will now distinguish between whether it was on a selected platform, a federated search, a discovery service search, or a search across a single vendor platform. Unfortunately, the latter data point will only be reported on the platform report, and still does not address teasing that out at the database level.

The access type attribute will indicate when the usage is of Open Access or free content as well as licensed content. There will also be a year of publication (YOP) attribute, which was not in any of the book reports and was previously included only in Journal Report 5.

Each report will have a consistent, standard header with additional details about the data, and consistent columns. There will be multiple rows per title to cover all the attribute combinations, making the reports more machine-friendly, but you can create filters in Excel to make them more human-friendly.

They expect to have release 5 published by July 2017 with compliance required by January 2019.

Q&A
Q: Will there eventually be a way to account for anomalies in data (abuse of access, etc.)?
A: They are looking at how to address use triggered by robot activity. They also need to be sensitive to privacy issues.

Q: Current book reports do not include zero use entitlements. Will that change?
A: Libraries are encouraged to provide KBART reports to get around that. The challenge is that DDA/PDA collections are huge, making such reports cumbersome to deliver. Zero-use reporting on journals will also be dropped.

Q: Using DOI as a unique identifier, but not consistently provided in reports. Any advocacy to include unique identifiers?
A: There is an initiative associated with KBART to make sure that data is shared so that knowledge bases are updated, users find the content, and there are fewer zero-use titles. Publishers have motivation to do this.

Q: How do you distinguish between unique uses?
A: Session-based data: assign a session ID to activity. If there is no session tracking, use a combination of IP address and user agent. The user agent is helpful when multiple users come through one IP address via a proxy server.
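The fallback described in this answer can be sketched roughly as follows. This is an illustrative sketch, not COUNTER's actual implementation; the function name and the hashing of the fallback key are my own choices.

```python
import hashlib

def session_key(session_id, ip, user_agent):
    """Derive a key for grouping requests into one session.

    Prefer a real session ID; otherwise fall back to IP + user agent,
    which helps distinguish users sharing one proxy-server IP.
    """
    if session_id:
        return session_id
    # No session tracking: hash IP and user agent together as a stand-in.
    return hashlib.sha1(f"{ip}|{user_agent}".encode()).hexdigest()

# Two proxied users share an IP but differ by browser, so they get
# distinct keys; a real session ID wins when it is available.
key_a = session_key(None, "10.0.0.1", "Mozilla/5.0 (Windows NT 10.0)")
key_b = session_key(None, "10.0.0.1", "Mozilla/5.0 (Macintosh)")
key_c = session_key("abc123", "10.0.0.1", "Mozilla/5.0 (Macintosh)")
```

The design point is the one from the answer: the user agent is what keeps everyone behind a single proxy IP from collapsing into one "unique" user.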

Slides

community site for usage statistics

Usus is an independent community website developed to help librarians, library consortium administrators, publishers, aggregators, etc. communicate around topics related to usage statistics. From problem-solving to workflow tips to calling out bad actors, this site hopes to be the hub of all things usage.

Do you have news to share or a problem you can’t figure out? Do you have really cool workflows you want to share? Drop us a note!

guest post on ACRLog

I see a strong need for the creation, support, and implementation of data standards and tools to provide libraries with the means to effectively evaluate their resources.

A few months ago, Maura Smale contacted me about writing a guest post for ACRLog. I happily obliged, and it has now been published.

When it came time to finally sit down and write about something (anything) that interested me in academic librarianship, I found myself at a loss for words. Last month, I spent some time visiting friends here and there on my way out to California for the Internet Librarian conference, and many of those friends also happened to be academic librarians. It was through those conversations that I found a common thread for the issues that are pushing some of my professional buttons.

Specifically, I see a strong need for the creation, support, and implementation of data standards and tools to provide libraries with the means to effectively evaluate their resources. If that interests you as well, please take a moment to go read the full essay, and leave a comment if you’d like.