ER&L 2016: Trying Something New: Examining Usage on the Macro and Micro Levels in the Sciences


Speakers: Krystie (Klahn) Wilfon, Columbia University; Laura Schimming and Elsa Anderson, Icahn School of Medicine at Mount Sinai

Columbia has reduced their print collection in part because of its size, but more because their users prefer electronic collections. Wilfon has employed a systematic collection of cost and usage data over time, a series of analysis templates based on item type and data source, and an organized system for distributing the end product. [She uses similar kinds of metrics to the ones I use in my reports, but hers are far more data-driven and detailed. She’s only done this for two years, so I’m not sure how sustainable it is. I know how much time my own reports take each month, and I don’t think I would have the capacity to add more data to them.]

Mount Sinai went through a lot of changes in 2013 that reshaped their collection development practices. They wanted to assess the resources they had, but found that traditional metrics were problematic: citation counts don’t account for resources that are used but not cited, journal impact factors have their own well-known issues, and so on. They wanted to include altmetrics in the assessment as well, and they ended up using Altmetric Explorer.

Rather than looking at cost per use (CPU) for the journal package as a whole, she broke it down by journal title and also looked at the number of articles published per title as a percentage of the whole package. This is only one picture, though. Using Altmetric Explorer, they found that the newsletter in the package, while expensive on a cost-per-use basis, had a much higher median Altmetric score than the main peer-reviewed journal in the package (score divided by the number of articles published in that year). So for a traditional journal, citations, impact factor, and COUNTER usage are important, but for a newsletter-type publication, altmetrics may matter more. Also, within a single package of journal titles there will be different types of journals, and you need to figure out how to evaluate them without measuring them all with the same stick.
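
[To make the arithmetic concrete, here is a rough sketch in Python with invented titles and numbers, not the figures from the talk: per-title cost per use, each title’s share of the package’s article output, and the Altmetric score normalized by articles published.]

```python
# Invented numbers, purely to illustrate the calculations described above.
package = {
    # title: (allocated cost, COUNTER full-text uses, articles published, total Altmetric score)
    "Flagship peer-reviewed journal": (9000.00, 4500, 300, 600.0),
    "Newsletter":                     (1500.00,  150,  50, 400.0),
}

total_articles = sum(articles for _, _, articles, _ in package.values())

for title, (cost, uses, articles, altmetric) in package.items():
    cost_per_use = cost / uses                    # traditional CPU
    article_share = articles / total_articles     # share of the package's output
    altmetric_per_article = altmetric / articles  # attention per article published
    print(f"{title}: CPU=${cost_per_use:.2f}, "
          f"{article_share:.0%} of package articles, "
          f"Altmetric/article={altmetric_per_article:.1f}")
```

[On these made-up numbers the newsletter looks weak on cost per use but strong on attention per article, which is the pattern the speakers described.]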

ER&L 2016: COUNTER Point: Making the Most of Imperfect Data Through Statistical Modeling


Speakers: Jeannie Castro and Lindsay Cronk, University of Houston

Baseball statistics are a good place to start; there are over 100 years of data. Cronk wished she could calculate something like WAR (wins above replacement) for eresources. What makes a good/strong resource? What indicators besides usage performance should we evaluate? Can statistical analysis tell us anything?

Castro suggested looking at the data as a time series. Cronk is not a statistician, so she relied on a lot of other folks who can do that stuff.

Statistical modeling is the application of a set of assumptions to data, typically paired data, and there are several techniques that can be used. COUNTER reports are imperfect time series data sets: they don’t give us individual data points (day/time), and usage is clumped together by month. Aside from that, though, they work well as time series, since the data points are consistently measured and equally spaced in time.
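
[A minimal sketch of that idea, assuming a JR1-style spreadsheet with one row per journal and one column per month; the column names here are invented, not a real COUNTER layout.]

```python
import pandas as pd

# Reshape a wide, JR1-style table (one column per month) into a single
# monthly time series that downstream analysis can treat as equally spaced.
jr1 = pd.DataFrame({
    "Journal":  ["Journal A", "Journal B"],
    "Jan-2014": [120, 80],
    "Feb-2014": [135, 75],
    "Mar-2014": [150, 90],
})

usage = (
    jr1.melt(id_vars="Journal", var_name="month", value_name="uses")
       .assign(month=lambda d: pd.to_datetime(d["month"], format="%b-%Y"))
       .groupby("month")["uses"].sum()
       .asfreq("MS")   # month-start frequency: equal spacing, as noted above
)
print(usage)
```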

Decomposition provides a framework for breaking a time series into components (trend, seasonality, and noise). Older data can be checked against newer data (e.g., 2010-2013 compared to 2014) without having to predict the future; statistical testing is important here. Exponential smoothing eliminates noise and outliers, which is very useful for anomalies in your COUNTER data caused by access issues or unusual spikes.
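
[A small illustration of both techniques on synthetic monthly usage, assuming Python with pandas and statsmodels; the speakers worked in R, so this is just an analogue. Four years of fake data with a seasonal cycle, plus one outage-style anomaly.]

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose
from statsmodels.tsa.holtwinters import SimpleExpSmoothing

# Synthetic, COUNTER-shaped data: 48 months with a seasonal cycle and noise.
rng = np.random.default_rng(0)
months = pd.date_range("2010-01-01", periods=48, freq="MS")
usage = pd.Series(100 + 30 * np.sin(2 * np.pi * months.month / 12)
                  + rng.normal(0, 10, 48), index=months)
usage.iloc[30] = 5  # pretend an access outage cratered one month

# Decomposition: separate the trend, the seasonal pattern, and residual noise.
parts = seasonal_decompose(usage, model="additive", period=12)
print(parts.trend.dropna().tail())

# Exponential smoothing: damp the noise and the outage dip.
smoothed = SimpleExpSmoothing(usage).fit(smoothing_level=0.3,
                                         optimized=False).fittedvalues
print(smoothed.tail())
```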

Cronk really wanted to look at something other than cost per use, which was part of the motivation for doing this. Usage by collection portion size is another method, touted by Michael Levine-Clark. She needed 4+ years of usage history for reverse predictive analysis, and since larger numbers make analysis easier, she went with large aggregator databases for the DB reports and some large journal packages for the JR reports.

She used Excel for data collection and clean-up, R (in RStudio) for data analysis, and Tableau Public for data visualization. RStudio is a lot more user-friendly than the basic R desktop interface, and there are canned analysis packages that will do the heavy lifting. (There was a recommendation for Ryan Womack’s video series for learning how to use R.) Tableau helped with visualization of the data, including some predictive indicators; we cannot easily see trends ourselves, so these visualizations can help us make decisions. Usage, she found, can be predicted based on the past.
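
[And a sketch of the “predict from the past” idea on synthetic data, again in Python rather than R: fit on 2010-2013, then compare the model’s 2014 numbers to the held-out 2014 values.]

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Five years of synthetic monthly usage; everything here is invented.
rng = np.random.default_rng(1)
months = pd.date_range("2010-01-01", periods=60, freq="MS")
usage = pd.Series(100 + 30 * np.sin(2 * np.pi * months.month / 12)
                  + rng.normal(0, 10, 60), index=months)

train, holdout = usage.iloc[:48], usage.iloc[48:]   # 2010-2013 vs. 2014
model = ExponentialSmoothing(train, trend="add", seasonal="add",
                             seasonal_periods=12).fit()
forecast = model.forecast(12)

print(pd.DataFrame({"predicted": forecast.values,
                    "actual": holdout.values},
                   index=holdout.index).round(1))
```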

They found that usage over time is consistent across the vendor platforms (for journal usage), even though some were used more than others.

The next level she looked at was the search-to-session ratio for databases. What is the average? Is that meaningful? When we look at usage, what baseline would help us determine whether one database is more useful than another? Downward trends might be indicators of outside factors.
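
[A back-of-the-envelope version of that ratio, with invented database names and counts, and column names that are not a real COUNTER DB report layout.]

```python
import pandas as pd

# Invented DB-report-style numbers, just to show the ratio and a baseline.
db = pd.DataFrame({
    "database": ["Aggregator A", "Aggregator B", "Subject index C"],
    "searches": [120_000, 45_000, 9_000],
    "sessions": [ 40_000, 30_000, 1_500],
})

db["searches_per_session"] = db["searches"] / db["sessions"]
baseline = db["searches_per_session"].median()        # one possible baseline
db["vs_baseline"] = db["searches_per_session"] - baseline
print(db)
```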

Charleston 2014 – To Go Boldly Beyond Downloads

Speaker: Gabriel Hughes, Elsevier

Hughes is new to the industry and didn’t know what usage data was when he started. He’s interested in usage that COUNTER doesn’t count.

Internet-based storage and sharing technology results in a higher volume of reading than is reflected in download statistics, because scholars can share content more easily. Elsevier has done surveys on this: 65% of the researchers surveyed this year agreed that they access articles from a shared folder or platform, a proportion that is increasing over time.

For the most part, sharing doesn’t happen because the recipient lacks access; it’s more a matter of convenience, particularly when annotations or notes are attached. Of course, he recommends using Mendeley (or similar tools, whatever they may be) to meet this need.

Elsevier is funding the research that Tenopir is doing on how and why researchers share, and how that compares with measured usage.

 

Speaker: Carol Tenopir, University of Tennessee

There are many tools and platforms designed for sharing citations and content, built to fit the research workflow. Informal methods are tools that weren’t designed for sharing citations or documents but are widely used for it, both personally and professionally (e.g., Twitter, blogs).

They have done interviews and focus groups, and an international survey that went out two days ago. Sharing a citation or link is more common than sharing a document. Those that share their own work say that they mostly share what was uploaded to their institutional repository.

Altruism and the advancement of research trump any concerns about copyright when it comes to sharing content with other scholars.

There are some differences when it comes to books. Articles and research reports are more easily shared, but book royalties are a consideration that causes many to hesitate. They certainly wouldn’t want their own books shared instead of purchased.

Is a COUNTER-like measure/calculation possible? Good question. Any thoughts on that are welcome.

ER&L 2014 — Beyond COUNTER: The changing definition of “usage” in an open access economy

Speakers: Kathy Perry (VIVA), Melissa Blaney (American Chemical Society), and Nan Butkovitch (Pennsylvania State University)

In 1998, ICOLC created guidelines for delivering usage information, and they have endorsed COUNTER and SUSHI. COUNTER works because all the players are involved and agree to reasonable timeframes.

COUNTER Code of Practice 4 now recognizes media and tracking of use through mobile devices.

PIRUS (Publisher and Institutional Repository Usage Statistics) is the next step, but they are going to drop the term and incorporate it as an optional report in COUNTER (Article Report 1). There is a code of practice and guidelines on the website.

The Usage Factor metric is a tool for assessing journals that aren’t covered by the impact factor. It won’t be comparable across subject groups because they are measuring different things.

If your publishers are not COUNTER compliant, ask them to become compliant.

ACS chose to go to COUNTER 4 in part because it covers all formats. They like being able to highlight usage of gold open access titles and denials due to lack of license. They also appreciated the requirement to be able to provide JR5, which reports usage by year of publication.

Big increases in search can also mean that people aren’t finding what they want.

ACS notes that users are increasingly coming from Google, Mendeley, and other indexing sources, rather than the publisher’s site itself.

They hear a lot that users want platforms that allow sharing and collaborating across disciplines and institutions. Authors want to measure the impact of their work in both traditional and new ways.

The science librarian suggested using citation reports to expand on the assessment of usage reports, if you have time for that sort of thing and only care about journals that are covered by ISI.

Chemistry authors have been resistant to open access publishing, particularly if they think they can make money off a patent or the like. She thinks it will be useful to have OA article usage information, but it needs to be put in the context of how many OA articles are available.

What you want to measure in usage can determine your sources. Every measurement method has bias. Multiple usage measurements can have duplication. A new metric is just around the corner.
