ER&L 2010: Buzz Session – Usage Data and Assessment

What are the kinds of problems with collecting COUNTER and other reports? What do you do with them when you have them?

What is a good cost per use? Compare it to the cost of alternatives like ILL. For databases, trends over time are more telling than any single figure.
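A quick back-of-the-envelope version of that comparison, with made-up numbers (the ILL figure is an assumption, not a quoted cost):

```python
# Hypothetical figures: annual subscription price and COUNTER full-text downloads.
subscription_cost = 2400.00   # annual cost (USD)
downloads = 150               # full-text downloads for the year
ill_cost_per_article = 17.50  # assumed average cost per ILL article request

cost_per_use = subscription_cost / downloads
print(f"Cost per use: ${cost_per_use:.2f}")  # $16.00 here

# If each use costs more than an ILL request would, the title may be a
# cancellation candidate -- though trends over several years matter too.
if cost_per_use > ill_cost_per_article:
    print("Costs more per use than ILL -- flag for review")
```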

Non-COUNTER stats can be useful to see trends, so don’t discount them.

Do you incorporate data about the university in making decisions? Do you collect value rankings from faculty or students (using star ratings in LibGuides or something else)?

When usage is low and cost is high, that resource may be the best candidate for cancellation in budget cuts, even if everyone thinks it’s important to keep the resource just in case.

How about using stats for low-use titles to get out of a big deal package? Compare the cost per use of core titles versus the rest, then use that to reconfigure the package as needed.
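A sketch of that core-versus-rest comparison, assuming you already have per-title costs and uses in hand (all titles and figures below are invented):

```python
# Hypothetical big deal data: (title, annual cost share, annual uses).
titles = [
    ("Journal A", 1200.00, 480),
    ("Journal B", 950.00, 310),
    ("Journal C", 800.00, 12),
    ("Journal D", 600.00, 3),
]
core = {"Journal A", "Journal B"}  # titles you would keep as individual subs

def cost_per_use(group):
    total_cost = sum(cost for _, cost, _ in group)
    total_uses = sum(uses for _, _, uses in group)
    return total_cost / total_uses if total_uses else float("inf")

core_titles = [t for t in titles if t[0] in core]
rest = [t for t in titles if t[0] not in core]
print(f"Core: ${cost_per_use(core_titles):.2f}/use vs. rest: ${cost_per_use(rest):.2f}/use")
```

If the non-core titles cost many times more per use, that’s ammunition for renegotiating or unbundling the package.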

How about calculating the cost per use from month to month?
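Same arithmetic, just prorated by month (usage figures invented):

```python
# Prorate the annual price evenly and divide by each month's COUNTER uses.
annual_cost = 3600.00
monthly_uses = {"Jan": 80, "Feb": 65, "Mar": 90, "Apr": 40}  # hypothetical

monthly_cost = annual_cost / 12
for month, uses in monthly_uses.items():
    cpu = monthly_cost / uses if uses else float("inf")
    print(f"{month}: ${cpu:.2f} per use")
```

Watching that number month to month surfaces seasonality (semester peaks, summer slumps) that an annual figure hides.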

ER&L 2010: Beyond Log-ons and Downloads – meaningful measures of e-resource use

Speaker: Rachel A. Fleming-May

What is “use”? Is it an event? Something that can be measured (with numbers)? Why does it matter?

We spend a lot of money on these resources, and use is frequently treated as an objective measure for evaluating the value of the resource. But we don’t really understand what use is.

A primitive concept is something that can’t be boiled down to anything smaller – we just know what it is. Use is frequently treated like a primitive concept – we know it when we see it. To measure use we focus on inputs and outputs, but what do those really say about the nature/value of the library?

This gets more complicated with electronic resources that can be accessed remotely. Patrons often don’t understand that they are using library resources when they use them. “I don’t use the library anymore, I get most of what I need from JSTOR.” D’oh.

Funding is based on assessments and outcomes – how do we show that? The money we spend on electronic resources is not going to get any smaller. ROI studies tend to focus on funded research, not on electronic resources as a whole.

Use is not a primitive concept. When we talk about use, it can be an abstract concept that covers all use of library resources (physical and virtual). Our research often doesn’t specify what we are measuring as use.

Use as a process is the total experience of using the library, from asking reference questions to finding a quiet place to work to accessing resources from home. It is the application of library resources/materials to complete a complex/multi-stage process. We can do observational studies of the physical space, but it’s hard to do them for virtual resources.

Most of our research tends to focus on use as a transaction – things that can be recorded and quantified, but are removed from the user. When we look only at the transaction data, we don’t know anything about why the user viewed/downloaded/searched the resource. Because they are easy to quantify, we over-rely on vendor-supplied usage statistics. We think that COUNTER assures some consistency in measures, but there are still many grey areas (e.g., database time-outs inflating session counts).

We need to shift from focusing on isolated instances of downloads and ref desk questions to focusing on the aggregate of the process from the user’s perspective. Stats are only one component of this. This is where public services and technical services need to work together to gain a better understanding of the whole. That will require administrative support.

John Law’s study of undergraduate use of resources is a good example of how we need to approach this. Fleming-May thinks that the findings from that study have generated more progress than previous studies that were focused on more specific aspects of use.

How do we do all of this without intruding on the privacy of the user? Make sure that your studies are well thought out and pass approval from your institution’s review board.

Transactional data needs to be combined with other information to make it valuable. We can see that a resource is being used or not used, but we need to look deeper to see why and what that means.

As a profession, are we prepared to do the kind of analysis we need to do? Some places are using anthropologists for this. A few LIS programs require a research methods course, but it’s only one class, and many students never take one. This is a great continuing education opportunity for LIS programs.

ER&L 2010: We’ve Got Data – Now What Do We Do With It? Applying Standards to Assess Information Resources

Speakers: Mary Feeney, Ping Situ, and Jim Martin

They had a budget cut (surprise, surprise), so they had to assess what to cut using the data they had. Complicating this was a change in organizational structure, along with their adoption of the BYU project management model. They also had to sort out a common approach to assessment across all of the disciplines/resources.

They used their ILS to gather stats about print resource use. They contracted with ScholarlyStats to gather their online resource stats, and for publishers/vendors not covered by ScholarlyStats, they gathered data directly from the vendors/publishers. Their process involved creating spreadsheets of resources by type and then dividing up the work of filling in the info. Potential cancellations were then shared with interested parties for feedback.

Quality standards:

  • 60% of monographs need to show at least one use in the last four years – this standard was used to apply cuts to the firm-order book budget, which reduces the flexibility to make one-time purchases with remaining funds; the book money was shifted to serial/subscription lines
  • 95% of individual journal titles need to show use in the last three years (both in-house use and full-text downloads) – LJUR data was used to supplement the data collected about print titles
  • dual-format subscriptions required a hybrid approach, and they compared the costs with the online-only model – one might think that switching to online-only would be a no-brainer, but licensing issues complicate the matter
  • cost per use of ejournal packages will not exceed twice the cost of ILL articles (a rough check of these standards is sketched below)
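As a loose illustration of how standards like these could be checked programmatically – thresholds are from the list above, but all usage and cost data here is invented:

```python
# Check hypothetical records against the quality standards above.
FOUR_YEAR_USE_TARGET = 0.60   # share of monographs with at least one use
ILL_COST = 17.50              # assumed per-article ILL cost
CPU_CEILING = 2 * ILL_COST    # package cost per use must not exceed 2x ILL

monograph_uses = [0, 3, 1, 0, 5, 2, 0, 1]   # uses in last four years, per title
share_used = sum(1 for u in monograph_uses if u > 0) / len(monograph_uses)
print(f"Monographs showing use: {share_used:.0%} (target {FOUR_YEAR_USE_TARGET:.0%})")

package_cost, package_uses = 58000.00, 2100  # hypothetical ejournal package
cpu = package_cost / package_uses
print(f"Package: ${cpu:.2f}/use vs. ceiling ${CPU_CEILING:.2f}")
```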

One problem with their approach was that existing procedures did not capture data about all print journals. They also need to include local document delivery requests in future analysis. And they need to better integrate the assessment of the use of materials in aggregator databases, particularly since users are inherently lazy and will take the easiest route to the content.

Aggregator databases are difficult to compare: the ISSN lists are often incomplete, and it’s hard to compare title-by-title holdings coverage. That kind of comparison is useful for long-term analysis, but not for this immediate project. Other problems with aggregator databases include duplication, embargoes, and completeness of coverage of a title. They used SerSol’s overlap analysis tool to get an idea of duplication. It’s a time-consuming process, so they don’t plan to continue it for all of their resources.
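At its core, that overlap check is just set intersection over ISSN lists – a simplified sketch (not how the SerSol tool actually works; the ISSNs are placeholders):

```python
# Compare two aggregators' title lists by ISSN to spot duplication.
db_a = {"1111-2222", "3333-4444", "5555-6666"}
db_b = {"5555-6666", "7777-8888", "1111-2222"}

overlap = db_a & db_b
print(f"{len(overlap)} duplicated ISSNs: {sorted(overlap)}")
# Per the session: ISSN lists are often incomplete, and raw overlap ignores
# embargoes and coverage depth, so treat this as a first pass only.
```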

What if you don’t have any data, or the data you have has no quality standard? They relied on subject specialists and other members of the campus community to assess the value of those resources.