ER&L 2014 — Beyond COUNTER: The changing definition of “usage” in an open access economy

Speakers: Kathy Perry (VIVA), Melissa Blaney (American Chemical Society), and Nan Butkovitch (Pennsylvania State University)

In 1998, ICOLC created guidelines for delivering usage information, and they have endorsed COUNTER and SUSHI. COUNTER works because all the players are involved and agree to reasonable timeframes.

COUNTER Code of Practice 4 now covers multimedia content and the tracking of use from mobile devices.

PIRUS (Publisher and Institutional Repository Usage Statistics) is the next step, but the term is being dropped and the work incorporated into COUNTER as an optional report (Article Report 1). A code of practice and guidelines are available on the website.

The Usage Factor is being promoted as a metric for assessing journals that aren’t covered by the impact factor. It won’t be comparable across subject groups, because different disciplines use the literature differently.
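
As I understand it (my rough paraphrase, not the official definition), the calculation is essentially a median of per-article usage over a defined window:

```latex
% My paraphrase of the proposed metric, not the official COUNTER wording:
% the Usage Factor of a journal is the median usage, during period y,
% of the items it published during period x.
\[
\mathrm{UF} = \mathrm{median}\{\, u_i \mid \text{item } i \text{ published in period } x,\ u_i = \text{usage of } i \text{ in period } y \,\}
\]
```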

If your publishers are not COUNTER compliant, ask them to become compliant.

ACS chose to move to COUNTER 4 in part because it covers all formats. They like being able to highlight usage of gold open access titles and denials due to lack of a license. They also appreciate the new requirement to provide JR5, which reports usage by year of publication.

Big increases in search can also mean that people aren’t finding what they want.

ACS notes that users are increasingly coming from Google, Mendeley, and other indexing sources, rather than the publisher’s site itself.

They hear a lot that users want platforms that allow sharing and collaborating across disciplines and institutions. Authors are wanting to measure the impact of their work in traditional and new ways.

The science librarian suggests using citation reports to expand on the assessment from usage reports, if you have time for that sort of thing and only care about journals covered by ISI.

Chemistry authors have been resistant to open access publishing, particularly if they think they can make money off a patent or the like. She thinks it will be useful to have OA article usage information, but it needs to be put in the context of how many OA articles are available.

What you want to measure in usage can determine your sources. Every measurement method has bias. Multiple usage measurements can have duplication. A new metric is just around the corner.

ER&L 2014 — Diving Into Ebook Usage: Navigating the Swell of Information

“Two boys jumping & diving” by Xosé Castro Roig

Speakers: Michael Levine-Clark (University of Denver) & Kari Paulson (ProQuest eBrary/EBL)

ProQuest is looking at usage data across the eBrary and EBL platforms as they are working to merge them together. To help interpret the data, they asked Levine-Clark to look at it as well. This is more of a proof-of-concept than a final conclusion.

They looked at 750,000 ebooks initially, narrowing it down for some aspects. He asked several questions, from the importance of quality to disciplinary preferences to best practices for measuring use, and various tangential questions related to these.

They looked at eBrary data from 2010 through Q3 2013 and EBL data from 2011 through Q3 2013. They used only titles with an LC call number, with a separate analysis of the titles that come from university presses.

Usage was defined in three ways: sessions, views (count of page views), and downloads (entire book). Due to the variations in the data sets (number of years, number of customers, platforms), they could not easily compare the usage information between eBrary and EBL.

Do higher quality ebooks get used more? He used university press books as a proxy for quality, though he recognizes this is not the best measure. For titles with at least one session, he found that the rate of use was fairly comparable, but slightly higher for university press books. Session counts and page views in eBrary were significantly higher for UP books, though not as much in EBL. In fact, use was consistently higher for UP books across the categories, but this may be because libraries select more UP books, thus increasing their availability.

What does usage look like across broad disciplines? Humanities, Social Sciences, and STEM were broken out and grouped by their call number ranges; he excluded A and Z (general works) as well as G (too interdisciplinary). The social sciences were highest in sessions and views on eBrary, but the humanities win the downloads. For EBL, the social sciences win all categories. When he looked at actions per session, STEM had higher views, but all three groups downloaded at about the same rate on both platforms.

How do you measure predicted use? He used the percentage of books in an LC class relative to the total books available as the expected share of use. If a class’s share of a use metric is lower than its share of titles, it is not meeting expected use, and vice versa. H, L, G, N, and D were all better than expected; Q, F, P, K, and U were worse than expected.
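
A minimal sketch of that comparison as I understood it, with made-up numbers standing in for the platform exports:

```python
# Sketch of the expected-vs-actual comparison (my reading of it, not the
# speakers' actual method). Both dictionaries are hypothetical; real data
# would be aggregated by LC class from the eBrary/EBL exports.
titles_by_class = {"H": 120_000, "Q": 90_000, "P": 150_000}
sessions_by_class = {"H": 260_000, "Q": 110_000, "P": 180_000}

total_titles = sum(titles_by_class.values())
total_sessions = sum(sessions_by_class.values())

for lc_class in sorted(titles_by_class):
    expected_share = titles_by_class[lc_class] / total_titles
    actual_share = sessions_by_class[lc_class] / total_sessions
    verdict = "better than expected" if actual_share > expected_share else "worse than expected"
    print(f"{lc_class}: {actual_share:.1%} of sessions vs. {expected_share:.1%} of titles ({verdict})")
```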

How about breadth versus depth? This gets complicated. Better to find the slides and look at the graphs. The results map well to the predicted use outcomes.

Can we determine the level of immersion in a book? If more pages are viewed per session in a subject area, does that mean the users spend more time reading or just look at more pages? Medicine (R), History of the Americas (F), and Technology (T) appear to be used at a much higher rate within a session than other areas, despite performing poorly in breadth versus depth assessment. In other words, they may not be used much per title, but each session is longer and involves more actions than others.

How do we use these observations to build better collections and better serve our users?

Books with call numbers tend to be used more than those without. Is it because a call number is indicative of better metadata? Is it because higher-quality publishers provide better metadata? It’s hard to tell at this point, but it’s something he wants to look into.

A white paper is coming soon and will include a combined data set. It will also include the EBL data about how long someone was in a book in a session. Going forward, he will also look into LC subclasses.

ER&L 2014 — Making Usage Data Meaningful

“Big Data” by JD Hancock

Speakers: Jill Morris & Emily Guhde, NC Live

NC Live is a multi-type library consortium that includes public and private universities as well as public libraries. Everything they provide is provided equally to all libraries across the state.

They wanted to figure out how to better manage the resources, both financial and scholarly. They wanted to be able to offer advice to libraries on better accessibility of the resources like authentication and discovery. They wanted to determine what kind of use should be expected for a library or library type, and how to improve it.

They did not want to determine if a library’s use is good or bad, or to compare databases with each other. They also didn’t want to define the value of the provided database or explain why certain factors may impact database use, but they do hope to get to these things in future iterations of the study.

They began by trying to identify peer groupings of NC libraries based on information about the libraries, such as population served and degrees offered. The peer groups were then created by a working group of librarians from across the state. Some libraries were not included because they had no comparable peers.

The next objective was to determine what data points would be used to measure usage of each database (they studied only five databases that were broadly applicable across all members of the consortium). For academics, they looked at full-text use per full-time enrollment (FTE); for publics they did something different that I didn’t capture before the speaker moved on. See the study for details.
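
For the academic metric, the math is simple enough; here’s a tiny hypothetical example (libraries and numbers invented):

```python
# Hypothetical illustration of the academic-library metric:
# COUNTER full-text uses normalized by full-time enrollment (FTE),
# so differently sized libraries can be compared within a peer group.
peer_group = {
    "Library A": {"full_text_uses": 48_000, "fte": 4_000},
    "Library B": {"full_text_uses": 30_000, "fte": 1_500},
}

for name, data in peer_group.items():
    rate = data["full_text_uses"] / data["fte"]
    print(f"{name}: {rate:.1f} full-text uses per FTE")
```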

No one library was at the top or bottom of its peer group for usage across all resources studied. The use of the databases varied wildly, even among peers. Feedback from consortium members indicated that flexible peer groups might be more useful than permanent ones, based on what they want to analyze at the time.

Finally, they looked at the qualities of the high usage libraries (top third of peer groupings), such as their access & authentication, size of collection, outreach & support, community characteristics, and library characteristics.

Use of Academic Search Complete was higher in community college libraries with these characteristics:

  • librarians attend faculty meetings
  • have an NC Live representative
  • high number of total information services per FTE
  • high number of circ transactions per FTE

Trends in community college libraries for all five databases:

  • embedded librarians in courses
  • librarian-initiated engagement
  • library orientation
  • librarians attending faculty meetings

Use of Academic Search Complete was higher in four-year college and university libraries with these characteristics:

  • authenticate with local proxy
  • direct link to NC Live provided resources
  • high number of librarians per 1,000 enrolled students
  • NC Live representative

Trends in four-year college and university libraries for all 5 databases:

  • higher use with local proxy authentication and federated search service
  • lower use with a link to the NC Live website database list instead of links to individual resources, authentication with a password, or an NC Live search box; essentially, less customized service, which may indicate fewer technical staff to support eresources

Trends for higher use of Academic Search Complete among all schools:

  • authentication with a local proxy
  • total library expenditures per FTE
  • UNC institution
  • NCICU institution

Use of Academic Search Complete was higher in public libraries with these characteristics:

  • direct links to the resources
  • chat reference box
  • high number of stats downloads from the NC LIVE website
  • high number of promotional items requests
  • staff training for NC LIVE provided resources

Trends in public libraries for all 5 databases:

  • percentage of legal service population with a bachelor’s degree
  • number of stats downloads
  • population density
  • total operating expenditures per legal service population

Next steps: Planning for future consortium services related to usage data. They need to understand more about what libraries need from them. They plan to share their findings and offer best practices for member libraries. Finally, they plan to develop usage reports and other data that are helpful for collection assessment at both the library and consortium levels.

Recommendations for future research: Libraries need to be better-informed consumers of databases and should set goals for use. We need to work with each other and with vendors to develop use and/or cost-per-use profiles. Similar studies should be done elsewhere to allow comparison of results, which might help explain why these variables impact use.

ejournal use by subject

A couple of weeks ago I blogged about an idea I had that involved combining subject data from SerialsSolutions with use data for our ejournals to get a broad picture of ejournal use by subject. It took a bit of tooling around with Access tables and queries, including making my first crosstab, but I’ve finally got the data put together in a useful way.

It’s not quite comprehensive, since it only covers ejournals for which SerialsSolutions has assigned a subject, which also have ISSNs, and which are available through sources that provide COUNTER or similar use statistics. But it’s better than nothing.
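
The same join and summing could be done outside of Access, too. Here’s a rough sketch of the general approach in Python, with invented file and column names standing in for the SerialsSolutions export and the COUNTER reports:

```python
import pandas as pd

# File and column names are invented for illustration; the real
# SerialsSolutions export and COUNTER JR1 reports will look different.
subjects = pd.read_csv("serialssolutions_subjects.csv")  # ISSN, Subject
usage = pd.read_csv("counter_jr1_2013.csv")              # ISSN, Title, FullTextRequests

# Join usage to subjects on ISSN, then total the use by subject --
# roughly what my Access queries and crosstab do.
merged = usage.merge(subjects, on="ISSN", how="inner")
by_subject = (
    merged.groupby("Subject")["FullTextRequests"]
    .sum()
    .sort_values(ascending=False)
)
print(by_subject.head(20))
```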

dreaming about the future of data in libraries

I spent most of the past two months downloading, massaging, and uploading to our ERMS a wide variety of COUNTER and non-COUNTER statistics. At times it is mind-numbing work, but taken in small doses, it’s interesting stuff.

The reference librarians make most of the purchasing decisions and deliver instruction to students and faculty on the library’s resources, but in the end, it’s the research needs of the students and faculty that dictate what they use. Then, every year, I get to look at what little information we have about their research choices.

Sometimes I’ll look at a journal title and wonder who in the world would want to read anything from that, but as it turns out, quite a number of someones (or maybe just one highly literate researcher) have read it in the past year.

Depending on the journal focus, it may be easy to identify where we need to beef up our resources based on high use, but for the more general things, I wish we had more detail about the use. Maybe not article-level, but perhaps a tag cloud — or something in that vein — pulled together from keywords or index headings. There’s so much more data floating around out there that could assist in collection development that we don’t have access to.

And then I think about the time it takes me to gather the data we have, not to mention the time it takes to analyze it, and I’m secretly relieved that’s all there is.

But, maybe someday when our ERMS have CRM-like data analysis tools and I’m not doing it all manually using Excel spreadsheets… Maybe then I’ll be ready to delve deeper into what exactly our students and faculty are using to meet their research needs.

library day in the life – round 4

Hello. I’m the electronic resources librarian at the University of Richmond, a small private liberal arts university nestled on the edge of suburbia in a medium-sized mid-Atlantic city. Today I am participating in the Library Day in the Life Project for its fourth round. Enjoy!

8:30am Arrive, turn on computer, and go get a cup of coffee from the coffee shop attached to the library. By the time I return, the login screen is displayed, and thus begins the 5 minute long process of logging in and then opening Outlook, Firefox, and TweetDeck. Pidgin starts on its own, thankfully. Update location on FourSquare. (Gotta keep my mayorship!)

8:40am Check schedule on Outlook, note the day’s meeting times, and then check the tasks for the day. At this point, I see that it’s time for a DILO, so I start this entry.

8:50am Weed through the new emails that arrived over the weekend. Note that there is more spam than normal. In the middle of this, my boss cancels one of two meetings today. (w00t!)

9:15am Email processed and sorted into folders and labels. Time to dig into the day’s tasks and action items. Chatty coworkers in the cube farm prompt me to load Songbird and don headphones.

9:25am Send a reminder to the LIB 101 students registered for my seminar on Friday. Work out switching reference desk shifts because my Wednesday LIB 101 seminar conflicts with my regular Wednesday shift. Also send out a note requesting trades for next week’s shifts, since I’ll be away at ER&L.

9:40am Cleared all action items and to-do items, so now it’s time to dig into my current project — gathering 2009 use statistics.

10:30am Electronic resources workflow planning meeting for the next year with an eye towards the next five years.

11:00am Back to gathering use stats. I’ve been working on this for over two weeks, and I’m a little over half-way through. I’d be further along if I could dedicate all my time to it, but unfortunately, meetings, desk schedules, and other action items get in the way.

12:15pm Hunger overrides my obsessive hunt for stats. I brought my lunch with me today, but often I end up grabbing something on the go while I run errands.

1:10pm Process the email that has come in over the past two hours. Only two action items added (yay!), and both are responses to requests for information from this morning (yay!), so I’m happy to see them.

1:15pm Back to the side-yet-related project that I started on shortly before lunch. We have a bunch of journals in the “Multiple Vendors :: Single Journals” category in our ERMS, and I’m moving them over to their specific publisher listings if I can, checking to see if we have use stats for them, and requesting admin info when we don’t. There are only about 55 titles, so I’m hoping to get most of this done before my reference desk shift at 3.

3:00pm I’m only half-way through the side-yet-related project, but I have to set it down and go to my reference desk shift now. Answering many technology questions from a retired woman who is attempting to take a class that requires her to use Blackboard, view PowerPoints, and do other things that are highly confusing to her. Checking out netbooks to students and showing them how to scan documents to PDF using the copiers rather than making a bunch of copies. Also, catching up on RSS feeds between the questions.

5:00pm Desk shift over. I have just enough time to wrap up my projects for the day and prep for tomorrow, grab a quick bite to eat, and then I’m off to the other side of campus where I have choir rehearsal until 7pm.

Thank you for reading!

NASIG 2009: Managing Electronic Resource Statistics

Presenter: Nancy Beals

We have the tools and the data, now we need to use them to the best advantage. Statistics, along with other data, can create a picture of how our online resources are being used.

Traditionally, we have gathered stats by counting re-shelving, ILL requests, gate counts, circulation, and so on. Do these things really tell us anything? Stats from eresources can tell us much more, in conjunction with information about the paths we create to them.

Even with standards, we can run into issues with collecting data. Data can be “unclean” or incorrectly reported (or reported late). And not all publishers are using the standards (e.g., COUNTER).

After looking at existing performance indicators and applying them to electronic resources, we can look at trends in the use of those resources. This can help us determine the return on investment in them.

Keep a master list of stats in order to plan out how and when to gather them. Keep the data in a shared location. Be prepared to supply data in a timely fashion for collection development decision-making.

When comparing resources, it’s up to individual institutions to determine what counts as low or high use. Look at how the resources stack up within the overall collection.

When assessing the value of a resource, Beals and her colleagues look at 2-3 years of use data, 10% annual cost inflation, and the cost of ILL. In addition, they use overlap analysis tools to determine where they have multiple formats or sources that could be eliminated, based on which platforms are being used.
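
A hedged sketch of that kind of comparison, with invented figures, projecting cost per use under 10% annual inflation and putting it next to a per-transaction ILL cost:

```python
# All figures are invented for illustration only.
current_cost = 5_000.00        # current annual subscription cost
inflation = 0.10               # assumed 10% annual price increase
annual_uses = [420, 390, 450]  # last three years of COUNTER full-text use
ill_cost_per_request = 17.50   # assumed cost to fill one ILL request

avg_uses = sum(annual_uses) / len(annual_uses)

for year in range(1, 4):
    projected_cost = current_cost * (1 + inflation) ** year
    cost_per_use = projected_cost / avg_uses
    print(f"Year {year}: ${projected_cost:,.2f} subscription, "
          f"${cost_per_use:.2f} per use vs. ${ill_cost_per_request:.2f} per ILL request")
```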

Providing readily accessible data in a user-friendly format empowers selectors to do analysis and make decisions.
