#libday8 day 4 — lies, damn lies, and statistics

How to Lie with Statistics by Darrell Huff & Irving Geis (cover image)

My day began with organizing and prioritizing the action items that arrived yesterday while I was swamped with web-scale discovery service presentations. I didn’t get very far before it was time to leave for a meeting about rolling out VuFind locally. Before that meeting, I dropped in to update my boss (and interim University Librarian) on some things that came out of the presentations and the subsequent hallway discussions.

At the VuFind meeting, we discussed some tweaks and modifications, and almost everyone took on assignments to revise menu labels, record displays, and search options. I managed to evade an assignment only because those pieces fall more to reference, cataloging, and web services. The serials records look fine and appear accurately in the basic search (from the handful of tests I ran), so I’m not concerned about tweaking anything specifically.

Back at my desk, I started to work on the action items again, but the ongoing conversations about the discovery service presentations distracted me until one of the reference librarians provided me with a clue about the odd COUNTER use stats we’ve received from ProQuest for 2011.

I had given her stats on a resource that was on the CSA platform, but for the 2011 stats I provided what ProQuest gave me, which were dubious in their sudden increase (from 15 in 2010 to 4756 in 2011). She made a comment about how the low stats didn’t surprise her because she hates teaching the Illumina platform. I said it should be on the ProQuest platform now because that’s where the stats came from. She said she’d just checked the links on our website, and they’re still going to Illumina.

This puzzled me, so I pulled the CSA stats from 2011, and indeed, we had only 17 searches for the year for this index. I checked the website and LibGuides links, and we’re still sending users to the Illumina platform, not ProQuest. So I’m not sure where those 4756 searches came from, but their source might explain why our total ProQuest stats tripled in 2011. This led me to check our federated search stats, and while they show quite a few searches of ProQuest databases (although not this index, since we hadn’t included it), our DB1 report shows zero federated searches and sessions.

I compiled all of this and sent it off to ProQuest customer support. I’m eager to see what their response will be.
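For what it’s worth, the comparison I was doing by hand amounts to a year-over-year sanity check on the DB1 numbers. Here’s a minimal sketch of that check in Python, assuming the reports have already been exported to a CSV; the file name and column names are hypothetical, not anything ProQuest actually provides.

```python
import csv

# Minimal sketch: flag databases whose DB1 search counts jumped suspiciously
# between two years, the kind of spike (15 -> 4756) described above.
# Assumes a hypothetical CSV layout with columns:
# database, searches_2010, searches_2011
def flag_suspicious(path, factor=10):
    suspicious = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            prev = int(row["searches_2010"])
            curr = int(row["searches_2011"])
            if prev > 0 and curr / prev >= factor:
                suspicious.append((row["database"], prev, curr))
    return suspicious

if __name__ == "__main__":
    for name, prev, curr in flag_suspicious("db1_report.csv"):
        print(f"{name}: {prev} -> {curr} searches")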

This brought me up to my lunch break, which I spent at the gym, where one of the trainers forced my compatriots and me through 45 minutes of challenging and strenuous activities. After my shower, I returned to the library to eat lunch at my desk and respond to some crowd-sourced questions from colleagues at other institutions.

I managed to whack down a few email action items before my ER&L co-presenter called to discuss the things we need to do to make sure we’re prepared for the panel session. We’re pulling together seasoned librarians and product representatives from five different electronic resource management systems (four commercial, one open-source) to talk about their experiences working with the products. We hashed out a few things that needed hashing out, and ended the call with more action items on our respective lists.

At that point, I had about 20 minutes until my next meeting, so I tracked down the head of research and instruction to go over some details from the discovery service presentations that I wanted to make sure she was aware of. I’m glad I did, because she filled in some gaps I had missed, and later she relayed a positive response from one of the librarians whose reaction had concerned us both.

The meeting ended early, so I took advantage of the suddenly unscheduled time in my calendar to start writing all of this down. I’d been so busy I hadn’t had time to journal throughout the day like I’d done previously.

Heard back from ProQuest, and although they haven’t addressed the missing federated search stats from their DB1 report, they explained away the high number of searches in this index as having come from a subject-area search or the default search across all databases. There was (and may still be) a problem with defaulting to all databases if the user did not log out before starting a new session, regardless of which database they intended to use. PQ tech support suggested looking at their non-COUNTER report, which includes full-text, citation, and abstract views, for a more accurate picture of what was used.

For the last stretch of the day, I popped on my headphones, cranked up the progressive house, and tried to power through the rest of the email action items. I didn’t get very far, as the first one required tracking down use stats and generating a report for an upcoming renewal. Eventually, I called it a day and posted this. Yay!

ERMS implementation woes

Ever since vendors started selling electronic resource management systems (ERMS), there has been a session or a round table at NASIG discussing various libraries’ implementations. A few more hands went up this year when the room was asked whether they felt they had finished implementing their ERMS, but it’s still a very small minority of librarians. When I did my conference report for NASIG 2009 yesterday (we have a bit of a backlog on monthly conference report meetings, since so many conferences are held in the spring and early summer), I created the cartoon below using ProjectCartoon to illustrate some of the reasons why ERMS have been so difficult and time-consuming to implement:

ERMS woes

CIL 2009: ERM… What Do You Do With All That Data, Anyway?

This is the session that I co-presented with Cindi Trainor (Eastern Kentucky University). The slides don’t convey all of the points we were trying to make, so I’ve also included a cleaned-up version of those notes.

  1. Title
  2. In 2004, the Digital Library Federation (DLF) Electronic Resources Management Initiative (ERMI) published their report on the electronic resource management needs of libraries, and provided some guidelines for what data needed to be collected in future systems and how that data might be organized. The report identifies over 340 data elements, ranging from acquisitions to access to assessment.

    Libraries that have implemented commercial electronic resource management systems (ERMS) have spent many staff hours entering data from old storage systems, or recording those data for the first time, and few, if any, have filled out each data element listed in the report. But that is reasonable, since not every resource will have relevant data attached to it that would need to be captured in an ERMS.

    However, since most libraries do not have an infinite number of staff to focus on this level of data entry, the emphasis should instead be placed on capturing the data that is necessary for managing the resources, as well as information that will enhance the user experience.

  3. On the staff side, ERM data is useful for: upcoming renewal notifications; generating collection development reports that explain cost-per-use, based on publisher-provided use statistics and library-maintained acquisitions data (see the sketch after this list); managing trials; noting Electronic ILL & Reserves rights; and tracking the uptime & downtime of resources.
  4. Most libraries already have access management systems (link resolvers, A-Z lists, MARC records).
  5. User issues have shifted from the multiple-copy problem to a “which copy?” problem. Users have multiple points of access, including: journal packages (JSTOR, Muse); A&I databases, with and without FT (which constitute e-resources in themselves); the library website (particularly “Electronic Resources” or “Databases” lists); OPAC; A-Z list (typically populated by an OpenURL link resolver); Google/gScholar; article/paper references and footnotes; course reserves; course management systems (Blackboard, Moodle, WebCT, Angel, Sakai); citation management software (RefWorks, EndNote, Zotero); LibGuides / course guides; and bookmarks.
  6. Users want…
  7. Google
  8. Worlds collide! What elements from the DLF ERM spec could enhance the user experience, and how? Information inside an ERMS can enhance access management systems or discovery: subject categorization within the ERM could group similar resources and allow them to be presented alongside the resource someone is using; statuses could group & display items, for example a trial status within the ERM automatically populating a page of new resources or an RSS feed, making it easy for the library to group and publicize even a 30-day trial. ERMS need to do a better job of helping to manage the resource lifecycle by being built to track resources through that lifecycle, so that discovery is updated by extension because resources are managed well, increasing uptime and availability and decreasing the time from identification of a potential new resource to accessibility of that resource for our users.
  9. How about turning ERM data into a discovery tool? Information about whether resources can export to reference management systems like EndNote, RefWorks, or Zotero, along with key pieces of information about using those individual resources with them, could at least enable more sophisticated use of those resources, if not increased discovery.

    (You’ve got your ERM in my discovery interface! No, you got your discovery interface in my ERM! Er… guess that doesn’t quite translate.)

  10. Flickr Mosaic: Phyllotaxy (cc:by-nc-sa); Librarians-Haunted-Love (cc:by-nc-sa); Square Peg (cc:by-nc-sa); The Burden of Thought (cc:by-nc)
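To make the cost-per-use reporting from point 3 concrete, here’s a minimal sketch of the arithmetic, assuming you already have annual cost figures from acquisitions and use counts from COUNTER reports. The titles, field names, and numbers below are made up for illustration, not drawn from any real ERMS.

```python
# Hypothetical resource records: annual cost would come from library-maintained
# acquisitions data, use counts from publisher-provided COUNTER reports
# (e.g., JR1 full-text requests or DB1 searches).
resources = [
    {"title": "Journal Package A", "annual_cost": 12000.00, "uses": 4800},
    {"title": "A&I Database B", "annual_cost": 6500.00, "uses": 130},
]

for r in resources:
    # Cost-per-use is simply annual cost divided by recorded uses.
    if r["uses"]:
        cost_per_use = r["annual_cost"] / r["uses"]
        print(f"{r['title']}: ${cost_per_use:.2f} per use ({r['uses']} uses)")
    else:
        print(f"{r['title']}: no recorded use")
```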