ER&L 2010: Innovative eResource Workflow

Speakers: Kelly Smith and Laura Edwards

Their workflow redesign was prompted by a campus-wide move to Drupal, which they now use to drive the public display of eresources. Resources are grouped by status as well as by platform. On the back end, they add information about contacts, admin logins, etc., and they can trigger real-time alerts and notes on the front end. They track fund codes and cost information. In addition, triggers prompt the next steps in the workflow as statuses change, from trials to renewals.
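The status-driven triggers can be pictured as a simple lookup from status to alert and next step. The sketch below is purely illustrative (their implementation lives in Drupal, and all statuses, alerts, and field names here are invented), but it shows the shape of the idea:

```python
# Illustrative sketch (not their Drupal code): a minimal status-driven
# trigger table for an eresource workflow. All names are hypothetical.

# Each status maps to the alert to raise and the next step to prompt.
WORKFLOW_TRIGGERS = {
    "trial":    {"alert": "Trial started; gather feedback",       "next": "decision"},
    "decision": {"alert": "Trial ended; selector decision due",   "next": "ordered"},
    "ordered":  {"alert": "License and invoice in process",       "next": "active"},
    "active":   {"alert": "Resource live; verify public display", "next": "renewal"},
    "renewal":  {"alert": "Renewal review due; check cost & use", "next": "active"},
}

def change_status(resource: dict, new_status: str) -> None:
    """Update a resource's status and fire the matching front-end alert."""
    resource["status"] = new_status
    trigger = WORKFLOW_TRIGGERS[new_status]
    # In their system, a note like this surfaces in real time on the display.
    print(f"[{resource['title']}] {trigger['alert']} (next step: {trigger['next']})")

change_status({"title": "Example Database"}, "trial")
```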

Speaker: Xan Arch

They needed a way to standardize the eresource lifecycle and a place to keep all the relevant information about new resources as they move through the departments. They also wanted more transparency about where each resource is in the process.

They decided to use a bug/issue tracker, Jira, because another department had already purchased it. They changed the default steps to map to their eresource workflow and to notify the appropriate people at each stage. The eresource order form is the starting point, and they ask the selector for as much information as possible up front. They then built a page in their Confluence wiki that displays the status of each resource, along with additional information about it.
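The Confluence page is essentially a generated status board. As a rough illustration (not their actual setup; resource names and step labels are invented), the display boils down to something like:

```python
# Hypothetical sketch: rendering a simple status board like their Confluence
# display, listing each resource and where it sits in the workflow.

STATUSES = [
    ("JSTOR Arts & Sciences IV", "License review"),
    ("SciFinder",                "Access setup"),
    ("Naxos Music Library",      "Cataloging"),
]

def render_status_page(statuses):
    """Format (resource, current step) pairs as an aligned text table."""
    width = max(len(name) for name, _ in statuses)
    return "\n".join(f"{name.ljust(width)}  |  {step}" for name, step in statuses)

print(render_status_page(STATUSES))
```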

Speaker: Ben Heet

The University of Notre Dame has been developing its own ERMS, called CORAL. The lifecycle of an eresource is complex, so they took the approach of creating small workflows that give staff direction on what to do next, depending on the complexity of the resource (i.e., free versus paid).

You can create reminder tasks assigned to specific individuals or groups, depending on the needs of the workflow. The workflows don't go into every little thing to be done; they mainly trigger reminders for the next group of activities. An admin view shows the pending activities for each staff member, and when a task is done, marking it complete triggers the next step.
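A minimal sketch of that pattern, assuming a simple ordered list of tasks (this is not CORAL's actual code; task names and assignees are invented):

```python
# Illustrative sketch of the reminder-task idea: tasks are assigned to
# people or groups, and completing one surfaces the next reminder.

from collections import deque

class Workflow:
    def __init__(self, steps):
        # Each step is (task description, assigned person or group).
        self.pending = deque(steps)

    def current(self):
        return self.pending[0] if self.pending else None

    def complete_current(self):
        """Mark the current task done, triggering the next reminder."""
        done_task, _ = self.pending.popleft()
        nxt = self.current()
        if nxt:
            task, assignee = nxt
            print(f"Done: {done_task!r}. Reminder to {assignee}: {task}")
        else:
            print(f"Done: {done_task!r}. Workflow finished.")

wf = Workflow([
    ("Confirm license terms",      "licensing group"),
    ("Set up proxy access",        "systems librarian"),
    ("Activate in link resolver",  "eresources group"),
])
wf.complete_current()  # -> reminds the systems librarian about proxy setup
```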

Not every resource is going to need every step. One size does not fit all.

Speaker: Lori Duggan

Kuali OLE is a partnership among academic libraries to create the next generation of library management software. It has a very complex financial platform for manual entry of purchase information. It looks less like a traditional ERMS and more like a PeopleSoft/Banner-style ILS acquisitions module, mostly because that is what it is; the ERM components are still in development.

ER&L 2010: Usage Statistics for E-resources – is all that data meaningful?

Speaker: Sally R. Krash, vendor

Three options for collecting usage data: do it yourself, gather and format the data for upload to a vendor's collection database, or have the vendor gather the data and send a report (Harrassowitz e-Stats). Surprisingly, the second option was actually more time-consuming than the first, because the library's data didn't always match the vendor's. The third is the easiest because the data comes from their subscription agent.

Evaluation: review cost data; set a cut-off point ($50, $75, $100, ILL/DocDel costs, whatever); generate a list of all resources whose cost per use falls above that point; use that list to identify cancellation candidates. For citation databases, they want to see upward trends in use, not cyclical spikes that average out year to year.
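A minimal sketch of that evaluation step, with invented costs and use counts and a hypothetical $75 cut-off:

```python
# Sketch of the evaluation described above: compute cost per use, apply a
# cut-off, and list cancellation candidates. All figures are invented.

resources = [
    {"title": "Journal A",  "annual_cost": 1200.0, "uses": 300},
    {"title": "Journal B",  "annual_cost": 900.0,  "uses": 6},
    {"title": "Database C", "annual_cost": 5000.0, "uses": 40},
]

CUTOFF = 75.0  # e.g., roughly the cost of an ILL/DocDel request

for r in resources:
    r["cost_per_use"] = r["annual_cost"] / max(r["uses"], 1)

candidates = [r for r in resources if r["cost_per_use"] > CUTOFF]
for r in sorted(candidates, key=lambda r: -r["cost_per_use"]):
    print(f"{r['title']}: ${r['cost_per_use']:.2f} per use -> review for cancellation")
```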

Future: we need more turnaway reports from publishers, specifically journal publishers. COUNTER JR5 will give more detail about article requests by year of publication. A combined COUNTER JR1 & BR1 report would suit those who don't care about format and just want download data. And we need download counts for full-text subscriptions, not just searches and sessions.

Speaker: Benjamin Heet, librarian

He is speaking about the University of Notre Dame's statistics philosophy. They collect JR1 full-text downloads; they're not into database statistics, mostly because federated search skews them. Impact factors and Eigenfactors are hard to evaluate. He asks, “can you make questionable numbers meaningful by adding even more questionable numbers?”

At first, he was downloading the spreadsheets monthly and making them available on the library website. He started looking for a better way, whether that meant paying someone else to build a tool or building one himself. He went the DIY route because he wanted to make the numbers more meaningful.

Avoid junk in, junk out: whether downloads are counted as HTML or PDF depends on the platform setup. Pay attention to outliers and watch for spikes that might indicate unusual use by an individual. Reports often contain bad data or duplicate rows.
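Two of those checks, spike detection and duplicate removal, are easy to automate. This sketch uses an invented spike threshold (five times the median month) and made-up data:

```python
# Sketch of two data-quality checks: flag monthly download spikes that may
# indicate unusual use by one person, and drop verbatim duplicate rows.

monthly = {
    "Journal A": [40, 35, 42, 38, 410, 37],  # month 5 looks like a spike
    "Journal B": [12, 15, 11, 14, 13, 12],
}

def flag_spikes(counts, factor=5):
    """Flag months whose count exceeds `factor` times the median month."""
    median = sorted(counts)[len(counts) // 2]
    return [i + 1 for i, c in enumerate(counts) if median and c > factor * median]

def dedupe(rows):
    """Remove verbatim duplicate report rows, preserving order."""
    seen, out = set(), []
    for row in rows:
        if row not in seen:
            seen.add(row)
            out.append(row)
    return out

for title, counts in monthly.items():
    for month in flag_spikes(counts):
        print(f"{title}: month {month} is an outlier ({counts[month - 1]} downloads)")

rows = [("Journal A", 1, 40), ("Journal A", 1, 40), ("Journal B", 1, 12)]
print(len(dedupe(rows)), "unique rows out of", len(rows))
```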

CORAL Usage Statistics, a locally developed program, gives them a central location to store usernames & passwords. He now downloads reports quarterly, and the public interface allows other librarians to view the stats in readable reports.

Speaker: Justin Clarke, vendor

Harvesting reports takes a lot of time and carries administrative costs. SUSHI (the Standardized Usage Statistics Harvesting Initiative) is a protocol for automating the transfer of statistics from one system to another. However, you still need to look at the data. Your subscription agent has a lot more data about the resources than just use, and can combine the two to create a broader picture of resource use.
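A rough sketch of what an automated SUSHI harvest looks like. The endpoint URL and IDs are placeholders, and the exact SOAP elements vary by SUSHI release, so treat this as an outline to check against the provider's documentation rather than a working client:

```python
# Sketch of a SUSHI harvest: POST a SOAP report request and save the
# returned COUNTER XML. Endpoint, IDs, and schema details are placeholders.

import requests

ENDPOINT = "https://stats.example.com/sushi"  # hypothetical SUSHI endpoint

ENVELOPE = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
               xmlns:s="http://www.niso.org/schemas/sushi">
  <soap:Body>
    <s:ReportRequest>
      <s:Requestor><s:ID>my-library-id</s:ID></s:Requestor>
      <s:CustomerReference><s:ID>my-customer-id</s:ID></s:CustomerReference>
      <s:ReportDefinition Name="JR1" Release="3">
        <s:Filters>
          <s:UsageDateRange>
            <s:Begin>2010-01-01</s:Begin>
            <s:End>2010-01-31</s:End>
          </s:UsageDateRange>
        </s:Filters>
      </s:ReportDefinition>
    </s:ReportRequest>
  </soap:Body>
</soap:Envelope>"""

resp = requests.post(ENDPOINT, data=ENVELOPE,
                     headers={"Content-Type": "text/xml; charset=utf-8"})
resp.raise_for_status()
with open("jr1_2010-01.xml", "wb") as f:
    f.write(resp.content)  # COUNTER report wrapped in the SOAP response
```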

Harrassowitz starts with acquisitions data and matches the use statistics to it. They also capture things like publisher changes and title changes. Cost per use is not as easy as simple division; packages confuse the matter.
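A small worked example of why packages confuse the matter: if all you have is one package invoice and per-title use, allocating the cost by share of use gives every title the same cost per use, so the per-title numbers carry no new information without per-title pricing (figures invented):

```python
# Sketch of the package problem: one invoice covers many titles, so a
# per-title cost per use requires allocating the package cost somehow.

package_cost = 10000.0
title_uses = {"Journal A": 500, "Journal B": 50, "Journal C": 450}

total_uses = sum(title_uses.values())
# Naive division gives one blended number for the whole package:
print(f"Package-level cost per use: ${package_cost / total_uses:.2f}")

# Allocating cost by share of use yields the identical per-use figure for
# every title, which is why you also need per-title list prices or the
# agent's acquisitions data to say anything about individual titles.
for title, uses in title_uses.items():
    allocated = package_cost * uses / total_uses
    print(f"{title}: allocated ${allocated:.2f}, {uses} uses "
          f"-> ${allocated / uses:.2f} per use")
```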

High use could be the result of class assignments or of hackers/hoarders. Low use might reflect a politically motivated purchase or support for a new department. You need cost as a reference point. Publisher pricing seems to have no rhyme or reason, and your price is not necessarily the list price. Multi-year and subject-based analyses reveal local trends.

Rather than usage statistics, we need useful statistics.
