ERMS implementation woes

Ever since vendors started selling electronic resource management systems (ERMS), there has been a session or round table at NASIG discussing various libraries’ implementations. A few more hands went up this year when the room was asked whether anyone felt they had finished implementing their ERMS, but it’s still a very small minority of librarians. When I did my conference report for NASIG 2009 yesterday (we have a bit of a backlog on monthly conference report meetings, since so many conferences are held in the spring and early summer), I created this using ProjectCartoon to illustrate some of the reasons why ERMS have been so difficult and time-consuming to implement:

ERMS woes

NASIG 2009: ERMS Integration Strategies – Opportunity, Challenge, or Promise?

Speakers: Bob McQuillan (moderator), Karl Maria Fattig, Christine Stamison, and Rebecca Kemp

Many people have an ERMS, some are implementing it, but few (in the room) are at a point they would consider finished. ERMS present new opportunities and challenges for workflow and staffing, and the presenters intend to provide some insight for those in attendance.

At Fattig’s library, the budget for electronic resources is increasing as print decreases, and they are also running out of space for their physical collections. Their institution’s administration is not supportive of adding space for materials, so they need to start thinking about how to stall the growth of, or shrink, their physical collection. In addition, they have had reductions in technical services staffing. Sound familiar?

At Kemp’s library, she notes that about 40% of her time is spent on access setup and troubleshooting, which is an indication of how much of their staff resources are allocated to electronic resources. Is it worth it? They know that many of their online resources are heavily used. Consortial “buying clubs” make big deals possible, opening up access to more resources than they could afford on their own. Electronic is a good alternative to adding more volumes to already overloaded shelves.

Stamison (Swets) notes that they have seen a dramatic shift from print to electronic. At least two-thirds of the subscriptions they handle have an electronic component, and most libraries are going e-only when possible. Libraries tell them that they want their shelf space back. Also, many libraries are going directly to publishers for the big deals, with agents getting involved only for EDI invoicing (cutting into the agents’ income). Agents are now investing in new technologies to assist libraries in managing e-collections, including implementing access.

Kemp’s library had a team of three to implement Innovative’s ERM. It took a change in workflow and incorporating additional tasks into existing positions, but everyone pulled through. Stamison notes that, like libraries, agents have had to change their workflows to handle electronic media, including extensive training. And, as libraries have more people working with all formats of serials, agents now have many different contacts within both libraries and publishers.

Fattig’s library also reorganized some positions. The systems librarian, acquisitions librarian, and serials & electronic resources coordinator all work with the ERMS, pulling from the Serials Solutions knowledge base. They have also contracted with someone in Oregon to manage their EZproxy database and WebBridge coverage load. Fattig notes that it takes a village to maintain an ERMS.

Agents with electronic gateway systems are working to become COUNTER compliant, and are heavily involved with developing SUSHI. Some are also providing services to gather those statistics for libraries.

Fattig comments that usage statistics are serials in themselves. At his library, they maintained a homegrown system for collecting usage statistics from 2000 to 2007, then tried Serials Solutions’ 360 Counter for a year, and now use an ERM/homegrown hybrid. They created their own script to clean up the files because, as we all know, COUNTER compliance means something different to each publisher. Fattig thinks that database searches are their most important statistics for evaluating platforms. They use their federated search statistics to weight the statistics from those resources (federated search activity will be broken out under COUNTER release 3).
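To make the publisher-by-publisher variation concrete, here is a minimal Python sketch of the kind of cleanup script Fattig describes, assuming a hypothetical vendor file layout: a COUNTER JR1-style CSV with a few preamble rows above the column labels, and counts that arrive as “1,234” or “42” depending on the publisher. Real reports vary, which is the whole point.

```python
import csv

def clean_jr1(path):
    """Normalize a hypothetical COUNTER JR1-style CSV into a list of dicts."""
    rows = []
    with open(path, newline="", encoding="utf-8-sig") as f:
        reader = csv.reader(f)
        header = None
        for row in reader:
            # Skip preamble rows until the real column header appears.
            if row and row[0].strip().lower() == "journal":
                header = [col.strip() for col in row]
                break
        if header is None:
            raise ValueError("No header row found; layout differs from assumption")
        for row in reader:
            if not row or not row[0].strip():
                continue  # blank padding lines some publishers append
            record = dict(zip(header, row))
            for key, value in record.items():
                v = value.replace(",", "").strip()
                if v.isdigit():
                    record[key] = int(v)  # "1,234" -> 1234
            rows.append(record)
    return rows
```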

Kemp has not been able to import their use stats into ERM. One of their staff members goes in every month to download stats, and the rest come from ScholarlyStats. They are learning to make XML files out of their Excel files and hope to use the cost per use functionality in the future.
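As a rough illustration of the Excel-to-XML step Kemp mentions, here is a hedged Python sketch. It assumes the spreadsheet has been saved out as CSV first, and it invents a simple target layout, since the actual import schema for their ERM isn’t specified here.

```python
import csv
import xml.etree.ElementTree as ET

def csv_to_usage_xml(csv_path, xml_path):
    """Convert a spreadsheet of use stats (saved as CSV) into a simple,
    invented <usage> XML layout for an ERMS import."""
    root = ET.Element("usage")
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            title = ET.SubElement(root, "title")
            title.set("issn", row.get("ISSN", ""))
            title.set("name", row.get("Journal", ""))
            for month in ("Jan", "Feb", "Mar"):  # extend for a full year
                count = ET.SubElement(title, "count")
                count.set("period", month)
                count.text = row.get(month, "0")
    ET.ElementTree(root).write(xml_path, encoding="utf-8", xml_declaration=True)
```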

Fattig: “We haven’t gotten SUSHI to work in some of the places it’s supposed to.” Todd Carpenter from NISO notes that SUSHI compliance is a requirement of COUNTER 3.
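For readers who haven’t seen SUSHI on the wire, here is a rough sketch of a harvest request. SUSHI (NISO Z39.93) is a SOAP service: the client POSTs a ReportRequest envelope and gets the COUNTER XML report back in the response. The endpoint and IDs below are placeholders, and the element names and namespaces are reconstructed from memory of the schema, so check them against a real service’s WSDL before relying on this.

```python
import requests

SUSHI_ENDPOINT = "https://stats.example.com/sushi"  # hypothetical endpoint

# Envelope structure paraphrased from the SUSHI schema; verify against
# the provider's WSDL. Requestor/customer IDs are placeholders.
envelope = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
               xmlns:s="http://www.niso.org/schemas/sushi"
               xmlns:c="http://www.niso.org/schemas/sushi/counter">
  <soap:Body>
    <c:ReportRequest>
      <s:Requestor><s:ID>requestor-id</s:ID></s:Requestor>
      <s:CustomerReference><s:ID>customer-id</s:ID></s:CustomerReference>
      <s:ReportDefinition Name="JR1" Release="3">
        <s:Filters>
          <s:UsageDateRange>
            <s:Begin>2009-01-01</s:Begin>
            <s:End>2009-06-30</s:End>
          </s:UsageDateRange>
        </s:Filters>
      </s:ReportDefinition>
    </c:ReportRequest>
  </soap:Body>
</soap:Envelope>"""

response = requests.post(
    SUSHI_ENDPOINT,
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8"},
)
response.raise_for_status()
print(response.text)  # the COUNTER report XML, wrapped in the SOAP response
```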

For the next 12 to 18 months, Fattig expects that they will complete the creation of license and contact records, import all usage data, and implement SUSHI where they can. They will continue to work with their consortial tool, implement a discovery layer, and document everything. He plans to create a “cancellation ray gun and sing-along blog”: a tool that takes weighted criteria and generates suggested cancellation reports.

Like Fattig, Kemp plans to finish loading all of the license and contact data, as well as the coverage data, and looks forward to eliminating a legacy spreadsheet. Then they hope to import COUNTER stats and run cost-per-use reports.

Agents are working with ONIX-PL to assist libraries in populating their ERMS with license terms. They are also working with CORE to assist libraries with populating acquisitions data. Stamison notes that agents are working to continue to be liaisons between publishers, libraries, and system vendors.

Dan Tonkery notes that he’s been listening to these conversations for years. No one is serving libraries very well. Libraries are working harder to get these things implemented while also maintaining legacy systems and workarounds. “It’s too much work for something that should be simple.” Char Simser notes that we need to convince our administrations to move more staff into managing e-resources as our budgets shift toward them.

Another audience member notes that his main frustration is the lack of cooperation between vendors/products. We need a shared knowledge base like we have a shared repository for our catalog records. This gets tricky with different package holdings and license terms.

Audience question: When will the ERM become integrated into the ILS? Response: System vendors are listening, and the development cycle is dependent on customer input. Every library approaches their record keeping in different ways.

NASIG 2009: Moving Mountains of Cost Data

Standards for ILS to ERMS to Vendors and Back

Presenter: Dani Roach

Acronyms you need to know for this presentation: National Information Standards Organization (NISO), Cost of Resource Exchange (CORE), and Draft Standard For Trial Use (DSFTU).

CORE was started by Ed Riding from SirsiDynix, Jeff Aipperspach from Serials Solutions, and Ted Koppel from Ex Libris (now at Auto-Graphics). They saw a need to be able to transfer acquisitions data between systems, so they began working on it. After talking with various related parties, they approached NISO in 2008. Once they realized the scope, it went from being just an ILS-to-ERMS transfer to also including data from vendors, agents, consortia, etc., but without duplicating existing standards.

Library input is critical in defining the use cases and the data exchange scenarios. There was also a need for a data dictionary and XML schema in order to make sure everyone involved understood each other. The end result is the NISO CORE DSFTU Z39.95-200x.

CORE could be awesome, but in the meantime we need a solution. Roach has a few suggestions for what we can do.

Your ILS has a pile of data fields. Your ERMS has a pile of data fields. They don’t exactly overlap. Roach focused on only eight of the elements: title, match point (code), record order number, vendor, fund, what was paid for, amount paid, and something else she can’t remember right now.

She developed Access tables with output from her ILS and templates from her ERMS, then ran a query to match them up and uploaded the acquisitions data to her ERMS.

For the database record match, she chose the Serials Solutions three-letter database code, which was then put into an unused MARC variable field. For the journals, she used the SSID from the MARC records Serials Solutions supplies to them.
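A rough modern equivalent of Roach’s Access tables and matching query, sketched in Python with pandas and hypothetical column names: load the ILS payment export and the ERMS template, join them on the match point, and split out the rows that need human review.

```python
import pandas as pd

# Hypothetical exports: the ILS payment file carries the Serials Solutions
# database code (stored in a spare MARC variable field), and the ERMS
# template is keyed on that same code.
ils = pd.read_csv("ils_payments.csv")    # title, db_code, order_no, vendor, fund, amount_paid
erms = pd.read_csv("erms_template.csv")  # db_code plus the ERMS's own identifiers

# The merge is the "query to match them up."
matched = erms.merge(ils, on="db_code", how="left")

# Anything without a payment didn't match: a bad or missing match point.
matched[matched["amount_paid"].isna()].to_csv("needs_review.csv", index=False)
matched.dropna(subset=["amount_paid"]).to_csv("erms_upload.csv", index=False)
```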

Things you need to decide in advance: How do you handle multiple payments in a single fiscal year? (What are you doing currently, and do you need to keep doing it?) What about resources that share costs? How will you handle one-time vs. ongoing purchases? How will you maintain the integrity of the match point you’ve chosen?

The main thing to keep in mind is that you need to document your decisions and processes, particularly for when systems change and CORE or some other standard becomes a reality.

CIL 2009: ERM… What Do You Do With All That Data, Anyway?

This is the session that I co-presented with Cindi Trainor (Eastern Kentucky University). The slides don’t convey all of the points we were trying to make, so I’ve also included a cleaned-up version of those notes.

  1. Title
  2. In 2004, the Digital Library Federation (DLF) Electronic Resources Management Initiative (ERMI) published their report on the electronic resource management needs of libraries, and provided some guidelines for what data needed to be collected in future systems and how that data might be organized. The report identifies over 340 data elements, ranging from acquisitions to access to assessment.

    Libraries that have implemented commercial electronic resource management systems (ERMS) have spent many staff hours entering data from old storage systems, or recording those data for the first time, and few, if any, have filled out each data element listed in the report. But that is reasonable, since not every resource will have relevant data attached to it that would need to be captured in an ERMS.

    However, since most libraries do not have an infinite number of staff to focus on this level of data entry, the emphasis should instead be placed on capturing data that is necessary for managing the resources, as well as information that will enhance the user experience.

  3. On the staff side, ERM data is useful for: upcoming renewal notifications; generating collection development reports that explain cost-per-use, based on publisher-provided use statistics and library-maintained acquisitions data; managing trials; noting electronic ILL & reserves rights; and tracking the uptime & downtime of resources.
  4. Most libraries already have access management systems (link resolvers, A-Z lists, MARC records).
  5. User issues have shifted from the multiple-copy problem to a “which copy?” problem. Users have multiple points of access, including: journal packages (JSTOR, Muse); A&I databases, with and without full text (which constitute e-resources in themselves); the library website (particularly “Electronic Resources” or “Databases” lists); the OPAC; the A-Z list (typically populated by an OpenURL link resolver); Google / Google Scholar; article and paper references and footnotes; course reserves; course management systems (Blackboard, Moodle, WebCT, Angel, Sakai); citation management software (RefWorks, EndNote, Zotero); LibGuides / course guides; and bookmarks.
  6. Users want…
  7. Google
  8. Worlds collide! What elements from the DLF ERM spec could enhance the user experience, and how? Information inside an ERMS can enhance access management systems or discovery: subject categorization within the ERM could group similar resources and present them alongside the resource someone is using; statuses could be used to group and display items, so that, for example, a trial status within the ERM automatically populates a page of new resources or an RSS feed, making it easy for the library to publicize even a 30-day trial (see the sketch after this list). ERMS need to do a better job of helping to manage the resource lifecycle: when resources are tracked through that lifecycle, discovery is updated by extension because the resources are managed well, increasing uptime and availability and decreasing the time from identification of a potential new resource to its accessibility for our users.
  9. How about turning ERM data into a discovery tool? Exposing information about the accessibility of resources to reference management systems like EndNote, RefWorks, or Zotero, along with key pieces of information about using those individual resources with such systems, could at least enable more sophisticated use of those resources, if not increased discovery.

    (You’ve got your ERM in my discovery interface! No, you got your discovery interface in my ERM! Er… guess that doesn’t quite translate.)

  10. Flickr Mosaic: Phyllotaxy (cc:by-nc-sa); Librarians-Haunted-Love (cc:by-nc-sa); Square Peg (cc:by-nc-sa); The Burden of Thought (cc:by-nc)
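Here is the sketch promised in slide 8: a toy Python example of letting a “trial” status in the ERM drive an RSS feed of new and trial resources. The record layout is invented for illustration; a real ERMS would supply these rows via a report or an export.

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone
from email.utils import format_datetime

# Hypothetical records flagged with a "trial" status in the ERM.
trials = [
    {"name": "Example Index Online", "url": "https://example.com/trial",
     "ends": "2009-07-31"},
]

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "New & Trial Resources"
ET.SubElement(channel, "link").text = "https://library.example.edu/trials"
ET.SubElement(channel, "description").text = "Resources currently on trial"

for t in trials:
    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = f"{t['name']} (trial ends {t['ends']})"
    ET.SubElement(item, "link").text = t["url"]
    ET.SubElement(item, "pubDate").text = format_datetime(datetime.now(timezone.utc))

ET.ElementTree(rss).write("trials.xml", encoding="utf-8", xml_declaration=True)
```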

NASIG 2008: Next Generation Library Automation – Its Impact on the Serials Community

Speaker: Marshall Breeding

Check & update your library’s record on lib-web-cats — Breeding uses this data to track the ILS and ERMS systems used by libraries world-wide.

The automation industry is consolidating, with several library products dropped or no longer supported. External financial investors increasingly control the direction of the industry. And the OPAC sucks. Libraries and users are continually frustrated with the products they are forced to use and are turning to open source solutions.

The innovation presented by automation companies falls below the expectations of libraries (not so sure about users). Conventional ILS need to be updated to incorporate the modern blend of digital and print collections.

We need to be more thoughtful in our incorporation of social tools into traditional library systems and infrastructures, integrating Web 2.0 tools into existing delivery options. The next generation of automation tools should have collaborative features built in.

Open source software isn’t free; it’s just a different model (pay for maintenance and setup vs. pay for software). We need more robust open source software for libraries. Alternatively, systems need to open up so that data can be moved in and out easily. Systems need APIs that allow local coders to enhance them to meet the needs of local users. Open source ERMS knowledge bases haven’t been seriously developed, although there is a need.

The drive towards open source solutions has often been motivated by disillusionment with current vendors. However, we need to be cautious, since open source isn’t necessarily the golden key that will unlock the door to paradise (e.g., Koha still needs to add serials and acquisitions modules, as well as EDI capabilities).

The open source movement motivates the vendors to make their systems more open for us. This is a good thing. In the end, we’ll have a better set of options.

Open source ILS options: Koha (commercial support from LibLime), used mostly by small to medium libraries; Evergreen (commercial support from Equinox Software), tested and proven for small to medium libraries in a consortial setting; and OPALS (commercial support from Media Flex), used mostly by K-12 schools.

In making the case for open source ILS, you need to compare the total cost of ownership, the features and functionality, and the technology platform and conceptual models. Are they next-generation systems or open source versions of legacy models?

Evaluate your RFPs for new systems. Are you asking for the things you really need or are you stuck in a rut of requiring technology that was developed in the 70s and may no longer be relevant?

Current open source ILS products lack serials and acquisitions modules. The initial wave of open source ILS commitments happened in the public library arena, but the recent activity has been in academic libraries (the WALDO consortium going from Voyager to Koha, the University of Prince Edward Island going from Unicorn to Evergreen in about a month). Do the current open source ILS products provide a new model of automation, or an open source version of what we already have?

Looking forward to the day when there is a standard XML schema for all ILS that will allow libraries to manipulate their data in any way they need to.

We are working towards a new model of library automation in which monolithic legacy architectures are replaced by a fabric of service-oriented architecture applications with comprehensive management.

The traditional ILS is diminishing in importance in libraries. Electronic content management is being done outside of core ILS functions. Library systems are becoming less integrated because the traditional ILS isn’t keeping up with our needs, so we find work-around products. Non-integrated automation is not sustainable.

ERMS — isn’t this what the acquisitions module is supposed to do? Instead of enhancing that to incorporate the needs of electronic resources, we had to get another module or work-around that may or may not be integrated with the rest of the ILS.

We are moving beyond metadata searching to searching the actual items themselves. Users want to be able to search across all products and packages. NextGen federated searching will harvest and index subscribed content so that it can be searched and retrieved more quickly and seamlessly.

Opportunities for serials specialists:

  • Be aware of the current trends
  • Be prepared for accelerated change cycles
  • Help build systems based on modern business process automation principles. What is your ideal serials system?
  • Provide input
  • Ensure that new systems provide better support than legacy systems
  • Help drive current vendors towards open systems

How will we deliver serials content through discovery layers?

Reference:

  • “It’s Time to Break the Mold of the Original ILS,” Computers in Libraries, Nov/Dec 2007.

Longing for the perfect ERMS….

In 2003, I attended the ACRL conference in Charlotte. One of the sessions I sat in on was about home-grown electronic resource management tools. After having dealt with digital and manila folders of stuff, constantly searching for information, and not having any sort of long-term plan for archiving and retrieving it, the idea of having a system that did all that for me seemed miraculous.

Fast-forward five years. I’ve now had the pleasure of working with two moderately functional commercial ERMS, and neither is the miracle solution I had hoped for.

Now that I’ve had the opportunity to get under the hood of “traditional” ERMS, I have an idea as to why they are flawed: they approach electronic resource management as a metadata storage problem rather than a workflow problem. Creating a system that includes all the fields recommended by the DLF ERM Initiative is a good start, but it’s only a start. We need something that goes beyond that to create a workflow that can include input and required actions from many different people, similar to the workflow outlined in the DLF document.

My ideal ERMS is one that makes it easy to input licensing and acquisitions data, automatically triggers alerts for follow-up, and provides relevant license information to users and staff. I’m currently managing more electronic resources than ever, and I need a tool that makes keeping track of them as simple and painless as possible. Unfortunately, I don’t think the commercially available products are at that point yet, and as far as I know, no one is working on an open source solution.
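As a toy illustration of the alerting piece (and only that piece), here is a minimal Python sketch, with an invented record structure, of what “automatically triggers alerts for follow-up” could look like: scan license records for renewal dates inside their notice windows and flag the ones that need attention.

```python
from datetime import date, timedelta

# Invented license records; a real ERMS would supply these from its own store.
licenses = [
    {"resource": "Example Journal Package", "renewal": date(2009, 9, 1),
     "notice_days": 60, "contact": "vendor@example.com"},
]

today = date.today()
for lic in licenses:
    # Alert once we're inside the notice window before the renewal date.
    if today >= lic["renewal"] - timedelta(days=lic["notice_days"]):
        print(f"Renewal due {lic['renewal']}: {lic['resource']} "
              f"(contact {lic['contact']})")
```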
