NASIG 2011: Reporting on Collections

Speakers: Sandy Hurd, Tina Feick, & John Smith

Development begins with internal discussion, a business case, and a plan for how the data will be harvested. That discussion may need to include the vendors who house or supply the data, such as your ILS or ERMS vendor.

Product development on the vendor side can be prompted by several things, including specific customer needs, competition, and items in an RFP. When customers ask for reports, the vendor needs to determine whether the request is a one-time need, something that can be met by enhancing an existing product, or something they aren't doing yet. There may be standards, but collaborative data is still custom development between two entities, every time.

Have you peeked under the rug? The report is only as good as the data you have. How much cleanup are you willing to do? How can your vendor help? Before creating reports, think about what you have to solve and what you wish you could solve, statistics you need, the time available to generate them, and whether or not you can do it yourself.
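If you want a quick sense of how much cleanup is waiting under that rug, a short audit script can help. This is a minimal sketch, assuming your ILS can export acquisitions data to CSV; the file name and column names here are invented placeholders, not any particular system's fields.

```python
# Hypothetical pre-report audit; file and column names are invented.
import pandas as pd

acq = pd.read_csv("ils_acquisitions_export.csv")  # assumed ILS export

# How much cleanup is ahead? Count blanks in fields a report would need.
for col in ["title", "fund_code", "vendor", "amount_paid"]:
    missing = acq[col].isna().sum()
    print(f"{col}: {missing} of {len(acq)} rows missing")

# Flag likely duplicates before they inflate totals.
dupes = acq[acq.duplicated(subset=["title", "fund_code"], keep=False)]
print(f"{len(dupes)} rows share a title/fund pair; review before reporting")
```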

There are traditional reporting tools like spreadsheets, and increasingly there are specialized data storage and analysis tools. We are looking at trends, transactional data, and projections, and we need this information on demand and more frequently than in the past. And the data needs to be interoperable. (Dani Roach is quietly shouting, “CORE! CORE!”) Ideally, we would be able to load relevant data from our ERMS, acquisitions modules, and other systems.
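As a rough illustration of what interoperable, on-demand reporting might look like, here is a sketch that joins two hypothetical system exports (cost data from an ERMS, usage counts from a statistics tool) into a single cost-per-use view. All file and column names are assumptions for the example.

```python
# Sketch of joining two hypothetical exports into one on-demand view.
import pandas as pd

costs = pd.read_csv("erms_costs.csv")   # assumed columns: db_code, amount_paid
usage = pd.read_csv("usage_stats.csv")  # assumed columns: db_code, searches

report = costs.merge(usage, on="db_code", how="left")
report["cost_per_search"] = report["amount_paid"] / report["searches"]
print(report.sort_values("cost_per_search", ascending=False).head(10))
```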

One use of the data can be to see who is using what, so properly coded patron records are important. The data can also be essential for justifying the redistribution of resources. People may not like what they hear, but at least you have the data to back it up.
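For example, if patron records are coded consistently, a usage export can be pivoted to show who is using what. A minimal sketch, again with invented file and column names:

```python
# Illustrative only: assumes a usage export where each row carries a
# patron_type code (hence the need for properly coded patron records).
import pandas as pd

use = pd.read_csv("usage_by_patron.csv")  # assumed columns below

# Who is using what? Total uses by resource and patron group.
pivot = use.pivot_table(index="resource", columns="patron_type",
                        values="uses", aggfunc="sum", fill_value=0)
print(pivot)
```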

The spreadsheets are not the reports. They are the data.

Guest Post on ACRLog


A few months ago, Maura Smale contacted me about writing a guest post for ACRLog. I happily obliged, and it has now been published.

When it came time to finally sit down and write about something (anything) that interested me in academic librarianship, I found myself at a loss for words. Last month, I spent some time visiting friends here and there on my way out to California for the Internet Librarian conference, and many of those friends also happened to be academic librarians. It was through those conversations that I found a common thread for the issues that are pushing some of my professional buttons.

Specifically, I see a strong need for the creation, support, and implementation of data standards and tools to provide libraries with the means to effectively evaluate their resources. If that interests you as well, please take a moment to go read the full essay, and leave a comment if you’d like.

NASIG 2009: Moving Mountains of Cost Data

Standards for ILS to ERMS to Vendors and Back

Presenter: Dani Roach

Acronyms you need to know for this presentation: National Information Standards Organization (NISO), Cost of Resource Exchange (CORE), and Draft Standard For Trial Use (DSFTU).

CORE was started by Ed Riding from SirsiDynix, Jeff Aipperspach from Serials Solutions, and Ted Koppel from Ex Libris (and now Auto-Graphics). They saw a need to be able to transfer acquisitions data between systems, so they began working on it. After talking with various related parties, they approached NISO in 2008. Once they realized the scope, it went from being just an ILS-to-ERMS transfer to also including data from vendors, agents, consortia, etc., but without duplicating existing standards.

Library input is critical in defining the use cases and the data exchange scenarios. There was also a need for a data dictionary and XML schema in order to make sure everyone involved understood each other. The end result is the NISO CORE DSFTU Z39.95-200x.

CORE could be awesome, but in the meantime we need a solution. Roach has a few suggestions for what we can do.

Your ILS has a pile of data fields. Your ERMS has a pile of data fields. They don't exactly overlap. Roach focused on only eight of the elements: title, match point (code), record order number, vendor, fund, what was paid for, amount paid, and something else she can't remember right now.
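Something like the following crosswalk can make that partial overlap explicit. The field names on both sides are invented for illustration; Roach's actual ILS and ERMS field names were not specified in the talk.

```python
# A sketch of the crosswalk idea; field names on both sides are invented.
ILS_TO_ERMS = {
    "title":        "resource_title",
    "match_code":   "database_code",   # the match point
    "order_number": "record_order_no",
    "vendor":       "vendor_name",
    "fund":         "fund_code",
    "paid_for":     "payment_type",    # "what was paid for"
    "amount":       "amount_paid",
    # the eighth element wasn't captured in the session notes
}

def translate(ils_row: dict) -> dict:
    """Rename ILS export fields to their ERMS equivalents."""
    return {erms: ils_row.get(ils) for ils, erms in ILS_TO_ERMS.items()}
```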

She developed Access tables with output from her ILS and templates from her ERMS. She then ran a query to match them up and uploaded the acquisitions data to her ERMS.
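Her work was done in Access, which isn't shown here; the sketch below is a rough Python equivalent of that match-and-merge step. The CSV exports and the shared "match_code" column are assumptions for the example, not her actual setup.

```python
# Rough Python equivalent of the Access match query; CSV exports and
# the shared "match_code" column are assumptions, not her actual setup.
import pandas as pd

ils = pd.read_csv("ils_payments.csv")    # assumed ILS output table
erms = pd.read_csv("erms_template.csv")  # assumed ERMS template table

# Inner join keeps only rows whose match point lines up in both systems.
merged = erms.merge(ils, on="match_code", how="inner")
merged.to_csv("erms_upload.csv", index=False)

# Anything unmatched needs cleanup before it can be loaded.
unmatched = ils[~ils["match_code"].isin(erms["match_code"])]
print(f"{len(unmatched)} payment rows had no ERMS match")
```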

For the database record match, she chose the Serials Solutions three letter database code, which was then put into an unused variable MARC field. For the journals, she used the SSID from the MARC records Serials Solutions supplies to them.
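If you wanted to script that match-point extraction rather than pulling codes by hand, something like this pymarc sketch could collect them from the records. The tag (935) and subfield (a) are arbitrary stand-ins, not the field she used.

```python
# A sketch using pymarc to collect match points from an unused MARC
# field. The tag (935) and subfield (a) are arbitrary stand-ins; use
# whatever field your records actually carry the code in.
from pymarc import MARCReader

codes = set()
with open("serials_solutions.mrc", "rb") as fh:  # assumed filename
    for record in MARCReader(fh):
        if record is None:  # skip records pymarc could not parse
            continue
        for field in record.get_fields("935"):
            codes.update(field.get_subfields("a"))

print(f"Found {len(codes)} distinct match codes")
```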

Things that you need to decide in advance: How do you handle multiple payments in a single fiscal year (What are you doing currently? Do you need to continue doing it?)? What about resources that share costs? How will you handle one-time vs. ongoing purchase? How will you maintain the integrity of the match point you’ve chosen?
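For the first of those questions, one possible (and purely illustrative) answer is to roll multiple payments up to one row per match point per fiscal year before uploading:

```python
# One illustrative answer to the multiple-payments question: sum
# payments per match point per fiscal year before upload.
import pandas as pd

pay = pd.read_csv("ils_payments.csv")  # assumed: match_code, fiscal_year, amount_paid

rolled_up = (pay.groupby(["match_code", "fiscal_year"], as_index=False)
                ["amount_paid"].sum())
print(rolled_up.head())
```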

The main thing to keep in mind is that you need to document your decisions and processes, particularly for when systems change and CORE or some other standard becomes a reality.
