ER&L 2012: Lightning Talks

Shellharbour; Lightning
photo by Steven

Due to a phone meeting, I spent the first 10 min snarfing down my lunch, so I missed the first presenters.

Jason Price: Libraries spend a lot of time trying to get accurate lists of the things we’re supposed to have access to. Publisher lists are marketing lists, and they don’t always include former titles. Do we even need these lists anymore? Should we be pushing harder to get them? Can we capture the loss from inaccurate access information and use that to make our case? Question: Isn’t it up to the link resolver vendors? No, they rely on the publishers/sources like we do. Question: Don’t you think something is wrong with the market when the publisher is so sure of sales that they don’t have to provide the information we want? Question: Haven’t we already done most of this work in OCLC, shouldn’t we use that?

Todd Carpenter: NISO recently launched the Open Discovery Initiative, which is trying to address the problems with indexed discovery services. How do you know what is being indexed in a discovery service? What do things like relevance ranking mean? What about the relationships between organizations that may impact ranking? The project is ongoing and expect to hear more in the fall (LITA, ALA Midwinter, and beyond).

Title change problem — one presenter uses the xISSN service from OCLC to identify title changes through a Python script. If the data in OCLC isn’t good enough, and librarians are creating it, then how can we expect publishers to do better?
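A minimal sketch of how such a script might work is below. The endpoint, method name, and response fields are assumptions based on the public xID documentation, not the presenter’s actual code, so treat it as illustrative only.

```python
# Hypothetical sketch: check an ISSN's title history via OCLC's xISSN service.
# The URL pattern and JSON structure are assumptions from the public xID docs.
import json
import urllib.request

XISSN = "http://xissn.worldcat.org/webservices/xid/issn/{issn}?method=getHistory&format=json"

def title_history(issn):
    """Return the related-ISSN records (preceding/succeeding titles) xISSN reports."""
    with urllib.request.urlopen(XISSN.format(issn=issn)) as resp:
        data = json.load(resp)
    return [rec for group in data.get("group", []) for rec in group.get("list", [])]

# Example: flag titles whose history includes more than one ISSN.
for issn in ["0028-0836"]:  # replace with ISSNs from your own knowledge base export
    records = title_history(issn)
    if len(records) > 1:
        print(issn, "has former/succeeding titles:", [r.get("title") for r in records])
```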

Dani Roach: Anyone seeing an unusual spike in use for 2011? Have you worked with the vendor about it? Do you expect a resolution? The vendor believes our users are doing group searches across the databases, even though we are sending them to specific databases, so users would have to actively choose to search more than one. She cautions everyone to check their stats. And how is the vendor’s explanation still COUNTER compliant?

Angel Black: She was given a mission at ER&L to find out what everyone is doing with OA journals, particularly those that come with traditional paid packages. They are manually adding links to MARC records and use series fields (830) to keep track of them, but they’re not sure how to handle the OA stuff, particularly when using a single record. Audience suggestion: use the 856 subfield x. “Artisanal, handcrafted serials cataloging.”

Todd Carpenter part 2: How many of you think your patrons are having trouble finding the OA content in a mixed-access journal when it is not exposed/labeled? Knowledge bases only work at the journal or volume/issue level. About 1/3 of the room thinks it is a problem.

Has anyone developed their own local mobile app? Yes, there are ways to do that, but it is more important to create a mobile-friendly website. PhoneGap can wrap a web app in a native app for each mobile OS and add features like location services. Maybe look to include the library in a university-wide app?

Adam Traub: Really into PPV/demand-driven acquisition. Some vendors use an advance purchase model with tokens, some of which expire. He really wants to make it an unmediated process, but that opens the library up to increasing and spiraling costs. They went unmediated for a quarter, and use skyrocketed. What’s a good way to do this without spending a ton of money? CCC’s Get It Now drives PPV usage through the link resolver. Another library uses a note to indicate that the journal is being purchased by the library.

Kristin Martin: Temporarily has two discovery services and isn’t sure how to present them to users. Prime for some usability testing: display results from both side by side and let users “grade” them.

Michael Edwards: Part of a NE consortium, and he thinks they should be able to apply consortial pressure on vendors, but the vendors are basically telling them to take a leap. Are any of the smaller groups having success in pressuring vendors to make concessions for consortial acquisitions? Orbis-Cascade and Connect NY have both been doing good things for ebook pricing and reducing the multiplier for simultaneous users (SU). Do some collection analysis on the joint borrowing/purchasing policies? The selectors will buy what they buy.

nifty enhancement for the A-Z journal tool

Not sure if I’ve mentioned it here, but my library uses SerialsSolutions for our A-Z journal list, OpenURL linking, and ERMS. I’ve been putting a great deal of effort into the ERMS over the past few years, getting license, cost, and use data in so that we can use this tool for both discovery and assessment. Aside from making the page look pretty much like our library website, we haven’t done much to enhance the display.

Recently (as in, yesterday) my colleague Dani Roach over at the University of St. Thomas shared with me an enhancement they implemented using the “public notes” for a journal title. They have icons that indicate whether there is an RSS feed for the contents and whether the journal is peer reviewed (according to Ulrich’s). The icon for the RSS feed is also a link to the feed itself. This is what you see when you search for the Journal of Biological Chemistry, for example.

Much like the work I’m doing to pull together helpful information on the back-end about the resources from a variety of sources, this pulls in information that would be tremendously useful for students and faculty researchers, I think.

However, I have a feeling this would take quite a bit of time to gather up the information and add it to the records. Normally I would leap in with both feet and just do it, but in the effort to be more responsible, I’m going to talk with the reference librarians first. But, I wanted to share this with you all because I think it’s a wonderful libhack that anyone should consider doing, regardless of which ERMS they have.

NASIG 2011: Reporting on Collections

Speakers: Sandy Hurd, Tina Feick, & John Smith

Development begins with internal discussion, a business case, and a plan for how the data will be harvested. And discussion may need to include the vendors who house or supply the data, like your ILS or ERM.

Product development on the vendor side can be prompted by several things, including specific needs, competition, and items in an RFP. When customers ask for reports, they need to determine if it is a one-time thing, something that can be created by enhancing what they already have, or something they aren’t doing yet. There may be standards, but collaborative data is still custom development between two entities, every time.

Have you peeked under the rug? The report is only as good as the data you have. How much cleanup are you willing to do? How can your vendor help? Before creating reports, think about what you have to solve and what you wish you could solve, statistics you need, the time available to generate them, and whether or not you can do it yourself.

There are traditional reporting tools like spreadsheets, and increasingly there are specialized data storage and analysis tools. We are looking at trends, transactional data, and projections, and we need this information on demand and more frequently than in the past. And the data needs to be interoperable. (Dani Roach is quietly shouting, “CORE! CORE!”) Ideally, we would be able to load relevant data from our ERMS, acquisitions modules, and other systems.

One use of the data can be to see who is using what, so properly coded patron records are important. The data can also be essential for justifying the redistribution of resources. People may not like what they hear, but at least you have the data to back it up.

The spreadsheets are not the reports. They are the data.

ER&L: When Two Become Three — adding additional staff to eresource management

Speakers: Carolyn DeLuca, Dani Roach, & Kari Petryszyn

Over the past six years, they have seen an increase in users, use, and eresources, but not in staffing. In fact, they lost staff. This is not unlike most places.

You need to illustrate the staff need story using the data you have already, both for internal and external comparison. Pie charts showing the percentage of staff dedicated to eresources versus the percentage of the budget spent on them can be poignant.

Initially, they pulled in staff from other areas to do bits and pieces, but it was decentralized and not without problems. Some of the tasks were so splintered that no one was seeing the big picture, or taking ownership.

They took the HERMES report and adapted its lifecycle workflow to redesign their own processes locally. That worked until ebooks, which are even more complex and un-standardized.

In order to convey your needs, you must speak your leader’s language. They don’t need to hear about the problems all the time; they need to hear about the solutions. Then, when eresources finally made up 70% of new acquisitions, things began to change. Their leaders had a soundbite.

In the spring of 2010, they lost seven positions across all departments. They had one position coming back, and because they had been pounding the pavement for years, their director decided to follow the money and put it toward adding an eresources staff person. You don’t have to wait for people to leave, though; space issues can also lead to staffing reorganization.

By re-centralizing the team, they are able to focus the work so that they have the right outcomes in mind. Liaisons benefit because there is one more person to troubleshoot eresources issues. Collection development gets additional assistance. Users benefit because the data is kept cleaner and more accurate.

Fit is as critical as job skills. And once that person is hired, you not only need to train them on the tools, but you also need to indoctrinate them with your philosophy so they understand why things are the way they are.

Tools & strategies: online tutorials and webinars — use the stuff that’s out there already. Talk to other library staff in order to get the big picture.

Unexpected outcomes: the effect of new eyes, fresh energy, and enthusiasm. It also freed up other staff to work on different projects.

NASIG 2010: Integrating Usage Statistics into Collection Development Decisions

Presenters: Dani Roach and Linda Hulbert, University of St. Thomas

As with most libraries, they are faced with needing to downsize their purchases in order to fit within reduced budgets, so good tools must be employed to determine which stuff to remove or acquire.

Impact factor statistics mean little to librarians, since the “best” journals may not be appropriate for the programs the library supports. Quantitative data like cost per use, historical trends, and ILL data are more useful for libraries. Combine these with reviews, availability, features, user feedback, and the dust layer on the materials, and then you have some useful information for making decisions.

Usage statistics are just one component that we can use to analyze the value of resources. There are other variables than cost and other methods than cost per use, but these are what we most often apply.

Other variables can include funds/subjects, format, and identifiers like ISSN. Cost needs to be defined locally, as libraries manage them differently for annual subscriptions, multiple payments/funds, one-time archive fees, hosting fees, and single title databases or ebooks. Use is also tricky. A PDF download in a JR1 report is different from a session count in a DB1 report is different from a reshelve count for a bound journal. Local consistency with documentation is best practice for sorting this out.
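Once those local definitions are settled, the arithmetic itself is simple. Here is a minimal sketch, assuming a hypothetical local policy of rolling all payments for the fiscal year into one figure and dividing by COUNTER JR1 full-text downloads; the field names and policy are my illustration, not the presenters’ method.

```python
# Illustrative only: summing all payments in a fiscal year and dividing by JR1
# downloads reflects one possible local policy, not a standard.
def cost_per_use(payments, jr1_downloads):
    """Cost per use = total cost for the period / total JR1 full-text downloads."""
    total_cost = sum(payments)      # multiple payments/funds rolled into one figure
    if jr1_downloads == 0:
        return None                 # avoid dividing by zero; flag the title for review
    return round(total_cost / jr1_downloads, 2)

print(cost_per_use([1500.00, 250.00], 732))   # e.g. subscription plus hosting fee
```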

Library-wide SharePoint service allows them to drop documents with subscription and analysis information into one location for liaisons to use. [We have a shared network folder that I do some of this with — I wonder if SharePoint would be better at managing all of the files?]

For print statistics, they track bound volume use separately from new issue use, scanning barcodes into their ILS to keep a count. [I’m impressed that they have enough print journal use to do that rather than hash marks on a sheet of paper. We had 350 reshelved last year, including ILL use, if I remember correctly.]

Once they have the data, they use what they call a “fairness factor” formula to normalize the various subject areas to determine if materials budgets are fairly allocated across all disciplines and programs. Applying this sort of thing now would likely shock budgets, so they decided to apply new money using the fairness factor, and gradually underfunded areas are being brought into balance without penalizing overfunded areas.
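The presenters did not share the formula itself, but the general idea might look something like the sketch below, which directs new money toward subjects that fall below a locally defined target share without taking anything away from overfunded areas. This is entirely my illustration, not their actual calculation, and the target shares would come from local data.

```python
# Hypothetical illustration of allocating new money toward underfunded subjects.
# Target shares would be derived locally (enrollment, use, cost trends, etc.).
def allocate_new_money(current, target_share, new_money):
    """current: subject -> current allocation; target_share: subject -> fraction of total."""
    total = sum(current.values()) + new_money
    # Shortfall = how far each subject sits below its target; overfunded subjects get zero.
    shortfall = {s: max(target_share[s] * total - current[s], 0) for s in current}
    pool = sum(shortfall.values()) or 1
    return {s: round(new_money * shortfall[s] / pool, 2) for s in current}

print(allocate_new_money({"biology": 40000, "history": 15000},
                         {"biology": 0.6, "history": 0.4}, 5000))
```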

They have stopped trying to achieve a balance between books and periodicals. They’ve left that up to the liaisons to determine what is best for their disciplines and programs.

They don’t hide their cancellation list, and if any of the user community wants to keep something, they’ve been willing to retain it. However, they get few requests to retain content, and they think it is in part because the user community can see the cost, use, and other factors that indicate the value of the resource for the local community.

They have determined that it costs them around $52 a title to manage a print subscription, and over $200 a title to manage an online subscription, mainly because of the level of expertise involved. So, there really are no “free” subscriptions, and if you want to get into the cost of binding/reshelving, you need to factor in the managerial costs of electronic titles, as well.

Future trends and issues: more granularity, more integration of print and online usage, interoperability and migration options for data and systems, continued standards development, and continued development of tools and systems.

Anything worth doing is worth overdoing. You can gather Ulrich’s reports, Eigenfactor scores, relative price indexes, and so much more, but at some point, you have to decide if the return is worth the investment of time and resources.

NASIG 2009: Moving Mountains of Cost Data

Standards for ILS to ERMS to Vendors and Back

Presenter: Dani Roach

Acronyms you need to know for this presentation: National Information Standards Organization (NISO), Cost of Resource Exchange (CORE), and Draft Standard For Trial Use (DSFTU).

CORE was started by Ed Riding from SirsiDynix, Jeff Aipperspach from Serials Solutions, and Ted Koppel from Ex Libris (and now Auto-Graphics). They saw a need to be able to transfer acquisitions data between systems, so they began working on it. After talking with various related parties, they approached NISO in 2008. Once they realized the scope, it went from being just an ILS-to-ERMS transfer to also including data from vendors, agents, consortia, etc., but without duplicating existing standards.

Library input is critical in defining the use cases and the data exchange scenarios. There was also a need for a data dictionary and XML schema in order to make sure everyone involved understood each other. The end result is the NISO CORE DSFTU Z39.95-200x.

CORE could be awesome, but in the meantime, we need a solution. Roach has a few suggestions for what we can do.

Your ILS has a pile of data fields. Your ERMS has a pile of data fields. They don’t exactly overlap. Roach focused on only eight of the elements: title, match point (code), record order number, vendor, fund, what was paid for, amount paid, and something else she can’t remember right now.

She developed Access tables with output from her ILS and templates from her ERMS. She then ran a query to match them up and uploaded the acquisitions data to her ERMS.

For the database record match, she chose the Serials Solutions three letter database code, which was then put into an unused variable MARC field. For the journals, she used the SSID from the MARC records Serials Solutions supplies to them.
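Outside of Access, the same match-and-merge step could be sketched in a few lines of Python. The file names and column layout below are hypothetical, not Roach’s actual tables; the point is simply the join on a stable match point before anything is uploaded.

```python
# Hypothetical sketch of the match step: join ILS payment rows to ERMS template
# rows on a shared match point (e.g., the Serials Solutions database code or SSID
# stored in an otherwise unused MARC field). File and column names are invented.
import csv

def load_rows(path, key):
    """Read a CSV export and index its rows by the match-point column."""
    with open(path, newline="") as f:
        return {row[key]: row for row in csv.DictReader(f)}

ils = load_rows("ils_payments.csv", "match_point")    # title, vendor, fund, amount_paid, ...
erm = load_rows("erms_template.csv", "match_point")   # rows exported from the ERMS template

matched, unmatched = [], []
for key, payment in ils.items():
    if key in erm:
        matched.append({**erm[key], "fund": payment["fund"], "amount_paid": payment["amount_paid"]})
    else:
        unmatched.append(payment)   # review these by hand before uploading anything

if matched:
    with open("erms_upload.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(matched[0].keys()))
        writer.writeheader()
        writer.writerows(matched)
```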

Things that you need to decide in advance: How do you handle multiple payments in a single fiscal year (What are you doing currently? Do you need to continue doing it?)? What about resources that share costs? How will you handle one-time vs. ongoing purchase? How will you maintain the integrity of the match point you’ve chosen?

The main thing to keep in mind is that you need to document your decisions and processes, particularly for when systems change and CORE or some other standard becomes a reality.
