ER&L 2010: Where are we headed? Tools & Technologies for the future

Speakers: Ross Singer & Andrew Nagy

Software as a service saves the institution time and money because the infrastructure is hosted and maintained by someone else. Computing has gone from centralized, mainframe processing to an even mix of personal computers on a networked enterprise, and now back to a very centralized environment with cloud applications and thin clients.

Library resource discovery is, to a certain extent, already in the cloud. We use online databases and open web search, WorldCat, and next gen catalog interfaces. The next gen catalog places the focus on the institution’s resources, but it’s not the complete solution. (People see a search box and they want to run queries on it – it doesn’t matter where it is or what it is.) The next gen catalog only provides access to local resources, and while the interface looks modern, the back end is still old-school library indexing that doesn’t work well with keyword searching.

Web-scale discovery is a one-stop shop that provides increased access, enhances research, and increases ROI for the library. Our users don’t use Google because it’s Google; they use it because it’s simple, easy, and fast.

How do we make our data relevant when administration doesn’t think what we do is as important anymore? Linked data might be one solution. Unfortunately, we don’t do that very well. We are really good at identifying things but bad at linking them.

If every component of a record is given identifiers, it’s possible to generate all sorts of combinations and displays and search results via linking the identifiers together. RDF provides a framework for this.

Also, once we start using common identifiers, then we can pull in data from other sources to increase the richness of our metadata. Mashups FTW!
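
As a rough illustration of what that could look like (a sketch only – the URIs below are placeholders, and rdflib is just one convenient toolkit), each piece of a record becomes an identifier, the relationships become RDF triples, and any shared identifier becomes a hook for pulling in outside data:

```python
# A sketch using rdflib: describe a record's components as linked identifiers
# rather than as strings buried in a single record. The URIs are placeholders.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS, FOAF

g = Graph()
book = URIRef("http://example.org/record/12345")       # placeholder record URI
author = URIRef("http://viaf.org/viaf/0000000000")     # placeholder VIAF-style URI

g.add((book, DCTERMS.title, Literal("Moby Dick")))
g.add((book, DCTERMS.creator, author))                 # a link, not a text string
g.add((author, FOAF.name, Literal("Herman Melville")))

# Because the author node is an identifier other systems can share, external
# data (biographies, other works, cover art) can be merged in on that same URI.
print(g.serialize(format="turtle"))
```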

ER&L 2010: E-book Management – It Sounds Serial!

Speakers: Dani L. Roach & Carolyn DeLuca

How do you define an ebook? How is it different from a print book? From another online resource? Is it like pornography – you know it when you see it? “An electronic equivalent of a distinct print title.” What about regularly updated ebooks? For the purposes of this presentation, an ebook is defined by its content, format, delivery, and fund designation.

Purchase impacts delivery and delivery impacts purchase – we need to know the platform, the publisher, the simultaneous user level, bundle options, pricing options (more than cost – includes release dates, platforms, and licensing), funding options, content, and vendor options (dealing more one-on-one with publishers). We now have multiple purchasing pots and need to budget annually for ebooks – sounds like a serial. Purchasing decisions impact collection development, including selection decisions, duplicate copies, weeding, preferences/impressions, and virtual content that requires new methods of tracking.

After you purchase an ebook bundle, you then have to figure out what you actually have. The publisher doesn’t always know, the license doesn’t always reflect reality, and your ERMS/link resolver may not have the right information, either. Also, the publisher doesn’t always remove the older editions promptly, so you have to ask them to “weed.”

Do you use vendor-supplied MARC records or purchase OCLC record sets? Do you get vendor-neutral records, or multiple records for each source (in which case you will have duplicates)?

Who does what? Is your binding person managing the archival process? Is circulation downloading the ebooks to readers? Is your acquisitions person ordering ebooks, or does your license manager now need to do that? How many times do library staff touch a printed book after it is cataloged and shelved? How about ebooks?

Users are already used to jumping from platform to platform – don’t let that excuse get in the way of purchasing decisions.

Ebooks that are static monographs that are one-time purchases are pretty much like print books. When ebooks become hybrids that incorporate aspects of ejournals and subscription databases, it gets complicated.

Why would a library buy an ebook on its own rather than purchase it in a consortial setting? With print books, you can share them, so shouldn’t we want to do that with ebooks? Yes, but ebooks are still relatively new and we haven’t quite figured out how to do this effectively, and consortial purchases are often too slow for title-by-title purchasing.

ER&L 2010: Developing a methodology for evaluating the cost-effectiveness of journal packages

Speaker: Nisa Bakkalbasi

Journal packages offer capped price increases and access to non-subscribed content, and they are easier to manage than title-by-title subscriptions. But the economic downturn has meant that even the price caps are not enough to sustain the packages.

Her library only seriously considers COUNTER reports, which is handy, since most package publishers provide them. They add to that the publisher’s title-by-title list price, as well as some subject categories and fund codes. Their analysis includes quantitative and qualitative variables using pivot tables.
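
A rough sketch of that kind of pivot-table summary, using pandas; the column names and figures below are invented for illustration:

```python
# Illustrative pivot of COUNTER-style use by subject category and year.
import pandas as pd

df = pd.DataFrame({
    "title":   ["Journal A", "Journal A", "Journal B", "Journal C"],
    "subject": ["Chemistry", "Chemistry", "History", "Chemistry"],
    "year":    [2008, 2009, 2009, 2009],
    "use":     [120, 150, 30, 75],
})

summary = pd.pivot_table(df, values="use", index="subject",
                         columns="year", aggfunc="sum", fill_value=0)
print(summary)
```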

In addition, they look at the pricing/sales model for the package: base value, subscribed/non-subscribed titles, cancellation allowance, price cap/increase, deep discount for print rate, perpetual/post-cancellation access rights, duration of the contract, transfer titles, and third-party titles.

So, the essential question is, are we paying more for the package than for specific titles (perhaps fewer than we currently have) if we dissolved the journal package?

She takes the usage reports for at least the past three years in order to look at trends. She excludes titles that are based on separate pricing models, and also excludes backfile usage if that was a separate purchase (COUNTER JR1a subtracted from JR1 – and you will need to know which years the publisher is calling the backfile). Then she adds list prices for all titles (subscribed & non-subscribed), calculates the cost-per-use of each title, and uses the ILL cost (per the ILL department) as a threshold for possible renewals or cancellations.
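
Here is a minimal sketch of that arithmetic (all figures invented, and the ILL cost is a hypothetical number from the ILL department): subtract JR1a backfile use from JR1, divide list price by the remaining use, and compare the result to the ILL threshold:

```python
# Illustrative sketch of the cost-per-use screening described above.
# Figures are made up; JR1 is total full-text requests, JR1a is backfile requests.
ILL_COST_PER_REQUEST = 25.00  # hypothetical figure from the ILL department

titles = [
    # (title, list_price, jr1_total_use, jr1a_backfile_use)
    ("Journal A", 1200.00, 300, 40),
    ("Journal B", 800.00, 15, 5),
    ("Journal C", 450.00, 0, 0),
]

for name, list_price, jr1, jr1a in titles:
    current_use = jr1 - jr1a  # exclude separately purchased backfile use
    cost_per_use = list_price / current_use if current_use > 0 else float("inf")
    decision = "keep" if cost_per_use <= ILL_COST_PER_REQUEST else "candidate to cancel"
    print(f"{name}: cost/use = {cost_per_use:.2f} -> {decision}")
```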

The final decision depends on the base value paid by the library, the collection budget increase/decrease, price cap, and the quality/consistency of ILL service (money is not everything). This method is only about the costs, and it does not address the value of the resources to the users beyond what they may have looked at. There may be other factors that contributed to non-use.

ER&L 2010: Comparison Complexities – the challenges of automating cost-per-use data management

Speakers: Jesse Koennecke & Bill Kara

We have the use reports, but it’s harder to pull in the acquisitions information because of the systems it lives in and the different subscription/purchase models. Cornell had a cut in staffing and an immediate need to assess their resources, so they began to triage statistics cost/use requests. They are not doing systematic or comprehensive reviews of all usage and cost per use.

In the past, they have tried manual preparation of reports (merging files, adding data), but that’s time-consuming. They’ve had to set up processes to consistently record data from year to year. Some vendor solutions have been partially successful, and they are looking to emerging options as well. Non-publisher data such as link resolver use data and proxy logs might be sufficient for some resources, or for adding a layer to the COUNTER information that could explain some use. All of this has required certain skill sets (databases, spreadsheets, etc.).
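
As an illustration of that proxy-log layer, a rough sketch that tallies proxied requests by resource host – assuming an Apache/EZproxy-style combined log, with the file name made up:

```python
# Rough sketch: count proxied requests per resource host from an
# Apache/EZproxy-style combined log. Log path and format are assumptions.
import re
from collections import Counter
from urllib.parse import urlparse

LOG_LINE = re.compile(r'"(?:GET|POST) (\S+) HTTP/[\d.]+"')

hosts = Counter()
with open("ezproxy.log") as log:  # hypothetical file name
    for line in log:
        match = LOG_LINE.search(line)
        if match:
            host = urlparse(match.group(1)).netloc or "local"
            hosts[host] += 1

for host, count in hosts.most_common(10):
    print(f"{host}\t{count}")
```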

Currently, they are working on managing expectations. They need to define the product that their users (selectors, administrators) can expect on a regular basis, what they can handle on request, and what might need a cost/benefit decision. In order to get accurate time estimates for the work, they looked at 17 of their larger publisher-based accounts (not aggregated collections) to get an idea of patterns and unique issues. As an unfortunate side effect, every time they look at something, they get an idea of even more projects they will need to do.

The matrix they use includes: paid titles v. total titles, differences among publishers/accounts, license period, cancellations/swaps allowed, frontfile/backfile, payment data location (package, title, membership), and use data location and standard. Some of the challenges with usage data include non-COUNTER compliance or no data at all, multiple platforms for the same title, combined subscriptions and/or title changes, titles transferred between publishers, and subscribed content v. purchased content. Cost data depends on the nature of the account and the nature of the package.

For packages, you can divide the single line item by the total use, but that doesn’t help the selectors assess the individual subset of titles relevant to their areas/budgets. This gets more complicated when you have packages and individual titles from a single publisher.
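
A quick sketch of why the simple division falls short (figures invented): apportioning the package cost by a fund’s share of use still leaves every subset with the same blended cost per use, which is not much help to a selector:

```python
# Illustrative: blended cost/use for a package vs. a selector's subset.
# All figures invented.
package_cost = 50_000.00
use_by_title = {
    "Journal A": 4000,   # chemistry fund
    "Journal B": 2500,   # chemistry fund
    "Journal C": 500,    # history fund
}

total_use = sum(use_by_title.values())
blended_cost_per_use = package_cost / total_use
print(f"Package-level cost/use: {blended_cost_per_use:.2f}")

# Apportioning the single line item by use share gives each fund a dollar
# figure, but its cost/use is still just the blended package rate.
chemistry_use = use_by_title["Journal A"] + use_by_title["Journal B"]
chemistry_cost = package_cost * (chemistry_use / total_use)
print(f"Chemistry share: {chemistry_cost:.2f} "
      f"(cost/use still {chemistry_cost / chemistry_use:.2f})")
```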

Future possibilities: better automated matching of cost and use data, with some useful data elements such as multiple cost or price points, and formulas for various subscription models. They would also like to consolidate accounts within a single publisher to reduce confusion. Also, they need more documentation so that it’s not just in the minds of long-term staff. 

ER&L 2010: Step Right Up! Planning, Pitfalls, and Performance of an E-Resources Fair

Speakers: Noelle Marie Egan & Nancy G. Eagan

This got started because they had some vendors come in to demonstrate their resources. Elsevier offered to do a demo for students with food. The library saw that several good resources were being under-used, so they decided to try to put together an eresources demo with Elsevier and others. It was also a good opportunity to get usability feedback about the new website.

They decided to have ten tables in all for the fair. They polled the reference librarians for suggestions on whom to invite, and they ended up with resources that crossed most of the major disciplines at the school. The fair was held in a high-traffic location of the library (so that they could get walk-in participation) and publicized in the student paper and on the library blog, and the librarians shared it on Facebook with student and faculty friends.

They had a raffle to gather information about the participants, and in the end, they had 64 undergraduates, 19 graduates, 6 faculty, 5 staff, and 2 alumni attend the fair over the four hours. By having the users fill out the raffle information, they were able to interact with library staff in a different way that wasn’t just about them coming for information or help.

After the fair, they looked at the sessions and searches of the resources that were represented at the fair, and compared the monthly stats from the previous year. However, there is no way to determine whether the fair had a direct impact on increases (and the few decreases).

In and of itself, the event created publicity for the library. And because it was free (minus staff time), they don’t really need to provide solid evidence of the success (or failure) of the event.

Some of the vendors didn’t take it seriously and showed up late. They thought it was a waste of their time to talk only about the resources the library already purchases rather than pushing new sales, and it’s doubtful those vendors will be invited back. It may be better to schedule the fair around the time of your state library conference, if that happens nearby, so the vendors are already in the area and not making a special trip.

ER&L 2010: We’ve Got Issues! Discovering the right tool for the job

Speaker: Erin Thomas

The speaker is from a digital repository, so the workflow and needs may be different from your situation. Their collections are very old and spread out among several libraries, but are still highly relevant to current research. They have around 15 people involved in the process of maintaining the digital collection, and email got to be too inefficient to handle all of the problems.

The member libraries created the repository because they have content that needed to be shared. They started with the physical collections and broke up the scanning work among the holding libraries, attempting to eliminate duplication. Even so, they had some duplicates, so they run de-duplication algorithms that check the citations. The Internet Archive is actually responsible for doing the scanning, once the library has determined that the quality of the original document is appropriate.

The low-cost model they are using does not produce preservation-level scans; they’re focusing on access. The user interface for a digital collection can be more difficult to browse than the physical collection, so libraries have to do more and different kinds of training and support.

This is great, but it caused more workflow problems than they expected. So, they looked at issue-tracking tools. Their development staff already had access to Gemini, so they went with that.

The issues they receive can be assigned types and specific components for each problem. Some types already existed, and they were able to add more. The components were entirely customized. Tasks are tracked from beginning to end, and they can add notes, have multiple user responses, and look back at the history of related issues.

But they needed a more flexible system: one that let them drill down to sub-issues, choose email v. no email notifications, and offer a better user interface. There were many other options out there, so they did a needs assessment and an environmental scan. They developed a survey to ask the users (library staff) what they wanted, and hosted demos of the options. And, in the end, Gemini was the best system available for what they needed.

ER&L 2010: Adventures at the Article Level

Speaker: Jamene Brooks-Kieffer

Article level, for those familiar with link resolvers, means the best link type to give to users. The article is the object of pursuit, and the library and the user collaborate on identifying it, locating it, and acquiring it.

In 1980, the only good article-level identification was the Medline ID. Users would need to go through a qualified Medline search to track down relevant articles, and the library would need the article-level identifier to make a fast request from another library. Today, the user can search Medline on their own; use OpenURL linking to get to the full text, print, or an ILL request; and obtain the article from the source or via ILL. Unlike in 1980, the user no longer needs to find the journal first to get to the article. Also, the librarian’s role is now more about maintaining relevant metadata so that users have the tools to locate articles themselves.

In thirty years, the library has moved from being a partner with the user in pursuit of the article to being the magician behind the curtain. Our magic is made possible by the technology we know but that our users do not know.

Unique identifiers solve the problem of making sure that you are retrieving the correct article. CrossRef can link to specific instances of items, but not necessarily the one the user has access to. The link resolver will use that DOI to find other instances of the article available to users of the library. Easy user authentication at the point of need is the final key to implementing article-level services.
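
For example, a known DOI can be handed to the campus link resolver as an OpenURL (Z39.88-2004) so the resolver can find an instance the user is licensed to read. A sketch, with the resolver base URL and DOI as placeholders:

```python
# Sketch: sending a known DOI to a library link resolver as an OpenURL
# (Z39.88-2004). The resolver base URL and DOI below are placeholders.
from urllib.parse import urlencode

RESOLVER_BASE = "https://resolver.example.edu/openurl"  # hypothetical

def article_link(doi: str) -> str:
    """Build an OpenURL that asks the resolver for local copies of a DOI."""
    params = {
        "url_ver": "Z39.88-2004",
        "rft_id": f"info:doi/{doi}",
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    }
    return f"{RESOLVER_BASE}?{urlencode(params)}"

print(article_link("10.1000/xyz123"))
```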

One of the library’s biggest roles is facilitating access. It’s not as simple as setting up a link resolver – it must be maintained or the system will break down. Also, document delivery service provides an opportunity to generate goodwill between libraries and users. The next step is supporting the user’s preferred interface, through tools like LibX, Papers, Google Scholar link resolver integration, and mobile devices. The latter is the most difficult, because much of the content comes from outside service providers and institutional support for developing applications or web interfaces is limited.

We also need to consider how we deliver the articles users need. We need to evolve our acquisitions process. We need to be ready for article-level usage data, so we need to stop thinking about it as a single-institutional data problem. Aggregated data will help spot trends. Perhaps we could look at the ebook pay-as-you-use model for article-level acquisitions as well?

PIRUS & PIRUS 2 are projects to develop COUNTER-compliant article usage data for all article-hosting entities (both traditional publishers and institutional repositories). Projects like MESUR will inform these kinds of ventures.

Libraries need to be working on recommendation services. Amazon and Netflix are not flukes. Demand, adopt, and promote recommendation tools like bX or LibraryThing for Libraries.

Users are going beyond locating and acquiring the article to storing, discussing, and synthesizing the information. The library could facilitate that. We need something that lets the user connect with others, store articles, and review recommendations that the system provides. We have the technology (magic) to make it available right now: data storage, cloud applications, targeted recommendations, social networks, and pay-per-download.

How do we get there? Cover the basics of identify>locate>acquire. Demand tools that offer services beyond that, or sponsor the creation of desired tools and services. We also need to stay informed of relevant standards and recommendations.

Publishers will need to be a part of this conversation as well, of course. They need to develop models that allow us to retain access to purchased articles. If we are buying on the article level, what incentive is there to have a journal in the first place?

For tenure and promotion purposes, we need to start looking more at the impact factor of the article, not so much the journal-level impact. PLOS provides individual article metrics.

ER&L 2010: Beyond Log-ons and Downloads – meaningful measures of e-resource use

Speaker: Rachel A. Flemming-May

What is “use”? Is it an event? Something that can be measured (with numbers)? Why does it matter?

We spend a lot of money on these resources, and use is frequently treated as an objective measure for evaluating the value of a resource. But we don’t really understand what use is.

A primitive concept is something that can’t be boiled down to anything smaller – we just know what it is. Use is frequently treated like a primitive concept – we know it when we see it. To measure use we focus on inputs and outputs, but what do those really say about the nature/value of the library?

This gets more complicated with electronic resources that can be accessed remotely. Patrons often don’t understand that they are using library resources when they use them. “I don’t use the library anymore, I get most of what I need from JSTOR.” D’oh.

Funds are based on assessments and outcomes – how do we show that? The money we spend on electronic resources is not going to get any smaller. ROI studies tend to focus on funded research, not on electronic resources as a whole.

Use is not a primitive concept. When we talk about use, it can be an abstract concept that covers all use of library resources (physical and virtual). Our research often doesn’t specify what we are measuring as use.

Use as a process is the total experience of using the library, from asking reference questions to finding a quiet place to work to accessing resources from home. It is the application of library resources/materials to complete a complex/multi-stage process. We can do observational studies of the physical space, but it’s hard to do them for virtual resources.

Most of our research tends to focus on use as a transaction – things that can be recorded and quantified, but that are removed from the user. When we look only at transaction data, we don’t know anything about why the user viewed/downloaded/searched the resource. Because they are easy to quantify, we over-rely on vendor-supplied usage statistics. We think that COUNTER assures some consistency in measures, but there are still many grey areas (e.g., database time-outs inflate session counts).

We need to shift from focusing on isolated instances of downloads and reference desk questions to focusing on the aggregate of the process from the user’s perspective. Stats are only one component of this. This is where public services and technical services need to work together to gain a better understanding of the whole. This will require administrative support.

John Law’s study of undergraduate use of resources is a good example of how we need to approach this. Flemming-May thinks that the findings from that study have generated more progress than previous studies that were focused on more specific aspects of use.

How do we do all of this without invading the privacy of the user? Make sure that your studies are well thought out and pass approval from your institution’s review board.

Transactional data needs to be combined with other information to make it valuable. We can see that a resource is being used or not used, but we need to look deeper to see why and what that means.

As a profession, are we prepared to do the kind of analysis we need to do? Some places are using anthropologists for this. A few LIS programs are requiring a research methods course, but it’s only one class and many don’t get it. This is a great continuing education opportunity for LIS programs.

ER&L 2010: Patron-driven Selection of eBooks – three perspectives on an emerging model of acquisitions

Speaker: Lee Hisle

They have the standard patron-driven acquisitions (PDA) model through Coutts’ MyiLibrary service. What’s slightly different is that they are also working on a pilot program with a three-college consortium with a shared collection of PDA titles. After the second use of a book, they are charged 1.2-1.6% of the list price of the book for a 4-SU, perpetual access license.

Issues with ebooks: fair use is replaced by the license terms and software restrictions; ownership has been replaced by licenses, so if Coutts/MyiLibrary were to go away, they would have to renegotiate with the publishers; there is a need for an archiving solution for ebooks much like Portico for ejournals; ILL is not feasible or permissible; there is potential for exclusive distribution deals; and there are device limitations (computer screens v. ebook readers).

Speaker: Ellen Safley

Her library has been using EBL on Demand. They are only buying 2008-current content within specific subjects/LC classes (history and technology). They purchase on the second view. Because they only purchase a small subset of what they could, the number of records they load fluctuates, but isn’t overwhelming.

After a book has been browsed for more than 10 minutes, the pay-per-view purchase is initiated. After eight months, they found that more people used the books at the pay-per-view level than at the purchase level (i.e. more than once).
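
A toy model of those triggers as summarized here (not the vendor’s actual logic): browsing for ten minutes or less is free, the first longer use incurs a pay-per-view charge, and the second longer use triggers the purchase:

```python
# Toy model of the patron-driven triggers described above; not EBL's real
# logic, just the rules as summarized in the talk.
BROWSE_THRESHOLD_MINUTES = 10

class PDATitle:
    def __init__(self, name: str):
        self.name = name
        self.qualifying_uses = 0
        self.purchased = False

    def record_session(self, minutes_browsed: float) -> str:
        if minutes_browsed <= BROWSE_THRESHOLD_MINUTES:
            return "free browse"
        self.qualifying_uses += 1
        if self.qualifying_uses == 1:
            return "pay-per-view charge"
        if not self.purchased:
            self.purchased = True  # purchase triggered on the second view
            return "purchase"
        return "covered by purchase"

book = PDATitle("Sample monograph")
for minutes in (4, 25, 45, 12):
    print(f"{minutes} min -> {book.record_session(minutes)}")
```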

They’re also a pilot for an Ebrary program. They had to deposit $25,000 for the 6 month pilot, then select from over 100,000 titles. They found that the sciences used the books heavily, but there were also indications that the humanities were popular as well.

The difficulty with this program is an overlap between selector print order requests and PDA purchases. It’s caused a slight modification of their acquisitions flow.

Speaker: Nancy Gibbs

Her library had a pilot with Ebrary. They were cautious about jumping into this, but because it was coming from their approval plan vendor, it was easier to match it up. They culled the title list of 50,000 titles down to 21,408, loaded the records, and enabled them in SFX. But they did not advertise it at all, and they gave no indication on the user end when a book was being purchased.

Within 14 days of starting the project, they had spent all $25,000 of the pilot money. Of the 347 titles purchased, 179 were also owned in print, but those print copies had only 420 circulations. The most popular purchased title is also owned in print and has had only two circulations. The purchases leaned more towards STM, political science, and business/economics, with some humanities.

The library’s technical services staff were a bit overwhelmed by the number of records in the load. The MARC records lacked OCLC numbers, which they would need in the future. They did not remove the records after the trial ended because of other more pressing needs, but that caused frustration for users, and they do not recommend it.

They were surprised by how quickly they went through the money. If they had advertised, she thinks they might have spent the money even faster. The biggest challenge was culling through the list, so in the future, running the list through the approval plan might save some time. They also need better match routines for the title loads, because they ended up buying five books they already had in electronic format from other vendors.

Ebrary needs to refine circulation models to narrow down subject areas. YBP needs to refine some BISAC subjects, as well. Publishers need to communicate better about when books will be made available in electronic format as well as print. The library needs to revise their funding models to handle this sort of purchasing process.

They added the records to their holdings on OCLC so that they would appear in Google Scholar search results. So, even though they couldn’t loan the books through ILL, there is value in adding the holdings.

They attempted to make sure that the books in the list were not textbooks, but there could have been some, and professors might have used some of the books as supplementary course readings.

One area of concern is the potential for compromised accounts, which could result in ebook pirates blowing through funds very quickly. One of the vendors in the room assured us they have safety valves for that in order to protect the publisher content. This has happened, and the vendor reset the download count to remove the fraudulent downloads from the library’s account.

ER&L 2010: ERMS Success – Harvard’s experience implementing and using an ERM system

Speaker: Abigail Bordeaux

Harvard has over 70 libraries, and they are very decentralized. This implementation is for the central office that provides library systems services for all of the libraries. Ex Libris is their primary vendor for library systems, including the ERMS, Verde. They try to go with vended products and only develop in-house solutions if nothing else is available.

Success was defined as migrating data from the old system to the new one, improving workflows and efficiency, increasing transparency for users, and working around any problems they encountered. They did not expect to have an ideal system – there were bugs in both the system and their local data. There is no magic bullet. They identified the high-priority areas and worked towards their goals.

Phase I involved a lot of project planning with clearly defined goals/tasks and assessment of the results. The team included the primary users of the system, the project manager (Bordeaux), and a programmer. A key part of planning includes scoping the project (Bordeaux provided a handout of the questions they considered in this process). They had a very detailed project plan using Microsoft Project, and at the very least, the listing out of the details made the interdependencies more clear.

The next stage of the project involved data review and clean-up. Bordeaux thinks that data clean-up is essential for any ERM implementation or migration. They also had to think about the ways the old ERM was used and if that is desirable for the new system.

The local system they created was very close to the DLF recommended fields, but even so, they still had several failed attempts at mapping the fields between the two systems. As a result, they had a cycle of extracting a small set of records, loading them into Verde, reviewing the data, and then deleting the test records from Verde. They did this several times with small data sets (10 or so records), and when they were comfortable with that, they increased the number of records.

They also did a lot of manual data entry. They were able to transfer a lot, but they couldn’t do everything, and some bits of data were not migrated because the work involved outweighed their value. In some cases, though, they did want to keep the data, so they entered it manually. To visualize the mapping process, they created screenshots with notes showing the field connections.

Prior to this project, they were not using Aleph to manage acquisitions. So, they created order records for the resources they wanted to track. The acquisitions workflow had to be reorganized from the ground up. Oddly enough, by having everything paid out of one system, the individual libraries have much more flexibility in spending and reporting. However, it took some public relations work to get the libraries to see the benefits.

As a result of looking at the data in this project, they got a better idea of gaps and other projects regarding their resources.

Phase two began this past fall, incorporating the data from the libraries that did not participate in phase one. They now have a small group with folks representing those libraries. This group is coming up with best practices for license agreements and for entering data into the fields.
