NASIG 2012: Everyone’s a Player — Creation of Standards in a Fast-Paced Shared World

Speaker: Nettie Lagace, NISO – National Information Standards Organization

NISO is responsible for more of the things we work with every day than you may think, making systems work more seamlessly and getting everyone on the same page. It operates with a staff of five: one who is the public face and cheerleader, one who travels wherever needed, one who makes sure documents are properly edited, and two who handle the technical aspects of the organization, site, committees, and so on.

Topic committees identify needs that become the working groups that tackle the details. Where there is an issue, there’s a working group, with many people involved in each.

New NISO work items consider:

  • What is not working and how it impacts stakeholders.
  • How it relates to existing efforts.
  • Beneficiaries of the deliverables, and how.
  • Scope of the initiative.
  • Encouragement for implementation.

Librarians aren’t competitive in the ways that other industries are, so this kind of work comes more naturally to them. The makeup of each working group is balanced so that no single interest category forms a majority of the membership. Consensus is a must. NISO is also trying to make the open process more visible and accessible to the general public.

Speaker: Marshall Breeding

Library search has evolved quite a bit, from catalog searches that essentially replicated the card catalog process to federated searching to discovery interfaces to consolidated indexes. Libraries are increasingly moving towards these consolidated indexes to cover all aspects of their collections.

There is a need to bring some order to the market chaos this has created. Discovery brings value to library collections, but it brings some uncertainty to publishers. More importantly, uneven participation diminishes the impact, and right now the ecosystem is dominated by private agreements.

What is the right level of investment in tools that provide access to the millions of dollars of content libraries purchase every year? To be effective, these tools need to be comprehensive, so what do we need to do to encourage all of the players to participate and make the playing field fair to all? How do libraries figure out which discovery service is best for them?

The NISO Open Discovery Initiative hopes to bring some order to that chaos, and they plan to have a final draft by May 2013.

Speaker: Regina Reynolds, Library of Congress

From the beginning, ejournals have had many pain points. What brought this to a head was the problem of missing previous titles in online collections. Getting from a citation to the content doesn’t work when the title is different.

There were issues with missing or incorrect numbering, publishing statements, and dates. And then there are the publishers that used the print ISSN for the electronic version. As publishers began digitizing back content, these issues grew exponentially. Guidelines were needed.

After many, many conversations, The Presentation & Identification of E-Journals (PIE-J) was produced and is available for comment until July 5th. The most important part is the three pages of recommended practices.

See also: In Search of Best Practices for Presentation of E-Journals by Regina Romano Reynolds and Cindy Hepfer

ER&L 2012: Lightning Talks

Lightning over Shellharbour (photo by Steven)

Due to a phone meeting, I spent the first 10 min snarfing down my lunch, so I missed the first presenters.

Jason Price: Libraries spend a lot of time trying to get accurate lists of the things we’re supposed to have access to. Publisher lists are marketing lists, and they don’t always include former titles. Do we even need these lists anymore? Should we be pushing harder to get them? Can we capture the loss from inaccurate access information and use that to make our case? Question: Isn’t it up to the link resolver vendors? No, they rely on the publishers/sources like we do. Question: Don’t you think something is wrong with the market when the publisher is so sure of sales that they don’t have to provide the information we want? Question: Haven’t we already done most of this work in OCLC, shouldn’t we use that?

Todd Carpenter: NISO recently launched the Open Discovery Initiative, which is trying to address the problems with indexed discovery services. How do you know what is being indexed in a discovery service? What do things like relevance ranking mean? What about the relationships between organizations that may impact ranking? The project is ongoing; expect to hear more in the fall (LITA, ALA Midwinter, and beyond).

Title change problem — uses the xISSN service from OCLC to identify title changes through a Python script. If the data in OCLC isn’t good enough, and librarians are creating it, then how can we expect publishers to do better?
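The core of that kind of script can be sketched as a walk back through a title’s history. This is a hypothetical sketch: the record shape and field names below are illustrative, not the actual xISSN response format, and a live script would first fetch each record from OCLC.

```python
# Hypothetical sketch: records keyed by ISSN, each with an optional
# "preceding" link to the ISSN of the former title. These field
# names are illustrative, not the real xISSN getHistory format.

def former_titles(records: dict, issn: str) -> list:
    """Follow 'preceding' links back through a title's history,
    returning the chain of former titles, most recent first."""
    titles = []
    seen = set()
    current = records.get(issn, {}).get("preceding")
    while current and current not in seen:
        seen.add(current)
        rec = records.get(current)
        if not rec:
            break
        titles.append(rec["title"])
        current = rec.get("preceding")
    return titles

# Illustrative data: a journal with two former titles.
history = {
    "2222-2222": {"title": "Current Title", "preceding": "1111-1111"},
    "1111-1111": {"title": "Middle Title", "preceding": "0000-0000"},
    "0000-0000": {"title": "Oldest Title"},
}
```

A real script would also need to handle split and merged titles, which a simple linear chain like this cannot represent.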

Dani Roach: Anyone seeing an unusual spike in use for 2011? Have you worked with the vendor about it? Do you expect a resolution? The vendor believes our users are doing group searches across the databases, even though we are sending them to specific databases, so users would need to actively choose to search more than one. She cautioned everyone to check their stats. And how is the vendor’s explanation still COUNTER compliant?
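Checking your stats for this kind of anomaly can be automated. A minimal sketch, with an illustrative data shape and an assumed 2× year-over-year threshold (a flag is a prompt to question the vendor, not proof of a COUNTER problem):

```python
def flag_spikes(usage_by_year, threshold=2.0):
    """Flag databases whose most recent year's use is more than
    `threshold` times the prior year's. The threshold is an
    illustrative assumption, not a COUNTER rule."""
    flagged = []
    for db, years in usage_by_year.items():
        ordered = sorted(years)
        if len(ordered) >= 2:
            prev, last = years[ordered[-2]], years[ordered[-1]]
            if prev and last / prev > threshold:
                flagged.append(db)
    return flagged

# Illustrative COUNTER-style yearly totals, not real vendor data.
usage = {
    "Database A": {2010: 1000, 2011: 5000},
    "Database B": {2010: 1000, 2011: 1100},
}
```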

Angel Black: Was given a mission at ER&L to find out what everyone is doing with OA journals, particularly those that come with traditional paid packages. They are manually adding links to MARC records and using series fields (830) to keep track of them. But they’re not sure how to handle the OA titles, particularly when using a single record. Audience suggestion: use 856 subfield x. “Artisanal, handcrafted serials cataloging.”
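The audience suggestion can be sketched as a hypothetical helper that renders an 856 field with an OA note in subfield $x. The helper name and the display form are assumptions for illustration; real records would use MARC binary or MARCXML via a MARC library.

```python
def make_856(url, note=None):
    """Render a MARC 856 40 field (electronic location) as a
    display string, with an optional note in subfield $x -- e.g.
    to mark an open access title on a single/shared record."""
    field = "856 40 $u " + url
    if note:
        field += " $x " + note
    return field

# Example: an OA journal link on a shared record.
make_856("http://example.org/oa-journal", "Open Access")
# -> "856 40 $u http://example.org/oa-journal $x Open Access"
```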

Todd Carpenter part 2: How many of you think your patrons are having trouble finding the OA in a mixed access journal that is not exposed/labeled? KBs are at the journal or volume/issue level. About 1/3 of the room thinks it is a problem.

Has anyone developed their own local mobile app? Yes, there are good ways to do that, but it’s more important to create a mobile-friendly website. PhoneGap will wrap your web app in a native app for each mobile OS and can include some location services. Maybe look to include the library in a university-wide app?

Adam Traub: Really into PPV/demand-driven acquisition. Some do an advance purchase model with tokens, and some of the tokens will expire. He really wants to make it an unmediated process, but that opens the library up to increasing and spiraling costs. They went unmediated for a quarter, and use skyrocketed. What’s a good way to do this without spending a ton of money? CCC’s Get It Now drives PPV usage through the link resolver. Another approach uses a note to indicate that the journal is being purchased by the library.

Kristin Martin: Temporarily has two discovery services, and they don’t know how to present this to users. Prime for some usability testing: have results from both display side by side and let users “grade” them.

Michael Edwards: Part of a New England consortium, and thinks they should be able to apply consortial pressure on vendors, who are basically telling them to take a leap. Are any of the smaller groups pressuring vendors into making concessions for consortial acquisitions? Orbis Cascade and ConnectNY have both been doing good things for ebook pricing and for reducing the multiplier for simultaneous users. Do some collection analysis on the joint borrowing/purchasing policies? The selectors will buy what they buy.

guest post on ACRLog


A few months ago, Maura Smale contacted me about writing a guest post for ACRLog. I happily obliged, and it has now been published.

When it came time to finally sit down and write about something (anything) that interested me in academic librarianship, I found myself at a loss for words. Last month, I spent some time visiting friends here and there on my way out to California for the Internet Librarian conference, and many of those friends also happened to be academic librarians. It was through those conversations that I found a common thread for the issues that are pushing some of my professional buttons.

Specifically, I see a strong need for the creation, support, and implementation of data standards and tools to provide libraries with the means to effectively evaluate their resources. If that interests you as well, please take a moment to go read the full essay, and leave a comment if you’d like.

ER&L 2010: We’ve Got Data – Now What Do We Do With It? Applying Standards to Assess Information Resources

Speakers: Mary Feeney, Ping Situ, and Jim Martin

They had a budget cut (surprise, surprise), so they had to assess what to cut using the data they had. Complicating this was a change in organizational structure. In addition, they adopted the BYU project management model. They also had to sort out a common approach to assessment across all of the disciplines/resources.

They used ILL data to gather stats about print resource use. They subscribed to ScholarlyStats to gather their online resource stats, and for publishers/vendors not covered there, they gathered data directly from the vendors/publishers. Their process involved creating spreadsheets of resources by type and then dividing up the work of filling in the info. Potential cancellations were then provided to interested parties for feedback.

Quality standards:

  • 60% of monographs need to show at least one use in the last four years – this was used to apply cuts to the firm-order book budget, which impacts the flexibility to make one-time purchases with remaining funds; the book money was shifted to serial/subscription lines
  • 95% of individual journal titles need to show use in the last three years (both in-house and full-text downloads) – LJUR data was used to add to the data collected about print titles
  • dual format subscriptions required a hybrid approach, and they compared the costs with the online-only model – one might think that switching to online only would be a no-brainer, but licensing issues complicate the matter
  • cost per use of ejournal packages will not exceed twice the cost of ILL articles
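The last standard above reduces to simple arithmetic. A sketch, where the $25 ILL cost per article is an illustrative assumption, not a figure from the talk:

```python
def keep_package(package_cost, downloads, ill_cost_per_article=25.0):
    """Apply the cost-per-use standard: keep an ejournal package
    only if its cost per use is at most twice the cost of an ILL
    article. The default ILL cost is an illustrative assumption."""
    if downloads == 0:
        return False
    return package_cost / downloads <= 2 * ill_cost_per_article

keep_package(10000, 500)   # $20 per use vs. a $50 ceiling -> True
keep_package(10000, 100)   # $100 per use -> False
```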

One problem with their approach was with the existing procedures that resulted in not capturing data about all print journals. They also need to include local document delivery requests in future analysis. They need to better integrate the assessment of the use of materials in aggregator databases, particularly since users are inherently lazy and will go the easiest route to the content.

Aggregator databases are difficult to compare, and often the ISSN lists are incomplete. It is also difficult to compare them based on title-by-title holdings coverage; that is useful for long-term comparison, but not for this immediate project. Other problems with aggregator databases include duplication, embargoes, and completeness of coverage of a title. They used SerSol’s overlap analysis tool to get an idea of duplication. It’s a time-consuming process, so they don’t plan to continue it for all of their resources.
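The core of such an overlap check can be sketched with plain set operations. This is a deliberate simplification, not the SerSol tool’s method: it ignores embargoes and coverage dates, which, as noted above, a real analysis must account for.

```python
def overlap(holdings_a, holdings_b):
    """Rough duplication check between two aggregators' title
    lists (sets of ISSNs): returns the shared titles and the
    fraction of list A that also appears in list B. Ignores
    embargoes and coverage depth by design."""
    shared = holdings_a & holdings_b
    fraction = len(shared) / len(holdings_a) if holdings_a else 0.0
    return shared, fraction
```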

What if you don’t have any data or the data you have doesn’t have a quality standard? They relied on subject specialists and other members of the campus to assess the value of those resources.