ER&L 2015 – Did We Forget Something? The Need to Improve Linking at the Core of the Library’s Discovery Strategy

[Image: “Linked” by arbyreed]

Speaker: Eddie Neuwirth, ProQuest

Linking is one of the top complaints of library users, and we’re relying on old tools (OpenURL) to do it. The link resolver menu is not familiar to our users, and many of them don’t know what to do with it. In one 2011 study, 30% of users failed to click the appropriate link in the menu.

ProQuest has tried to improve the 360 Link resolver, focusing on reliability and usability. They use something called index-enhanced direct linking (IEDL) in Summon (essentially publisher data) to bypass OpenURL link resolvers for content from 370 providers. These links are more intuitive and stable than OpenURL links. This is great for Summon, where about 60% of links are IEDL, but discovery happens everywhere.
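For readers who haven’t worked with a link resolver, it may help to see what an OpenURL actually looks like. The sketch below builds an OpenURL 1.0 (Z39.88-2004) key/encoded-value link for a journal article; the resolver base URL and the article metadata are made-up examples, not anything from the talk.

```python
from urllib.parse import urlencode

def build_openurl(resolver_base, atitle, jtitle, issn, volume, spage, doi=None):
    """Build an OpenURL 1.0 (Z39.88-2004) KEV link for a journal article.

    resolver_base is the library's link resolver endpoint
    (a hypothetical URL in the example call below).
    """
    params = {
        "url_ver": "Z39.88-2004",
        "ctx_ver": "Z39.88-2004",
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
        "rft.atitle": atitle,
        "rft.jtitle": jtitle,
        "rft.issn": issn,
        "rft.volume": volume,
        "rft.spage": spage,
    }
    if doi:
        # DOIs travel in OpenURL as an rft_id in the info:doi/ scheme.
        params["rft_id"] = "info:doi/" + doi

    return resolver_base + "?" + urlencode(params)

link = build_openurl(
    "https://resolver.example.edu/360link",  # hypothetical resolver endpoint
    atitle="Did We Forget Something?",
    jtitle="Journal of Library Linking",
    issn="1234-5678",
    volume="42",
    spage="101",
)
```

The resolver parses these citation elements out of the query string and decides, from the library’s holdings knowledge base, where to send the user — which is exactly the step that IEDL short-circuits for covered providers.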

They also created a new sidebar helper frame to replace the old menu. The OpenURL link takes the user to the best option, and the frame then offers a clean view of the other options; it can be collapsed if the user doesn’t need it. It also carries the library branding, so the user can connect their access to the content with the library, rather than just concluding that Google is awesome.

 

Speaker: Jesse Koennecke, Cornell University

They are focusing on the delivery of content as well as the discovery. Brief demo of their side-by-side catalog and discovery search due to nifty API calls (bento box). Another demo of the sidebar helper frame from before, including the built-in problem report form.
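The bento box display described above works by querying each silo separately and presenting the result sets side by side. A minimal sketch of that pattern, with stub search functions standing in for the real catalog and discovery APIs (the endpoints and result shapes are assumptions, not Cornell’s actual implementation):

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-ins for real API calls (e.g. the catalog's search API and the
# discovery service's API); each returns a short list of hits.
def search_catalog(query):
    return [{"title": "A catalog record matching " + query, "source": "catalog"}]

def search_articles(query):
    return [{"title": "An article matching " + query, "source": "articles"}]

def bento_search(query):
    """Query each silo in parallel and return the results grouped by
    compartment, the way a bento-box results page displays them."""
    sections = {"catalog": search_catalog, "articles": search_articles}
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, query) for name, fn in sections.items()}
        return {name: future.result() for name, future in futures.items()}

results = bento_search("open access")
```

Running the searches in parallel keeps the page as fast as the slowest backend rather than the sum of all of them, which is why the API-call approach works well for this layout.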

 

Speaker: Jacquie Samples, Duke University

Samples does the website design for the Duke Libraries, and they’ve done a lot of usability testing. The new website went out in the summer of 2014, and after that they decided to look at their other services, like the link resolver. They came up with some custom designs for those screens but ended up beta testing the new sidebar instead. They have a bento box results page, too.

The FRBR user tasks matter and should be applied to discovery and access, too: find, identify, select, and obtain. We’re talking about obtaining here.

ER&L 2010: Adventures at the Article Level

Speaker: Jamene Brooks-Kieffer

Article level, for those familiar with link resolvers, means the best link type to give to users. The article is the object of pursuit, and the library and the user collaborate on identifying it, locating it, and acquiring it.

In 1980, the only good article-level identification was the Medline ID. Users would need to go through a qualified Medline search to track down relevant articles, and the library would need the article-level identifier to make a fast request from another library. Today, the user can search Medline on their own; use OpenURL linking to get to the full text, print, or ILL request; and obtain the article from the source or ILL. Unlike in 1980, the user no longer needs to find the journal first to get to the article. And the librarian’s role is now more about maintaining relevant metadata, which gives users the tools to locate articles themselves.

In thirty years, the library has moved from being a partner with the user in pursuit of the article to being the magician behind the curtain. Our magic is made possible by the technology we know but that our users do not know.

Unique identifiers solve the problem of making sure that you are retrieving the correct article. CrossRef can link to specific instances of items, but not necessarily the one the user has access to. The link resolver will use that DOI to find other instances of the article available to users of the library. Easy user authentication at the point of need is the final key to implementing article-level services.
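The two-step process described above — a DOI pinning down *which* article, and the link resolver finding *which copy* the library licenses — can be sketched like this. The resolver URL is hypothetical; the global `https://doi.org/` prefix is the standard DOI proxy.

```python
from urllib.parse import urlencode, quote

def doi_links(doi, resolver_base="https://resolver.example.edu/360link"):
    """Given a DOI, return (a) the global DOI link, which lands on the
    publisher's copy whether or not the library licenses it, and (b) a
    link-resolver OpenURL that lets the library route the user to a
    copy they can actually access.  resolver_base is a made-up example.
    """
    # The global resolver: always points at the publisher's instance.
    global_link = "https://doi.org/" + quote(doi, safe="/")

    # The library resolver: same DOI, but the resolver consults the
    # library's holdings to pick an accessible instance.
    openurl = resolver_base + "?" + urlencode({
        "url_ver": "Z39.88-2004",
        "rft_id": "info:doi/" + doi,
    })
    return global_link, openurl

publisher_copy, library_copy = doi_links("10.1000/xyz123")
```

The DOI here is the reserved example prefix from the DOI handbook, not a real article. The point of the split is the one made in the text: CrossRef’s DOI identifies one canonical instance, while the resolver uses that identifier to find the instance the user is entitled to.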

One of the library’s biggest roles is facilitating access. It’s not as simple as setting up a link resolver – it must be maintained or the system will break down. Also, document delivery service provides an opportunity to generate goodwill between libraries and users. The next step is supporting the user’s preferred interface, through tools like LibX, Papers, Google Scholar link resolver integration, and mobile devices. The last is the most difficult because much of the content comes from outside service providers, and institutional support for developing applications or web interfaces is limited.

We also need to consider how we deliver the articles users need. We need to evolve our acquisitions process. We need to be ready for article-level usage data, so we need to stop thinking about it as a single-institution data problem. Aggregated data will help spot trends. Perhaps we could look at the ebook pay-as-you-use model for article-level acquisitions as well?
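Moving beyond single-institution data mostly means merging per-article counts across libraries so that cross-library trends surface. A toy sketch, with invented DOIs and counts (not COUNTER-format data, just the aggregation idea):

```python
from collections import Counter

# Hypothetical per-institution article download counts, keyed by DOI.
institution_a = {"10.1000/alpha": 12, "10.1000/beta": 3}
institution_b = {"10.1000/alpha": 7, "10.1000/gamma": 5}

def aggregate_usage(*institutions):
    """Merge article-level usage from several institutions so that
    cross-library trends (e.g. the most-used articles) become visible."""
    total = Counter()
    for usage in institutions:
        total.update(usage)  # sums counts for DOIs seen at multiple sites
    return total

trends = aggregate_usage(institution_a, institution_b)
top_article = trends.most_common(1)[0]  # highest-use article across libraries
```

This is the shape of analysis that projects like PIRUS (mentioned below) would make possible at scale, by standardizing the per-article counts each host reports.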

PIRUS & PIRUS 2 are projects to develop COUNTER-compliant article usage data for all article-hosting entities (both traditional publishers and institutional repositories). Projects like MESUR will inform these kinds of ventures.

Libraries need to be working on recommendation services. Amazon and Netflix are not flukes. Demand, adopt, and promote recommendation tools like bX or LibraryThing for Libraries.

Users are going beyond locating and acquiring the article to storing, discussing, and synthesizing the information. The library could facilitate that. We need something that lets the user connect with others, store articles, and review recommendations that the system provides. We have the technology (magic) to make it available right now: data storage, cloud applications, targeted recommendations, social networks, and pay-per-download.

How do we get there? Cover the basics of identify>locate>acquire. Demand tools that offer services beyond that, or sponsor the creation of desired tools and services. We also need to stay informed of relevant standards and recommendations.

Publishers will need to be a part of this conversation as well, of course. They need to develop models that allow us to retain access to purchased articles. If we are buying on the article level, what incentive is there to have a journal in the first place?

For tenure and promotion purposes, we need to start looking more at the impact factor of the article, not so much the journal-level impact. PLOS provides individual article metrics.

NASIG 2008: Using Institutional and Library Identifiers to Ensure Access to Electronic Resources

Presenters: Helen Henderson, Don Hamparian, and John Shaw

One of the perpetual problems with online access to journals is that often, something breaks down in the supply chain, and the library discovers that access has disappeared. The presenters seek to offer ideas for preventing this from happening.

Henderson showed a list of 15 transactions that take place in acquiring and maintaining a subscription to a single title. There are plenty of places for a breakdown. Name changes, agent changes, publisher changes, hosting platform changes, price changes, bundle changes, licensing changes, authentication changes, etc.

OCLC’s WorldCat Registry maintains institutional information for libraries, which is populated and augmented by libraries and partners. Libraries can use it to register their OpenURL resolvers, IP addresses, and to share the profile with selected organizations. OCLC uses it to configure WorldCat Local, among other things. Vendors use it as an OpenURL gateway service and to verify customer data.

Ringgold’s Identify database and services normalize institutional information for publishers. It includes consortia membership information and the Anglicized name, as well as many of the data elements in OCLC’s registry. Rather than an OCLC symbol, Ringgold assigns an identifying number to each institution.

Potential interactions between the two identifiers include a mapping between them. The two directories do not have as much overlapping information as you might think.
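In practice such a mapping is a crosswalk table, and the incomplete overlap means lookups can simply miss. A minimal sketch with entirely made-up identifiers:

```python
# Hypothetical crosswalk between the two identifier schemes.  Neither
# directory fully covers the other, so a lookup can come back empty.
oclc_to_ringgold = {
    "ABC": "1234",  # made-up OCLC symbol -> made-up Ringgold ID
    "XYZ": "5678",
}

def crosswalk(oclc_symbol):
    """Map an OCLC symbol to a Ringgold ID, or None when the
    institution appears in only one of the two directories."""
    return oclc_to_ringgold.get(oclc_symbol)
```

The `None` case is the interesting one: it is exactly the gap in overlap the speakers describe, and the reason a shared, non-proprietary standard (mentioned at the end of this session) matters.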

Standards and identifiers are becoming even more important to the supply chain with the transition to electronic publication. Publishers need clean records in order to provide holdings lists to libraries and OpenURL resolvers, among other things. Publishers use services like WorldCat Registry and Identify to improve their data and service, with cost savings that get passed on to subscribers.

ICEDIS is a standard for the exchange of data between publishers and agents. It is old and has been implemented differently by different parties. They are hoping to develop an XML version by 2010, which will include the institutional identifier. ONIX is working on developing automatic holdings reports that will be fed into ERMS.

Project TRANSFER will create a way to exchange subscription information using a unique identifier. KBART is another initiative looking at a portion of the solution. I² (part of NISO) is looking at standardizing metadata using identifiers, beyond just for library resources. CORE is a project in the vendor community working on communicating between the ILS and the ERMS.

Standards will help ease the pain of price agreement between publishers and agents, customer identification, consortia membership and entitlements, and many of the other things that cause the supply chain to break down.

Libraries should include their identifier numbers in orders. The subscription agents are too overwhelmed to implement the kind of change that would require them to look up and add this to every record. Ringgold & OCLC are in communication with NISO to create a standard that is not proprietary.