ER&L 2015 – Link Resolvers and Analytics: Using Analytics Tools to Identify Usage Trends and Access Problems


Speaker: Amelia Mowry, Wayne State University

Setting up Google Analytics on a link resolver:

  1. Create a new account in Analytics and enter your link resolver's base URL, which will generate the tracking ID.
  2. Add the tracking code to the header or footer in the branding portion of the link resolver (a sketch of the snippet follows this list).
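
For reference, here is the standard analytics.js snippet Google supplied in that era; UA-XXXXX-Y is a placeholder for the tracking ID from step 1, and exactly where it goes depends on your resolver's branding options.

    <!-- analytics.js loader: paste into the resolver's header or footer branding -->
    <script>
    (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
    (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
    m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
    })(window,document,'script','//www.google-analytics.com/analytics.js','ga');

    ga('create', 'UA-XXXXX-Y', 'auto');  // your tracking ID from step 1
    ga('send', 'pageview');
    </script>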

Google Analytics was designed for business sites, where more time spent on the site is a good thing; that assumption doesn't hold for library sites. Brief interactions are counted as bounces, which is bad for business, but a long stay on a link resolver page can be a sign of confusion or frustration rather than success.

The base URL serves several different pages the user interacts with, and Google Analytics doesn't distinguish among them by default. This can hide important usage patterns and trends.

Using custom reports, you can tease out some specific pieces of information. This is where you can filter down to specific kinds of pages within the link resolver tool.

You can create views filtered by IP range, which she used to separate use on computers inside the library from use on computers outside it. IP data is not collected by default, so if you want to do this, set it up at the beginning.
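
Analytics won't report raw IP addresses back to you, so one way to capture this in-library/off-site split (a sketch of my own, not necessarily the speaker's setup) is to classify the visitor when the page loads and record the result in a custom dimension, assuming a dimension slot has been defined in the Analytics admin:

    // Hypothetical: the resolver's branding template injects the client IP
    // into the page as window.CLIENT_IP; the IP prefix below is a
    // documentation-range placeholder, not a real library range.
    var clientIp = window.CLIENT_IP || '';
    var group = /^192\.0\.2\./.test(clientIp) ? 'in-library' : 'off-site';
    ga('set', 'dimension1', group);  // must be set before the pageview is sent
    ga('send', 'pageview');          // replaces the plain pageview call above

The dimension can then feed exactly the kinds of custom reports and filtered views she describes.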

To learn where users were coming to the link resolver from, she created another custom report with parameters that included the referring URLs. She also created a custom view filtered on the error parameter “SS_Error”. Some errors came from LibGuides pages, some from the catalog, and some from databases.
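
The same question can be asked programmatically. A hedged sketch against the then-current Core Reporting API v3: count pageviews of pages whose path contains the SS_Error parameter, broken out by referrer. The view ID and OAuth token are placeholders, and '=@' is the API's "contains" filter operator.

    const ACCESS_TOKEN = '...';          // OAuth 2.0 token, placeholder
    const params = new URLSearchParams({
      'ids': 'ga:12345678',              // your Analytics view ID
      'start-date': '2015-01-01',
      'end-date': '2015-03-31',
      'metrics': 'ga:pageviews',
      'dimensions': 'ga:fullReferrer',   // where the failed request came from
      'filters': 'ga:pagePath=@SS_Error',
      'sort': '-ga:pageviews',
    });
    fetch('https://www.googleapis.com/analytics/v3/data/ga?' + params, {
      headers: { 'Authorization': 'Bearer ' + ACCESS_TOKEN },
    })
      .then((r) => r.json())
      .then((data) => console.table(data.rows));  // [referrer, pageviews] pairs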

Ask specific and relevant questions of your data. Apply filters carefully and logically. Your data is a starting point to improving your service.

Google Analytics (3rd edition) by Ledford, Tyler, and Teixeira (Wiley) is a good resource, though it is business focused.

ER&L 2015 – Did We Forget Something? The Need to Improve Linking at the Core of the Library’s Discovery Strategy


Speaker: Eddie Neuwirth, ProQuest

Linking is one of the top complaints of library users, and we're relying on old tools like OpenURL to do it. The link resolver menu is not familiar to our users, and many of them don't know what to do with it; in one 2011 study, 30% of users failed to click the appropriate link in the menu.
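
For readers who haven't peeked under the hood: an OpenURL is just a resolver's base URL with the citation spelled out as key/value pairs. A representative article-level OpenURL in the 1.0 KEV format, with an illustrative resolver hostname and citation:

    http://resolver.example.edu/openurl?url_ver=Z39.88-2004
        &rft_val_fmt=info:ofi/fmt:kev:mtx:journal
        &rft.genre=article
        &rft.jtitle=Journal+of+Biological+Chemistry
        &rft.atitle=An+Illustrative+Article+Title
        &rft.issn=0021-9258&rft.volume=280&rft.issue=4&rft.spage=2402
        &rfr_id=info:sid/summon.serialssolutions.com

The resolver parses those rft. fields and builds the menu of access options.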

ProQuest tried to improve the 360 Link resolver, focusing on reliability and usability. They use index-enhanced direct linking (IEDL) in Summon, links built from publisher data for about 370 providers, which bypasses the OpenURL link resolver entirely. These links are more intuitive and stable than OpenURL links. This is great for Summon, where about 60% of links are now IEDL, but discovery happens everywhere.

They also created a new sidebar helper frame to replace the old menu. The OpenURL takes the user to the best option, and the frame offers a clean view of other options and can be collapsed if not needed. It also carries the library's branding, so the user can connect their access to the content with the library, rather than just assuming that Google is awesome.

 

Speaker: Jesse Koennecke, Cornell University

They are focusing on the delivery of content as well as the discovery. He gave a brief demo of their side-by-side catalog and discovery search, powered by some nifty API calls (a bento box layout), and another demo of the sidebar helper frame from before, including its built-in problem report form.

 

Speaker: Jacquie Samples, Duke University

She does the website design for the Duke Libraries, and they've done a lot of usability testing. The new website went out in the summer of 2014, and after that they decided to look at their other services, like the link resolver. They came up with some custom designs for those screens, but ended up beta testing the new sidebar instead. They have a bento box results page, too.

The FRBR user tasks matter and should be applied to discovery and access, too: find, identify, select, and obtain. We’re talking about obtaining here.

ER&L 2015 – Monday Short Talks: ERM topics


[I missed the first talk due to a slightly longer lunch than had been planned.]

Better Linking by Our Bootstraps
Speaker: Aron Wolf, ProQuest

He is a librarian trained as a cataloger.

Error reports are important: for each one filed, there are probably ten instances that go unreported. Report early and report often.

Include the original OpenURL query so the problem can be reproduced. If you have the time, play around with the string data and see if you can “fix” it yourself, and report what you find.
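
A hedged sketch of that kind of tinkering, with a made-up resolver and a common failure (a mangled journal title): parse the query string, inspect each field, adjust the suspect one, and retest the link.

    // Illustrative only; the resolver hostname and the "fix" are made up.
    const failing = 'http://resolver.example.edu/openurl?url_ver=Z39.88-2004'
      + '&rft.genre=article&rft.jtitle=J+Biol+Chem'
      + '&rft.issn=0021-9258&rft.volume=280&rft.spage=2402';
    const url = new URL(failing);
    for (const [key, value] of url.searchParams) {
      console.log(key, '=', value);  // eyeball each field for damage
    }
    url.searchParams.set('rft.jtitle', 'Journal of Biological Chemistry');
    console.log(url.toString());     // paste into a browser to retest, then report both versions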

A lot of factors go into how long it will take to fix whatever is causing an OpenURL error, so they don't want to raise false expectations by committing to a date and time.

Once an error has been reported, it enters a triage system. If it has a broader impact, it will be prioritized higher. Then it’s assigned to someone to fix.

 

Trouble Ticket Systems: Help or Hindrance?
Speaker: Margaret Hogarth, The Claremont Colleges Library

We should be polite and helpful. Human.

Detail the issue as specifically as possible, with steps, equipment, screenshots, etc. Include an account number or other identifier.

Vendors need to identify themselves in responses. They also need to include the issue in responses, particularly when the message trail gets long. Customers need to keep track of the trouble tickets they have submitted.

Respond promptly, even if it will take longer to resolve. Mine the trouble ticket data to create FAQ, known issues, etc. and add meaningful metadata.

Email is good for tracking the history. Online forms should have an email sent with the ticket detail and number. Some vendors hide their support email address, which is annoying.

If vendors require authentication to submit a ticket, they should provide examples of what information they are looking for.

Vendors should ask their most frequent support users for feedback on what would make their sites more useful.

Multiple tech support channels make reporting issues to large companies challenging.

Jing screencasting is helpful for showing how to reproduce a problem; because it provides a URL, it works even when you can't attach a screenshot or video to the ticket.

All of this is useful for your internal support ticketing systems, too.

ER&L 2015 – Understanding Your Users: Using Google Analytics and Forms


Speakers: Jaclyn Bedoya & Michael DeMars, CSU Fullerton

There are some challenges to surveying students, including privacy, IRB requirements, and survey fatigue. Don’t collect data for the sake of collecting data. Make sure it is asking what you think it is asking to get results that are worth measuring.

Google Analytics is free, relatively easy to use, and easy to install. And it’s free. We’re being asked to assess, but not being given a budget to do so.

It’s really good about measuring the when and where, but not the why. Is it that you don’t see Chrome users because nobody is using Chrome, or is it that your website is broken for Chrome users?

If people are hanging out on your library pages for too long, then maybe you need to redesign them. We want them heading out quickly to the resources we’re linking to.

They’ve made decisions about whether to spend time on making sites compatible with browser versions based on how much traffic is coming from them. They’ve determined that mobile use is increasing, so they are desigining for that now.

They were able to use click data to eliminate little-used webpages and tools in the redesign, and traffic data to determine how much server support was needed on the weekends.

Google Forms are free and they can be used to find out things about users that Analytics can’t tell you. They can be embedded into things like LibGuides. There’s a “view summary responses” option that creates pie charts and fancy things for your boss.

They asked who the users are (discipline), how often they use the library, where they use it, and what they think of the library's services. There were gift card incentives (including ones for In-N-Out Burger). The free-text section yielded a lot of great content.

The speakers spent some time on the survey data, but the sum total is that it matched their expectations, but now they had data to prove it.

#libday8 day 5 — queasy

After a late start due to some unexpected things-that-must-be-done-now, I arrived and began to dig into the action items delayed from yesterday. This included responding to OCLC with information about a billing error, filing my notes from the discovery service presentation, and following up on a related query from a colleague.

Added a new ejournal to our knowledgebase, but the default URL is different from what the publisher gave me. Added the custom URL and a note in Outlook to check on Monday to make sure the OpenURL linking works with the custom URL. Our KB provider does nightly refreshes of profile changes, so things we do behind the scenes aren’t live until the next day.

Unlike most of the reference librarians, one in particular refuses to provide me with descriptions and coverage details for the resources that go with their links on our website and LibGuides. I end up searching for descriptions on other library sites, and usually find something that will work. I added two resources for this librarian today, and rather than a simple copy/paste from the email generated by the form that every other librarian is able to send me, I spent about 20 min digging around for the information I needed. If it's wrong, the only people who will care are the users, since I doubt this librarian even checks this stuff. That much was evident when the librarian asked me to add three other resources that are already listed on the website and in LibGuides.

I tried to keep slogging through, but the waves of nausea I'd been ignoring all morning were becoming harder to ignore. I decided it would be better to ride them out at home than to try to work and perhaps stay too long. Good thing I did, because the next 12 hours were very unpleasant, and the 12 after that only somewhat less so. I'm posting this a day late, and I'm finally starting to feel human again.

So, for library day in the life round eight, I’m signing out with a whimper.

libday7: day 3

The day began with sorting through the new email messages that arrived since yesterday, flagging actionable items with due dates, responding to those that could be done quickly, and deleting the irrelevant stuff.

Then I began to work my way through the to-do list, starting with verifying which ebook publisher licenses we have set up in GOBI and if any others need to be added. I tried to do this yesterday, but my login wouldn’t work. But, now that I’m in, I think I need admin rights to see them, so once again it’s on hold.

getting over the afternoon slump

Being thwarted in that, I dug back into an ongoing summer project: adding holdings years and correcting holdings errors for print journals in our OpenURL knowledgebase. I was lucky to have a floater assigned to me long enough to get the physical inventory done, and now it's a matter of checking on anomalies (physical holdings but no catalog record, a catalog record but no physical holdings, and neither physical holdings nor a catalog record but still listed in the KB) and entering the holdings years into a spreadsheet that gets uploaded to the KB. I'm also adding location information, since we currently house print journals in four locations on campus, as well as notes about shelved-as titles.

Since I was on a roll with this project today after nearly a week of being distracted by other tasks, I decided to stick with it after lunch. I'm at 55% complete, and I'm hoping to have it done by mid-August, which will require a bit more diligence than I've given it over the past couple of months.

I had a brief afternoon interlude with Reese’s Peanut Butter Cups and a can of Coke Zero. Ahh…

I also paused to help a friend who is tech support at a medical non-profit in town. She was trying out their new remote desktop support service, so I let her take over my computer for a brief moment. Hope that was kosher with campus IS, but I figured it was for a good cause, and librarianly of me to aid in someone’s information needs.

Then it was back to the spreadsheets and the data and the ZOMG WILL THIS EVER END.

Hit a stopping point and decided to use the last 15 min of my day to wrap this post up and catch up on some professional reading.

nifty enhancement for the A-Z journal tool

Not sure if I’ve mentioned it here, but my library uses SerialsSolutions for our A-Z journal list, OpenURL linking, and ERMS. I’ve been putting a great deal of effort into the ERMS over the past few years, getting license, cost, and use data in so that we can use this tool for both discovery and assessment. Aside from making the page look pretty much like our library website, we haven’t done much to enhance the display.

Recently (as in, yesterday) my colleague Dani Roach over at the University of St. Thomas shared with me an enhancement they implemented using the “public notes” for a journal title. They have icons that indicate whether there is an RSS feed for the contents and whether the journal is peer reviewed (according to Ulrich's). The icon for the RSS feed is also a link to the feed itself. This is what you see when you search for the Journal of Biological Chemistry, for example.

Much like the work I’m doing to pull together helpful information on the back-end about the resources from a variety of sources, this pulls in information that would be tremendously useful for students and faculty researchers, I think.

However, I have a feeling this would take quite a bit of time to gather up the information and add it to the records. Normally I would leap in with both feet and just do it, but in the effort to be more responsible, I’m going to talk with the reference librarians first. But, I wanted to share this with you all because I think it’s a wonderful libhack that anyone should consider doing, regardless of which ERMS they have.

ER&L 2010: Adventures at the Article Level

Speaker: Jamene Brooks-Kieffer

Article level, for those familiar with link resolvers, means the best link type to give to users. The article is the object of pursuit, and the library and the user collaborate on identifying it, locating it, and acquiring it.

In 1980, the only good article-level identification was the Medline ID. Users would need to go through a qualified Medline search to track down relevant articles, and the library would need the article-level identifier to make a fast request from another library. Today, users can search Medline on their own; use OpenURL linking to get to the full text, print, or an ILL request; and obtain the article from the source or via ILL. Unlike in 1980, the user no longer needs to find the journal first to get to the article, and the librarian's role is more about maintaining relevant metadata to give users the tools to locate articles themselves.

In thirty years, the library has moved from being a partner with the user in pursuit of the article to being the magician behind the curtain. Our magic is made possible by the technology we know but that our users do not know.

Unique identifiers solve the problem of making sure that you are retrieving the correct article. CrossRef can use a DOI to link to a specific instance of an item, but not necessarily the one the user has access to; the link resolver can use that same DOI to find other instances of the article available to the library's users. Easy user authentication at the point of need is the final key to implementing article-level services.
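
A minimal sketch of that flow, using today's CrossRef REST API (which postdates this talk; the resolver hostname and DOI are placeholders): look up the DOI's metadata, then hand the DOI to the local resolver so it can find a copy the user is entitled to.

    const doi = '10.1000/xyz123';  // placeholder DOI
    fetch('https://api.crossref.org/works/' + encodeURIComponent(doi))
      .then((r) => r.json())
      .then((data) => {
        console.log(data.message.title);  // confirm it's the right article
        // The resolver, not CrossRef, knows which instance this library can access:
        const openurl = 'http://resolver.example.edu/openurl?url_ver=Z39.88-2004'
          + '&rft_id=' + encodeURIComponent('info:doi/' + doi);
        console.log(openurl);
      });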

One of the library's biggest roles is facilitating access. It's not as simple as setting up a link resolver; the system must be maintained or it will break down. Document delivery service also provides an opportunity to generate goodwill between libraries and users. The next step is supporting the user's preferred interface, through tools like LibX, Papers, Google Scholar link resolver integration, and mobile devices. The last is the most difficult, because much of the content comes from outside service providers, and institutional support for developing applications or web interfaces is limited.

We also need to consider how we deliver the articles users need. We need to evolve our acquisitions process. We need to be ready for article-level usage data, so we need to stop thinking about it as a single-institutional data problem. Aggregated data will help spot trends. Perhaps we could look at the ebook pay-as-you-use model for article-level acquisitions as well?

PIRUS & PIRUS 2 are projects to develop COUNTER-compliant article usage data for all article-hosting entities (both traditional publishers and institutional repositories). Projects like MESUR will inform these kinds of ventures.

Libraries need to be working on recommendation services. Amazon and Netflix are not flukes. Demand, adopt, and promote recommendation tools like bX or LibraryThing for Libraries.

Users are going beyond locating and acquiring the article to storing, discussing, and synthesizing the information. The library could facilitate that. We need something that lets the user connect with others, store articles, and review recommendations that the system provides. We have the technology (magic) to make it available right now: data storage, cloud applications, targeted recommendations, social networks, and pay-per-download.

How do we get there? Cover the basics of identify>locate>acquire. Demand tools that offer services beyond that, or sponsor the creation of desired tools and services. We also need to stay informed of relevant standards and recommendations.

Publishers will need to be a part of this conversation as well, of course. They need to develop models that allow us to retain access to purchased articles. If we are buying on the article level, what incentive is there to have a journal in the first place?

For tenure and promotion purposes, we need to start looking more at the impact factor of the article, not so much the journal-level impact. PLOS provides individual article metrics.

NASIG 2008: Using Institutional and Library Identifiers to Ensure Access to Electronic Resources

Presenters: Helen Henderson, Don Hamparian, and John Shaw

One of the perpetual problems with online access to journals is that something often breaks down in the supply chain, and the library discovers that access has disappeared. The presenters offered ideas for preventing this from happening.

Henderson showed a list of 15 transactions that take place in acquiring and maintaining a subscription to a single title. There are plenty of places for a breakdown. Name changes, agent changes, publisher changes, hosting platform changes, price changes, bundle changes, licensing changes, authentication changes, etc.

OCLC’s WorldCat Registry maintains institutional information for libraries, which is populated and augmented by libraries and partners. Libraries can use it to register their OpenURL resolvers, IP addresses, and to share the profile with selected organizations. OCLC uses it to configure WorldCat Local, among other things. Vendors use it as an OpenURL gateway service and to verify customer data.

Ringgold’s Identify database and services normalizes institutional information for publishers. It includes consortia membership information and the Anglicized name, as well as many of the data elements in OCLC’s registry. Rather than OCLC symbol, they have an identifying number for each institution.

Potential interactions between the two identifiers include a mapping between them. The two directories do not have as much overlapping information as you might think.

Standards and identifiers are becoming even more important to the supply chain with the transition to electronic publication. Publishers need clean records in order to provide holdings lists to libraries and OpenURL resolvers, among other things. Publishers use services like WorldCat Registry and Identify to improve their data and service, with cost savings that get passed on to subscribers.

ICEDIS is a standard for the exchange of data between publishers and agents. It is old and has been implemented inconsistently; they are hoping to develop an XML version by 2010, which will include the institutional identifier. ONIX is working on automatic holdings reports that can be fed into an ERMS.

Project TRANSFER will create a way to exchange subscription information using a unique identifier. KBART is another initiative looking at a portion of the solution. I² (part of NISO) is looking at standardizing metadata using identifiers beyond just library resources. CORE is a project in the vendor community working on communication between the ILS and the ERMS.
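
KBART title lists, as the recommended practice eventually settled, are tab-delimited text files with a standard header row of column names (publication_title, print_identifier, online_identifier, coverage dates, title_url, and so on). A minimal sketch of parsing one, assuming a local file with a made-up name:

    // Node.js sketch: parse a KBART title list into one record per row.
    const fs = require('fs');
    const [header, ...rows] = fs.readFileSync('holdings.txt', 'utf8').trim().split('\n');
    const cols = header.split('\t');
    const titles = rows.map((line) => {
      const fields = line.split('\t');
      const rec = {};
      cols.forEach((c, i) => { rec[c] = fields[i] || ''; });
      return rec;
    });
    console.log(titles[0].publication_title, titles[0].date_first_issue_online);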

Standards will help ease the pain of price agreement between publishers and agents, customer identification, consortia membership and entitlements, and many of the other things that cause the supply chain to break down.

Libraries should include their identifier numbers in orders; the subscription agents are too overwhelmed to implement the kind of change that would require them to look up and add this to every record. Ringgold and OCLC are in communication with NISO to create a standard that is not proprietary.

NASIG 2008: When Did eBooks Become Serials?

Presenters: Kim Armstrong, Bob Nardini, Peter McCracken, and Rick Lugg

Because this is a serials conference, Lugg provided us with a title change and enumeration to differentiate this presentation from the repeat in the afternoon. Serialists (& librarians in general) love corny inside jokes.

eBook users want to use the work: to browse, to search, and to have the institution subscribe to it for them. Much of this is due to the success and model of ejournals.

eJournals have brought about many changes in information provision. More content is now available to users, and they are increasingly using it more. However, archive and access issues have not been fully addressed, nor have possible solutions thoroughly tested. In addition, ejournals (and other subscription items) have taken over more and more of the materials budget, which has necessitated greater selection. And, in many ways, ejournals are more labor-intensive than print.

Subscription has become one of the most successful models for ebook providers. There are some emerging models in addition to subscription or purchase. EBL, for example, offers short-term rental options.

There are many more titles, and therefore more decisions, involved in purchasing ebooks as opposed to journals. The content also isn't as well advertised through abstracting and indexing sources, since a book is one large thing rather than millions of little things aggregated together under one title, the way a journal is.

Acquisition of ebooks presents its own unique challenges, from the variety of sources to the mechanisms of selection. Is the content static or dynamic? One-time purchases or ongoing commitments? And there is what libraries say versus what they do: we say we can't buy more subscriptions, yet we continue to do so.


Library/Consortial View

For at least ten years, librarians have been trying to figure out what to do with ebooks: whether to purchase them, and how to go about doing so.

The Committee on Institutional Cooperation coordinated a deal with Springer and MyiLibrary to purchase Springer’s entire ebook collection from 2005-2010. Access went live in January 2008, and over the first five months of 2008, ebook use on the Springer platform was nearly half that of the ejournal use, even without catalog records or promoting it. On the other hand, the MyiLibrary use was a quarter or less, partially due to MyiLibrary’s lack of OpenURL support.

We need to make sure that we stay relevant to our users' needs and do not become just a place to store their archival literature.


eBookseller View

Back in the day, the hot topic in the monographic world was approval plans. Eventually they figured that out, and book acquisition became routine. Now we have serials-like problems for both booksellers and book buyers.

Approval plans had a seriality to them, but we haven’t come up with something similar for ebooks. Billing and inventory systems for booksellers are set up for individual book sales, not subscriptions.

The vendor/aggregator is challenged with incorporating content from a variety of publisher sources, each with their own unique quirks. Bibliographic control can take the route of treating ebooks like print books, but we’re in a time of change that may necessitate a different solution.

Maybe the panel should have been called, “When will ebooks and serials become one big database?”

eResource Access & Management View

On the management side, ebooks and ejournals really aren't all that different. The metadata, however, is exponentially larger when dealing with ebooks versus ejournals, and the accepted bibliographic standards are higher for books than for journals.

How do we handle various editions? Do we go with the LibraryThing model of collecting all editions under one work record?
