ER&L 2013: Lightning Talks

“¡Rayos!” (“Lightning!”), photo by José Eugenio Gómez Rodríguez

Speaker: Emily Guhde, NCLIVE
“We’ve Got Your Number: Making Usage Data Matter” is the project they are working on. What is a good target cost per use for their member libraries? They are organizing this by peer groups. How can the member libraries improve usage? They are hoping that other libraries will be able to replicate this in the future.

Speaker: Francis Kayiwa, UIC
He is a server administrator with library training, and wanted to be here to understand what his folks are asking him to do when they come back from conferences like this. He also suggested cross-pollinating conferences by trying to integrate with other kinds of conferences happening nearby.

Speaker: Annette Bailey, Virginia Tech
Co-developed LibX with her husband, now working on a new project to visualize what users are clicking on after they get a search result in Summon. This is a live, real-time visualization, pulled from the Summon API.

Speaker: Angie Rathnel, University of Kansas
They have been using a SaaS product called Callisto to track and claim eresources. It tracks access to entitlements daily/weekly, and can check to make sure proxy configurations are set up correctly.

Speaker: Cindy Boeke, Southern Methodist University
Why aren’t digital library collections included with other library eresources on lists and such (like the ubiquitous databases A-Z page)?

Speaker: Rick Burke, SCELC
SIPX to manage copyright in a consortial environment. Something something users buying access to stuff we already own. I’m guessing this is more for off-campus access?

Speaker: Margy Avery, MIT Press
Thinking about rich/enhanced digital publications. Want to work with libraries to make this happen, and preservation is a big issue. How do we catalog/classify this kind of resource?

Speaker: Jason Price, Claremont Colleges
Disgruntled with OpenURL and the dependency on our KB for article-level access. It is challenging to keep our lists (KBs) updated and accurate — there has to be a better way. We need to be working with the disgruntlerati who are creating startups to address this problem. Pubget was one of the first, and since then there is Dublin Six, Readcube, SIPX, and Callisto. If you get excited about these things, contact the startups and tell them.

Speaker: Wilhelmina Ranke, St. Mary’s University
Collecting mostly born-digital collections, or at least collections that are digitized already, in the repository: student newspaper, video projects, and items digitized for classroom use that have no copyright restrictions. It doesn’t save time on indexing, but it does save time on digitizing.

Speaker: Bonnie Tijerina, Harvard
The #ideadrop house was created to be a space for librar* to come together to talk about librar* stuff. They had a little free library box for physical books, and also a collection of wireless boxes with free digital content anyone could download. They streamed conversations from the living room 5-7 times a day.

Speaker: Rachel Frick
Digital Public Library of America focuses on content that is free to all to create a more informed citizenry. They want to go beyond just being a portal for content. They want to be a platform for community involvement and conversations.

ER&L 2012: Lightning Talks

“Shellharbour; Lightening,” photo by Steven

Due to a phone meeting, I spent the first 10 min snarfing down my lunch, so I missed the first presenters.

Jason Price: Libraries spend a lot of time trying to get accurate lists of the things we’re supposed to have access to. Publisher lists are marketing lists, and they don’t always include former titles. Do we even need these lists anymore? Should we be pushing harder to get them? Can we capture the loss from inaccurate access information and use that to make our case? Question: Isn’t it up to the link resolver vendors? No, they rely on the publishers/sources like we do. Question: Don’t you think something is wrong with the market when the publisher is so sure of sales that they don’t have to provide the information we want? Question: Haven’t we already done most of this work in OCLC, shouldn’t we use that?

Todd Carpenter: NISO recently launched the Open Discovery Initiative, which is trying to address the problems with indexed discovery services. How do you know what is being indexed in a discovery service? What do things like relevance ranking mean? What about the relationships between organizations that may impact ranking? The project is ongoing and expect to hear more in the fall (LITA, ALA Midwinter, and beyond).

Title change problem — uses the xISSN service from OCLC to identify title changes through a Python script. If the data in OCLC isn’t good enough, and librarians are creating it, then how can we expect publishers to do better?
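As an aside, here is a minimal sketch of what such a title-change check might look like in Python. The xISSN endpoint, query parameters, and JSON response shape below are my assumptions about the xID service, not something confirmed in the talk, so treat it as an illustration rather than a recipe.

```python
# Sketch: flag possible title changes for a list of ISSNs using OCLC's
# xISSN service. The endpoint URL and JSON response shape are assumptions;
# check the service documentation before relying on this.
import requests

XISSN_URL = "http://xissn.worldcat.org/webservices/xid/issn/{issn}"  # assumed endpoint

def title_history(issn):
    """Fetch the related-ISSN groups reported for a single ISSN."""
    resp = requests.get(
        XISSN_URL.format(issn=issn),
        params={"method": "getHistory", "format": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("group", [])

def report_title_changes(issns):
    """Print any preceding/succeeding titles, which signal a title change."""
    for issn in issns:
        for group in title_history(issn):
            rel = group.get("rel", "")
            if rel in ("preceding", "succeeding"):
                for entry in group.get("list", []):
                    print(f"{issn}: {rel} title {entry.get('title')} "
                          f"({entry.get('issn')})")

if __name__ == "__main__":
    report_title_changes(["0028-0836", "0036-8075"])  # sample ISSNs
```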

Dani Roach: Anyone seeing an unusual spike in use for 2011? Have you worked with the vendor about it? Do you expect a resolution? The vendor believes our users are doing group searches across the databases, even though we are sending them to specific databases, so users would need to actively choose to search more than one. She cautions everyone to check their stats. And how is the vendor’s explanation still COUNTER-compliant?

Angel Black: Was given a mission at ER&L to find out what everyone is doing with OA journals, particularly those that come with traditional paid packages. They are manually adding links to MARC records, and use series fields (830) to keep track of them. But they’re not sure how to handle the OA stuff, particularly when using a single record. Audience suggestion: use an 856 subfield x note. “Artisanal, handcrafted serials cataloging.”
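If the records are already in hand, that 856 $x suggestion is easy to script. Here is a minimal sketch using pymarc; the file names, URL, and note text are placeholders, and it uses the older flat subfield-list style (newer pymarc releases expect Subfield objects):

```python
# Sketch: add an 856 with a $x note flagging open-access content to every
# record in a file of serial records. File names, URL, and note text are
# placeholders; uses pymarc's older flat subfield-list syntax.
from pymarc import MARCReader, Field

with open("serials.mrc", "rb") as infile, open("serials_oa.mrc", "wb") as outfile:
    for record in MARCReader(infile):
        if record is None:  # skip records pymarc could not parse
            continue
        record.add_field(
            Field(
                tag="856",
                indicators=["4", "0"],
                subfields=[
                    "u", "https://example.org/oa-journal",
                    "x", "Open access title; not part of a paid package",
                ],
            )
        )
        outfile.write(record.as_marc())
```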

Todd Carpenter part 2: How many of you think your patrons are having trouble finding the OA in a mixed access journal that is not exposed/labeled? KBs are at the journal or volume/issue level. About 1/3 of the room thinks it is a problem.

Has anyone developed their own local mobile app? Yes, there are ways to do that, but it is more important to create a mobile-friendly website. PhoneGap will wrap your web app in a native app for each mobile OS, and can include some location services. Maybe look at including the library in a university-wide app?

Adam Traub: Really into PPV/demand-driven. Some do an advance purchase model with tokens, and some of them will expire. Really wants to make it an unmediated process, but it opens up the library to increasing and spiraling costs. They went unmediated for a quarter, and the use skyrocketed. What’s a good way to do this without spending a ton of money? CCC’s Get It Now drives PPV usage through the link resolver. Another uses a note to indicate that the journal is being purchased by the library.

Kristin Martin: Temporarily had two discovery services, and they don’t know how to display this to users. Prime for some usability testing. Have results from both display side by side and let users “grade” them.

Michael Edwards: Part of a NE consortium, and thinks they should be able to bring consortial pressure to bear on vendors, but the vendors are basically telling them to take a leap. Are any of the smaller groups having success in pressuring vendors into making concessions for consortial acquisitions? Orbis-Cascade and Connect NY have both been doing good things for ebook pricing and reducing the multiplier for simultaneous users. Do some collection analysis on the joint borrowing/purchasing policies? The selectors will buy what they buy.

NASIG 2010 reflections

When I was booking my flights and sending in my registration during the snow storms earlier this year, Palm Springs sounded like a dream. Sunny, warm, dry — all the things that Richmond was not. This would also be my first visit to Southern California, so I may be excused for my ignorance of the reality, and more specifically, the reality in early June. Palm Springs was indeed sunny, but not as dry and far hotter than I expected.

Despite the weather, or perhaps because of the weather, NASIGers came together for one of the best conferences we’ve had in recent years. All of the sessions were held in rooms that emptied out into the same common area, which also held the coffee and snacks during breaks. The place was constantly buzzing with conversations between sessions, and many folks hung back in the rooms, chatting with their neighbors about the session topics. Not many were eager to skip the sessions and the conversations in favor of drinks/books by the pools, particularly when temperatures peaked over 100°F by noon and stayed up there until well after dark.

As always, it was wonderful to spend time with colleagues from all over the country (and elsewhere) that I see once a year, at best. I’ve been attending NASIG since I was a wee serials librarian in 2002, and this conference/organization has been hugely instrumental in my growth as a librarian. Being there again this year felt like a combination of family reunion and summer camp. At one point, I choked up a little over how much I love being with all of them, and how much I was going to miss them until we come together again next year.

I’ve already blogged about the sessions I attended, so I won’t go into those details so much here. However, there were a few things that stood out to me and came up several times in conversations over the weekend.

One of the big things is a general trend towards publishers handling subscriptions directly, and in some cases, refusing to work with subscription agents. This is more prevalent in the electronic journal subscription world than in print, but that distinction is less significant now that so many libraries are moving to online-only subscriptions. I heard several librarians express concern over the potential increase in their workload if we go back to the era of ordering directly from hundreds of publishers rather than from one (or a handful) of subscription agents.

And then there’s the issue of invoicing. Electronic invoices that dump directly into a library acquisition system have been the industry standard with subscription agents for a long time, but few (and I can’t think of any) publishers are set up to deliver invoices to libraries this way. In fact, my assistant who processes invoices must manually enter each line item of a large invoice for one of our collections of electronic subscriptions every year, since this publisher refuses to invoice through our agent (or will do so in a way that increases our fees to the point that my assistant would rather just do it himself). I’m not talking about a mom & pop society publisher — this is one of the major players. If they aren’t doing EDI, then it’s understandable that librarians are concerned about other publishers following suit.

Related to this, JSTOR and UC Press, along with several other society and small press publishers, have announced a new partnership that will allow those publishers to distribute their electronic journals on the JSTOR platform, from the first issue to the current one. JSTOR will handle all the hosting, payments, and library technical support, leaving the publishers to focus on generating the content. Here’s the kicker: JSTOR will also be handling billing for print subscriptions of these titles.

That’s right – JSTOR is taking on the role of subscription agent for a certain subset of publishers. They say, of course, that they will continue to accept orders through existing agents, but if libraries and consortia are offered discounts for going directly to JSTOR, with whom they are already used to working directly for the archive collections, then eventually there will be little incentive to use a traditional subscription agent for titles from these publishers. On the one hand, I’m pleased to see some competition emerging in this aspect of the serials industry, particularly as the number of players has been shrinking in recent years, but on the other hand I worry about the future of traditional agents.

In addition to the big picture topics addressed above, I picked up a few ideas to add to my future projects list:

  • Evaluate the “one-click” rankings for our link resolver and bump publisher sites up on the list. These sources “count” more when I’m doing statistical reports, and right now I’m seeing that our aggregator databases garner more article downloads than the sources we pay for specifically. If this doesn’t improve the stats, then maybe we need to consider whether or not access via the aggregator is sufficient. Sometimes the publisher site interface is a deterrent for users.
  • Assess the information I currently provide to liaisons regarding our subscriptions and discuss with them what additional data I could incorporate to make the reports more helpful in making collection development decisions. Related to this is my ongoing project of simplifying the export/import process for getting acquisitions data from our ILS into our ERMS for cost per use reports (the calculation itself is simple; see the sketch after this list). Once I’m not having to do that manually, I can use that time/energy to add more value to the reports.
  • Do an inventory of our holdings in our ERMS to make sure that we have turned on everything that should be turned on and nothing that shouldn’t. I plan to start with the publishers that are KBART participants and move on from there (and yes, Jason Price, I will be sure to push for KBART compliance from those who are not already in the program).
  • Begin documenting and sharing workflows, SQL, and anything else that might help other electronic resource librarians who use our ILS or our ERMS, and make myself available as a resource. This stood out to me during the user group meeting for our ERMS, where I and a handful of others were the experts of the group. By no means do I feel like an expert, but clearly there are quite a few people who could learn from my experience the way I learned from others before me.
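On the cost per use point above, the calculation is just annual cost divided by reported use; the real work is joining the acquisitions data to the usage data. A minimal sketch, assuming both can be exported as CSV keyed on ISSN (the file names and column headers are hypothetical):

```python
# Sketch: join ILS payment data to COUNTER use data and print cost per use.
# File names and column headers are hypothetical; adjust to your own exports.
import csv

def load_column(path, key_field, value_field):
    """Read a CSV into a {key: float(value)} dictionary."""
    with open(path, newline="") as f:
        return {row[key_field]: float(row[value_field]) for row in csv.DictReader(f)}

costs = load_column("ils_payments.csv", "issn", "amount_paid")
uses = load_column("counter_jr1.csv", "issn", "ft_requests")

for issn, cost in sorted(costs.items()):
    use = uses.get(issn, 0)
    if use:
        print(f"{issn}\tcost={cost:.2f}\tuses={int(use)}\tcost/use={cost / use:.2f}")
    else:
        print(f"{issn}\tcost={cost:.2f}\tno reported use")
```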

I’m probably forgetting something, but I think those are big enough to keep me busy for quite a while.

If you managed to make it this far, thanks for letting me yammer on. To everyone who attended this year and everyone who couldn’t, I hope to see you next year in St. Louis!
