I participated in my first Battle Decks competition at ER&L this year. I almost did last year, and a friend encouraged me to put my name in the hat this year, so I did.
I was, somewhat surprisingly, not nervous as I waited for my name to be chosen to present next (the order was random — names drawn from a bag). Rather, I was eagerly awaiting my turn, because I knew I could pull it off, and pull it off well.
This confidence is not some arrogance I carry with me all the time. I’ve got impostor syndrome in spades when it comes to conference presentations and the like. Battle Decks, however, is not a presentation on a topic I’m supposed to know more about but secretly suspect I know less about than the audience. They are more in the dark than I am, and my job isn’t to inform so much as to entertain.
Improv — I can do that. I spent a few seasons with the improv troupe in college, and while I was certainly not remarkable or talented, I did learn a lot about “yes, and”. My “yes, and” with the Battle Decks was the slides — no matter what came up, I took it and connected it back to the topic and vice-versa.
One slide came up that was so dense with text or imagery that it just couldn’t register in the split second I looked at it. I turned back to the audience and found I had nothing to say, so I looked at it again, then apologized, explaining that my assistant had put together the slide deck and I wasn’t sure what this one was supposed to be about. It brought the laughs and on I went.
I would like to take this opportunity to thank Jesse Koennecke for organizing the event, as well as Bonnie Tijerina, Elizabeth Winters, and Carmen Mitchell for judging the event. And, of course, thanks also to April Hathcock for sharing the win with me.
Never use a fad term for your talk because it will make you look bad ten years later.
We’re not so tied up into having to build hardware stacks anymore. Now, that infrastructure is available through other services.
[insert trip down memory lane of the social media and mobile phones of 2005-2006]
We need to rethink how we procure IT in the library. We need better ways of thinking it through before committing to something you may not be able to get out of. Market shifts happen.
Server interfaces are much more user friendly now, particularly when you’re using something like AWS. However, bandwidth is still a big cost. Similar infrastructures, though, can mean better sharing of tools and services across institutions.
How much does your library invest in IT? What percentage of your overall budget is that? How do you count that number? How much do we invest in open source or collaborative ventures that involve IT?
Groups have a negativity bias, which can have an impact on meetings. The outcomes need to be positive in order to move an organization forward.
Villanova opted to spend their funds on a locally developed discovery layer (VUFind), rather than dropping that and more on a commercial product. The broader community has benefitted from it as well.
Kuali OLE has received funding and support from a lot of institutions. GOKb is a sister project to develop a better knowledgebase to manage electronic resources, partnering with NCSU and their expertise on building an ERMS.
[Some stuff about HathiTrust, which is a members-only club my institution is not a part of, so I kind of tuned out.]
Something something Hydra and Avalon media system and Sufia.
Forking of open projects means that no one else can really use it, and that the local institution is on its own for maintaining it.
In summary, consider if you can spend money on investing in these kinds of projects rather than buying existing vendor products.
Speakers: Michael Fernandez, University of Mary Washington; Kirsten Ostergaard, Montana State University; Alex Homanchuk, OCAD University
Moderator: Kelsey Brett, University of Houston

AH: Specialized studio art and design programs. OCAD became a university in 2002, which meant the library needed to expand to support liberal arts programming. They had limited use of a limited number of resources and wanted to improve the visibility and exposure of the other parts of their collections.
MF: Mid-sized liberal arts institution that is primarily undergraduate. The students want something convenient and easy to use, and they aren’t concerned with where the content is coming from. The library wanted to expose the depth of their collections.
KO: Strong engineering and agriculture programs. Migrated to Primo from Summon recently. They had chosen Summon a few years ago for reasons similar to those noted by the other panelists. The decision to move was driven in part by a statewide consortium, and the University of Montana’s decision also played a role.
They looked at how well their resources were represented in the central index. They had a lot of help from other Ontario institutions by learning from their experiences. There was also a provincial RFI from a few years ago to help inform them. They were already using the KB that would power the discovery service, so it was easier to implement. Reference staff strongly favored one particular solution, in part due to some of the features unique to it.
They began implementing in late January and planned a soft launch for March, which they felt was enough time for staff training (both back and front end). It was a slightly rough start because they implemented with Summon 2, and in the midst of this ProQuest also moved to a new ticketing system.
They did trials. They looked at costs, including setup fees and rate increases and potential discounts. They looked at content coverage and gaps. They looked at the functionality of the user interface and search relevancy for general and known item resources. They looked at the systems aspects, including ILS integration and other systems via API, and the frequency and timeliness of catalog updates.
They opted to not implement the federated searching widgets in EDS to search the ProQuest content. Instead, they use the database recommender to direct students to relevant, non-EBSCO databases.
They wanted similar usability to what they had in Summon, and similar coverage. The timeline for implementation was longer than they initially planned, in part due to the consortial setup and decisions about how content would be displayed at different institutions. This gets complicated with ebook licenses that are for specific institutions only. Had to remove deduplication of records, which makes for slightly messy search results, but the default search is only for local content.
They had to migrate their eresources to a new KB, and the collections don’t always match up. They are conducting an audit of the data. They still have 360 Core while they are migrating to SFX.
The implementation team included representatives from across the library, which helped for getting buy-in. Feedback responsiveness was important, too. Staff and faculty comments influenced their decisions about user interface options. Instruction librarians vigorously promoted it, particularly in the first year courses.
Similar to the previous speaker’s experience.
They wanted to make sure the students were comfortable with the familiar, but also market the new/different functionality and features of Primo. They promoted them through the newsletter, table tents, library homepage, press release, and Friends of the Library newsletter.
Launched a web survey to get user feedback. The reception has been favorable, with the predictable issues. They’ve seen a bump in the use of their materials in general, but a decline in the multi-disciplinary databases. The latter is due in part to a lower priority for those resources in the rankings and a lack of IEDL (index-enhanced direct linking) for that content.
They did surveys of the staff and student assistants during the trials. The students indicated that there is a learning curve for the discovery systems, and they were using the facets. They also use Google Analytics to analyze usage and also determine which days are lower use for the catalog update.
There hasn’t been any feedback from the website form; staff report errors instead. They have done some user testing in the library with known-item and general searches. They are working on the branding to take Ex Libris out and put in more MSU.
They created user profiles for the different types of users to help both their own staff and publishers understand how their users interact with different aspects of the metadata.
Historically, the library catalog was a record of what the library held. In the 90s, libraries began including online resources, but not journal articles, and most library catalogs are still MARC-based.
The OpenURL link resolver takes a citation and formats it as a URL and links to relevant library services. A knowledgebase of the library’s holdings (print and electronic) supports this. [It appears we still need to have an explanation of how this works and why we need a tool like this to get to the appropriate copy?]
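As a rough illustration of what a link resolver consumes, here is a minimal sketch (in Python, with an invented resolver hostname and an invented example citation) of how a journal-article citation gets serialized into an OpenURL 1.0 key/encoded-value query. The parameter names (`url_ver`, `rft.atitle`, etc.) are the standard KEV keys from the NISO Z39.88 OpenURL framework:

```python
from urllib.parse import urlencode

# Hypothetical base URL for a library's link resolver; every library has its own.
RESOLVER_BASE = "https://resolver.example.edu/openurl"

def build_openurl(citation: dict) -> str:
    """Format a journal-article citation as an OpenURL 1.0 (KEV) request."""
    params = {
        "url_ver": "Z39.88-2004",                       # OpenURL version
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",  # metadata format: journal
        "rft.atitle": citation["article_title"],
        "rft.jtitle": citation["journal_title"],
        "rft.issn": citation["issn"],
        "rft.volume": citation["volume"],
        "rft.spage": citation["start_page"],
        "rft.date": citation["year"],
    }
    return RESOLVER_BASE + "?" + urlencode(params)

# Invented example citation, for illustration only.
url = build_openurl({
    "article_title": "Library Discovery Services",
    "journal_title": "Serials Review",
    "issn": "0098-7913",
    "volume": "41",
    "start_page": "1",
    "year": "2015",
})
print(url)
```

The resolver then matches the ISSN, volume, and page data against the knowledgebase of holdings and routes the user to an appropriate copy or service.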
Library discovery services offer a simple search of comprehensive content with a fast response time, and they include local collections. They are meant for undergraduates or novice researchers in a discipline.
The discovery metadata typically comes from many sources of publishers and providers. It needs to be mapped to an underlying set of data elements in order to be indexed. It must be thorough enough to be searched and it must be accurate.
One place where discovery metadata fails is when there is a lack of journal history data. ISSN and title changes need to be associated with each other. Wiley, for example, submitted the current title and ISSN for the entire run of a journal, even when there were other titles and ISSNs in that history. This makes knowledgebases incorrectly tell users that we do not have content that we do. The discovery service providers are having to compensate for the missing data from publishers, who should know better what their journal histories are.
Another place where discovery metadata fails is the tagging of material types through incorrectly designed templates. Streaming audio should not be labeled as a book chapter. A review in Scopus is a “scientific review”, but these are sometimes included in limited searches for book reviews in some discovery services.
Libraries use more than just MARC records and the library catalog to provide access to publisher content. Publisher metadata is distributed to many systems, not just libraries. Any source that supports OpenURL can potentially provide access to publisher content. Metadata accuracy is more than just correct transcription.
Publisher support can come from KBART, ODI, SerialsSolutions KnowledgeWorks, Project Transfer, PIE-J, MARC Record Guide for Monograph Aggregator Vendors, and MARCEdit.
Library catalogers can’t do it all. We’re relying more on publisher-supplied data.
Audience question about book chapters — Shadle thinks that chapters that are separately authored and easily cited should have the same level of metadata as journal articles in our discovery services.
Create a new account in Analytics and put the core URL in for your link resolver, which will give you the tracking ID.
Add the tracking code to the header or footer in the branding portion of the link resolver.
Google Analytics was designed for business. If someone spends a lot of time on a business site it’s good, but not necessarily for library sites. Brief interactions are considered to be bounces, which is bad for business, but longer times spent on a link resolver page could be a sign of confusion or frustration rather than success.
The base URL refers to several different pages the user interacts with. Google Analytics, by default, doesn’t distinguish them. This can hide some important usage and trends.
Using custom reports, you can tease out some specific pieces of information. This is where you can filter down to specific kinds of pages within the link resolver tool.
You can create views that will allow you to see what a set of IP ranges is using, which she used to separate use by computers in the library from computers outside of it. IP data is not collected by default, so if you want to do this, set it up at the beginning.
To learn where users were coming from to the link resolver, she created another custom report with parameters that would include the referring URLs. She also created a custom view that included the error parameter “SS_Error”. Some were from LibGuides pages, some were from the catalog, and some were from databases.
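The same questions can be asked outside the Google Analytics interface. As a sketch under stated assumptions (the URLs and sample data here are invented; a real analysis would start from a GA export or API pull), this is the kind of filtering her custom reports perform: flag hits whose query string carries the "SS_Error" parameter, and tally the referring hosts:

```python
from urllib.parse import urlparse, parse_qs
from collections import Counter

# Invented sample of (landing_url, referrer) pairs standing in for a GA export.
hits = [
    ("https://resolver.example.edu/openurl?rft.issn=0098-7913",
     "https://libguides.example.edu/nursing"),
    ("https://resolver.example.edu/openurl?SS_Error=badformat",
     "https://catalog.example.edu/record/123"),
    ("https://resolver.example.edu/openurl?rft.issn=1234-5678",
     "https://search.proquest.com/docview/1"),
]

# Hits carrying the error parameter in their query string.
errors = [url for url, _ in hits if "SS_Error" in parse_qs(urlparse(url).query)]

# Where users are coming from, by referring hostname.
sources = Counter(urlparse(ref).netloc for _, ref in hits)

print(f"{len(errors)} error hit(s)")
for host, count in sources.most_common():
    print(host, count)
```

The same two-step pattern (filter on a query parameter, group by referrer) maps directly onto the custom report and custom view she describes.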
Ask specific and relevant questions of your data. Apply filters carefully and logically. Your data is a starting point to improving your service.
Google Analytics (3rd edition) by Ledford, Tyler, and Teixeira (Wiley) is a good resource, though it is business focused.
Everything is Different: Easing the Pain of a Resource Transition
Speaker: Heather Greer Klein, NC Live
They license content as a core collection for all libraries on three year cycles, and have been doing this for the past 15 years. They also provide consultations, help desk, vendor liaisons, usage statistics, and other services.
They’ve had a 5.7% decrease in funding for materials over the past six years. There was a million dollar gap this year between their funding and the cost of existing licensed resources. The resource advisory committee evaluated the situation and came to the conclusion that they would need to change the main aggregator database for the first time in a decade. The NC Live staff had to make this transition as smoothly as possible.
They needed to get the change leaders on board. The advisory committee could talk with everyone in ways that the NC Live staff could not. They also needed to give as much lead time as possible, and were able to negotiate a six month overlap between the two. The communication, however, should have begun well before the decision was made. They should have talked about the funding situation well in advance, as some were taken by surprise.
Transparency reduces anxiety and helps build confidence. They announced the change well before the transition process was outlined. They sent weekly updates with what was happening. But they needed a better plan for reaching frontline staff.
Communicate with patrons early and often; they used a splash page on the website to do that. They feel like they could have done more, and the libraries needed more support to translate the information for their users.
Partner with the vendors. The new vendors did a lot of outreach and training.
Serials Renewal Cycle – Doing it the SMU (a Different U) Way!
Speaker: Heng Kai Leong, Singapore Management University
They have been around for 15 years. The library was recently renovated, and the collection is primarily electronic, with more electronic holdings than print. Most of their journals are from aggregators or big deals; the rest come through two subscription agents.
They had a staff member assigned to each of the agents for the ordering, claiming, receiving, binding, and other processes. Each year they did a collection evaluation review.
Now, they only do the evaluation every two years. The off year is when they evaluate the agents, going with the one that offers the best cost savings. This has freed up staff time to do more to support the users. They are now using only one agent, on two-year terms.
They have a service level agreement from the agent to document the services and products they offer to the library. It’s also helpful for the staff handling the serials so they know what should be done by the agent. It required some negotiation with the agent. When they do the evaluation every two years, they require the agent to send the SLA terms in a template that allows for easy comparison. The quote must be in Excel (not PDF). There is an example of the content of the template in the slides.
Migrating to Intota – Updates and Dispatches from the Front
Speaker: Dani Roach, University of St. Thomas
They are in the middle of implementing the library services platform from ProQuest. CLIC is an eight-member consortium in St. Paul, MN. They had a shared ILS for a number of years; when that contract ended, they began looking at options in 2013, and at that point decided to get a next-gen ILS.
The two systems available at the time weren’t quite what they wanted, and the demo of Intota happened afterward. Due to unknown factors, one of the consortium members pulled out and selected one of the other two systems at that time. At the end of 2013, Intota was selected by the CLIC board. An implementation team was formed, and contract negotiations were completed December 31, 2013. CLIC was the first academic consortium to subscribe.
Some libraries were long-time SerialsSolutions customers; others had little or no discovery layer. Phase one of the implementation was setting up Summon for the consortium. The consortial implementation was a whole new creature compared to a single-site implementation. Many choices had to be made early on that had significant (and often unknown) impacts further down the road. This implementation was completed by June 2014, with continued revisions of how catalog data was ingested through January 2015.
Meanwhile, in July 2014 they began implementing the Assessment portion. Part of this involved mapping data from the ILS.
Ongoing has been the implementation of the knowledgebase/ERM. Each library needed to have all of their content in there. The new interface was made live in July 2014, bugs and all. Some new features are great, some old features are missed.
Next: acquisitions (including DDA), description (cataloging), and fulfillment (circulation). No plans yet for when those will begin.
The time it takes to do this is challenging because you still have to do your day to day work. Documenting the problems and fixes takes a lot of time. Keeping track of bugs and things is frustrating.
We want vendors to succeed because we want a variety of options. We need to be involved at the development level if we want that to happen.
Linking is one of the top complaints of library users, and we’re relying on old tools to do it (OpenURL). The link resolver menu is not a familiar thing for our users, and many of them don’t know what to do with it. 30% of users fail to click the appropriate link in the menu (study from 2011).
ProQuest tried to improve the 360 Link resolver, focusing on reliability and usability. They used something called index-enhanced direct linking (IEDL) in Summon (basically publisher data) that bypasses the OpenURL link resolver for content from 370 providers. These links are more intuitive and stable than OpenURL. This is great for Summon, with about 60% of links being IEDL, but discovery happens everywhere.
They also created a new sidebar helper frame to replace the old menu. The OpenURL will take them to the best option, but then the frame offers a clean view of other options and can be collapsed if not needed by the user. It also has the library branding, so that the user is able to connect that their access to the content is from the library, rather than just that Google is awesome.
Speaker: Jesse Koennecke, Cornell University
They are focusing on the delivery of content as well as the discovery. Brief demo of their side-by-side catalog and discovery search due to nifty API calls (bento box). Another demo of the sidebar helper frame from before, including the built-in problem report form.
Speaker: Jacquie Samples, Duke University
She does website design for the Duke Libraries. They’ve done a lot of usability testing. The new website went out in the summer of 2014, and after that, they decided to look at their other services, like the link resolver. They came up with some custom designs for those screens, and ended up beta testing the new sidebar instead. They have a bento box results page, too.
The FRBR user tasks matter and should be applied to discovery and access, too: find, identify, select, and obtain. We’re talking about obtaining here.
Wiley offers the entire collection or subject collections for a set access fee based on FTE tiers. At the end of the access period, titles up to (or for additional cost) exceeding the access fee are selected for perpetual access. Usage data is provided to help with selection.
Speaker: Galadriel Chilton, University of Connecticut
In general, ebook borrowers like public library books, but the formats of many academic ebooks are frustrating. If ebooks are not integrated with journal content, they are often not found, or not found as easily. Convenience is key. There’s also the issue of unencrypted usage data being transmitted by Adobe, which is now being transmitted “securely,” but still profiling the reading habits of users.
They started the EBA with Wiley in April, and they saw a jump in usage before the titles were even in the catalog or discovery service. They were finding it on the platform already.
Downsides: Some content is only available on aggregator platforms, rather than Wiley’s platform. Some content is not included in the EBA program. Also, this adds another wrinkle to an already complicated ERM ecosystem.
It’s not an all-encompassing solution, but it is an ebook collection method that has significantly improved user experience.
Speaker: Monica Metz-Wiseman, University of South Florida
About 15% of the audience still has an approval plan. About 40% have a declining monographic budget. About 70% have declining monographic circulation.
They haven’t had an approval plan since 2009, have had a 50% drop in print circulation since 2008, and now rely on ebook packages and PDA (with STL).
They looked at STL costs in 2013 and saw that Wiley and Taylor & Francis were at the top. They decided to try the EBA with Wiley.
They wanted to recalibrate access with ownership. They wanted increased control over costs and content. They wanted to make sure the books would still be there later when a faculty member went looking for it (not always the case with PDA).
Challenges: The collection specialists were already removed from the collection process with PDA, and this was just another stake in the heart. There are two platforms for Wiley collections, so they still have to maintain some of the titles on EBL. The MARC records are not always good, requiring some manual fixes. Scalability is going to be challenging if there isn’t enough staff support. Funding uncertainty may make sustainability difficult as well.
Benefits: Content integration, preferred DRM features, easier authentication, holding the line on price increases for STL and aggregator ebooks, and increased familiarity with Wiley content.
Selections were made on absolute use, without consulting subject specialists. They did not look to see if there were print copies available in the library already.
Not sure what impact this will have on ebook pricing in the future when publishers have more data about what users want.
Speaker: Robert Murdoch, Brigham Young University
He has prettier slides, but not much to say that wasn’t covered by the others.
[I missed the first talk due to a slightly longer lunch than had been planned.]
Better Linking by Our Bootstraps
Speaker: Aron Wolf, ProQuest
He is a librarian trained as a cataloger.
Error reports are important, because for each one, there were probably ten instances not reported. Report early and report often.
Include the original query for the OpenURL in order to reproduce it. If you have the time, play around with the string data and see if you can “fix” it yourself and report that.
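"Playing around with the string data" usually amounts to parsing the query, swapping one suspect value, and retrying the link. As a hypothetical sketch (the resolver URL is invented; the parameter names are the standard OpenURL KEV keys), that workflow looks something like:

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def tweak_openurl(url: str, **overrides: str) -> str:
    """Rebuild an OpenURL with some parameters changed, to test whether a
    particular field is what breaks the link."""
    parts = urlparse(url)
    params = dict(parse_qsl(parts.query))  # query string -> {key: value}
    params.update(overrides)               # swap in the "fixed" values
    return urlunparse(parts._replace(query=urlencode(params)))

# Invented example: retry the same request with a corrected ISSN.
broken = "https://resolver.example.edu/openurl?rft.issn=0098-791X&rft.date=2015"
fixed = tweak_openurl(broken, **{"rft.issn": "0098-7913"})
print(fixed)
```

If the corrected string resolves, that narrows the report down to a single bad field, which is exactly the kind of detail that speeds up triage.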
A lot of factors go into how long it will take to fix whatever is causing an OpenURL error. They don’t want to raise false expectations by giving a date and time.
Once an error has been reported, it enters a triage system. If it has a broader impact, it will be prioritized higher. Then it’s assigned to someone to fix.
Trouble Ticket Systems: Help or Hindrance?
Speaker: Margaret Hogarth, The Claremont Colleges Library
We should be polite and helpful. Human.
Detail the issue as specifically as possible, with steps, equipment, screen shots, etc. Include account number or other identifier.
Vendors need to identify themselves in responses. They also need to include the issue in responses, particularly when the message trail gets long. Customers need to keep track of the trouble tickets they have submitted.
Respond promptly, even if it will take longer to resolve. Mine the trouble ticket data to create FAQ, known issues, etc. and add meaningful metadata.
Email is good for tracking the history. Online forms should have an email sent with the ticket detail and number. Some vendors hide their support email address, which is annoying.
If vendors require authentication to submit a ticket, provide examples of what information they are looking for.
Vendors should ask their most frequent support users for feedback on what would make their sites more useful.
Multiple tech supports make it challenging for reporting issues to large companies.
Jing screen casting is helpful for showing how to reproduce the problem, particularly when you can’t attach a screenshot or cast, since it provides a URL.
All of this is useful for your internal support ticketing systems, too.
Speakers: Jaclyn Bedoya & Michael DeMars, CSU Fullerton
There are some challenges to surveying students, including privacy, IRB requirements, and survey fatigue. Don’t collect data for the sake of collecting data. Make sure it is asking what you think it is asking to get results that are worth measuring.
Google Analytics is free, relatively easy to use, and easy to install. And it’s free. We’re being asked to assess, but not being given a budget to do so.
It’s really good about measuring the when and where, but not the why. Is it that you don’t see Chrome users because nobody is using Chrome, or is it that your website is broken for Chrome users?
If people are hanging out on your library pages for too long, then maybe you need to redesign them. We want them heading out quickly to the resources we’re linking to.
They’ve made decisions about whether to spend time on making sites compatible with browser versions based on how much traffic is coming from them. They’ve determined that mobile use is increasing, so they are designing for that now.
They were able to use click data to eliminate little-used webpages and tools in the redesign, and traffic data to determine how much server support was needed on the weekends.
Google Forms are free and they can be used to find out things about users that Analytics can’t tell you. They can be embedded into things like LibGuides. There’s a “view summary responses” option that creates pie charts and fancy things for your boss.
They asked who they are (discipline), how often they use the library, where they use it, and what they thought of the library services. There were incentives with gift cards (including ones for In-N-Out Burger). The free-text section had a lot of great content.
The speakers spent some time on the survey data, but the sum total is that it matched their expectations; now they have data to prove it.