I participated in my first Battle Decks competition at ER&L this year. I almost did last year, and a friend encouraged me to put my name in the hat this year, so I did.
Somewhat surprisingly, I was not nervous as I waited for my name to be chosen to present next (the order was random; names were drawn from a bag). Rather, I was eagerly waiting my turn, because I knew I could pull it off, and pull it off well.
This confidence is not some arrogance I carry with me all the time. I’ve got impostor syndrome in spades when it comes to conference presentations and the like. Battle Decks, however, is not a presentation on a topic I’m supposed to know more about but secretly suspect I know less about than the audience. Here, the audience is more in the dark than I am, and my job isn’t to inform so much as to entertain.
Improv — I can do that. I spent a few seasons with the improv troupe in college, and while I was certainly not remarkable or talented, I did learn a lot about “yes, and”. My “yes, and” with the Battle Decks was the slides — no matter what came up, I took it and connected it back to the topic and vice-versa.
One slide came up that was so dense with text or imagery that it just couldn’t register in the split second I looked at it. I turned back to the audience and found I had nothing to say, so I looked at it again and then apologized, explaining that my assistant had put together the slide deck and I wasn’t sure what this one was supposed to be about. It brought the laughs, and on I went.
I would like to take this opportunity to thank Jesse Koennecke for organizing the event, as well as Bonnie Tijerina, Elizabeth Winters, and Carmen Mitchell for judging the event. And, of course, thanks also to April Hathcock for sharing the win with me.
Never use a fad term for your talk because it will make you look bad ten years later.
We’re not so tied up into having to build hardware stacks anymore. Now, that infrastructure is available through other services.
[insert trip down memory lane of the social media and mobile phones of 2005-2006]
We need to rethink how we procure IT in the library. We need better ways of thinking this through before committing to something you may not be able to get out of. Market shifts happen.
Server interfaces are much more user friendly now, particularly when you’re using something like AWS. However, bandwidth is still a big cost. Similar infrastructures, though, can mean better sharing of tools and services across institutions.
How much does your library invest in IT? What percentage of your overall budget is that? How do you count that number? How much do we invest in open source or collaborative ventures that involve IT?
Groups have a negativity bias, which can have an impact on meetings. The outcomes need to be positive in order to move an organization forward.
Villanova opted to spend their funds on a locally developed discovery layer (VUFind), rather than dropping that and more on a commercial product. The broader community has benefitted from it as well.
Kuali OLE has received funding and support from a lot of institutions. GOKb is a sister project to develop a better knowledgebase to manage electronic resources, partnering with NCSU and their expertise on building an ERMS.
[Some stuff about HathiTrust, which is a members-only club my institution is not a part of, so I kind of tuned out.]
Something something Hydra and Avalon media system and Sufia.
Forking an open project means that no one else can really use the fork, and that the local institution is on its own for maintaining it.
In summary, consider if you can spend money on investing in these kinds of projects rather than buying existing vendor products.
Speakers: Michael Fernandez, University of Mary Washington; Kirsten Ostergaard, Montana State University; Alex Homanchuk, OCAD University. Moderator: Kelsey Brett, University of Houston.
AH: Specialized studio art and design programs. OCAD became a university in 2002, which meant the library needed to expand to support liberal arts programming. They had limited use of a limited number of resources and wanted to improve the visibility and exposure of the other parts of their collections.
Mid-sized liberal arts institution that is primarily undergraduate. The students want something convenient and easy to use, and they aren’t concerned with where the content is coming from. The library wanted to expose the depth of their collections.
Strong engineering and agriculture programs. They recently migrated from Summon to Primo, having chosen Summon a few years ago for reasons similar to those noted by the other panelists. The decision to move was partly due to a statewide consortium, and the University of Montana’s decision also played a part.
They looked at how well their resources were represented in the central index. They had a lot of help from other Ontario institutions by learning from their experiences. There was also a provincial RFI from a few years ago to help inform them. They were already using the KB that would power the discovery service, so it was easier to implement. Reference staff strongly favored one particular solution, in part due to some of the features unique to it.
They began implementing in late January and planned a soft launch for March, which they felt was enough time for staff training (both back end and front end). It was a slightly rough start because they implemented with Summon 2, and in the midst of this ProQuest also moved to a new ticketing system.
They did trials. They looked at costs, including setup fees and rate increases and potential discounts. They looked at content coverage and gaps. They looked at the functionality of the user interface and search relevancy for general and known item resources. They looked at the systems aspects, including ILS integration and other systems via API, and the frequency and timeliness of catalog updates.
They opted to not implement the federated searching widgets in EDS to search the ProQuest content. Instead, they use the database recommender to direct students to relevant, non-EBSCO databases.
They wanted similar usability to what they had in Summon, and similar coverage. The timeline for implementation was longer than they initially planned, in part due to the consortial setup and decisions about how content would be displayed at different institutions. This gets complicated with ebook licenses that are for specific institutions only. Had to remove deduplication of records, which makes for slightly messy search results, but the default search is only for local content.
They had to migrate their eresources to a new KB, and the collections don’t always match up. They are conducting an audit of the data. They still have 360 Core while they are migrating to SFX.
The implementation team included representatives from across the library, which helped for getting buy-in. Feedback responsiveness was important, too. Staff and faculty comments influenced their decisions about user interface options. Instruction librarians vigorously promoted it, particularly in the first year courses.
Similar to the previous speaker’s experience.
They wanted to make sure the students were comfortable with the familiar, but also market the new/different functionality and features of Primo. They promoted them through the newsletter, table tents, library homepage, press release, and Friends of the Library newsletter.
Launched a web survey to get user feedback. The reception has been favorable, with the predictable issues. They’ve seen a bump in the use of their materials in general, but a decline in use of the multi-disciplinary databases. The latter is due in part to the lower priority of those resources in the relevancy rankings and a lack of IEDL for that content.
They did surveys of the staff and student assistants during the trials. The students indicated that there is a learning curve for the discovery systems, and they were using the facets. They also use Google Analytics to analyze usage and also determine which days are lower use for the catalog update.
There hasn’t been any feedback from the website form, though staff report errors. They have done some user testing in the library with known-item and general searches. They are working on the branding, taking Ex Libris out and putting in more MSU.
They created user profiles for the different types of users to help both their own staff and publishers understand how their users interact with different aspects of the metadata.
Historically, the library catalog was a record of what the library held. In the 90s, libraries began including online resources, but not journal articles, and most library catalogs are still MARC-based.
The OpenURL link resolver takes a citation and formats it as a URL and links to relevant library services. A knowledgebase of the library’s holdings (print and electronic) supports this. [It appears we still need to have an explanation of how this works and why we need a tool like this to get to the appropriate copy?]
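As a rough illustration of those mechanics (the resolver hostname and the sample citation are made up; the key names follow the OpenURL 1.0 KEV format for journal articles), a citation gets serialized into a query string pointed at the library's resolver:

```python
from urllib.parse import urlencode

# Hypothetical base URL for a library's link resolver (an assumption, not from the talk).
RESOLVER_BASE = "https://resolver.example.edu/openurl"

def build_openurl(citation: dict) -> str:
    """Format a journal-article citation as an OpenURL 1.0 (KEV) query string."""
    params = {
        "url_ver": "Z39.88-2004",                      # OpenURL version identifier
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal", # referent is a journal item
        "rft.genre": "article",
        "rft.jtitle": citation["journal"],
        "rft.atitle": citation["title"],
        "rft.issn": citation["issn"],
        "rft.volume": citation["volume"],
        "rft.spage": citation["start_page"],
        "rft.date": citation["year"],
    }
    return RESOLVER_BASE + "?" + urlencode(params)

url = build_openurl({
    "journal": "Serials Review",
    "title": "An example article",
    "issn": "0098-7913",
    "volume": "41",
    "start_page": "12",
    "year": "2015",
})
print(url)
```

The resolver then checks the knowledgebase for holdings matching the ISSN and date, and presents whatever services apply (full text, ILL, print location).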
Library discovery services provide a simple search of comprehensive content with a fast response time, and they include local collections. They are meant for undergraduate or novice researchers in a discipline.
The discovery metadata typically comes from many sources of publishers and providers. It needs to be mapped to an underlying set of data elements in order to be indexed. It must be thorough enough to be searched and it must be accurate.
One place where discovery metadata fails is a lack of journal history data. ISSN and title changes need to be associated with each other. Wiley, for example, submitted the current title and ISSN for the entire run of a journal, even when there were other titles and ISSNs in that history. This makes knowledgebases incorrectly tell users that we do not have content that we actually do have. The discovery service providers are having to compensate for the missing data from publishers, who should know their own journal histories best.
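To make that association concrete, here is a minimal sketch (the ISSNs and table structure are invented for illustration) of resolving a cited ISSN through a title-history table before checking holdings, so a citation to a former title still finds the run:

```python
# Hypothetical title-history table: each former ISSN maps to the
# journal's current ISSN (values are illustrative, not real ISSNs).
TITLE_HISTORY = {
    "0000-0001": "0000-0003",  # first title -> current title
    "0000-0002": "0000-0003",  # second title -> current title
}

# Knowledgebase holdings keyed by the ISSN the publisher submitted.
HOLDINGS = {"0000-0003": {"coverage": "1990-present"}}

def lookup_holdings(issn: str):
    """Resolve a cited ISSN to holdings, following title changes first."""
    current = TITLE_HISTORY.get(issn, issn)
    return HOLDINGS.get(current)

# A citation to the earlier title still reaches the holdings record.
print(lookup_holdings("0000-0001"))
```

Without the history table, the first two ISSNs would dead-end, which is exactly the "we don't have it" error described above.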
Another place where discovery metadata fails is the tagging of material types through incorrectly designed templates. Streaming audio should not be labeled as a book chapter. A review in Scopus is a “scientific review”, but these are sometimes included in limited searches for book reviews in some discovery services.
Libraries use more than just MARC records and the library catalog to provide access to publisher content. Publisher metadata is distributed to many systems, not just libraries. Any source that supports OpenURL can potentially provide access to publisher content. Metadata accuracy is more than just correct transcription.
Publisher support can come from KBART, ODI, SerialsSolutions KnowledgeWorks, Project Transfer, PIE-J, MARC Record Guide for Monograph Aggregator Vendors, and MARCEdit.
Library catalogers can’t do it all. We’re relying more on publisher-supplied data.
Audience question about book chapters: Shadle thinks that chapters that are separately authored are easily cited, and so should have the same level of metadata as journal articles in our discovery services.
Create a new account in Analytics and put the core URL in for your link resolver, which will give you the tracking ID.
Add the tracking code to the header or footer in the branding portion of the link resolver.
Google Analytics was designed for business. If someone spends a lot of time on a business site, that's a good sign, but the same isn't necessarily true for library sites. Brief interactions are counted as bounces, which is bad for business, but longer time spent on a link resolver page could be a sign of confusion or frustration rather than success.
The base URL refers to several different pages the user interacts with. Google Analytics, by default, doesn’t distinguish them. This can hide some important usage and trends.
Using custom reports, you can tease out some specific pieces of information. This is where you can filter down to specific kinds of pages within the link resolver tool.
You can create views that show what a given set of IP ranges is doing, which she used to separate use by computers inside the library from computers outside it. IP data is not collected by default, so if you want to do this, set it up at the beginning.
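As a sketch of that in-library/out-of-library split (the network range is the documentation-only 192.0.2.0/24 block, not a real library range), the same classification the custom views perform looks like this in Python:

```python
import ipaddress

# Hypothetical library network range (illustrative only).
LIBRARY_NETWORK = ipaddress.ip_network("192.0.2.0/24")

def in_library(ip: str) -> bool:
    """True if a visitor IP falls inside the library's range,
    mirroring the IP-based views described in the talk."""
    return ipaddress.ip_address(ip) in LIBRARY_NETWORK

# Classify a handful of made-up visitor IPs.
hits = ["192.0.2.15", "198.51.100.7", "192.0.2.200"]
library_hits = [ip for ip in hits if in_library(ip)]
print(library_hits)  # -> ['192.0.2.15', '192.0.2.200']
```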
To learn where users were coming from to the link resolver, she created another custom report with parameters that would include the referring URLs. She also created a custom view that included the error parameter “SS_Error”. Some were from LibGuides pages, some were from the catalog, and some were from databases.
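A sketch of that kind of report (the URLs and parameter values are made up; only the `SS_Error` parameter name comes from the talk): filter exported hits to those carrying the error parameter, then tally the referring hosts to see which sources send users into errors.

```python
from urllib.parse import urlparse, parse_qs
from collections import Counter

# Sample rows as they might come out of a GA custom report:
# (page URL with query string, referring URL). All values invented.
rows = [
    ("https://resolver.example.edu/?issn=0098-7913",
     "https://guides.example.edu/biology"),
    ("https://resolver.example.edu/?issn=1234-5678&SS_Error=UnknownISSN",
     "https://catalog.example.edu/record/42"),
    ("https://resolver.example.edu/?issn=0098-7913&SS_Error=UnknownISSN",
     "https://guides.example.edu/history"),
]

# Keep only hits carrying the SS_Error parameter, then count referrer hosts.
error_referrers = Counter(
    urlparse(referrer).netloc
    for page, referrer in rows
    if "SS_Error" in parse_qs(urlparse(page).query)
)
print(error_referrers.most_common())
```

The same grouping works for the non-error report: drop the filter and you get the LibGuides/catalog/database breakdown of where link resolver traffic originates.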
Ask specific and relevant questions of your data. Apply filters carefully and logically. Your data is a starting point to improving your service.
Google Analytics (3rd edition) by Ledford, Tyler, and Teixeira (Wiley) is a good resource, though it is business focused.