SSP/NASIG – What Do All of these Changes Mean for Vendors?

Data storage - old and new
data sharing

Speaker: Caitlin Trasande, Head of Research Policy, Digital Science

Social impact is the emerging bacon.

Digital Science supports and funds startups that build software for research. The scope is the full life cycle of research, ranging from reading literature to planning and conducting experiments to publishing and sharing the data. The "disgrunterati" are those who decided to stop merely complaining about broken processes and instead build better products and models.

[insert overview of several projects funded by Digital Science]

Information may want to be free, but it needs to be accessible and understandable.

SSP/NASIG – Data Wranglers in LibraryLand—Finding Opportunities in the Changing Policy Landscape

All You Can Eat Bacon!
all you can eat…data?

Speaker: T. Scott Plutchak, Director of Digital Data Curation Strategies, The University of Alabama at Birmingham

Data is the new bacon. Data is the hot buzzword in scholarly publishing. He is working on the infrastructure, services, and policies needed to manage data on an institutional level.

Concern about data has been around for a long time. NIH developed their first policy in 2003, but it was pretty weak. Things got serious when the public access policy became mandatory in 2009. NSF developed a data management policy in 2011, which got a little more attention.

A scholarly publishing roundtable was convened in 2009 and reported in 2010, made up of university administrators, librarians, publishers, and researchers. They recommended flexible policies for each agency, developed in collaboration with their constituencies.

Libraries should be thinking about how and where and what kinds of data they should store and manage.

My small liberal arts university probably will have to do some things with this, but not to the extent he’s talking about. This is an R1 library problem, not a library problem at large. Yet.

SSP/NASIG – How One Publisher Is Responding to the Changing World of Scholarly Communication

Money
saving up for this year’s price increases

Speaker: Jayne Marks, Vice President of Global Publishing, LWW Journals, Wolters Kluwer

We have to be adaptable and willing to change to respond to the market. Online platforms are only 19 years old (ejournals are 22), and they have changed a lot in that time.

What will journals look like in 2025? What about books?

Some assumptions people have had: Open Access is the answer to everything; move publishing back to institutions; journals cost less when they aren’t printed; textbooks would be cheaper online; self-publishing will replace publishers; publishers don’t add any value to the educational process.

STM journal output continues to grow while library budgets remain flat at best. The number of researchers in the world is growing as well, and there is a correlation between the output and the researchers. Library budgets in North America are growing the least. There is a huge rise in papers submitted from China in recent years, and not all of them can be published.

The trend in STM is to make the article a hub that links out to other content: data, podcasts, video, etc. Data is becoming a significant piece of research. There are also more tools available to manage the research reputation of scholars, which is becoming more important.

Print is not dead. It’s in decline, but still needed. Physicians want print as an option as well as online, tablet, and smart phone versions. Students still want print textbooks, as well as online resources from the library. Pharma advertising is focused solely on print.

Access and archiving mandates for open access are increasing, though mostly from institutional mandates rather than from funding sources. Open access is here to stay, and as far as publishing is concerned, it’s another business model. OA use differs by discipline: medicine/bio is focused on Gold, but most other sciences focus on Green. There is also an increase in mandates requiring data to be made publicly available, and with that come questions about what data is needed and where it is stored/delivered.

Publishers see pressure on costs and revenue, with demands for content in multiple formats and an increasing number of submissions and requests for new journals. There are more formats, with various apps and delivery models. Requirements for usage metrics across all the formats puts pressure on internal systems and speed of development. OA mandates require creative business models. Increasing US regulations are driving revenue out of medical publishing for pharma.

Publishers are responding by assessing the needs of their target demographics through extensive market research. The challenge after that is taking the diverse wants and needs and developing something that can be implemented universally and automated.

New business models: gold or hybrid OA; platinum OA is where a society or institution pays all costs of publications so that neither author nor reader pays, generally only in emerging markets; new journals are being launched, with OA or blended or bundled models; advertising-funded publishing is not working.

New editorial models: post-publication commentary, open peer review, community-based peer review, and independent companies providing peer review services; services for non-English-speaking authors such as translation, detailed content editing, compliance and ethics policy assistance, and recommendation engines.

There is experimentation all over the market, with new startups and services being rolled out all the time. Publishers are working with them, and there are varying degrees of adoption by authors and readers.

Listen, question, engage — it’s all about understanding the customers. Publishers need to engage more in the scholarly communication process.

Public policies are driving change and impacting scholarly communication, driving innovation and experimentation. Free drives engagement; revenue will come from new places. Identifying correctly and solving customer problems will drive opportunities. Journals and books will become content to be used in new ways.

battle decks

my #erl15 Battle Decks topic

I participated in my first Battle Decks competition at ER&L this year. I almost did last year, and a friend encouraged me to put my name in the hat this year, so I did.

I was somewhat surprisingly not nervous as I waited for my name to be chosen to present next (the order was random — names drawn from a bag). Rather, I was anxiously waiting for my turn, because I knew I could pull it off, and well.

This confidence is not some arrogance I carry with me all the time. I’ve got spades of impostor syndrome when it comes to conference presentations and the like. Battle Decks, however, is not a presentation on a topic I’m supposed to know more about but secretly suspect I know less about than the audience. They are more in the dark than I am, and my job isn’t to inform so much as to entertain.

Improv — I can do that. I spent a few seasons with the improv troupe in college, and while I was certainly not remarkable or talented, I did learn a lot about “yes, and”. My “yes, and” with the Battle Decks was the slides — no matter what came up, I took it and connected it back to the topic and vice-versa.

There was one slide that came up that was dense with text or imagery or something that just couldn’t register in the split second I looked at it. I turned back to the audience and found I had nothing to say, so I looked at it again, and then made an apology, stating that my assistant had put together the slide deck and I wasn’t sure what this one was supposed to be about. It brought the laughs and on I went.

I would like to take this opportunity to thank Jesse Koennecke for organizing the event, as well as Bonnie Tijerina, Elizabeth Winters, and Carmen Mitchell for judging the event. And, of course, thanks also to April Hathcock for sharing the win with me.

#erl15 Battledecks Monday
photo by Sandy Tijerina

ER&L 2015 – All Things Distributed: Collaborations Beyond Infrastructure

Collaboration
photo by Chris Lott

Speaker: Robert McDonald, Indiana University

Never use a fad term for your talk because it will make you look bad ten years later.

We’re not so tied up into having to build hardware stacks anymore. Now, that infrastructure is available through other services.

[insert trip down memory lane of the social media and mobile phones of 2005-2006]

We need to rethink how we procure IT in the library. We need better ways of thinking this through before committing to something you may not be able to get out of. Market shifts happen.

Server interfaces are much more user friendly now, particularly when you’re using something like AWS. However, bandwidth is still a big cost. Similar infrastructures, though, can mean better sharing of tools and services across institutions.

How much does your library invest in IT? What percentage of your overall budget is that? How do you count that number? How much do we invest in open source or collaborative ventures that involve IT?

Groups have a negativity bias, which can have an impact on meetings. The outcomes need to be positive in order to move an organization forward.

Villanova opted to spend their funds on a locally developed discovery layer (VUFind), rather than dropping that and more on a commercial product. The broader community has benefitted from it as well.

Kuali OLE has received funding and support from a lot of institutions. GOKb is a sister project to develop a better knowledgebase to manage electronic resources, partnering with NCSU and their expertise on building an ERMS.

[Some stuff about HathiTrust, which is a members-only club my institution is not a part of, so I kind of tuned out.]

Something something Hydra and Avalon media system and Sufia.

Forking an open project means that no one else can really use the fork, and that the local institution is on its own for maintaining it.

In summary, consider if you can spend money on investing in these kinds of projects rather than buying existing vendor products.

ER&L 2015 – Discovery Systems: Building a Better User Experience

discovery
photo by lecates

Speakers: Michael Fernandez, University of Mary Washington; Kirsten Ostergaard, Montana State University; Alex Homanchuk, OCAD University
Moderator: Kelsey Brett, University of Houston

AH:
Specialized studio art and design programs. Became a university in 2002, which meant the library needed to expand to support liberal arts programming. They had limited use of a limited number of resources and wanted to improve the visibility and exposure of the other parts of their collections.

MF:

Mid-sized liberal arts institution that is primarily undergraduate. The students want something convenient and easy to use, and they aren’t concerned with where the content is coming from. The library wanted to expose the depth of their collections.

KO:
Strong engineering and agriculture programs. Migrated to Primo from Summon recently. They had chosen Summon a few years ago for reasons similar to those noted by the other panelists. The decision to move was in part due to a statewide consortium, and partly influenced by the University of Montana’s decision.

AH:
They looked at how well their resources were represented in the central index. They had a lot of help from other Ontario institutions by learning from their experiences. There was also a provincial RFI from a few years ago to help inform them. They were already using the KB that would power the discovery service, so it was easier to implement. Reference staff strongly favored one particular solution, in part due to some of the features unique to it.

They began implementing in late January and planned a soft launch for March, which they felt was enough time for staff training (both back and front end). It was a slightly rough start because they implemented with Summon 2, and in the midst of this ProQuest also moved to a new ticketing system.

MF:
They did trials. They looked at costs, including setup fees and rate increases and potential discounts. They looked at content coverage and gaps. They looked at the functionality of the user interface and search relevancy for general and known item resources. They looked at the systems aspects, including ILS integration and other systems via API, and the frequency and timeliness of catalog updates.

They opted to not implement the federated searching widgets in EDS to search the ProQuest content. Instead, they use the database recommender to direct students to relevant, non-EBSCO databases.

KO:
They wanted similar usability to what they had in Summon, and similar coverage. The timeline for implementation was longer than they initially planned, in part due to the consortial setup and decisions about how content would be displayed at different institutions. This gets complicated with ebook licenses that are for specific institutions only. They had to remove deduplication of records, which makes for slightly messy search results, but the default search is only for local content.

They had to migrate their eresources to a new KB, and the collections don’t always match up. They are conducting an audit of the data. They still have 360 Core while they are migrating to SFX.

AH:
The implementation team included representatives from across the library, which helped for getting buy-in. Feedback responsiveness was important, too. Staff and faculty comments influenced their decisions about user interface options. Instruction librarians vigorously promoted it, particularly in the first year courses.

MF:
Similar to the previous speaker’s experience.

KO:
They wanted to make sure the students were comfortable with the familiar, but also market the new/different functionality and features of Primo. They promoted them through the newsletter, table tents, library homepage, press release, and Friends of the Library newsletter.

AH:
Launched a web survey to get user feedback. The reception has been favorable, with the predictable issues. They’ve seen a bump in the use of their materials in general, but a decline in the multi-disciplinary databases. The latter is due in part to a lower priority of those resources in the rankings and a lack of IEDL (index-enhanced direct linking) for that content.

MF:
They did surveys of the staff and student assistants during the trials. The students indicated that there is a learning curve for the discovery systems, and they were using the facets. They also use Google Analytics to analyze usage and to determine which days see lower use, for scheduling the catalog update.

KO:
There hasn’t been any feedback from the website form. Staff report errors. They have done some user testing in the library of known-item and general searches. They are working on the branding to take Ex Libris out and put in more MSU.

ER&L 2015 – Link Resolvers and Analytics: Using Analytics Tools to Identify Usage Trends and Access Problems

Google Analytics (3rd ed)

Speaker: Amelia Mowry, Wayne State University

Setting up Google Analytics on a link resolver:

  1. Create a new account in Analytics and put the core URL in for your link resolver, which will give you the tracking ID.
  2. Add the tracking code to the header or footer in the branding portion of the link resolver (see the sketch below).
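
A minimal sketch of step 2, assuming the standard Universal Analytics (analytics.js) snippet of that era rather than anything specific from the talk; the tracking ID is a placeholder, and the whole thing goes inside a `<script>` tag in the link resolver’s branding header or footer:

```js
// Standard Universal Analytics (analytics.js) loader, circa 2015.
// Paste inside a <script> tag in the link resolver's branding header/footer.
// 'UA-XXXXXXX-1' is a placeholder for the tracking ID from step 1.
(function (i, s, o, g, r, a, m) {
  i['GoogleAnalyticsObject'] = r;
  i[r] = i[r] || function () { (i[r].q = i[r].q || []).push(arguments); };
  i[r].l = 1 * new Date();
  a = s.createElement(o);
  m = s.getElementsByTagName(o)[0];
  a.async = 1;
  a.src = g;
  m.parentNode.insertBefore(a, m);
})(window, document, 'script', '//www.google-analytics.com/analytics.js', 'ga');

ga('create', 'UA-XXXXXXX-1', 'auto'); // tracking ID from the new Analytics account
ga('send', 'pageview');               // record each link resolver page load
```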

Google Analytics was designed for business. If someone spends a lot of time on a business site it’s good, but not necessarily for library sites. Brief interactions are considered to be bounces, which is bad for business, but longer times spent on a link resolver page could be a sign of confusion or frustration rather than success.

The base URL refers to several different pages the user interacts with. Google Analytics, by default, doesn’t distinguish them. This can hide some important usage and trends.

Using custom reports, you can tease out some specific pieces of information. This is where you can filter down to specific kinds of pages within the link resolver tool.

You can create views that will allow you to see what a set of IP ranges are using, which she used to filter to the use by computers in the library and computers not in the library. IP data is not collected by default, so if you want to do this, set it up at the beginning.

To learn where users were coming from to the link resolver, she created another custom report with parameters that would include the referring URLs. She also created a custom view that included the error parameter “SS_Error”. Some were from LibGuides pages, some were from the catalog, and some were from databases.

Ask specific and relevant questions of your data. Apply filters carefully and logically. Your data is a starting point to improving your service.

Google Analytics (3rd edition) by Ledford, Tyler, and Teixeira (Wiley) is a good resource, though it is business focused.

ER&L 2015 – Tuesday Short Talks: ERM topics

Leaf Rainbow
“Leaf Rainbow” by Maryann

Everything is Different: Easing the Pain of a Resource Transition
Speaker: Heather Greer Klein, NC Live

They license content as a core collection for all libraries on three year cycles, and have been doing this for the past 15 years. They also provide consultations, help desk, vendor liaisons, usage statistics, and other services.

They’ve had a 5.7% decrease in funding for materials over the past six years. There was a million dollar gap this year between their funding and the cost of existing licensed resources. The resource advisory committee evaluated the situation and came to the conclusion that they would need to change the main aggregator database for the first time in a decade. The NC Live staff had to make this transition as smoothly as possible.

They needed to get the change leaders on board. The advisory committee could talk with everyone in ways that the NC Live staff could not. They also needed to give as much lead time as possible, and were able to negotiate a six-month overlap between the two resources. The communication, however, should have begun well before the decision was made. They should have talked about the funding situation well in advance; some libraries were taken by surprise.

Transparency reduces anxiety and helps build confidence. They announced the change well before the transition process was outlined. They sent weekly updates with what was happening. But they needed a better plan for reaching frontline staff.

Communicate with patrons early and often; they used a splash page on the website to do that. They feel like they could have done more, and the libraries needed more support to translate the information to their users.

Partner with the vendors. The new vendors did a lot of outreach and training.

 

Serials Renewal Cycle – Doing it the SMU (a Different U) Way!
Speaker: Heng Kai Leong, Singapore Management University

They have been around for 15 years. The library was recently renovated, and they are primarily electronic and have more electronic collections than print. Most of their journals are from aggregators or big deals, the rest are through two subscription agents.

They had a staff member assigned to each of the agents for the ordering, claiming, receiving, binding, and other processes. Each year they did a collection evaluation review.

Now, they only do the collection evaluation every two years. In the off year they evaluate the agents, going with the one that offers the best cost savings. This has freed up staff time to do more to support the users. They are now using only one agent, on two-year terms.

They have a service level agreement from the agent to document the services and products they offer to the library. It’s also helpful for the staff handling the serials so they know what should be done by the agent. It required some negotiation with the agent. When they do the evaluation every two years, they require the agent to send the SLA terms in a template that allows for easy comparison. The quote must be in Excel (not PDF). There is an example of the content of the template in the slides.

 

Migrating to Intota – Updates and Dispatches from the Front
Speaker: Dani Roach, University of St. Thomas

They are in the middle of implementing the library services platform from ProQuest. CLIC is an eight-member consortium in St. Paul, MN. They had a shared ILS for a number of years, and that contract was ending. They began looking at options in 2013, and at that point they decided to get a next-generation ILS.

The two systems available at the time weren’t quite what they wanted, and the demo of Intota happened after that. Due to unknown factors, one of the consortium members pulled out and selected one of the other two systems at that time. At the end of 2013, Intota was selected by the CLIC board. An implementation team was formed, and contract negotiations were completed December 31, 2013. CLIC was the first academic consortium to subscribe.

Some libraries were long-time Serials Solutions customers; others had little or no discovery layer. The phase one implementation was setting up Summon for the consortium. The consortial implementation was a whole new creature compared to a single-site implementation. There were many choices that had to be made early on which had significant (and often unknown) impacts further down the road. This implementation was completed by June 2014, with continued revisions of how catalog data was ingested through January 2015.

Meanwhile, in July 2014 they began implementing the Assessment portion. Part of this involved mapping data from the ILS.

Ongoing has been the implementation of the knowledgebase/ERM. Each library needed to have all of their content in there. The new interface was made live in July 2014, bugs and all. Some new features are great, some old features are missed.

Next: acquisitions (including DDA), description (cataloging), and fulfillment (circulation). No plans yet for when those will begin.

The time it takes to do this is challenging because you still have to do your day to day work. Documenting the problems and fixes takes a lot of time. Keeping track of bugs and things is frustrating.

We want vendors to succeed because we want a variety of options. We need to be involved at the development level if we want that to happen.

ER&L 2015 – Did We Forget Something? The Need to Improve Linking at the Core of the Library’s Discovery Strategy

Linked
“Linked” by arbyreed

Speaker: Eddie Neuwirth, ProQuest

Linking is one of the top complaints of library users, and we’re relying on old tools to do it (OpenURL). The link resolver menu is not a familiar thing for our users, and many of them don’t know what to do with it. 30% of users fail to click the appropriate link in the menu (study from 2011).
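
For background (my sketch, not from the talk): an OpenURL is citation metadata packed into a query string, which the link resolver then has to translate into a working full-text link. A hypothetical example, with an invented resolver base URL and citation values:

```js
// Sketch of building an OpenURL (KEV format, Z39.88-2004) for a journal article.
// The resolver base URL and the citation values are hypothetical.
const baseUrl = 'https://resolver.example.edu/openurl';
const params = new URLSearchParams({
  ctx_ver: 'Z39.88-2004',
  rft_val_fmt: 'info:ofi/fmt:kev:mtx:journal',
  'rft.jtitle': 'Journal of Example Studies',
  'rft.atitle': 'An Example Article',
  'rft.issn': '1234-5678',
  'rft.volume': '42',
  'rft.issue': '3',
  'rft.spage': '101',
  'rft.date': '2011',
});
console.log(`${baseUrl}?${params.toString()}`);
// The resolver parses this metadata and presents a menu of full-text options,
// the menu that 30% of users navigate incorrectly per the 2011 study above.
```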

ProQuest tried to improve the 360 Link resolver, focusing on reliability and usability. They used something called index-enhanced direct linking (IEDL) in Summon (basically publisher data) that bypasses the OpenURL link resolver for content from 370 providers. These links are more intuitive and stable than OpenURL. This is great for Summon, where about 60% of links are IEDL, but discovery happens everywhere.

They also created a new sidebar helper frame to replace the old menu. The OpenURL will take them to the best option, but then the frame offers a clean view of other options and can be collapsed if not needed by the user. It also has the library branding, so that the user is able to connect that their access to the content is from the library, rather than just that Google is awesome.

 

Speaker: Jesse Koennecke, Cornell University

They are focusing on the delivery of content as well as the discovery. Brief demo of their side-by-side catalog and discovery search due to nifty API calls (bento box). Another demo of the sidebar helper frame from before, including the built-in problem report form.
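
A rough sketch of the bento-box idea (hypothetical endpoints and response shapes, not Cornell’s actual code): run the catalog and discovery searches in parallel and render each result set in its own compartment.

```js
// Hypothetical bento-box search: query a catalog API and a discovery API in
// parallel and return each result list to be rendered side by side.
// The example.edu endpoints and JSON shapes are assumptions for illustration.
async function bentoSearch(query) {
  const q = encodeURIComponent(query);
  const [catalog, discovery] = await Promise.all([
    fetch(`https://catalog.example.edu/api/search?q=${q}`).then((r) => r.json()),
    fetch(`https://discovery.example.edu/api/search?q=${q}`).then((r) => r.json()),
  ]);
  return { catalog, discovery }; // each rendered in its own "bento" compartment
}

bentoSearch('climate change').then(({ catalog, discovery }) => {
  console.log('Catalog results:', catalog);
  console.log('Discovery results:', discovery);
});
```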

 

Speaker: Jacquie Samples, Duke University

She does the website design for the Duke Libraries. They’ve done a lot of usability testing. The new website went out in the summer of 2014, and after that, they decided to look at their other services, like the link resolver. They came up with some custom designs for those screens, and ended up beta testing the new sidebar instead. They have a bento-box results page, too.

The FRBR user tasks matter and should be applied to discovery and access, too: find, identify, select, and obtain. We’re talking about obtaining here.
