Speakers: Doralyn Rossmann & Nathan Hosburgh, Montana State University
Proactive deselection is getting rid of things before you have to. It allows for meeting requests for new subscriptions, adjusting for curricular change, adjusting for research change, redirecting funds elsewhere, and reducing the budget if needed.
Step one: Identify core journals. This sets a positive tone. They created lists organized by LC class and provided them to the liaisons for departments. (They filtered out packages and JSTOR collections.) The communication with faculty varied by librarian, as did the type of feedback provided. This resulted in some requests for new subscriptions, and it enhanced the credibility of the library as good stewards.
They kept track of who said what in the feedback, so that if down the road that person left, they could revisit the titles.
Step two: Journal coverage in a unified discovery tool. They identified the compartmentalized/marginalized titles that were not included in the unified discovery tool index (report from vendor).
Step three: Database coverage in a unified discovery tool. It can be challenging to make sure the comparison is even. Also, what is a database versus a journal package with a searchable interface? It's not clear how they compared A&I information, since there is no good tool for that kind of overlap.
Step four: Usage statistics. Typical challenges (which COUNTER format, not COUNTER, no stats, changing platforms, etc.) along with timeliness and file format. This also identified resources that were not listed on the DB page.
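Wrangling those mismatched usage reports usually comes down to normalizing everything into one table. A minimal sketch of that consolidation step, assuming hypothetical JR1-style CSV exports with "Journal" and "Reporting Period Total" columns (real COUNTER files vary by release and platform):

```python
import csv
from collections import defaultdict

def consolidate_usage(paths):
    """Sum full-text requests per journal across several CSV exports.
    Titles are case-folded and trimmed so the same journal reported
    two ways collapses into one row. Column names are illustrative."""
    totals = defaultdict(int)
    for path in paths:
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                title = row["Journal"].strip().lower()
                totals[title] += int(row["Reporting Period Total"])
    return dict(totals)
```

The case-folding is what surfaces the "typical challenges" above: the same title arriving differently from two platforms, or a resource with stats that never showed up on the DB page.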
Step five: Coverage in A&I databases. This may help identify A&I sources you should add, but it’s time consuming and may not have big payoffs if you are emphasizing a discovery service as a primary search interface.
Step six: Coverage in aggregators or freely available. Can be risky, though.
Step seven: Other considerations. Impact factor — does it matter? Cost metrics, alternative access options like PPV or ILL, swappability in big deal packages.
Step eight: Feedback from liaisons. Get input on titles considered for cancellation. Share externally to make sure that everyone is on board and has time to comment.
Step nine: Do we have the right stuff? Review ILL statistics and compare with download stats (should be trending down as subscriptions go up). Citation studies, LibQual+, and liaison communication. Publicize what was added each year with freed funds, and which department requested it.
They plan to review this every year, and keep it updated with additions/deletions and coverage information. They are also considering the sustainability of high cost packages plus inflation.
Updates from Serials Solutions – mostly Resource Manager (Ashley Bass):
Keep up to date with ongoing enhancements for management tools (quarterly releases) by following answer #422 in the Support Center, and via training/overview webinars.
Populating and maintaining the ERM can be challenging, so they focused a lot of work this year on that process: license template library, license upload tool, data population service, SUSHI, offline date and status editor enhancements (new data elements for sort & filter, new logic, new selection elements, notes), and expanded and additional fields.
Workflow, communication, and decision support enhancements: in-context help linking, contact tool filters, navigation, new COUNTER reports, more information about vendors, COUNTER summary page, etc. Her favorite new feature is “deep linking” functionality (aka persistent links to records in SerSol). [I didn’t realize that wasn’t there before — been doing this for my own purposes for a while.]
Next up (in two weeks, 4th quarter release): new alerts, resource renewals feature (reports! and checklist!, will inherit from Admin data), Client Center navigation improvements (i.e. keyword searching for databases, system performance optimization), new license fields (images, public performance rights, training materials rights) & a few more, COUNTER updates, SUSHI updates (making customizations to deal with vendors who aren’t strictly following the standard), gathering stats for Springer (YTD won’t be available after Nov 30 — up to Sept avail now), and online DRS form enhancements.
In the future: license API (could allow libraries to create a different user interface), contact tools improvements, interoperability documentation, new BI tools and reporting functionality, and improving the Client Center.
Also, building a new KB (2014 release) and a web-scale management solution (Intota, also coming 2014). They are looking to have more internal efficiencies by rebuilding the KB, and it will include information from Ulrich’s, new content types metadata (e.g. A/V), metadata standardization, industry data, etc.
Summon Updates (Andrew Nagy):
I know very little about Summon functionality, so just listened to this one and didn’t take notes. Take-away: if you haven’t looked at Summon in a while, it would be worth giving it another go.
Goal #1: Allow users to easily link to full-text resources. Solution: Go beyond the out-of-the box 360 Link display.
Goal #2: Allow users to report problems or contact library staff at the point of failure. Solution: eresources problem report form
They created the eresources problem report form using Drupal. The fields include contact information, description of the resource, description of the problem, and the ability to attach a screenshot.
Some enhancements included: making the links to full-text (article & journal) buttons, hiding additional help information and surfacing some of it on hover, parsing the citation into the problem report page, and moving the citation below the links to full-text. For journal citations with no full-text, they made the links to the catalog search large buttons with more text detail in them.
One of the challenges of implementing these changes was the lack of a test environment, because of the limited preview capabilities in 360 Link. Any changes made required an overnight refresh before going live, opening the risk of 24-hour windows of broken resource links. So, they created their own test environment by turning test scenarios into static HTML files and wrapping them in their own custom PHP to mimic the live pages without having to work with the live pages.
[At this point, it got really techy and lost me. Contact the presenters for details if you’re interested. They’re looking to go live with this as soon as they figure out a low-use time that will have minimal impact on their users.]
Customizing 360 Link menu with jQuery (Laura Wrubel, George Washington University)
They wanted to give better visual cues for users, emphasize the full-text, have more local control over links, and achieve visual integration with other library tools so it’s more seamless for users.
They started with Reidsma’s code, then forked off from it. They added a problem link to a Google form, fixed ebook chapter links and citation formatting, created conditional links to the catalog, and linked to their other library’s link resolver.
They hope to continue to tweak the language on the page, particularly for ILL suggestion. The coverage date is currently hidden behind the details link, which is fine most of the time, but sometimes that needs to be displayed. They also plan to load the print holdings coverage dates to eliminate confusion about what the library actually has.
In the future, they would rather use the API and blend the link resolver functionality with catalog tools.
Custom document delivery services using 360 Link API (Kathy Kilduff, WRLC)
License information for course reserves for faculty (Shanyun Zhang, Catholic University)
Included course reserve in the license information, but then it became an issue to convey that information to the faculty who were used to negotiating it with publishers directly. Most faculty prefer to use Blackboard for course readings, and handle it themselves. But, they need to figure out how to incorporate the library in the workflow. Looking for suggestions from the group.
Advanced Usage Tracking in Summon with Google Analytics (Kun Lin, Catholic University)
Use of ERM/KB for collection analysis (Mitzi Cole, NASA Goddard Library)
Used the overlap analysis to compare print holdings with electronic and downloaded the report. The partial overlap can actually be a full overlap if the coverage dates aren’t formatted the same, but otherwise it’s a decent report. She incorporated license data from Resource Manager and print collection usage pulled from her ILS. This allowed her to create a decision tool (spreadsheet), and denoted the print usage in 5 year increments, eliminating previous 5 years use with each increment (this showed a drop in use over time for titles of concern).
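The 5-year-increment tally she described can be sketched as a trailing-window count; the 2011 reference year and the data shape are my assumptions, not hers:

```python
def usage_by_window(circ_years, current_year=2011, window=5, periods=3):
    """Count a title's circulations in successive trailing windows
    (e.g. 2007-2011, then 2002-2006, then 1997-2001). Each window
    excludes the more recent years already counted, so a title whose
    use is tailing off shows its smallest count in the newest window."""
    counts = []
    for i in range(periods):
        hi = current_year - i * window
        lo = hi - window + 1
        counts.append(sum(1 for y in circ_years if lo <= y <= hi))
    return counts
```

A title with circulations in 2010, 2005, 2004, 2000, 1999, and 1998 yields counts of 1, 2, and 3 across the three windows: exactly the drop-over-time pattern she used to flag titles of concern.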
Discussion of KnowledgeWorks Management/Metadata (Ben Johnson, Lead Metadata Librarian, SerialsSolutions)
After they get the data from the provider or it is made available to them, they have a system to automatically process the data so it fits their specifications, and then it is integrated into the KB.
They deal with a lot of bad data. 90% of databases change every month. Publishers have their own editorial policies that display the data in certain ways (e.g., title lists) and deliver inconsistent, and often erroneous, metadata. The KB team tries to catch everything, but some things still slip through. Through the data ingestion process, they apply rules based on past experience with the data source. After that, the data is normalized so that various title/ISSN/ISBN combinations can be associated with the authority record. Finally, the data is incorporated into the KB.
Authority rules are used to correct errors and inconsistencies. Rules automatically and consistently correct holdings, and they are often used to correct vendor reporting problems. Rules are codified by provider and database, with 76,000+ applied to thousands of databases, and 200+ new rules are added each month.
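As a sketch, an authority rule can be modeled as a predicate plus a fix, keyed to a provider/database pair; the rule and record shapes below are invented for illustration, not SerialsSolutions' internals:

```python
def apply_rules(record, provider, database, rules):
    """Run every correction rule registered for this provider/database
    pair. A rule is a (predicate, fix) pair; the fix returns an
    updated copy of the record."""
    for predicate, fix in rules.get((provider, database), []):
        if predicate(record):
            record = fix(record)
    return record

# Hypothetical vendor quirk: ISSNs delivered without the hyphen.
demo_rules = {
    ("ExampleHost", "Example Journals"): [
        (lambda r: "-" not in r["issn"] and len(r["issn"]) == 8,
         lambda r: {**r, "issn": r["issn"][:4] + "-" + r["issn"][4:]}),
    ]
}
```

Scoping rules to a provider/database pair is what lets one known quirk be fixed consistently every month without touching data from providers that report correctly.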
Why does it take two months for KB data to be corrected when I report it? Usually it’s because they are working with the data providers, and some respond more quickly than others. They are hoping that being involved with various initiatives like KBART will help fix data from the provider so they don’t have to worry about correcting it for us, but also making it easier to make those corrections by using standards.
Client Center ISSN/ISBN doesn’t always work in 360 Links, which may have something to do with the authority record, but it’s unclear. It’s possible that there are some data in the Client Center that haven’t been normalized, and could cause this disconnect. And sometimes the provider doesn’t send both print and electronic ISSN/ISBN.
What is the source for authority records for ISSN/ISBN? LC, Bowker, ISSN.org, but he’s not clear. Clarification: Which field in the MARC record is the source for the ISBN? It could be the source of the normalization problem, according to the questioner. Johnson isn’t clear on where it comes from.
His position is new for his library (July 2011), and when Barbara Fister saw the job posting, she lamented that user-centered collection development would relegate librarians to signing licenses and paying invoices, but Sowell doesn’t agree.
Values and assumptions: As an academic library, we derive our reason for existing from our students and faculty. Our collections are a means to an end, rather than an end in themselves. They can do this in part because they don’t have ARL-like expectations of themselves. A number of studies have shown that users do a better job of selecting materials than we do, and they’ve been moving to more of a just-in-time model than a just-in-case one.
They have had to deal with less money and many needs, so they’ve gotten creative. The university recently realigned departments and positions, and part of that included the creation of the Collections & Resource Sharing Department (CRSD). It’s nicknamed the “get it” department. Their mission is to connect the community to the content.
PDA, POV, PPV, approval plans, shelf-ready, and shared preservation are just a few of the things that have changed how we collect and do budget planning.
CRSD includes collection development, electronic resources, collections management, resource sharing & delivery, and circulation (refocusing on customer service and self-servicing, as well as some IT services). However, this is a new department, and Sowell speaks more about what these things will be doing than about what they are doing or how the change has been effective or not.
One of the things they’ve done is to rewrite position descriptions to refocus on the department goals. They’ve also been focusing on group facilitation and change management through brainstorming, parking lot, and multi-voting systems. Staff have a lot of anxiety over feeling like an expert in something and moving to where they are a novice and having to learn something new. They had to say goodbye to the old routines, mix them with new, and then eventually make the full shift.
They are using process mapping to keep up with the workflow changes. They’re also using service design tools like journey mapping (visualization of the user’s experience with a service), five whys, personas, experience analogy, and storyboards (visualization of how you would like things to occur).
For the reference staff, they are working on strategic planning about the roles and relationships of the librarians with faculty and collections.
Change takes time. When he proposed this topic, he expected to be further along than he is. Good communication, systems thinking, and staff involvement are very important. There is a delicate balance between the uncertain/abstract and the desire for the concrete.
Some unresolved issues include ereaders, purchasing rather than borrowing via ILL and the impact on their partner libraries, the role of the catalog as an inventory in the world of PDA/PPV, and the re-envisioning of the collection budget as a just-in-time resource. Stakeholder involvement and assessment wrap up the next-steps portion of his talk.
In moving print to the collection maintenance area, how are you handling bundled purchases (print + online)? How are you handling the impression of importance or lack thereof for staff who still work with traditional print collection management? Delicately.
Question about budgeting. Not planning to tie PDA/PPV to specific subjects. They plan to do an annual review of what was purchased and what might have been had they followed their old model.
How are they doing assessment criteria? Not yet, but will take suggestions. Need to tie activities to student academic success and teaching/researching on campus. Planning for a budget cut if they don’t get an increase to cover inflation. Planning to do some assessment of resource use.
What will you do if people can’t do their new jobs? Hopefully they will after the retraining. Will find a seat for them if they can’t do what we hope they can do.
What are you doing to organize the training so they don’t get mired in the transitional period? Met with staff to reassure them that the details will be worked out in the process. They prepared the ground a bit, and the staff are ready for change.
Question about the digital divide and how that will be addressed. Content is available on university equipment, so not really an issue/barrier.
What outreach/training to academic departments? Not much yet. Will honor print requests. Subject librarians will still have a consultative role, but not necessarily item by item selection.
His library has left the GWLA Springer/Kluwer and Wiley-Blackwell consortial deals, and a smaller consortial deal for Elsevier. The end result is a loss of access to a little under 2000 titles, but most of the titles had fewer than 1 download per month in the year prior to departure. So, they feel that ILL is a better price than subscription for them.
Because of the hoops users must jump through for ILL, he thinks those requests indicate more of a real need than downloads of content available directly to the user. Because they retain archival access, withdrawing from the deals only impacts current volumes, and the time period has been too short to truly determine the impact, as they left the deals in 2009 and 2010. However, his conclusion based on the low ILL requests is that the download stats are not accurate due to incidental use, repeat use, convenience, and linking methods.
The other area of impact is reaction and response, and so far they have had only three complaints. It could be because faculty are sympathetic, or it could be because they haven’t needed the current content, yet. They have used this as an opportunity to educate faculty about the costs. They also opened up cancellations from the big publishers, spreading the pain more than they could in the past.
In the end, they saved the equivalent of half their monograph budget by canceling the big deals and additional serials. Will the collection be based on the contracts they have or by the needs of the community?
Moving forward, they have hit some issues. One is that a certain publisher will impose a 25% content fee to go title by title. Another issue is that title by title purchasing put them back at the list price which is much higher than the capped prices they had under the deal. They were able to alleviate some issues with negotiation and agreeing to multi-year deals that begin with the refreshed lists of titles.
The original GWLA deal with Springer allowed for LOCKSS as a means of archival access. However, Springer took the stance that they would not work with LOCKSS, so the lawyers got involved over the apparent breach of contract. In the end, Springer agreed to abide by the terms of the contract and make their content available for LOCKSS harvesting.
Make sure you address license issues before the end of the terms.
Speaker: David Fowler
They left the Elsevier and Wiley deals for their consortia. They have done cost-saving measures in the past, like eliminating duplication of format and high-cost & low-use titles, but in the end, they had to consider their big deals.
The first thing they eliminated was the pay per use access to Elsevier due to escalating costs and hacking abuse. The second thing they did was talk to OSU and PSU about collaborative collection development, including a shared collection deal with Elsevier. Essentially, they left the Orbis Cascade deal to make their own.
Elsevier tried to negotiate with the individual schools, but they stood together and were able to reduce the cancellations to 14% due to a reduced content fee. So far, the 2 year deal has been good, and they are working on a 4 year deal, and they won’t exceed their 2009 spend until 2014.
They think that ILL increase has more to do with WorldCat Local implementation, and few Elsevier titles were requested. Some faculty are concerned about the loss of low use high cost titles, so they are considering a library mediated pay-per-view option.
The Wiley deal was through GWLA, and when it came to the end, they determined that they needed to cancel titles that were not needed anymore, which meant leaving the deal. They considered going the same route they did with Elsevier, but were too burnt out to move forward. Instead, they have a single-site enhanced license.
We cannot continue to do business as usual. They expect to have to do a round of cancellations in the future.
Speaker: Greg Raschke
Raschke started off with several assumptions about the future of library collections. These should not be a surprise to anyone who’s been paying attention: The economics of our collections is not sustainable – the cost and spend has gone up over the years, but there is a ceiling to funding, so we need to lower the costs of the entire system. We’re at a tipping point where just in case no longer delivers at the point of need. We must change the way we collect, and it will be hard, but not impossible.
The old system of supply-side collection development assumes that we’re working with limited resources (i.e. print materials), so we have to buy everything just in case someone needs it 10 years down the road when the book/journal/whatever is out of print. As a result, we judge the quality of a collection by its size, rather than by its relevance to the users. All of this contributes to an inelastic demand for journals and speculative buying.
The new system of demand-driven collections views them as drivers of research and teaching. It’s not really a new concept so much as a new workflow. There’s less tolerance for investing in a low-use collection, so there is an increase in the importance of use data and in modifying what we collect based on that use data. The risks of not evolving and failing to innovate can be seen in the fate of the newspapers, many of which held onto the old systems for too long and are dying or becoming irrelevant as a result.
Demand-driven collection development can create a tension between the philosophy of librarians as custodians of scholarship and librarians as enablers of a digital environment for scholars. Some think that this type of collection development may result in lower unit costs, but the reality is that unless the traditions of tenure and promotion change, the costs of publishing scholarly works will not go down. One of the challenging/difficult aspects of demand-driven collection development is that we won’t be getting new funds to do it – we must free funds from other areas in order to invest in these new methods (i.e. local digital production and patron-driven acquisitions).
The rewards of adapting are well worth it. The more our constituencies use the library and its resources, the more vital we become. Look at your data, and then bet on the numbers. Put resources into enabling a digital environment for your scholars.
Demand-driven collection development is not just patron-driven acquisitions! It’s about becoming an advanced analyst and increasing the precision in collection development. For NCSU’s journal review, they look at downloads, impact factors, publications by NCSU authors, publications that cite NCSU authors, and gather feedback from the community. These bibliometrics are processed through a variety of formulas to standardize them for comparison and to identify outliers.
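NCSU's actual formulas weren't shared, but "standardize for comparison and identify outliers" is essentially a z-score exercise; a minimal sketch standing in for whatever they really use:

```python
from statistics import mean, stdev

def zscores(values):
    """Put one metric (downloads, cost per use, citations, ...) on a
    standard scale so metrics with different units can be compared
    side by side; titles far from zero in either direction are the
    outliers worth a closer look."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]
```

Running each metric through this separately, then lining up the scores per journal, is one straightforward way to spot a title that is cheap but unused, or heavily cited but rarely downloaded.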
For print resources, they pulled circulation and bibliographic information out of their ILS and dropped it into SAS to assess the use of these materials over time. It was eye-opening to see what subject areas saw circulation greater than one over 10 years from the year they were added to the collection and those that saw no circulations. As a result, they were able to identify funds that could go towards supporting other areas of the collection, and they modified the scopes of their approval profiles. [A stacked graph showing the use of their collection, such as print circulation, ejournals/books downloads, reserves, and ILL has been one of their most popular promotional tools.]
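The print-use split he described (circulated more than once within ten years of being added, versus never circulated) can be sketched like this; the (added_year, circ_years) record shape is my assumption about what comes out of the ILS, not theirs:

```python
def split_by_use(titles):
    """Partition titles into those with more than one circulation in
    the ten years after they were added and those that never
    circulated. Titles in between (one use, or only late use) fall
    into neither bucket."""
    used, unused = [], []
    for added, circ_years in titles:
        in_window = [y for y in circ_years if added <= y <= added + 10]
        if len(in_window) > 1:
            used.append((added, circ_years))
        elif not circ_years:
            unused.append((added, circ_years))
    return used, unused
```

Grouping the two buckets by fund or LC class is what turns this into the budget and approval-profile evidence the talk describes.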
As we shift to a demand-driven collection development approach, we will better be able to provide content at the point of need. This includes incorporating more than just our local collections (i.e. adding HathiTrust and other free resources to our catalog). Look to fund patron-driven acquisitions that occur both in the ebook purchasing models and through ILL requests. Integrate electronic profiling with your approval plans so that you are not just looking at purchasing print. Consider ebook packages to lower the unit costs, and use short-term loans for ebooks as an alternative to ILL. Get content to users in the mode they want to consume it. Do less speculative buying, and move money into new areas. It is imperative that libraries/librarians collaborate with each other in digital curation, digital collections, and collective bargaining for purchases.
There are challenges, of course. You will encounter the CAVE people. Data-driven and user-driven approaches can punish niche areas, disciplinary variation, and resources without data. The applications and devices we use to interact with digital content are highly personalized, which is a challenge for standardizing access.
I asked Raschke to explain how he evaluates resources that don’t have use data, and he says he’s more likely to stop buying them. For some resources, he can look at proxy logs and whether they are being cited by authors at his institution, but otherwise there isn’t enough data beyond user feedback.