Presenters: Steven R. Harris and Molly Beisler, University of Nevada, Reno
Evolution doesn’t happen in slow increments. Moments of punctuation happen quite suddenly. Ebooks are kind of like that in the evolution of the book.
In 2005, they were putting all formats on one record, manually updating the electronic content. As the quantity of ebooks increased, and the various licensing terms expanded, they were struggling to maintain this. In 2008, they began batch loading large numbers of eresource records, with one person maintaining QA and merging records.
Then discovery services came in like an asteroid during the dinosaur age. They finally shifted from single record to separate records. They began tracking/coding ebooks to distinguish DDA from purchased, and expanded the ERM to track SU and other terms. This also prompted another staff reorganization.
They developed workflows via Sharepoint for new eresources depending on the resource type: subscriptions/standing orders, one-time purchases with annual fees, and one-time purchases without annual fees. The streaming video packages fit okay in this workflow.
Streaming media has more complex access and troubleshooting issues. Platforms are variable, and plugins may not be compatible. There are also many different access models (DDA, EBA), and many come with short-term licenses. They feel the organizational structure can support them as they figure out how to manage these.
They use a LibAnswers queue to track the various eresource problems that come up.
Reiteration of the current library technology climate for eresources, with various challenges. No solutions.
The future comes with new problems due to next-gen ILS and their workflow quirks. With the industry consolidation, will everybody’s products work well with each other or will it become like the Amazon device ecosystem? Changing acquisitions models are challenges for collection development.
Goal is to be more of a dialogue than a monologue.
In 2011, they were a traditional acquisitions and cataloging department. They had 18.1 FTE in technical services, with 8 acquisitions people working on both print and electronic, and 5 in cataloging. It felt very fragmented.
They were getting more eresources but no new staff. Less print, but staff weren’t interchangeable. The hybrid positions weren’t working well, and print was still seen as a priority by some of the staff. Staff could see the print backlogs, which made it seem like those had to be dealt with first.
They hired consultants and decided to create two format-based teams: tangible formats and electronic resources. They defined the new positions and asked staff for their preferences, and then assigned staff to one team or the other. The team leads are focused on the cataloging side and the acquisitions side, rather than on format.
To implement this they: oriented and trained staff; created workflow teams for ejournals, ebooks, and databases; talked with staff extensively; tried to be as transparent as possible; and hired another librarian.
They increased the FTE working on eresources, and they could use more, but this is good enough for now.
Some of the challenges include: staff buy-in and morale; communicating who does what to all the points of contact; workflows for orders with dual formats; budget structure (monographs/serials, with some simplification where possible, but still not tangible/electronic); and documentation organization (documenting isn’t the problem — finding it is).
The benefits are: staff focusing on a single format; bringing acquisitions and cataloging together (better communication between functions); easier cross-training opportunities; more easily streamlined workflows; and ease in planning and setting priorities.
Used TERMS as a framework for developing an eresources team and a course at University of Wisconsin.
How are we going to systematically ensure that our eresources knowledge evolves and continues?
75% of UConn’s budget is for e-content. Staffing was 3.25 FTE when she arrived, but they are now at 5.65 FTE. The only smaller unit is the digital scholarship and data curation team, created a year ago.
Why, in 2014, does collection development still exist as a term for non-electronic monograph acquisitions handled by non-ERM staff? How do we establish eresources teams and teach this to staff?
TERMS helps build a framework for discussion among her students and her work team. The Core Competencies was used for class reading and discussions with her team, and became a framework for submitting training requests. TERMS has been a lighthouse for them, and they’ve continued to go back to them and review the cyclical process to identify successes and areas for improvement.
Only 19% of the ALA accredited LIS programs cover ERM topics, yet 73% of recent job ads require ERM competencies.
The financial resources are being allocated, but what about the human resources to do our work? Eresources positions are not entry-level, and yet spending on that content is increasing. How can we expand/grow the ERM skill-set to more of our staff positions? This is not a new problem. We’ve been talking about this as a profession since 2000 or earlier.
The Core Competencies should be for the entire library, not just the ERM staff.
We need to eliminate the delineation between print and electronic management/acquisitions.
Establish partnerships with LIS programs. Establish paid fellowships that are at least two fiscal years in length. Get support from library administrators for adequate staffing and the time to teach courses, etc.
Good strategies for training staff: Listening to them and knowing what they already know how to do. Making analogies from what you know to what they know. Small chunks at a time.
Are the NASIG Core Competencies a laundry list of the ideal rather than true core competencies that can be expected at the beginning of an ERM career? No. The point is that no one person can do everything ERM. But, these are the things that are needed to manage eresources, regardless of how many people it takes to do it.
Audience member says she had to fail badly with only two staff in order to get the change needed to have a sufficient number of people on her team.
This is going to be long and not my usual style of conference notetaking. Because this was an unconference, there really wasn’t much in the way of prepared presentations, except for the lightning talks in the morning. What follows below the jump is what I captured from the conversations, often simply questions posed that were left open for anyone to answer, or at least consider.
One of the good aspects of the unconference style was the free-form nature of the discussions. We generally stayed on topic, but even when we didn’t, the tangents were about relevant or important things, so there were still plenty of takeaways. However, this format also requires someone present who is prepared to seed the conversation if it lulls or dies and no one steps in to start a new topic.
Also, if a session is designed to be a conversation around a topic, it will fall flat if it becomes all about one person or the quirks of their own institution. I had to work pretty hard on that one during the session I led, particularly when it seemed that the problem I was hoping to discuss wasn’t an issue for several of the folks present because of how they handle the workflow.
Some of the best conversations I had were during the gathering/breakfast time as well as lunch, lending even more to the unconference ethos of learning from each other as peers.
This is about losing staff to retirement, and not about losing staff to death, which is similar but different.
They started as one librarian and six staff, and now two of them have retired and have not been replaced. This is true of most of technical services, where staff were not replaced or shifted to other departments.
The staff she lost were key to helping run the department, often filling in when she was out for extended leaves. They were also the only experienced support staff catalogers.
The stages:
Shock and denial
Pain and guilt
Anger and bargaining
Depression, reflection, loneliness
Upward turn
Reconstruction and working through
Acceptance and hope
The pain went beyond friends leaving, because they also lost a lot of institutional memory and the workload was spread across the remaining staff. They couldn’t be angry at the staff who left, and they couldn’t bargain except to let administrators know that with fewer people, not all of the work could be continued and there may be some backlogs.
However, this allowed them to focus on the reflection stage and assess what may have changed about the work in recent years, and how that could be reflected in the new unit responsibilities. The serials universe is larger and more complex, with diverse issues that require higher-level understanding. There are fewer physical items to manage, and they don’t catalog as many titles anymore, with most of them being for special collections donations.
They are still expected to get the work done, despite having fewer staff, and if they got more staff, they would need more than one to handle it all. Given the options, she decided to take the remaining staff in the unit who have a lot of serials-related experience and train them up to handle the cataloging as well, as long as they were willing to do it.
In the end, they re-wrote the positions to be the same, with about half focused on cataloging and the rest with the other duties rotated through the unit on a monthly basis.
They have acceptance and hope, with differing levels of anxiety among the staff. The backlogs will grow, but as they get more comfortable with the cataloging they will catch up.
What worked in their favor: they had plenty of notice, giving them time to plan and prepare, and do some training before the catalogers left.
One of the recommended coping strategies was for the unit head to be as available as possible for problem solving. They needed clear priorities with documented procedures that are revised as needed. The staff also needed to be willing to consult with each other. The staff also needed to be okay with not finishing everything every day, and that backlogs will happen.
They underestimated the time needed for problem-solving, and need to provide more training about basic cataloging as well as serials cataloging specifically. There is always too much work with multiple simultaneous demands.
She is considering asking for another librarian, even if only on a term basis, to help catch up on the work. There is also the possibility of another reorganization or having someone from cataloging come over to help.
[lovely quote at the end that I will add when the slides are uploaded]
Began in 2008 after a new director and consultant group came in and recommended a reorganization. They had some trouble deciding which larger group electronic resource management should be a part of, and ended up on Information Delivery Services, which includes Acquisitions, Cataloging, and Access Services. The ERM unit used to include acquisitions, cataloging, and a service point. By moving the cataloging functions out (and closing the service point), the group could then focus on access and discovery systems (eresource management, licensing). During the same time, they also moved a huge chunk of bound journal volumes to storage to create student spaces.
Focused on moving away from redundancy across different systems, and moving towards cloud-based unified knowledgebases that populated all user interfaces.
Most serials are now electronic, and they are increasingly being tasked to acquire new forms of eresources. Needed to change some workflow models to incorporate ebook acquisitions and management, for example. They are now starting to work more with Acquisitions and Cataloging for those workflows. Large data sets will be the next challenge.
Focusing more on discovery access and assessment, which had been on the back burner. This requires shifting more of the workflow out of the unit.
Training and skill building in ERM techniques include: ERM “class” to orient to role in the library, trouble-shooting access issues, e-resource forums for other tech services staff taught by members of the ERM unit, vendor training sessions, cross-training within the unit, annual evaluation of responsibilities to determine what could be delegated to a specialist (make sure they are interested in it and it is appropriate for them to do), project prioritization, and relevant committee service.
Cataloging has been overwhelmed with legacy print projects, so incorporating ERM work has been challenging. Acquisitions staffing has been disproportionately weighted towards print, so moving more of the ebook process in is a solution and a challenge. Training circ/service point staff to handle basic questions about eresource access issues.
They are using CORAL resources module for tracking ebook workflows.
[Would really like to have a session like this focus on examples more than challenges and things they still need to do. I want to know job descriptions/responsibilities and examples of workflows for different resources.]
Speaker: Christine Korytnyk Dulaney, American University
Staff didn’t talk to each other about work, so they had to make some changes in communication and give them a broader view of the workflow (i.e. how each thing impacted another). They used some project management techniques to begin this process, and it helped them finish the project where they have a history of not doing so. The fundamental concepts of PM can be scaled down to any kind of project. [The presenter goes into this, but you probably have lots of books in your library that cover it.] One advantage of PM is that it focuses on the work and diffuses the emotion that can come from making changes.
Speakers: Jennifer Bazeley (Miami University) & Nancy Beals (Wayne State University)
Despite all the research on what we need/want, no one is building commercial products that meet all our needs and address the impediments of cost and dwindling staff.
Beals says that the ERM is not used for workflow, so they needed other tools, with a priority on project management and Excel proficiency. They use an internal listserv, UKSG Transfer, Trello (project management software), and a blog, to keep track of changes in eresources.
Other tools for professional productivity and collaboration: iPads with Remember the Milk or Evernote, Google spreadsheets (project portfolio management organization-wide), and LibGuides.
Bazeley stepped into the role of organizing eresources information in 2009, with no existing tool or hub, which gave her room to experiment. For documentation, they use PBWiki (good for version tracking, particularly to correct errors) with an embedded departmental Google calendar. For communication, they use LibGuides for internal documents, and you can embed RSS, Google Docs, Yahoo Pipes aggregating RSS feeds, Google forms for eresource access issues, links to Google spreadsheets with usage data, etc. For login information, they use KeePass Password Safe. Rather than claiming in the ILS, they’ve moved to using the claim checker tool from the subscription agent.
Speaker: Susan Stearns, VP of Strategic Partnerships of Ex Libris Group
Both the library’s share of university expenditures and the number of library staff per student have been going down. The percentage of library expenditures spent on electronic resources has been going up dramatically.
There is a need to eliminate the duplication of data and workflows, and the silo systems in libraries today. Alma intends to unify both the data and the data environment: acquisitions, metadata management, fulfillment, and analytics.
Collaborative metadata management is a hybrid model to balance global sharing with local needs. In English, this means you can have a catalog that includes both an inventory of locally owned items and a collection of items shared by one or more “communities.” Multiple metadata schema are supported within the system in their native formats — no crosswalks required.
Individual library staff users can set up “home pages” within the system that include widgets with data, alerts, and reports. This can help with making decisions about the collection. Analytics are also embedded directly in the workflow (i.e. a graph representing the balance remaining in a fund displayed when an order using that fund is viewed/entered).
Speaker: Maria Bunevski, Ex Libris
Preparation for moving to a new system, particularly a radically new system like Alma, requires spending some time thinking about workflows, data, technical aspects (integration points, etc.), and training.
Project initiation phase requires a lot of training sessions to fully grasp all of the change that needs to happen.
The implementation phase involves a mix of on-site work and remote tweaking. At some point work has to freeze in the old system before cutting over to the new one.
VCU is currently in the post-implementation phase. This is the point where un-configured things are discovered, along with gaps in workflow.
Speaker: John Duke, VCU Libraries
They had Aleph, SFX, Verde, MetaLib, Primo, ARC, ILLiad, university systems, etc. before, and they wanted to bring the functions together. They didn’t end up with a monolithic system for everything, but they got closer.
Workflows and other aspects have been simplified.
The system is not complete, either because Ex Libris hadn’t thought of it or because VCU hasn’t figured out how to incorporate it. Internet outages, security issues, and conceptual difficulties have thrown up road blocks along the way.
Updates from Serials Solutions – mostly Resource Manager (Ashley Bass):
Keep up to date with ongoing enhancements for management tools (quarterly releases) by following answer #422 in the Support Center, and via training/overview webinars.
Populating and maintaining the ERM can be challenging, so they focused a lot of work this year on that process: license template library, license upload tool, data population service, SUSHI, offline date and status editor enhancements (new data elements for sort & filter, new logic, new selection elements, notes), and expanded and additional fields.
Workflow, communication, and decision support enhancements: in context help linking, contact tool filters, navigation, new Counter reports, more information about vendors, Counter summary page, etc. Her favorite new feature is the “deep linking” functionality (aka persistent links to records in SerSol). [I didn’t realize that wasn’t there before — been doing this for my own purposes for a while.]
Next up (in two weeks, 4th quarter release): new alerts, resource renewals feature (reports! and checklist!, will inherit from Admin data), Client Center navigation improvements (i.e. keyword searching for databases, system performance optimization), new license fields (images, public performance rights, training materials rights) & a few more, Counter updates, SUSHI updates (making customizations to deal with vendors who aren’t strictly following the standard), gathering stats for Springer (YTD won’t be available after Nov 30 — up to Sept avail now), and online DRS form enhancements.
In the future: license API (could allow libraries to create a different user interface), contact tools improvements, interoperability documentation, new BI tools and reporting functionality, and improving the Client Center.
Also, building a new KB (2014 release) and a web-scale management solution (Intota, also coming 2014). They are looking to have more internal efficiencies by rebuilding the KB, and it will include information from Ulrich’s, new content types metadata (e.g. A/V), metadata standardization, industry data, etc.
Summon Updates (Andrew Nagy):
I know very little about Summon functionality, so just listened to this one and didn’t take notes. Take-away: if you haven’t looked at Summon in a while, it would be worth giving it another go.
360 Link Customization via JavaScript and CSS (Liz Jacobson & Terry Brady, Georgetown University):
Goal #1: Allow users to easily link to full-text resources. Solution: Go beyond the out-of-the box 360 Link display.
Goal #2: Allow users to report problems or contact library staff at the point of failure. Solution: eresources problem report form
They created the eresources problem report form using Drupal. The fields include contact information, description of the resource, description of the problem, and the ability to attach a screenshot.
When they evaluated the slightly customized out-of-the-box 360 Link page, they determined that it was confusing to users, with too many options and confusing links. So, they took some inspiration from other libraries (Matthew Reidsma’s GVSU jQuery code available on Github) and developed a prototype that uses custom JavaScript and CSS to walk the user through the process.
Some enhancements included: making the links for full-text (article & journal) buttons, hiding additional help information and giving some hover-over information, parsing the citation into the problem report page, and moving the citation below the links to full-text. For journal citations with no full-text, they made the links to the catalog search large buttons with more text detail in them.
One of the challenges of implementing these changes is the lack of a test environment, due to the limited preview capabilities in 360 Link. Any changes made required an overnight refresh and then went live, opening the risk of 24-hour windows of broken resource links. So, they created their own test environment by converting test scenarios into static HTML files and wrapping them in their own custom PHP to mimic the live pages without having to work with the live pages.
[At this point, it got really techy and lost me. Contact the presenters for details if you’re interested. They’re looking to go live with this as soon as they figure out a low-use time that will have minimal impact on their users.]
Customizing 360 Link menu with jQuery (Laura Wrubel, George Washington University)
They wanted to give better visual cues for users, emphasize the full-text, have more local control over links, and achieve visual integration with other library tools so it’s more seamless for users.
They started with Reidsma’s code, then forked off from it. They added a problem link to a Google form, fixed ebook chapter links and citation formatting, created conditional links to the catalog, and linked to their other library’s link resolver.
They hope to continue to tweak the language on the page, particularly for ILL suggestion. The coverage date is currently hidden behind the details link, which is fine most of the time, but sometimes that needs to be displayed. They also plan to load the print holdings coverage dates to eliminate confusion about what the library actually has.
In the future, they would rather use the API and blend the link resolver functionality with catalog tools.
Custom document delivery services using 360 Link API (Kathy Kilduff, WRLC)
They facilitate inter-consortial loans (Consortium Loan Service), and originally requests were only done through the catalog. When they started using SFX, they added a link there, too. Now that they have 360 Link, they still have a link there, but now the request form is prepopulated with all of the citation information. In the background, they are using the API to gather the citation information, as well as checking to see if there are terms of use, and then checking to see if there are ILL permissions listed. They provide a link to the full-text in the staff client developed for the CLS if the terms of use allow for ILL of the electronic copy. If there isn’t a copy available in WRLC, they forward the citation information to the user’s library’s ILL form.
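The routing logic described above — surface the full-text link when the license permits ILL of the electronic copy, otherwise fall back — can be sketched in Python. The function and field names here are invented for illustration, not WRLC's actual implementation:

```python
def route_cls_request(citation, holding):
    """Decide how to handle a consortial loan request for a citation.

    `holding` is None when no consortium library has an electronic copy;
    otherwise it carries hypothetical license terms gathered via the API.
    """
    if holding is None:
        # No copy in the consortium: forward the citation to the
        # user's own library's ILL form.
        return ("forward_to_ill", citation)
    if holding.get("ill_electronic_allowed"):
        # Terms of use permit ILL of the electronic copy:
        # show the full-text link in the staff client.
        return ("staff_fulltext_link", holding["url"])
    # A copy exists but the license is silent or restrictive:
    # route the request for staff mediation instead.
    return ("mediated_request", citation)
```

The useful part of the pattern is that the license check happens in the background, so the requesting user only ever sees a prepopulated form.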
License information for course reserves for faculty (Shanyun Zhang, Catholic University)
Included course reserve in the license information, but then it became an issue to convey that information to the faculty who were used to negotiating it with publishers directly. Most faculty prefer to use Blackboard for course readings, and handle it themselves. But, they need to figure out how to incorporate the library in the workflow. Looking for suggestions from the group.
Advanced Usage Tracking in Summon with Google Analytics (Kun Lin, Catholic University)
In order to tweak the user experience, you need to know who, what, when, how, and most importantly, what were they thinking. Google Analytics can help figure those things out in Summon. Parameters are easy ways to track facets, and you can use the data from Google Analytics to piece together the story from that. To track things the “hard way,” you can use the conversion/goal function of Google Analytics. But you’ll need to know a little about coding to make it work, because you have to add some JavaScript to your Summon pages.
Use of ERM/KB for collection analysis (Mitzi Cole, NASA Goddard Library)
Used the overlap analysis to compare print holdings with electronic and downloaded the report. The partial overlap can actually be a full overlap if the coverage dates aren’t formatted the same, but otherwise it’s a decent report. She incorporated license data from Resource Manager and print collection usage pulled from her ILS. This allowed her to create a decision tool (spreadsheet), and denoted the print usage in 5 year increments, eliminating previous 5 years use with each increment (this showed a drop in use over time for titles of concern).
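The coverage-date pitfall she mentions — a “partial” overlap that is really a full overlap because the dates aren’t formatted the same — comes down to normalizing the coverage strings before comparing them. A minimal sketch, with made-up data and not her actual spreadsheet logic:

```python
import re

def normalize_year(date_str):
    """Pull a four-digit start year out of a variously formatted coverage string."""
    m = re.search(r"\d{4}", date_str)
    return int(m.group()) if m else None

def overlap_type(print_cov, e_cov):
    """Classify print-vs-electronic overlap after normalizing both
    coverage strings down to comparable start years."""
    p, e = normalize_year(print_cov), normalize_year(e_cov)
    if p is None or e is None:
        return "unknown"
    # Electronic coverage starting at or before the print start year
    # means the print run is fully covered.
    return "full" if e <= p else "partial"
```

Without the normalization step, "1990-" and "Jan 1990 to present" compare as different strings and the title is misreported as a partial overlap.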
Discussion of KnowledgeWorks Management/Metadata (Ben Johnson, Lead Metadata Librarian, SerialsSolutions)
After they get the data from the provider or it is made available to them, they have a system to automatically process the data so it fits their specifications, and then it is integrated into the KB.
They deal with a lot of bad data. 90% of databases change every month. Publishers have their own editorial policies that display the data in certain ways (e.g., title lists) and deliver inconsistent, and often erroneous, metadata. The KB team tries to catch everything, but some things still slip through. Through the data ingestion process, they apply rules based on past experience with the data source. After that, the data is normalized so that various title/ISSN/ISBN combinations can be associated with the authority record. Finally, the data is incorporated into the KB.
Authority rules are used to correct errors and inconsistencies. Rules automatically and consistently correct holdings, and they are often used to correct vendor reporting problems. Rules are codified by provider and database, with 76,000+ applied to thousands of databases, and 200+ new rules added each month.
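A toy version of that pipeline — normalize incoming identifiers, then apply whatever per-provider correction rules have been codified for the source — might look like this. The rules, field names, and data are invented for illustration; SerialsSolutions’ actual system is far larger:

```python
import re

def normalize_issn(raw):
    """Strip punctuation/spacing and uppercase the check digit:
    '0028 0836' -> '0028-0836'."""
    digits = re.sub(r"[^0-9Xx]", "", raw)
    return f"{digits[:4]}-{digits[4:].upper()}" if len(digits) == 8 else None

# Hypothetical codified rules, keyed by (provider, database).
RULES = {
    ("ProviderA", "MegaDB"): [
        # This vendor reports the print ISSN in the e-ISSN column; swap them.
        lambda rec: {**rec, "eissn": rec.get("pissn"), "pissn": rec.get("eissn")},
    ],
}

def ingest(record, provider, database):
    """Normalize identifiers, then apply any rules codified for this source."""
    rec = dict(record)
    for key in ("pissn", "eissn"):
        if rec.get(key):
            rec[key] = normalize_issn(rec[key])
    for rule in RULES.get((provider, database), []):
        rec = rule(rec)
    return rec
```

The point of keying rules by provider and database is that the same correction applies automatically on every monthly refresh, rather than being re-fixed by hand.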
Why does it take two months for KB data to be corrected when I report it? Usually it’s because they are working with the data providers, and some respond more quickly than others. They are hoping that being involved with various initiatives like KBART will help fix data from the provider so they don’t have to worry about correcting it for us, but also making it easier to make those corrections by using standards.
Client Center ISSN/ISBN doesn’t always work in 360 Links, which may have something to do with the authority record, but it’s unclear. It’s possible that there are some data in the Client Center that haven’t been normalized, and could cause this disconnect. And sometimes the provider doesn’t send both print and electronic ISSN/ISBN.
What is the source for authority records for ISSN/ISBN? LC, Bowker, ISSN.org, but he’s not clear. Clarification: Which field in the MARC record is the source for the ISBN? It could be the source of the normalization problem, according to the questioner. Johnson isn’t clear on where it comes from.
Speakers: Ladd Brown, Andi Ogier, and Annette Bailey, Virginia Tech
Libraries are not about the collections anymore, they’re about space. The library is a place to connect to the university community. We are aggressively de-selecting, buying digital backfiles in the humanities to clear out the print collections.
Guess what? We still have our legacy workflows. They were built for processing physical items. Then eresources came along, and there were two parallel processes. Ebooks have the potential of becoming a third process.
Along with the legacy workflows, they have a new Dean, who is forward thinking. The Dean says it’s time to rip off the bandaid. (Titanic = old workflow; iceberg = eresources; people in life boats = technical resources team) Strategic plans are living documents kept on top of the desk and not in the drawer.
With all of this in mind, acquisitions leaders began meeting daily in a group called Eresources Workflow Weekly Work, planning the changes they needed to make. They did process mapping with sharpies, post-its, and incorporated everyone in the library that had anything to do with eresources. After lots of meetings, position descriptions began to emerge.
Electronic Resource Supervisor is the title of the former book and serials acquisitions heads. The rest — wasn’t clear from the description.
They had a MARC record service for ejournals, but after this reorganization process they realized they needed the same for ebooks, which could be handled by the same folks.
Two person teams were formed based on who did what in the former parallel processes, and they reconfigured their workspace to make this more functional. The team cubes are together, and they have open collaboration spaces for other groupings.
They shifted focus from maintaining MARC records in their ILS to maintaining accurate title lists and data in their ERMS. They’re letting the data from the ERMS populate the ILS with appropriate MARC records.
They use some Python scripts to help move data from system to system, and more staff are being trained to support it. They’re also using the Google Apps portal for collaborative projects.
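A minimal sketch of the kind of glue script that might move title data from an ERMS export toward ILS-ready records — the file layout and field names here are assumptions for illustration, not Virginia Tech’s actual scripts:

```python
import csv
import io

def titles_to_marcish(csv_text):
    """Turn an ERMS title-list CSV export into minimal dict 'records'
    that a downstream loader could map onto MARC fields (245, 856, etc.)."""
    records = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        records.append({
            "title": row["title"].strip(),
            "url": row["url"].strip(),
            # Empty ISSN cells become None so the loader can skip the field.
            "issn": row.get("issn", "").strip() or None,
        })
    return records
```

Scripts like this are what lets the ERMS title list, rather than hand-edited ILS records, act as the source of truth described above.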
They wanted to take risks, make mistakes, fail quickly, but also see successes come quickly. They needed someplace to start, and to avoid reinventing the wheel, so they borrowed heavily from the work done by colleagues at James Madison University. They also hired Carl Grant as a consultant to ask questions and facilitate cross-departmental work.
Big thing to keep in mind: Administration needs to be prepared to allow staff to spend time learning new processes and not keeping up with everything they used to do at the same time. And, as they let go of the work they used to do, please tell them it was important or they won’t adopt the new work.