Speaker: Susan Stearns, VP of Strategic Partnerships at Ex Libris Group
Both library expenditures as a percentage of university expenditures and the number of library staff per student have been declining. Meanwhile, the percentage of library expenditures spent on electronic resources has been going up dramatically.
There is a need to eliminate the duplication of data and workflows, as well as the siloed systems in libraries today. Alma intends to unify both the data and the data environment: acquisitions, metadata management, fulfillment, and analytics.
Collaborative metadata management is a hybrid model that balances global sharing with local needs. In plain English, this means you can have a catalog that includes both an inventory of locally owned items and a collection of items shared by one or more “communities.” Multiple metadata schemas are supported within the system in their native formats, with no crosswalks required.
Individual library staff users can set up “home pages” within the system that include widgets with data, alerts, and reports, which can help with making decisions about the collection. Analytics are also embedded directly in the workflow (e.g., a graph of the balance remaining in a fund is displayed when an order using that fund is viewed or entered).
Speaker: Maria Bunevski, Ex Libris
Preparation for moving to a new system, particularly a radically new system like Alma, requires spending some time thinking about workflows, data, technical aspects (integration points, etc.), and training.
The project initiation phase requires many training sessions to fully grasp all of the change that needs to happen.
The implementation phase involves a mix of on-site work and remote tweaking. At some point work has to freeze in the old system before cutting over to the new one.
VCU is currently in the post-implementation phase. This is the point where unconfigured features are discovered, along with gaps in workflows.
Speaker: John Duke, VCU Libraries
Before Alma, they had Aleph, SFX, Verde, MetaLib, Primo, ARC, ILLiad, various university systems, and more, and they wanted to bring those functions together. They didn’t end up with a monolithic system for everything, but they got closer.
Workflows and other aspects have been simplified.
The system is not complete, either because Ex Libris hadn’t thought of a given function or because VCU hasn’t figured out how to incorporate it. Internet outages, security issues, and conceptual difficulties have thrown up roadblocks along the way.
“Educational Utility Computing: Perspectives on .edu and the Cloud”
Mark Ryland, Chief Solutions Architect at Amazon Web Services
AWS has been part of revolutionizing the start-up world (e.g., Instagram, Pinterest) because start-ups no longer bear the cost of building server infrastructure in-house. Cloud computing in the AWS sense is utility computing: pay for what you use, scale up and down easily, and keep local control of how your products work. In the traditional world, you have to pay for the capacity to meet your peak demand; in the cloud computing world, you can scale up and down based on what is needed at that moment.
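To make the utility pricing point concrete, here is a minimal back-of-the-envelope sketch in Python. All of the numbers (server counts, the hourly rate) are hypothetical, not actual AWS prices; the point is just that paying for average use can cost a fraction of provisioning for peak demand:

```python
# A toy cost comparison: provisioning for peak demand vs. utility
# (pay-per-use) pricing. All numbers here are made up for illustration.

HOURS_PER_MONTH = 730
PEAK_SERVERS = 20             # capacity needed to survive the busiest hour
AVERAGE_SERVERS = 4           # average capacity actually in use
RATE_PER_SERVER_HOUR = 0.10   # hypothetical hourly rate, not an AWS price

# Traditional model: you own (and pay for) peak capacity around the clock.
traditional_cost = PEAK_SERVERS * HOURS_PER_MONTH * RATE_PER_SERVER_HOUR

# Utility model: you pay only for the capacity you actually use.
utility_cost = AVERAGE_SERVERS * HOURS_PER_MONTH * RATE_PER_SERVER_HOUR

print(f"Peak-provisioned: ${traditional_cost:,.2f} per month")  # $1,460.00
print(f"Pay-per-use:      ${utility_cost:,.2f} per month")      # $292.00
```

The gap widens further for spiky workloads, which is exactly the start-up scenario described here.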
There are economies and efficiencies of scale in many ways. Some are obvious: the supply chain for storage, computing, and networking equipment; internet connectivity and electric power; and data center siting, redundancy, etc. Some are less obvious: security and compliance best practices, and internal data center innovations in networking, power, etc.
AWS and .EDU: EdX, Coursera, Texas Digital Library, Berkeley AMP Lab, Harvard Medical, University of Phoenix, and an increasing number of university/school public-facing websites.
He expects that we are heading toward cloud computing utilities that function much like the electric grid: just plug in and use it.
“Libraries in Transition”
Marshall Breeding, library systems expert
We’ve already seen the shift from print to electronic in academic journals, and we’re heading that way with books. Our users’ expectations of how they interact with libraries are changing, and the library as a space is evolving to meet that, along with library systems.
Web-based computing is better than client/server computing. We expect social computing to be integrated into the core infrastructure of a service, rather than bolted on as an add-on or afterthought. Systems need to be flexible for all kinds of devices, not just particular types of desktops. Metadata needs to evolve from record-by-record creation to bulk management wherever possible (see the sketch below). MARC is going to die, and die soon.
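As one illustration of the record-by-record versus bulk distinction, here is a minimal sketch using pymarc, a Python library for working with MARC records. The file names and proxy URL prefixes are hypothetical; the point is that one small script can apply a change across thousands of records in a single pass:

```python
# A minimal sketch of bulk metadata management with pymarc:
# rewrite an obsolete proxy prefix in every 856 $u (electronic location)
# across a whole file of MARC records, instead of editing each record
# by hand. File names and URL prefixes are hypothetical.
from pymarc import MARCReader, MARCWriter

OLD_PREFIX = "http://old-proxy.example.edu/login?url="
NEW_PREFIX = "https://proxy.example.edu/login?url="

with open("records.mrc", "rb") as infile, open("updated.mrc", "wb") as outfile:
    writer = MARCWriter(outfile)
    for record in MARCReader(infile):
        if record is None:      # skip records pymarc could not parse
            continue
        for field in record.get_fields("856"):
            url = field["u"]    # first $u subfield, if any
            if url and url.startswith(OLD_PREFIX):
                field.delete_subfield("u")
                field.add_subfield("u", NEW_PREFIX + url[len(OLD_PREFIX):])
        writer.write(record)
    writer.close()
```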
How are we going to help our researchers manage data? We need the infrastructure to help us with that as well. Semantic web — what systems will support it?
Cooperation and consolidation of library consortia; state-wide implementations of SaaS library systems. Our current legacy ILS are holding libraries back from being able to move forward and provide the services our users want and need.
A true cloud computing system offers web-based interfaces, external hosting, subscription or utility pricing, a highly abstracted computing model, on-demand provisioning, and elastic scaling according to variable needs.
“Moving Up to the Cloud”
Mark Triest, President of Ex Libris North America
Currently, libraries are working with several different systems (ILS, ERMS, DRs, etc.), duplicating data and workflows, and not always accurately or efficiently; but until now that was the only solution for handling different kinds of data and needs. Ex Libris set out in 2007 to change this, beginning with conversations with librarians. Their solution is a single system with unified data and workflows.
They are working to lower the total cost of ownership by reducing IT needs, minimizing administration time, and adding new services to increase productivity. Right now there are 120+ institutions worldwide that are in the process of implementing Alma or have gone live with it.
Automated workflows allow staff to focus on the exceptions and reduce the steps involved.
Descriptive analytics are built into the system, with plans for predictive analytics to be incorporated in the future.
Future: collaborative collection development tools, like joint licensing and consortial ebook programs; infrastructure for ad-hoc collaboration
“Cloud Computing and Academic Libraries: Promise and Risk”
John Ulmschneider, Dean of Libraries at VCU
When they first looked at Alma, they had two motivations and two concerns; they were not planning or thinking about a move until they were approached to join the early adopters. All academic libraries today are seeking to discover and exploit new efficiencies. The growth of cloud-resident systems and data requires academic libraries to reinvigorate their focus on their core mission. Cloud-resident systems are creating massive change throughout our institutions, and managing and exploiting pervasive change is a serious challenge. We also need to deal with the security and durability of data.
Cloud solutions shift resources from supporting infrastructure to supporting innovation.
Efficiencies are not just nice things; they are absolutely necessary for academic libraries. We are obligated to upend long-held practice if in doing so we gain assets for practice essential to our mission. We must focus recovered assets on the core library mission.
Agility is the new stability.
Libraries must push technology forward in areas that advance their core mission. Infuse technology evolution for libraries with the values and needs of libraries. Libraries must invest assets as developers, development partners, and early adopters, and insist on discovery and management tools that are agnostic regarding data sources.
Managing the change process is daunting, but we’re already well down the road. It’s not entirely new, but it does involve a change in culture to create a pervasive institutional agility for all staff.
My library is often on the forefront of innovation, having the advantage of a healthy budget and staff size, yet small enough to be nimble. Frequently, when my colleagues return from conferences and give their reports, they’ll conclude with something along the lines of “we’re already doing most of the things they talked about.” At a recent conference report session, that was repeated again, with one exception: we have not implemented a web-scale discovery system.
I’m of two minds about web-scale discovery systems. In theory, they’re pretty awesome, allowing users to discover all of the content available to them from the library, regardless of the source or format. But in reality, they’re hamstrung by exclusive deals and coding limitations. The initial buzz was that they caused a dramatic increase in the use of library resources, but a few years in, and I’m hearing conflicting reports and grumblings.
We held off on buying a web-scale discovery system for two main reasons: one, we didn’t have the funding secured, and two, most of the reference librarians ranged from indifferent to outright hostile toward the systems available at the time. We’re now in the process of reviewing and evaluating the current systems, after many discussions about which problems we hope they will solve.
In the end, they really aren’t “Google for Libraries.” We think that our users want a single search box, but do they really? I heard an anecdote about a library that had spent a lot of time teaching users where to find its web-scale discovery system, making sure it was visible on the main library page, and so on. When a professor then assigned students to find a known article (with the full citation in hand) using the web-scale discovery system (referring to it by name), the most frequent question the library got was, “How do I google the <name of web-scale discovery system>?”
I wonder whether the ROI is really significant enough to justify implementing and promoting a web-scale discovery system. These systems are not cheap, and they take a fair amount of labor to maintain. And, frankly, if the battle over exclusive content continues to be waged, it won’t be easy to pick the best one for our collection and users and know that it will stay the best for more than six months or a year.
Does your library have a web-scale discovery system? Is it everything you thought it would be? Would you pick the same one if you had to choose again?