NASIG 2013: Adopting and Implementing an Open Access Policy — The Library’s Role

2013-06-10

“Open Access promomateriaal” by biblioteekje, CC BY-NC-SA 2.0

Speaker: Brian Kern

The open access policy was developed late last year and adopted/implemented in March. It has been live for 86 days, so he’s not an expert, but he has learned a lot in the process.

His college is small, and he expects fewer than 40 publications to be submitted per year; they are using the institutional repository to manage this.

They have cut about 2/3 of their journal collections over the past decade, preferring publisher package deals and open access publications. They have identified the need to advocate for open access as a goal of the library. They are using open source software where they can, hosted and managed by a third party.

The policy borrowed heavily from others, and it is a rights-retention mandate in the style of Harvard’s. One piece of advice they received was to not focus on the specifics of implementation within the policy.

The policy states that the license will be granted automatically, but waivers are available for embargoes or publisher prohibitions. There are no restrictions on where faculty can publish, and they are encouraged to remove restrictive language from contracts via an author addendum. Even with a waiver, all articles are deposited to at least a “closed” archive. The policy stipulates that they are only interested in peer-reviewed articles, and they are not concerned with which version of the article is deposited. Anything published or contracted to be published before the adoption date is not required to comply, but authors can include it if they want to.

The funding, as one might expect, was left out of the policy itself. The library is going to cover the open access fees, with matching funds from the provost. Unused funds will be carried over from year to year.

This was presented to the faculty as a way to ensure that their rights are respected when they publish their work. Nothing was said about the library’s traditional concerns: saving money and opening access to local research output.

The web hub will include the policy, a FAQ, recommended author addenda by publisher, funding information, and other material related to the process. The faculty will be self-depositing, with review and editing by Kern.

They have a monthly newsletter/blog that lets the campus know about faculty and student publications, so they are using it to identify materials that should be submitted to the collection. He’s also using Stephen X. Flynn’s code, which checks SHERPA/RoMEO to identify already-published OA articles that can be used to populate the repository.
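
Flynn’s actual code isn’t shown here, but a minimal sketch of the general approach might look like the following, assuming the legacy SHERPA/RoMEO XML API that was available in 2013; the endpoint, parameters, and element names are my assumptions, and an API key may be required.

```python
# A minimal sketch (not Flynn's code) of checking a journal's archiving
# policy against the legacy SHERPA/RoMEO XML API. Endpoint, parameters,
# and element names are assumptions based on the v2.9 API of that era.
import requests
import xml.etree.ElementTree as ET

ROMEO_API = "http://www.sherpa.ac.uk/romeo/api29.php"  # assumed legacy endpoint

def romeo_colour(issn):
    """Return the RoMEO colour (green/blue/yellow/white) for a journal ISSN."""
    resp = requests.get(ROMEO_API, params={"issn": issn}, timeout=30)
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    # Look for the publisher's colour code anywhere in the response;
    # "green" publishers allow some form of self-archiving.
    return next((el.text for el in root.iter("romeocolour")), None)

if __name__ == "__main__":
    # Check a single ISSN before deciding whether an already-published
    # article can go into the institutional repository.
    print(romeo_colour("0028-0836"))
```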

They are keeping the senior projects closed in order to keep faculty/student collaborations private (and faculty research data offline until they publish).

They have learned that the policy depends on faculty seeing open access as a reality and on the library keeping faculty informed of the issues. They were not prepared for how fast the policy would be approved and how quickly submissions would begin. Don’t expect faculty to be copyright lawyers. Keep the submission process as simple as possible, and allow alternatives like email or paper forms.

ER&L 2013: Lightning Talks

“¡Rayos!” by José Eugenio Gómez Rodríguez

Speaker: Emily Guhde, NCLIVE
“We’ve Got Your Number: Making Usage Data Matter” is the project they are working on. What is a good target cost per use for their member libraries? They are organizing this by peer groups. How can the member libraries improve usage? They are hoping that other libraries will be able to replicate this in the future.
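
As a toy illustration of the kind of comparison described (all library names and numbers below are invented), cost per use is simply annual cost divided by annual uses, benchmarked against a peer-group median:

```python
# Hypothetical example of the peer-group cost-per-use comparison.
from statistics import median
from collections import defaultdict

# (library, peer_group, annual_cost, annual_uses) -- invented data
usage = [
    ("Library A", "small", 12000, 4800),
    ("Library B", "small", 9500, 2100),
    ("Library C", "large", 60000, 41000),
    ("Library D", "large", 52000, 18500),
]

by_group = defaultdict(list)
for name, group, cost, uses in usage:
    by_group[group].append((name, cost / uses))

for group, members in by_group.items():
    target = median(cpu for _, cpu in members)  # peer-group benchmark
    print(f"{group}: target cost/use ~ ${target:.2f}")
    for name, cpu in members:
        flag = "over" if cpu > target else "at/under"
        print(f"  {name}: ${cpu:.2f} ({flag} target)")
```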

Speaker: Francis Kayiwa, UIC
He is a server administrator with library training, and wanted to be here to understand what it is his folks are coming back and asking him to do. Cross-pollinate conferences — try to integrate other kinds of conferences happening nearby.

Speaker: Annette Bailey, Virginia Tech
Co-developed LibX with her husband, now working on a new project to visualize what users are clicking on after they get a search result in Summon. This is a live, real-time visualization, pulled from the Summon API.

Speaker: Angie Rathnel, University of Kansas
They have been using a SaaS product called Callisto to track and claim eresources. It tracks access to entitlements daily/weekly, and can check to make sure proxy configurations are set up correctly.
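
This is not Callisto’s actual implementation, but a minimal sketch of the kind of daily check such a service might run, assuming EZproxy-style URL prefixing; the proxy prefix and resource URLs are hypothetical.

```python
# Sketch of a daily entitlement check: request each licensed resource
# through the library's proxy prefix and flag anything that fails.
import requests

EZPROXY_PREFIX = "https://proxy.example.edu/login?url="  # hypothetical prefix

ENTITLEMENTS = {
    "Example Journal A": "https://www.examplepublisher.com/journal-a",
    "Example Journal B": "https://www.otherpublisher.org/journal-b",
}

def check_entitlements():
    for title, url in ENTITLEMENTS.items():
        try:
            resp = requests.get(EZPROXY_PREFIX + url, timeout=30)
            status = "OK" if resp.status_code == 200 else f"HTTP {resp.status_code}"
        except requests.RequestException as exc:
            status = f"ERROR ({exc})"
        print(f"{title}: {status}")

if __name__ == "__main__":
    check_entitlements()
```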

Speaker: Cindy Boeke, Southern Methodist University
Why aren’t digital library collections included with other library eresources on lists and such (like the ubiquitous databases A-Z page)?

Speaker: Rick Burke, SCELC
SIPX to manage copyright in a consortial environment. Something something users buying access to stuff we already own. I’m guessing this is more for off-campus access?

Speaker: Margy Avery, MIT Press
Thinking about rich/enhanced digital publications. Want to work with libraries to make this happen, and preservation is a big issue. How do we catalog/classify this kind of resource?

Speaker: Jason Price, Claremont Colleges
Disgruntled with OpenURL and the dependency on our KB for article-level access. It is challenging to keep our lists (KBs) updated and accurate — there has to be a better way. We need to be working with the disgrundterati who are creating startups to address this problem. Pubget was one of the first, and since then there is Dublin Six, Readcube, SIPX, and Callisto. If you get excited about these things, contact the startups and tell them.

Speaker: Wilhelmina Ranke, St. Mary’s University
Collecting mostly born digital collections, or at least collections that are digitized already, in the repository: student newspaper, video projects, and items digitized for classroom use that have no copyright restrictions. Doesn’t save time on indexing, but it does save time on digitizing.

Speaker: Bonnie Tijerina, Harvard
The #ideadrop house was created to be a space for librar* to come together to talk about librar* stuff. They had a little free library box for physical books, and also a collection of wireless boxes with free digital content anyone could download. They streamed conversations from the living room 5-7 times a day.

Speaker: Rachel Frick
Digital Public Library of America focuses on content that is free to all to create a more informed citizenry. They want to go beyond just being a portal for content. They want to be a platform for community involvement and conversations.

Moving Up to the Cloud, a panel lecture hosted by the VCU Libraries

“Sky symphony” by Kevin Dooley

“Educational Utility Computing: Perspectives on .edu and the Cloud”
Mark Ryland, Chief Solutions Architect at Amazon Web Services

AWS has been a part of revolutionizing the start-up industries (e.g., Instagram, Pinterest) because start-ups don’t have the cost of building server infrastructure in-house. Cloud computing in the AWS sense is utility computing: pay for what you use, scale up and down easily, and keep local control of how your products work. In the traditional world, you have to pay for the capacity to meet your peak demand, but in the cloud computing world, you can scale up and down based on what is needed at that moment.
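
A toy back-of-the-envelope comparison (all numbers invented, not from the talk) makes the peak-versus-utility point concrete:

```python
# Invented numbers: a site that needs 20 servers for a few peak hours a day
# but only 4 the rest of the time.
HOURS_PER_MONTH = 730
PEAK_HOURS = 4 * 30                  # 4 peak hours a day
OFF_PEAK_HOURS = HOURS_PER_MONTH - PEAK_HOURS
COST_PER_SERVER_HOUR = 0.10          # hypothetical utility rate

# Traditional model: provision 20 servers around the clock to cover the peak.
traditional = 20 * HOURS_PER_MONTH * COST_PER_SERVER_HOUR

# Utility model: 20 servers during peak hours, 4 otherwise.
utility = (20 * PEAK_HOURS + 4 * OFF_PEAK_HOURS) * COST_PER_SERVER_HOUR

print(f"Provision for peak: ${traditional:,.2f}/month")
print(f"Scale with demand:  ${utility:,.2f}/month")
```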

Economies and efficiencies of scale work in many ways. Some are obvious: the supply chain for storage, computing, and networking equipment; internet connectivity and electric power; and data center siting, redundancy, etc. Some are less obvious: security and compliance best practices, and datacenter-internal innovations in networking, power, etc.

AWS and .EDU: EdX, Coursera, Texas Digital Library, Berkeley AMP Lab, Harvard Medical, University of Phoenix, and an increasing number of university/school public-facing websites.

Expects that we are heading toward cloud computing utilities to function much like the electric grid — just plug in and use it.


“Libraries in Transition”
Marshall Breeding, library systems expert

We’ve already seen the shift of print to electronic in academic journals, and we’re heading that way with books. Our users are changing in the way they expect interactions with libraries to be, and the library as space is evolving to meet that, along with library systems.

Web-based computing is better than client/server computing. We expect social computing to be integrated into the core infrastructure of a service, rather than add-ons and afterthoughts. Systems need to be flexible for all kinds of devices, not just particular types of desktops. Metadata needs to evolve from record-by-record creation to bulk management wherever possible. MARC is going to die, and die soon.
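
Breeding didn’t name tools, but as one hedged illustration of record-by-record versus bulk management, a short pymarc script (my example, assuming a file of exported MARC records) can sweep an entire file in a single pass:

```python
# Sketch: flag every record in a MARC export that lacks an 856 (online
# access) field and set those records aside for cleanup, rather than
# opening records one at a time. File names are hypothetical.
from pymarc import MARCReader, MARCWriter

with open("catalog_extract.mrc", "rb") as infile, \
        open("needs_urls.mrc", "wb") as outfile:
    writer = MARCWriter(outfile)
    flagged = 0
    for record in MARCReader(infile):
        if record is None:            # skip records pymarc could not parse
            continue
        if not record.get_fields("856"):
            writer.write(record)      # collect records with no online link
            flagged += 1
    writer.close()
    print(f"{flagged} records flagged for URL cleanup")
```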

How are we going to help our researchers manage data? We need the infrastructure to help us with that as well. Semantic web — what systems will support it?

Cooperation and consolidation of library consortia; state-wide implementations of SaaS library systems. Our current legacy ILS are holding libraries back from being able to move forward and provide the services our users want and need.

A true cloud computing system is web-based, externally hosted, priced by subscription or utility use, built on a highly abstracted computing model, provisioned on demand, scaled according to variable needs, and elastic.


“Moving Up to the Cloud”
Mark Triest, President of Ex Libris North America

Currently, libraries are working with several different systems (ILS, ERMS, DRs, etc.), duplicating data and workflows, and not always very accurately or efficiently, but it was the only solution for handling different kinds of data and needs. Ex Libris started in 2007 to change this, beginning with conversations with librarians. Their solution is a single system with unified data and workflows.

They are working to lower the total cost of ownership by reducing IT needs, minimizing administration time, and adding new services to increase productivity. Right now there are 120+ institutions worldwide that are implementing or have gone live with Alma.

Automated workflows allow staff to focus on the exceptions and reduce the steps involved.

Descriptive analytics are built into the system, with plans for predictive analytics to be incorporated in the future.

Future: collaborative collection development tools, like joint licensing and consortial ebook programs; infrastructure for ad-hoc collaboration


“Cloud Computing and Academic Libraries: Promise and Risk”
John Ulmschneider, Dean of Libraries at VCU

When they first looked at Alma, they had two motivations and two concerns. They were not planning or thinking about it until they were approached to join the early adopters. All academic libraries today are seeking to discover and exploit new efficiencies. The growth of cloud-resident systems and data requires academic libraries to reinvigorate their focus on core mission. Cloud-resident systems are creating massive change throughout our institutions, and managing and exploiting pervasive change is a serious challenge. They also need to deal with the security and durability of data.

Cloud solutions shift resources from supporting infrastructure to supporting innovation.

Efficiencies are not just nice things; they are absolutely necessary for academic libraries. We are obligated to upend long-held practice if, in doing so, we gain assets for practice essential to our mission. We must focus recovered assets on the core library mission.

Agility is the new stability.

Libraries must push technology forward in areas that advance their core mission. Infuse technology evolution for libraries with the values and needs of libraries. Libraries must invest assets as developers, development partners, and early adopters. Insist on discovery and management tools that are agnostic regarding data sources.

Managing the change process is daunting... but we’re already well down the road. It’s not entirely new, but it does involve a change in culture to create a pervasive institutional agility for all staff.

Charleston 2012: The Twenty-First Century University Press: Assessing the Past, Envisioning the Future

“Lecture” by uniinnsbruck

Speaker: Doug Armato, the ghost of university presses past, University of Minnesota Press

The first book published at a university was in 1836 at Harvard. The AAUP began in 1928 when UP directors met in NYC to talk about marketing and sales for their books. Arguably, UP have been in some form of crisis since the 1970s, between the serials crisis and the current ebook crisis.

Libraries now account for only 20-25% of UP sales, with more than half of the sales coming from retail sources. UP worry about the library budget ecology and university funding as a whole.

“Books possessed of such little popular appeal but at the same time such real importance” comes from a 1937 publication called Some Presses You Will Be Glad to Know About. Armato says, “A monograph is a scholarly book that fails to sell.”

Libraries complain that their students don’t read monographs. University Presses complain that libraries don’t buy monographs. And some may wonder why authors write them in the first place. UP rely on libraries to buy the books they publish for mission, not to recover the cost of production by being popular enough to be sold in the retail market.

Armato sees the lack of library concern over the University of Missouri Press potential closure and the UP role in the Georgia State case as bellwethers of the devolving relationship between the two, and we should be concerned.

But there is hope. The evolving relationships with Project Muse and JSTOR to incorporate UP monographs are a sign of new life. UP have evolved, but they need to evolve much faster. UP publications need better technology that turns the manual hyperlinks of footnotes and references into a highly linked database. A copyright policy that favors authors over publishers is necessary.

Speaker: Alison Mudditt, ghost of university presses present, University of California Press

[Zoned out when it became clear this would be another dense essay lecture with very little interesting/innovative content, rather than what I’d consider to be a keynote. Maybe it’s an age thing? I just don’t have the attention span for a lecture anymore, and I certainly don’t expect one at a library conference. As William Gunn from Mendeley tweeted, “To hear people read speeches and not ask questions, that’s why we’re all in the same room.”]

ER&L 2010: ERMS Success – Harvard’s experience implementing and using an ERM system

Speaker: Abigail Bordeaux

Harvard has over 70 libraries, and they are very decentralized. This implementation is for the central office that provides library systems services for all of the libraries. Ex Libris is their primary vendor for library systems, including the ERMS, Verde. They try to go with vended products and only develop in-house solutions if nothing else is available.

Success was defined as migrating data from the old system to the new, improving workflows and efficiency, giving users more transparency, and working around any problems they encountered. They did not expect to have an ideal system – there were bugs with both the system and their local data. There is no magic bullet. They identified the high-priority areas and worked towards their goals.

Phase I involved a lot of project planning, with clearly defined goals/tasks and assessment of the results. The team included the primary users of the system, the project manager (Bordeaux), and a programmer. A key part of planning included scoping the project (Bordeaux provided a handout of the questions they considered in this process). They had a very detailed project plan in Microsoft Project, and at the very least, listing out the details made the interdependencies clearer.

The next stage of the project involved data review and clean-up. Bordeaux thinks that data clean-up is essential for any ERM implementation or migration. They also had to think about the ways the old ERM was used and if that is desirable for the new system.

The local system they created was very close to the DLF recommended fields, but even so, they still had several failed attempts to map the fields between the two systems. As a result, they settled into a cycle of extracting a small set of records, loading them into Verde, reviewing the data, and then deleting the test records from Verde. They did this several times with small data sets (10 or so), and when they were comfortable with that, they increased the number of records.
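
In rough outline (the field names and file format below are hypothetical, not Harvard’s actual mapping), the extract-and-map step of that cycle looks something like this, with the load and review happening in the target system before the batch size is raised:

```python
# Generic illustration of mapping a small test batch of local ERM records
# to target (DLF ERMI-style) fields before scaling up. All names invented.
import csv

# Local field -> target field; fields left out of the map are dropped
# deliberately, as in the talk, when migration effort outweighs their value.
FIELD_MAP = {
    "resource_name": "title",
    "vendor": "provider",
    "license_start": "license_start_date",
    "license_end": "license_end_date",
    "concurrent_users": "simultaneous_user_limit",
}

def map_record(local_record):
    return {target: local_record.get(source, "")
            for source, target in FIELD_MAP.items()}

def extract_batch(path, batch_size=10):
    """Yield the first batch_size rows from the local ERM export."""
    with open(path, newline="", encoding="utf-8") as fh:
        for i, row in enumerate(csv.DictReader(fh)):
            if i >= batch_size:
                break
            yield row

if __name__ == "__main__":
    # Start with ~10 records, review the result, then raise batch_size.
    for local in extract_batch("local_erm_export.csv", batch_size=10):
        print(map_record(local))
```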

They also did a lot of manual data entry. They were able to transfer a lot, but they couldn’t do everything, and some bits of data were not migrated because the work involved outweighed their value. In some cases, though, they did want to keep the data, so they entered it manually. To help visualize the mapping process, they created screenshots with notes that showed the field connections.

Prior to this project, they were not using Aleph to manage acquisitions. So, they created order records for the resources they wanted to track. The acquisitions workflow had to be reorganized from the ground up. Oddly enough, by having everything paid out of one system, the individual libraries have much more flexibility in spending and reporting. However, it took some public relations work to get the libraries to see the benefits.

As a result of looking at the data in this project, they got a better idea of gaps and other projects regarding their resources.

Phase two began this past fall, incorporating the data from the libraries that did not participate in phase one. They now have a small group with folks representing those libraries. This group is coming up with best practices for license agreements and for entering data into the fields.