NASIG 2013: Adopting and Implementing an Open Access Policy — The Library’s Role

2013-06-10

“Open Access promomateriaal” by biblioteekje (CC BY-NC-SA 2.0)

Speaker: Brian Kern

The open access policy was developed late last year and adopted/implemented in March. It has been live for 86 days, so he is not an expert, but he has learned a lot in the process.

His college is small, and he expects fewer than 40 publications to be submitted per year; they are using the institutional repository to manage this.

They have cut about 2/3 of their journal collections over the past decade, preferring publisher package deals and open access publications. They have identified the need to advocate for open access as a goal of the library. They are using open source software where they can, hosted and managed by a third party.

The policy borrowed heavily from others, and it is a rights-retention mandate in the style of Harvard. One piece of advice they had was to not focus on the specifics of implementation within the policy.

The policy states that permission is granted automatically, but waivers are available for embargoes or publisher prohibitions. There are no restrictions on where faculty can publish, and they are encouraged to remove restrictive language from contracts using an author addendum. Even with a waiver, all articles are deposited to at least a “closed” archive. The policy covers only peer-reviewed articles and is not concerned with which version of the article is deposited. Anything published or contracted to be published before the adoption date is not required to comply, but authors can opt in if they want to.

The funding details, as one might expect, were left out of the policy. The library is going to cover the open access fees, with matching funds from the provost. Unused funds will be carried over from year to year.

This was presented to the faculty as a way to ensure that their rights are respected when they publish their work. Nothing was said about the library’s traditional concerns of saving money and opening access to local research output.

The web hub will include the policy, an FAQ, recommended author addenda by publisher, funding information, and other material related to the process. Faculty will self-deposit, with review and editing by Kern.

They have a monthly newsletter/blog that lets the campus know about faculty and student publications, so they are using it to identify materials that should be submitted to the collection. He is also using Stephen X. Flynn’s code, which checks SHERPA/RoMEO to identify already-published OA articles that can be used to populate the repository.
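Flynn’s actual scripts aren’t reproduced here, but the basic lookup is easy to picture. The sketch below is my own, assuming the legacy SHERPA/RoMEO XML API; the endpoint, parameter, and element names are from memory and may have changed (the current service is a newer JSON API), and the ISSNs are placeholders.

```python
# Sketch: look up publisher self-archiving policies for a list of ISSNs via the
# legacy SHERPA/RoMEO XML API. Endpoint, parameter, and element names are
# assumptions about the old (pre-v2) service and may no longer work as written.
import requests
import xml.etree.ElementTree as ET

ROMEO_URL = "http://www.sherpa.ac.uk/romeo/api29.php"  # legacy endpoint (assumption)

def romeo_colour(issn: str) -> str:
    """Return the RoMEO 'colour' (green/blue/yellow/white) for a journal ISSN."""
    resp = requests.get(ROMEO_URL, params={"issn": issn}, timeout=30)
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    colour = root.findtext(".//romeocolour")  # element name is an assumption
    return colour or "unknown"

if __name__ == "__main__":
    for issn in ["1234-5678", "2345-6789"]:  # placeholder ISSNs from the newsletter list
        print(issn, romeo_colour(issn))
```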

They are keeping the senior projects closed in order to keep faculty/student collaborations private (and faculty research data offline until they publish).

They have learned that the policy depends on faculty seeing open access as a reality and on the library keeping faculty informed of the issues. They were not prepared for how quickly the policy would pass and submissions would begin. Don’t expect faculty to be copyright lawyers. Keep the submission process as simple as possible, and allow alternatives like email or paper.

ER&L 2013: Lightning Talks

“¡Rayos!” by José Eugenio Gómez Rodríguez

Speaker: Emily Guhde, NCLIVE
“We’ve Got Your Number: Making Usage Data Matter” is the project they are working on. What is a good target cost per use for their member libraries? They are organizing this by peer groups. How can the member libraries improve usage? They are hoping that other libraries will be able to replicate this in the future.

Speaker: Francis Kayiwa, UIC
He is a server administrator with library training, and wanted to be here to understand what it is his folks are coming back and asking him to do. Cross-pollinate conferences — try to integrate other kinds of conferences happening nearby.

Speaker: Annette Bailey, Virginia Tech
Co-developed LibX with her husband, now working on a new project to visualize what users are clicking on after they get a search result in Summon. This is a live, real-time visualization, pulled from the Summon API.

Speaker: Angie Rathnel, University of Kansas
Have been using a SaaS product called Callisto to track and claim eresources. It tracks access to entitlements daily/weekly, and can check to make sure proxy configurations are set up correctly.
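Callisto’s internals weren’t described, but the proxy-configuration check is the kind of thing a short script can approximate: rewrite each entitlement URL through the proxy prefix and confirm it resolves. Everything in the sketch below (the proxy prefix, the test URLs) is a hypothetical placeholder, not anything from the talk.

```python
# Sketch: verify that entitlement URLs resolve through an EZproxy-style prefix.
# The proxy prefix and entitlement URLs are hypothetical placeholders.
import requests

PROXY_PREFIX = "https://proxy.example.edu/login?url="  # hypothetical proxy prefix

ENTITLEMENTS = [
    "https://www.jstor.org/journal/example",        # placeholder entitlement URLs
    "https://publisher.example.com/toc/1234-5678",
]

def check_proxied_access(url: str) -> bool:
    """Return True if the proxied URL resolves without an HTTP error."""
    resp = requests.get(PROXY_PREFIX + url, timeout=30, allow_redirects=True)
    return resp.ok

if __name__ == "__main__":
    for url in ENTITLEMENTS:
        status = "ok" if check_proxied_access(url) else "FAILED"
        print(f"{status}\t{url}")
```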

Speaker: Cindy Boeke, Southern Methodist University
Why aren’t digital library collections included with other library eresources on lists and such (like the ubiquitous databases A-Z page)?

Speaker: Rick Burke, SCELC
SIPX to manage copyright in a consortial environment. Something something users buying access to stuff we already own. I’m guessing this is more for off-campus access?

Speaker: Margy Avery, MIT Press
Thinking about rich/enhanced digital publications. Want to work with libraries to make this happen, and preservation is a big issue. How do we catalog/classify this kind of resource?

Speaker: Jason Price, Claremont Colleges
Disgruntled with OpenURL and the dependency on our knowledge bases (KBs) for article-level access. It is challenging to keep our KBs updated and accurate — there has to be a better way. We need to be working with the disgruntlerati who are creating startups to address this problem. Pubget was one of the first, and since then there are Dublin Six, Readcube, SIPX, and Callisto. If you get excited about these things, contact the startups and tell them.

Speaker: Wilhelmina Ranke, St. Mary’s University
Collecting mostly born digital collections, or at least collections that are digitized already, in the repository: student newspaper, video projects, and items digitized for classroom use that have no copyright restrictions. Doesn’t save time on indexing, but it does save time on digitizing.

Speaker: Bonnie Tijerina, Harvard
The #ideadrop house was created to be a space for librar* to come together to talk about librar* stuff. They had a little free library box for physical books, and also a collection of wireless boxes with free digital content anyone could download. They streamed conversations from the living room 5-7 times a day.

Speaker: Rachel Frick
Digital Public Library of America focuses on content that is free to all to create a more informed citizenry. They want to go beyond just being a portal for content. They want to be a platform for community involvement and conversations.

Moving Up to the Cloud, a panel lecture hosted by the VCU Libraries

“Sky symphony” by Kevin Dooley

“Educational Utility Computing: Perspectives on .edu and the Cloud”
Mark Ryland, Chief Solutions Architect at Amazon Web Services

AWS has been a part of revolutionizing the start-up industries (i.e. Instagram, Pinterest) because they don’t have the cost of building server infrastructure in-house. Cloud computing in the AWS sense is utility computing — pay for what you use, easy to scale up and down, and local control of how your products work. In the traditional world, you have to pay for the capacity to meet your peak demand, but in the cloud computing world, you can scale capacity up and down based on what is needed at that moment.
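His scaling point is easy to see in code. The sketch below is my own illustration, not anything shown in the talk: with boto3, resizing an Auto Scaling group is a single call. The group name and capacities are made up, and a real deployment would more likely attach CloudWatch-alarm-driven scaling policies rather than call this by hand.

```python
# Sketch: adjust an Auto Scaling group's desired capacity to match demand.
# Group name and capacities are placeholders; production setups usually rely
# on alarm-driven scaling policies instead of manual calls like this.
import boto3

autoscaling = boto3.client("autoscaling")
GROUP = "my-web-tier"  # hypothetical Auto Scaling group name

def scale_to(desired: int) -> None:
    """Set the number of running instances in the group."""
    autoscaling.set_desired_capacity(
        AutoScalingGroupName=GROUP,
        DesiredCapacity=desired,
        HonorCooldown=True,
    )

scale_to(10)  # e.g. scale up for peak registration traffic
# ... later ...
scale_to(2)   # scale back down overnight and pay only for what is running
```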

Economies and efficiencies of scale come in many forms. Some are obvious: storage, computing, and networking equipment supply chains; internet connectivity and electric power; and data center siting, redundancy, etc. Less obvious: security and compliance best practices; datacenter-internal innovations in networking, power, etc.

AWS and .EDU: EdX, Coursera, Texas Digital Library, Berkeley AMP Lab, Harvard Medical, University of Phoenix, and an increasing number of university/school public-facing websites.

Expects that we are heading toward cloud computing utilities to function much like the electric grid — just plug in and use it.


“Libraries in Transition”
Marshall Breeding, library systems expert

We’ve already seen the shift of print to electronic in academic journals, and we’re heading that way with books. Our users are changing in the way they expect interactions with libraries to be, and the library as space is evolving to meet that, along with library systems.

Web-based computing is better than client/server computing. We expect social computing to be integrated into the core infrastructure of a service, rather than add-ons and afterthoughts. Systems need to be flexible for all kinds of devices, not just particular types of desktops. Metadata needs to evolve from record-by-record creation to bulk management wherever possible. MARC is going to die, and die soon.

How are we going to help our researchers manage data? We need the infrastructure to help us with that as well. Semantic web — what systems will support it?

Cooperation and consolidation of library consortia; state-wide implementations of SaaS library systems. Our current legacy ILS are holding libraries back from being able to move forward and provide the services our users want and need.

A true cloud computing system comes with web-based interfaces, externally hosted, subscription OR utility pricing, highly abstracted computing model, provisioned on demand, scaled according to variable needs, elastic.


“Moving Up to the Cloud”
Mark Triest, President of Ex Libris North America

Currently, libraries are working with several different systems (ILS, ERMS, DRs, etc.), duplicating data and workflows, not always very accurately or efficiently, but until now that was the only way to handle different kinds of data and needs. Ex Libris started in 2007 to change this, beginning with conversations with librarians. Their solution is a single system with unified data and workflows.

They are working to lower the total cost of ownership by reducing IT needs, minimizing administration time, and adding new services to increase productivity. Right now 120+ institutions world-wide are in the process of implementing Alma or have gone live with it.

Automated workflows allow staff to focus on the exceptions and reduce the steps involved.

Descriptive analytics are built into the system, with plans for predictive analytics to be incorporated in the future.

Future: collaborative collection development tools, like joint licensing and consortial ebook programs; infrastructure for ad-hoc collaboration


“Cloud Computing and Academic Libraries: Promise and Risk”
John Ulmschneider, Dean of Libraries at VCU

When they first looked at Alma, they had two motivations and two concerns. They were not planning or thinking about it until they were approached to join the early adopters. All academic libraries today are seeking to discover and exploit new efficiencies. The growth of cloud-resident systems and data requires academic libraries to reinvigorate their focus on their core mission. Cloud-resident systems are creating massive change throughout our institutions. Managing and exploiting pervasive change is a serious challenge. Also, we need to deal with the security and durability of data.

Cloud solutions shift resources from supporting infrastructure to supporting innovation.

Efficiencies are not just nice things; they are absolutely necessary for academic libraries. We are obligated to upend long-held practice if in doing so we gain assets for practice essential to our mission. We must focus recovered assets on the core library mission.

Agility is the new stability.

Libraries must push technology forward in areas that advance their core mission. Infuse technology evolution for libraries with the values and needs of libraries. Libraries must invest assets as developers, development partners, and early adopters. Insist on discovery and management tools that are agnostic regarding data sources.

Managing the change process is daunting, but we’re already well down the road. It’s not entirely new, but it does involve a change in culture to create a pervasive institutional agility for all staff.

Charleston 2012: The Twenty-First Century University Press: Assessing the Past, Envisioning the Future

“Lecture” by uniinnsbruck

Speaker: Doug Armato, the ghost of university presses past, University of Minnesota Press

The first book published at a university was in 1836 at Harvard. The AAUP began in 1928 when UP directors met in NYC to talk about marketing and sales for their books. Arguably, UP have been in some form of crisis since the 1970s, between the serials crisis and the current ebook crisis.

Libraries now account for only 20-25% of UP sales, with more than half of the sales coming from retail sources. UP worry about the library budget ecology and university funding as a whole.

“Books possessed of such little popular appeal but at the same time such real importance” from a 1937 publication called Some Presses You will Be Glad to Know About. Armato says, “A monograph is a scholarly book that fails to sell.”

Libraries complain that their students don’t read monographs. University Presses complain that libraries don’t buy monographs. And some may wonder why authors write them in the first place. UP rely on libraries to buy the books they publish for mission, not to recover the cost of production by being popular enough to be sold in the retail market.

Armato sees the lack of library concern over the University of Missouri Press potential closure and the UP role in the Georgia State case as bellwethers of the devolving relationship between the two, and we should be concerned.

But, there is hope. The evolving relationships with Project Muse and JSTOR to incorporate UP monographs are a sign of new life. UP have evolved, but they need to evolve much faster. UP publications need better technology that incorporates the manual linking of footnotes and references into a highly linked database. A copyright policy that favors authors over publishers is necessary.

Speaker: Alison Mudditt, ghost of university presses present, University of California Press

[Zoned out when it became clear this would be another dense essay lecture with very little interesting/innovative content, rather than what I’d consider to be a keynote. Maybe it’s an age thing? I just don’t have the attention span for a lecture anymore, and I certainly don’t expect one at a library conference. As William Gunn from Mendeley tweeted, “To hear people read speeches and not ask questions, that’s why we’re all in the same room.”]

ER&L 2010: ERMS Success – Harvard’s experience implementing and using an ERM system

Speaker: Abigail Bordeaux

Harvard has over 70 libraries, and they are very decentralized. This implementation is for the central office that provides library systems services for all of the libraries. Ex Libris is their primary vendor for library systems, including the ERMS, Verde. They try to go with vended products and only develop in-house solutions when nothing else is available.

Success was defined as migrating data from the old system to the new one, improving workflow efficiency, providing more transparency for users, and working around any problems they encountered. They did not expect to have an ideal system; there were bugs in both the system and their local data. There is no magic bullet. They identified the high-priority areas and worked toward their goals.

Phase I involved a lot of project planning with clearly defined goals/tasks and assessment of the results. The team included the primary users of the system, the project manager (Bordeaux), and a programmer. A key part of planning includes scoping the project (Bordeaux provided a handout of the questions they considered in this process). They had a very detailed project plan using Microsoft Project, and at the very least, the listing out of the details made the interdependencies more clear.

The next stage of the project involved data review and clean-up. Bordeaux thinks that data clean-up is essential for any ERM implementation or migration. They also had to think about the ways the old ERM was used and if that is desirable for the new system.

The local system they created was very close to the DLF recommended fields, but even so, they still had several failed attempts to map the fields between the two systems. As a result, they went through a cycle of extracting a small set of records, loading them into Verde, reviewing the data, and then deleting the test records from Verde. They did this several times with small data sets (10 or so records), and when they were comfortable with that, they increased the number of records.
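Verde does not expose a simple scripting interface that I know of, so the classes below are purely hypothetical stand-ins, but they illustrate the escalating test cycle Bordeaux described: extract a small batch, load it, review the mapping, delete the test records, then repeat with a larger batch once things look right.

```python
# Sketch of the escalating test-load cycle. LegacySource and VerdeTarget are
# in-memory stand-ins for the real export scripts and Verde loader, which are
# not public; only the loop structure is the point here.
import random

class LegacySource:
    def __init__(self, records):
        self.records = records
    def sample(self, n):
        return random.sample(self.records, min(n, len(self.records)))

class VerdeTarget:
    def __init__(self):
        self.loaded = {}
    def load(self, batch):
        ids = list(range(len(self.loaded), len(self.loaded) + len(batch)))
        self.loaded.update(zip(ids, batch))
        return ids
    def review(self, ids):
        # stand-in for the manual field-by-field review step
        return [i for i in ids if "issn" not in self.loaded[i]]
    def delete(self, ids):
        for i in ids:
            self.loaded.pop(i, None)

def migrate_in_batches(source, target, sizes=(10, 50, 250)):
    for size in sizes:
        ids = target.load(source.sample(size))
        problems = target.review(ids)
        target.delete(ids)  # clear the test records before the next pass
        if problems:
            print(f"batch of {size}: {len(problems)} records need mapping fixes")
            return
        print(f"batch of {size}: mapping looks good, trying a larger batch")

if __name__ == "__main__":
    fake_records = [{"title": f"Journal {i}", "issn": f"{i:04d}-0000"} for i in range(500)]
    migrate_in_batches(LegacySource(fake_records), VerdeTarget())
```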

They also did a lot of manual data entry. They were able to transfer a lot, but they couldn’t do everything, and some bits of data were not migrated because the work involved outweighed their value. In some cases, though, they did want to keep the data, so they entered it manually. To help visualize the mapping process, they created screenshots with notes showing the field connections.

Prior to this project, they were not using Aleph to manage acquisitions. So, they created order records for the resources they wanted to track. The acquisitions workflow had to be reorganized from the ground up. Oddly enough, by having everything paid out of one system, the individual libraries have much more flexibility in spending and reporting. However, it took some public relations work to get the libraries to see the benefits.

As a result of looking at the data in this project, they got a better idea of gaps and other projects regarding their resources.

Phase two began this past fall, incorporating data from the libraries that did not participate in phase one. They now have a small group with representatives from those libraries. This group is coming up with best practices for license agreements and for entering data into the fields.

Ithaka’s What to Withdraw tool

Have you seen the tool that Ithaka developed to determine which print scholarly journals you could withdraw (discard or store) because they are already in your digital collections? It’s pretty nifty for a spreadsheet. About 10-15 minutes of playing with it and a list of our print holdings gave me around 200 actionable titles in our collection, which I passed on to our subject liaison librarians.
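For what it’s worth, the matching step I did by hand could easily be scripted. The sketch below assumes the tool’s candidate list and the local print holdings are both CSV files keyed by ISSN; the file and column names are made up.

```python
# Sketch: intersect the What to Withdraw candidate list with local print
# holdings by ISSN. File names and column names are hypothetical.
import pandas as pd

candidates = pd.read_csv("what_to_withdraw_candidates.csv")  # from the Ithaka spreadsheet
holdings = pd.read_csv("local_print_holdings.csv")           # export from the ILS

actionable = candidates.merge(holdings, on="issn", how="inner")
actionable.to_csv("actionable_titles.csv", index=False)
print(f"{len(actionable)} print titles are candidates for withdrawal or storage")
```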

The guys who designed it are giving some webinar sessions, and I just attended one. Here are my notes, for what it’s worth. I suggest you participate in a webinar if you’re interested in it. The next one is tomorrow and there’s one on February 10th as well.


Background

  • They have an organizational commitment to preservation: JSTOR, Portico, and Ithaka S+R
  • Libraries are under pressure to both decrease their print collections and to maintain some print copies for the library community as a whole
  • Individual libraries are often unable to identify materials that are sufficiently well-preserved elsewhere
  • The What to Withdraw framework is for general collections of scholarly journals, not monographs, rare books, newspapers, etc.
  • The report/framework is not meant to replace the local decision-making process

What to Withdraw Framework

  • Why do we need to preserve the print materials once we have a digital version?
    • Fix errors in the digital versions
    • Replace poor quality scans or formats
    • Inadequate preservation of the digital content
    • Unreliable access to the digital content
    • Also, local politics or research needs might require access to or preservation of the print
  • Once they developed the rationales, they created specific preservation goals for each category of preservation and then determined the level of preservation needed for each goal.
    • Importance of images in journals (the digitization standards for text are not the same as for images, particularly color images)
    • Quality of the digitization process
    • Ongoing quality assurance processes to fix errors
    • Reliability of digital access (business model, terms & conditions)
    • Digital preservation
  • Commissioned Candace Yano (an operations researcher at UC Berkeley) to develop a model of how many copies are needed to meet the preservation goals, assuming an annual loss rate of 0.1% for a dark archive.
    • As a result, they found they needed only two copies to have >99% confidence that at least one copy will remain in twenty years (see the sketch after this list).
    • As a community, this means we need to be retaining at least two copies, if not more.
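The arithmetic behind that confidence level is easy to check. The sketch below assumes a simple model in which each copy independently has a 0.1% chance of being lost per year; that is my reading of the figures presented, not the actual Yano formulation.

```python
# Sketch: probability that at least one of n copies survives 20 years, assuming
# each copy independently faces a 0.1% annual loss rate. This is a simplified
# reading of the Yano model, not its actual formulation.
annual_loss = 0.001
years = 20

survive_one = (1 - annual_loss) ** years       # ~0.980: one copy lasts 20 years
for n in (1, 2, 3):
    at_least_one = 1 - (1 - survive_one) ** n  # at least one of n copies remains
    print(f"{n} copies: {at_least_one:.4%} chance at least one survives")
# With 2 copies this comes out above 99.9%, consistent with the >99% figure.
```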

Decision-Support Tool (proof of concept)

  • JSTOR is an easy first step because many libraries subscribe to it, many own print copies of the titles in its collections, and Harvard & UC already have dim/dark archives of JSTOR titles
  • The tool provides libraries information to identify titles held by Harvard & UC libraries which also have relatively few images

Future Plans

  • Would like to apply the tool to other digital collections and dark/dim archives, and they are looking for partners in this
  • Would also like to incorporate information from other JSTOR repositories (such as Orbis-Cascade)

luck is fickle on the interstate

“boxes and boxes”

The past week has been a mix of good and bad, along with an overwhelming volume of “stuff what must be done,” and the end result is that I neglected the blog. In brief: Hit an F-150 in my (new) car two days before I moved into my new apartment. Discovered that my insurance doesn’t cover rentals, so I am paying for that out of pocket. On the upside, no one was injured. On the downside, I am dealing with unexpected and expensive transportation costs. As for the move, it was a scramble, but I managed to find a few friends to help me, which saved both time and my back, and I drove a U-Haul for the first time.

The new digs are nice — not fancy, but livable and close to both work and play. I haven’t really begun to unpack yet, but I hope to get a good bit of that finished this weekend. For now, I’m catching up on sleep and some reading/writing that needs to be done.

Speaking of which, my review of Eccentric Cubicle by Kaden Harris was published on Blogcritics last week. That puts me at five books so far this year. I’m slipping behind again, but that was to be expected. I’m currently reading book #6, so hopefully I’ll have something to post here about that soon.

Oh, and before I forget, my commentary on Harvard and open access was noted in the Library Journal Academic Newswire. There’s my 15 seconds of fame, not to mention the honor of being included along with the other more thoughtful and scholarly types.

Harvard & the Open Access movement

A colleague called the Harvard faculty’s decision on making all of their works available in an institutional repository a “bold step towards online scholarship and open access.” I thought about this for a bit, and I’m not so sure it’s the right step, depending on how this process is done. Initially, I thought the resolution called for depositing articles before they are published, which would be difficult to enforce and likely result in the non-publication of said articles. However, upon further reflection and investigation, it seems that the resolution simply limits the outlets for faculty publication to those journals that allow for pre- or post-publication versions to be deposited in institutional repositories. Many publishers are moving in that direction, but it’s still not universal, and is unlikely to be so in the near future.

I am concerned that the short-term consequence will be increased difficulty for junior faculty getting their work published, creating another unnecessary barrier to tenure. I like the idea of a system that retains the scholarship generated at an institution, but I’m not sure this is the right way to do it. Don’t get me wrong — repositories are a great way to collect the knowledge of an institution’s researchers, but they aren’t the holy grail solution to the scholarly communication crisis. Unless faculty put a higher priority on making their scholarship readily available to the world than on the prestige of the journal in which it is published, there will be little incentive, beyond the mandate itself, to submit articles exclusively to publishers that allow deposit in institutional repositories. There are enough hungry junior faculty in the world to keep the top-shelf journal publishers in the black for years to come.

watch what you write

Who is reading your blog?

An employee at Harvard has been fired due to comments she posted on her personal blog about her supervisors and co-workers.

Burch said that the weblog did not affect her job performance in any negative way.

“Most of it is total heat of the moment stuff,” said Burch. “I’m not dangerous and I don’t wish anyone harm or malice and I don’t even dislike anybody. I just had momentary frustration and the blog was a good way to get it out so I can get on with things.”

The moral of the story is that you don’t post anything in a public space that you wouldn’t want someone else to read, particularly if it involves physical threats and your workplace.

I’m amazed whenever I get a comment or a response to something I have posted here, more so when it’s from someone I don’t know. Besides being generally laid back about my workplace, I wouldn’t even think to publish something negative. While this is my personal space to write, I look at it as a constantly shifting environment that includes both personal and professional elements.
