NASIG 2013: Adopting and Implementing an Open Access Policy — The Library’s Role

2013-06-10

“Open Access promomateriaal” by biblioteekje (CC BY-NC-SA 2.0)

Speaker: Brian Kern

The open access policy was developed late last year and adopted/implemented in March. It has been live for 86 days, so he’s not an expert, but he has learned a lot in the process.

His college is small, and he expects fewer than 40 publications to be submitted per year; they are using the institutional repository to manage this.

They have cut about 2/3 of their journal collections over the past decade, preferring publisher package deals and open access publications. They have identified the need to advocate for open access as a goal of the library. They are using open source software where they can, hosted and managed by a third party.

The policy borrowed heavily from others, and it is a rights-retention mandate in the style of Harvard’s. One piece of advice they received was to not focus on the specifics of implementation within the policy itself.

The policy states that the license will be granted automatically, but waivers are available for embargoes or publisher prohibitions. There are no restrictions on where faculty can publish, and they are encouraged to remove restrictive language from contracts and to use an author addendum. Even with a waiver, all articles are deposited to at least a “closed” archive. The policy stipulates that they are only interested in peer-reviewed articles, and it is not concerned with which version of the article is deposited. Anything published or contracted to be published before the adoption date is not required to comply, but authors can opt in if they want to.

Funding, as one might expect, was left out of the policy itself. The library is going to cover the open access fees, with matching funds from the provost. Unused funds will be carried over from year to year.

This was presented to the faculty as a way to ensure that their rights are respected when they publish their work. Nothing was said about the library’s traditional concerns of saving money and opening access to local research output.

The web hub will include the policy, a FAQ, recommended author addenda by publisher, funding information, and other material related to the process. Faculty will be self-depositing, with review/editing by Kern.

They have a monthly newsletter/blog that lets the campus know about faculty and student publications, so they are using it to identify materials that should be submitted to the collection. He is also using Stephen X. Flynn’s code to identify OA articles via SHERPA/RoMEO, finding already-published articles that can be used to populate the repository.
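As an aside, here is a rough sketch of what that kind of lookup might look like. This is my illustration, not Flynn’s actual code, and it assumes the classic SHERPA/RoMEO XML API (api29.php) and its romeocolour field:

```python
import requests
import xml.etree.ElementTree as ET

ROMEO_API = "http://www.sherpa.ac.uk/romeo/api29.php"  # classic XML endpoint

def romeo_colour(issn, api_key=None):
    """Return the SHERPA/RoMEO archiving colour for a journal ISSN."""
    params = {"issn": issn}
    if api_key:
        params["ak"] = api_key  # registered API key, if you have one
    resp = requests.get(ROMEO_API, params=params, timeout=30)
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    # "green" publishers allow self-archiving, so those articles are
    # candidates for populating the repository.
    return root.findtext(".//romeocolour")

print(romeo_colour("1234-5678"))  # placeholder ISSN; prints e.g. "green"
```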

They are keeping the senior projects closed in order to keep faculty/student collaborations private (and faculty research data offline until they publish).

They have learned that the policy depends on faculty seeing open access as a reality and on the library keeping faculty informed of the issues. They were not prepared for how quickly the policy would pass and submissions would begin. Don’t expect faculty to be copyright lawyers. Keep the submission process as simple as possible, and allow alternatives like email or paper.

Charleston 2012: Curating a New World of Publishing

“Looking through spy glass” by Arild Nybø

Hypothesis: Rapid publishing output and a wide disparity of publishing sources and formats has made finding the right content at the right time harder for librarians.

Speaker: Mark Coker, founder of Smashwords

The old model of publishing was based on scarcity, with publishers as mediators for everything. Publishers aren’t in the business of publishing books; they are in the business of selling books, so they focus on which books they think readers want to read. Ebook self-publishing overcomes many of the limitations of traditional publishing.

Users want flexibility. Authors want readers. Libraries want books accessible to anyone, and they deliver readership.

The tools for self-publishing are now free and available to anyone around the world; the printing press is now in the cloud. Smashwords will release about 100,000 new books in 2012, and its titles are hitting best-seller lists at major retailers and in the New York Times.

How do you curate this flood? Get involved at the beginning. Libraries also need to promote a culture of authorship. Connect local writers with local readers. Give users the option to publish to the library. Emulate the best practices of the major retailers. Readers are the new curators, not publishers.

Smashwords Library Direct is a new service they are offering.

Speaker: Eric Hellman, from Unglue.it

[Missed the first part as I sought a more comfortable seat.]

They look for zero-margin distribution solutions by connecting publishers and libraries. They do it by running a crowd-funded pledge drive for every book offered, much like Kickstarter. They’ve been around since May 2012.

For example, Oral Literature in Africa was published by Oxford UP in 1970, and it’s now out of print with the rights reverted to the author. The rights holder set a target amount needed to make the ebook available free to anyone. The successfully funded book is published with a Creative Commons license and made available to anyone via archive.org.

Unglue.it verifies that the rights holder really has the rights and that an ebook can be created. The rights holder retains copyright, and the ebook is format-neutral. Books are distributed globally, with no restrictions on who may distribute them. No DRM is allowed, so the library ebook vendors are having trouble adopting these books.

This is going to take a lot of work to make happen; if we just sit and watch, it won’t. Get involved.

Speaker: Rush Miller, library director at University of Pittsburgh

Why would a library want to become a publisher? It incentivizes the open access model. It provides services that scholars need and value. It builds collaborations with partners around the world. It improves efficiencies and encourages innovation in scholarly communications.

They began by collaborating with the university press, but it focuses more on books and monographs than on journals. The library manages several self-archiving repositories, and they got into journal publishing because the OJS platform looked like something they could handle.

They targeted journals with diminishing circulation in which the university was already invested (authors, researchers, etc.) and helped them get online to increase their circulation. They did not charge the editors/publishers of the journals to do it, and they encouraged them to move to open access.

NASIG 2012: Copyright and New Technologies in the Library: Conflict, Risk and Reward

Speaker: Kevin Smith, Duke University

It used to be that libraries didn’t have to care much about copyright, because most of our practices were approved of by copyright law. However, what we do has changed (we are no longer in the age of the photocopier), and the law hasn’t progressed with it.

Getting sued is a new experience for libraries. Copyright law is developed through the court system because lawmakers can’t keep up with changes in technology. Litigation is a discovery process: we find out more about how the law will be applied in these situations.

Three suits are in play: Georgia State (e-reserves), UCLA (streaming digital video), and HathiTrust and five partners (distributing digital scans and planning for orphan works). In all three cases, the same defense is being used: fair use. In the HathiTrust case, the Authors Guild has asked the judge not to allow libraries to apply fair use to what they do, arguing that copyright law enumerates the specific things libraries can do, even though the law explicitly says those provisions don’t negate fair use.

Whenever we talk about copyright, we are thinking about risk. Libraries and universities deal with risk all the time. Always evaluate the risk of allowing an activity against the risk of not doing it. Fair use is no different.

Without taking risks, we also abdicate rewards. What can we gain by embracing fair use? Take a look at the ARL Code of Best Practices in Fair Use for Academic and Research Libraries (which is applicable outside the academic library context). The principles and limitations of fair use are more of a guide than a set of rules, and the best practices help with understanding practical applications of those guidelines.

From the audience: No library wants to be the one that wrecked fair use for everybody. Taking this risk is not the same as more localized risk-taking, as this could lead to a precedent-setting legal case.

These cases are not necessarily binding; they are data points, particularly at the trial court level. However, the damages can be huge, much larger than in many other legal risks we take. Luckily, in these cases, you are only liable for the actual damages, which are usually quite small.

The key question for fair use has become, “Is the use transformative?” This is not what the law asks, but it came about because of an influential law review article by a judge who said this is the question he asks himself when evaluating copyright cases. The other consideration is whether the works compete in the market, but transformativeness trumps this.

When is a work derivative and when is it transformative? Derivative works are under the auspices of the copyright holder, but transformative works are considered fair use.

In the “Pretty Woman” case, the judges said that making multiple copies for educational purposes is a classic example of fair use. This is what the Georgia State judge cited in her ruling, even though she did not think the e-reserves were transformative.

Best practices are not the same as negotiated guidelines. They are a broad consensus on how librarians can think about fair use in practice in an educational setting. Using the code of best practices is not a guarantee that you will not get sued; it’s a template for thinking about particular activities.

In the HathiTrust case, the National Federation of the Blind has asked to be added as a defendant, because they see the services for their constituents being challenged if libraries cannot apply fair use to activities that bring content to users in the format they need. In this case the benefit is great and the risk is small: few will bring a lawsuit because the library made copies so that the blind can use a text-to-speech program. Which lawsuit would you rather defend, one for providing access or one for failing to provide it?

Fair use can facilitate text mining that is for research purposes, not commercial ones: for example, looking at how concepts are addressed and discussed across a large body of work and over time. Fair use is more efficient for this kind of transformative activity.

What about incorporating previously published content into new content that will be deposited in an institutional repository? Fair use allows adaptation, particularly as technologies change. This is the heart of transformative use (quoting someone else’s work) and should be no different from reusing a graph or chart. However, you are using the entirety of the work, so consider whether the amount used is appropriate (not excessive) for the new work.

What about incorporating music into video projects? If the music or the video is a fundamental part of the argument and helps tell the story, then it’s fair use. If you don’t need that particular song, or it’s just a pretty soundtrack, then go find something that is licensed for you to use (e.g., Creative Commons).

One area to be concerned with, though, is distributing content for educational purposes. Course packs created by commercial entities are not fair use. Electronic course readings have not been judged the same way, because the people making the electronic copies were educators in a non-commercial setting. Markets matter: the absence of a licensing market for these kinds of uses helped in the GSU case.

The licensing market for streaming digital video is more “hit or miss,” and education has a long precedent of using excerpts. It’s uncertain whether using a work in its entirety would be considered fair use.

Orphan works are a classic market failure, and they have the best chance of being supported by fair use.

Solutions:

  • Stop giving up copyright in scholarly works.
  • Help universities develop new promotion & tenure policies.
  • Use Creative Commons licenses.
  • Publish in open access venues or retain rights and self-archive.

NASIG 2012: Managing E-Publishing — Perfect Harmony for Serialists

Presenters: Char Simser (Kansas State University) & Wendy Robertson (University of Iowa)

Iowa looks at e-publishing as an extension of the central mission of the library. This covers not only text, but also multimedia content. After many years of ad-hoc work, they formed a department to be more comprehensive and intentional.

Kansas really didn’t do much with this until they had a strategic plan that included establishing an open access press (New Prairie Press). This also involved reorganizing personnel to create a new department to manage the process, which includes the institutional repository. The press includes not only their own publications, but also hosts publications from a few other sources.

Iowa went with bepress’s Digital Commons to provide both the repository and the journal hosting. Part of why they went this route for their journals was that they already had the platform for their repository, and they approach it more as a hosting platform than as a press/publisher. This meant they did not need to add staff to support it, although they did add responsibilities to existing staff on top of their other work.

Kansas is using Open Journal Systems, hosted on a commercial server because of internal politics that prevented it from being hosted on a university server. All of their publications are Gold OA, and the university/library pays all of the costs (~$1,700/year, not including the 0.6 FTE of staff time).

Day in the life of New Prairie Press — most of the routine stuff at Kansas involves processing DOI information for articles and works-cited, and working with DOAJ for article metadata. The rest is less routine, usually involving journal setups, training, consultation, meetings, documentation, troubleshooting, etc.

The admin back end of OJS allows Char to view the site as different types of users (editor, author, etc.) so she can troubleshoot issues for them. Rather than maintaining a test site, they have a “hidden” journal on the live site that they use to test functions.

A big part of her daily work is submitting DOIs to CrossRef and going through the backfile of previously published content to identify and add DOIs to the works-cited lists. The process is very manual, and the error rate is high enough that automation would be challenging.
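As a rough illustration of why: a semi-automated pass could query CrossRef’s public REST API with the citation text and accept only confident matches. The score threshold below is an arbitrary assumption; anything under it still needs a human:

```python
import requests

CROSSREF_API = "https://api.crossref.org/works"

def suggest_doi(citation_text, min_score=80.0):
    """Suggest a DOI for a free-text citation, or None for manual review."""
    resp = requests.get(
        CROSSREF_API,
        params={"query.bibliographic": citation_text, "rows": 1},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    # Reject empty or low-confidence matches; the high error rate noted
    # above is exactly why these fall back to manual review.
    if not items or items[0].get("score", 0) < min_score:
        return None
    return items[0]["DOI"]

print(suggest_doi("Finnegan, Ruth. Oral Literature in Africa. Oxford, 1970."))
```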

Iowa does have some subscription-based titles, so part of the management involves keeping up with subscriber lists and IP addresses. All of the titles eventually become open access.
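The IP half of that bookkeeping is conceptually simple; a minimal sketch, with a made-up subscriber list, might look like this:

```python
import ipaddress

# Hypothetical subscriber list: institution -> licensed IP ranges (CIDR).
SUBSCRIBERS = {
    "Example University": ["192.0.2.0/24", "198.51.100.17/32"],
}

def is_subscriber(client_ip):
    """True if the client IP falls inside any subscriber's licensed range."""
    addr = ipaddress.ip_address(client_ip)
    return any(
        addr in ipaddress.ip_network(cidr)
        for ranges in SUBSCRIBERS.values()
        for cidr in ranges
    )

print(is_subscriber("192.0.2.40"))   # True: inside 192.0.2.0/24
print(is_subscriber("203.0.113.9"))  # False: not a licensed range
```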

Most of the work at Iowa has been with retrospective content: taking past print publications and digitizing them. They are also concerned with making sure the content follows the current metadata standards used by both library systems and Google Scholar.
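On the Google Scholar side, much of this comes down to exposing Highwire Press-style meta tags on each article page, per Scholar’s inclusion guidelines. A minimal sketch, with placeholder article values:

```python
# Render the Highwire Press-style <meta> tags that Google Scholar's
# inclusion guidelines ask article pages to expose; values are placeholders.
article = {
    "citation_title": "An Example Article",
    "citation_author": "Doe, Jane",
    "citation_publication_date": "2012/06/01",
    "citation_journal_title": "Example Journal",
    "citation_pdf_url": "http://example.edu/articles/1.pdf",
}

for name, content in article.items():
    print('<meta name="{0}" content="{1}">'.format(name, content))
```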

There was more, but I couldn’t take notes and keep time towards the end.

NASIG 2012: Results of Web-scale discovery — Data, discussions and decisions

Speaker: Jeff Daniels, Grand Valley State University

GVSU has had Summon for almost three years, longer than almost any other library.

Whether you have a web-scale discovery system or are looking at getting one, you need to keep asking questions about it to make sure you’re moving in the right direction.

1. Do we want web-scale discovery?
Federated searching never panned out, and we’ve been looking for an alternative ever since. Web-scale discovery offers that alternative, to varying degrees.

2. Where do we want it?
Searching at GVSU before Summon: keyword (Encore), keyword (classic), title, author, subject, journal title.
Searching after Summon: the search box is the only search offered on their website, so users don’t have to decide what they are searching before they start.
A heat map of clicks indicated the search box was the most-used part of the home page, but there was still some confusion, so they made the box even more prominent.

3. Who is your audience?
GVSU focused on first- and second-year students, as well as anyone doing research outside their discipline (i.e., people who don’t know what they are looking for).

4. Should we teach it? If so, how?
What type of class is it? If it’s a one-off instruction session with the audience you are directing to your web-scale discovery tool, then teach it. If not, then maybe don’t. You’re teaching the skill set more than the resource.

5. Is it working?
People are worried that known-item searches (i.e., for catalog items) will get lost. GVSU found that known items make up less than 1% of the Summon index, but over 15% of the items selected from search results come from that pool.
Usage statistics from publisher-supplied sources might be skewed, so look at your link resolver stats for a better picture of what is happening.

GVSU measured use before and after Summon, and they expected searches in A&I resources to go down. They did, but GVSU ultimately decided to keep those resources because they were needed for accreditation, Summon was driving advanced users to them, and publishers were offering bundles and lower pricing. For the full-text aggregator databases, they saw a decrease in searching but an increase in full-text use, so they kept those as well.

Speaker: Laura Robinson, Serials Solutions

Libraries need information that will help us make smart decisions, much like what we provide to our users.

Carol Tenopir looked at the value gap between the amount libraries spend on materials and the perceived value of the library. Collection size matters less these days — it’s really about access. Traditional library metrics fail to capture the value of the library.

tl;dr — Web-scale discovery is pretty awesome and will help your users find more of your stuff, but you need to know why you are implementing it and who you are doing it for, and ask those questions regularly even after you’ve done so.