NASIG 2013: Adopting and Implementing an Open Access Policy — The Library’s Role

2013-06-10
Image: “Open Access promomateriaal” by biblioteekje (CC BY-NC-SA 2.0)

Speaker: Brian Kern

The open access policy was developed late last year and adopted and implemented in March. It has been live for 86 days, so he is not an expert, but he has learned a lot in the process.

His college is small, and he expects fewer than 40 publications to be submitted per year; they are using the institutional repository to manage them.

They have cut about 2/3 of their journal collections over the past decade, preferring publisher package deals and open access publications. They have identified the need to advocate for open access as a goal of the library. They are using open source software where they can, hosted and managed by a third party.

The policy borrowed heavily from others, and it is a rights-retention mandate in the style of Harvard's. One piece of advice they received was to not focus on the specifics of implementation within the policy itself.

The policy states that the license will be granted automatically, but waivers are available for embargoes or publisher prohibitions. There are no restrictions on where faculty can publish, and they are encouraged to remove restrictive language from contracts via author addenda. Even with waivers, all articles are deposited to at least a “closed” archive. The policy stipulates that they are only interested in peer-reviewed articles, and it is not concerned with which version of the article is deposited. Anything published or contracted to be published before the adoption date is not required to comply, but authors can opt in if they want to.

Funding, as one might expect, was left out of the policy itself. The library is going to cover the open access fees, with matching funds from the provost, and unused funds will be carried over from year to year.

This was presented to the faculty as a way to ensure that their rights are respected when they publish their work. Nothing was said about the library's traditional concerns of saving money and opening access to local research output.

The web hub will include the policy, a FAQ, recommended author addenda for specific publishers, funding information, and other material related to the process. The faculty will be self-depositing, with review and editing by Kern.

They have a monthly newsletter/blog to let the campus know about faculty and student publications, so they are using it to identify materials that should be submitted to the collection. He is also using Stephen X. Flynn's code, which identifies OA articles via SHERPA/RoMEO, to find already-published articles that can populate the repository.
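For the curious, here is a minimal sketch in PHP of the general idea — this is not Flynn's actual code, and the api29.php endpoint and romeocolour field are recalled from the old SHERPA/RoMEO API, so treat them as assumptions to verify:

```php
<?php
// Hypothetical sketch: query the (2013-era) SHERPA/RoMEO API by ISSN and
// read the publisher's "colour" to gauge its self-archiving policy.
// Endpoint and XML field names are assumptions; verify before relying on them.
function romeo_colour($issn) {
    $url = 'http://www.sherpa.ac.uk/romeo/api29.php?issn=' . urlencode($issn);
    $body = @file_get_contents($url);
    if ($body === false) {
        return null; // network error
    }
    $xml = simplexml_load_string($body);
    if ($xml === false || !isset($xml->publishers->publisher->romeocolour)) {
        return null; // no match or unexpected response shape
    }
    // "green" publishers generally permit some form of self-archiving.
    return (string) $xml->publishers->publisher->romeocolour;
}

$colour = romeo_colour('0028-0836'); // Nature's ISSN, as a test case
echo $colour === 'green' ? "Deposit candidate\n" : "Check policy manually\n";
```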

They are keeping the senior projects closed in order to keep faculty/student collaborations private (and faculty research data offline until they publish).

They have learned that the policy depends on faculty seeing open access as a reality and on the library keeping faculty informed of the issues. They were not prepared for how fast the policy would go through and submissions would begin. Don't expect faculty to be copyright lawyers. Keep the submission process as simple as possible, and allow alternatives like email or paper.

Charleston 2012: Curating a New World of Publishing

Image: “Looking through spy glass” by Arild Nybø

Hypothesis: Rapid publishing output and a wide disparity of publishing sources and formats have made finding the right content at the right time harder for librarians.

Speaker: Mark Coker, founder of Smashwords

The old model of publishing was based on scarcity, with publishers as mediators for everything. Publishers aren't in the business of publishing books; they are in the business of selling books, so they focus on what books they think readers want to read. Ebook self-publishing overcomes many of the limitations of traditional publishing.

Users want flexibility. Authors want readers. Libraries want books accessible to anyone, and they deliver readership.

The tools for self-publishing are now free and available to anyone around the world; the printing press is now in the cloud. Smashwords will release about 100,000 new books in 2012, and its titles are hitting best-seller lists at major retailers and in the New York Times.

How do you curate this flood? Get involved at the beginning. Libraries need to also promote a culture of authorship. Connect local writers with local readers. Give users the option to publish to the library. Emulate the best practices of the major retailers. Readers are the new curators, not publishers.

Smashwords Library Direct is a new service they are offering.

Speaker: Eric Hellman, from Unglue.it

[Missed the first part as I sought a more comfortable seat.]

They look for zero-margin distribution solutions by connecting publishers and libraries, running a crowd-funded pledge drive for every book offered, much like Kickstarter. They have been around since May 2012.

For example, Oral Literature in Africa was published by Oxford UP in 1970, and it is now out of print with the rights reverted to the author. The rights holder set a target amount needed to make the ebook available free to anyone. A successfully funded book is published with a Creative Commons license and made available to anyone via archive.org.

Unglue.it verifies that the rights holder really has the rights and that they can create an ebook. The rights holder retains copyright, and the ebooks are format-neutral. Books are distributed globally, and distribution is not restricted to anyone. No DRM is allowed, which is why the library ebook vendors are having trouble adopting these books.

This is going to take a lot of work to make happen; if we just sit and watch, it won't. Get involved.

Speaker: Rush Miller, library director at University of Pittsburgh

Why would a library want to become a publisher? It incentivizes the open access model. It provides services that scholars need and value. It builds collaborations with partners around the world. It improves efficiencies and encourages innovation in scholarly communications.

The library began by collaborating with the university press, but the press focuses more on books and monographs than journals. The library manages several self-archiving repositories, and they got into journal publishing because the OJS platform looked like something they could handle.

They targeted journals with diminishing circulation that the university was already invested in (authors, researchers, etc.) and helped them get online to increase their circulation. They did not charge the editors/publishers of the journals, and they encouraged them to move to open access.

NASIG 2012: Copyright and New Technologies in the Library: Conflict, Risk and Reward

Speaker: Kevin Smith, Duke University

It used to be that libraries didn't have to care about copyright because most of our practices were approved by copyright law. However, what we do has changed (we are no longer in the age of the photocopier), and the law hasn't progressed with it.

Getting sued is a new experience for libraries. Copyright law is developed through the court system, because the lawmakers can’t keep up with the changes in technology. This is a discovery process, because we find out more about how the law will be applied in these situations.

Three suits are in play: Georgia State (e-reserves), UCLA (streamed digital video), and HathiTrust and five partners (distributing digital scans and plans for orphan works). In all three cases, the same defense is being used: fair use. In the HathiTrust case, the Authors Guild has asked the judge not to allow libraries to apply fair use to what they do, arguing that copyright law covers the specific things libraries can do, even though the law explicitly says those provisions do not negate fair use.

Whenever we talk about copyright, we are thinking about risk. Libraries and universities deal with risk all the time. Always evaluate the risk of allowing an activity against the risk of not doing it. Fair use is no different.

Without taking risks, we also abdicate rewards. What can we gain by embracing fair use? Take a look at the ARL Code of Best Practices in Fair Use for Academic and Research Libraries (which is applicable outside of the academic library context). The principles and limitations of fair use are more of a guide than a set of rules, and the best practices help us understand practical applications of those guidelines.

From the audience: No library wants to be the one that wrecked fair use for everybody. Taking this risk is not the same as more localized risk-taking, as this could lead to a precedent-setting legal case.

These cases are not necessarily binding; they are data points, particularly at the trial court level. However, the damages can be huge, much larger than many other legal risks we take. Luckily, in these cases you are only liable for the actual damages, which are usually quite small.

The key question for fair use has been, “Is the use transformative?” This is not what the law asks, but it came about because of an influential law review article by a judge who said this is the question he asked himself when evaluating copyright cases. The other consideration is whether the works compete in the market, but transformativeness trumps this.

When is a work derivative and when is it transformative? Derivative works are under the auspices of the copyright holder, but transformative works are considered fair use.

In the “Pretty Woman” case, the justices said that making multiple copies for educational purposes is a classic example of fair use. This is what the Georgia State judge cited in her ruling, even though she did not think that the e-reserves were transformative.

Best practices are not the same as negotiated guidelines. They are a broad consensus on how librarians can think about fair use in practice in an educational setting. Using the code of best practices is not a guarantee that you will not get sued; it's a template for thinking about particular activities.

In the HathiTrust case, the National Federation of the Blind has asked to be added as a defendant because they see the services for their constituents being challenged if libraries cannot apply fair use to activities that bring content to users in the format they need. In this case the benefit is great and the risk is small. Few will bring a lawsuit because the library has made copies so that the blind can use a text-to-speech program. Which lawsuit would you rather defend: one for providing access, or one for failing to provide it?

Fair use can facilitate text-mining that is for research purposes, not commercial. For example, looking at how concepts are addressed/discussed across a large body of work and time. Fair use is more efficient in this kind of transformative activity.

What about incorporating previously published content in new content that will be deposited into an institutional repository? Fair use allows adaptation, particularly as technologies change. This is the heart of transformative use — quoting someone else’s work — and should be no different from using a graph or chart. However, you are using the entirety of the work, and should consider if the amount used is appropriate (not excessive) for the new work.

What about incorporating music into video projects? If the music or the video is a fundamental part of the argument and helps tell the story, then it's fair use. If you don't need that particular song, or it's just a pretty soundtrack, then go find something that is licensed for you to use (Creative Commons).

One area to be concerned with, though, is the fair use of distributing content for educational purposes. Course packs created by commercial entities are not fair use. Electronic course readings have not been judged the same way, because the people making the electronic copies were educators in a non-commercial setting. Markets matter: the absence of a licensing market for these kinds of uses helped in the GSU case.

The licensing market for streaming digital video is more hit-or-miss, and education has a long precedent of using excerpts. It is uncertain whether using a work in its entirety would be considered fair use.

Orphan works are a classic market failure, and they have the best chance of being supported by fair use.

Solutions:

  • Stop giving up copyright in scholarly works.
  • Help universities develop new promotion & tenure policies.
  • Use Creative Commons licenses.
  • Publish in open access venues or retain rights and self-archive.

NASIG 2012: Managing E-Publishing — Perfect Harmony for Serialists

Presenters: Char Simser (Kansas State University) & Wendy Robertson (University of Iowa)

Iowa looks at e-publishing as an extension of the central mission of the library. This covers not only text, but also multimedia content. After many years of ad-hoc work, they formed a department to be more comprehensive and intentional.

Kansas really didn't do much with this until they had a strategic plan that included establishing an open access press (New Prairie Press). This also involved reorganizing personnel to create a new department to manage the process, which includes the institutional repository. The press includes not only their own publications but also hosts publications from a few other sources.

Iowa went with bepress's Digital Commons to provide both the repository and the journal hosting. Part of why they went this route for their journals was that they already had it for their repository, and they approach it more as a hosting platform than as a press/publisher. This means they did not need to add staff to support it, although they did add responsibilities to existing staff on top of their other work.

Kansas is using Open Journal Systems hosted on a commercial server due to internal politics that prevented it from being hosted on the university server. All of their publications are Gold OA, and the university/library is paying all of the costs (roughly $1,700/year, not including the 0.6 FTE of staff time).

Day in the life of New Prairie Press — most of the routine stuff at Kansas involves processing DOI information for articles and works-cited, and working with DOAJ for article metadata. The rest is less routine, usually involving journal setups, training, consultation, meetings, documentation, troubleshooting, etc.

The admin back-end of OJS allows Char to view the site as different types of users (editor, author, etc.) so she can troubleshoot issues for them. Rather than maintaining a test site, they have a “hidden” journal on the live site that they use to test functions.

A big part of her daily work is submitting DOIs to CrossRef and going through the backfile of previously published content to identify and add DOIs to the works-cited. The process is very manual, and the error rate is high enough that automation would be challenging.
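As a thought experiment, here is a hedged PHP sketch of how part of that lookup might be automated against CrossRef's public REST API. This is not the workflow from the talk (which was manual), and the score threshold is an arbitrary assumption; the error rate she described is exactly why candidate matches would still need human review:

```php
<?php
// Hypothetical sketch: ask CrossRef's public REST API for the best-matching
// DOI for a free-text citation. The score cutoff below is an assumption,
// not a documented value; weak matches are routed to a human.
function guess_doi($citation) {
    $url = 'https://api.crossref.org/works?rows=1&query.bibliographic='
         . urlencode($citation);
    $body = @file_get_contents($url);
    if ($body === false) {
        return null; // network error
    }
    $data = json_decode($body, true);
    $item = $data['message']['items'][0] ?? null;
    if ($item === null || $item['score'] < 100) {
        return null; // missing or low-confidence match: needs human review
    }
    return $item['DOI'];
}

$doi = guess_doi('Finnegan, Oral Literature in Africa, Clarendon Press, 1970');
echo $doi !== null ? "Candidate DOI: {$doi}\n" : "Needs human review\n";
```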

Iowa does have some subscription-based titles, so part of the management involves keeping up with a subscriber list and IP addresses. All of the titles eventually fall into open access.

Most of the work at Iowa has been with retrospective content — taking past print publications and digitizing them. They are also concerned with making sure the content follows current standards that are used by both library systems and Google Scholar.

There was more, but I couldn't take notes and keep time towards the end.

NASIG 2012: Results of Web-scale discovery — Data, discussions and decisions

Speaker: Jeff Daniels, Grand Valley State University

GVSU has had Summon for almost three years, longer than almost any other library.

Whether you have a web-scale discovery system or are looking at getting one, you need to keep asking questions about it to make sure you’re moving in the right direction.

1. Do we want web-scale discovery?
Federated searching never panned out, and we’ve been looking for an alternative ever since. Web-scale discovery offers that alternative, to varying degrees.

2. Where do we want it?
Searching at GVSU before Summon — keyword (Encore), keyword (classic), title, author, subject, journal title
Searching after Summon — search box is the only search offered on their website now, so users don’t have to decide first what they are searching
The heat map of clicks indicates the search box was the most used part of the home page, but they still had some confusion, so they made the search box even more prominent.

3. Who is your audience?
GVSU focused on 1st and 2nd year students as well as anyone doing research outside their discipline — i.e. people who don’t know what they are looking for.

4. Should we teach it? If so, how?
What type of class is it? If it’s a one-off instruction session with the audience you are directing to your web-scale discovery, then teach it. If not, then maybe don’t. You’re teaching the skill-set more than the resource.

5. Is it working?
People are worried that known-item searches (i.e. catalog items) will get lost. GVSU found that known items make up less than 1% of the content in Summon, but over 15% of the items selected from searches come from that pool.
Usage statistics from publisher-supplied sources might be skewed, but look at your link resolver stats for a better picture of what is happening.

GVSU measured use before and after Summon, and they expected searches to go down for A&I resources. They did, but ultimately decided to keep them because they were needed for accreditation, they had been driving advanced users to them via Summon, and publishers were offering bundles and lower pricing. For the full-text aggregator databases, they saw a decrease in searching, but an increase in full-text use, so they decided to keep them.

Speaker: Laura Robinson, Serials Solutions

Libraries need information that will help us make smart decisions, much like what we provide to our users.

Carol Tenopir looked at the value gap between the amount libraries spend on materials and the perceived value of the library. Collection size matters less these days — it’s really about access. Traditional library metrics fail to capture the value of the library.

tl;dr — Web-scale discovery is pretty awesome and will help your users find more of your stuff, but you need to know why you are implementing it and who you are doing it for, and ask those questions regularly even after you’ve done so.

NASIG 2011: Books in Chains

Speaker: Paul Duguid

Unlike the automotive brand wars, tech brand wars still require a level of coordination and connectivity among the combatants. Intel, Windows, and Dell can all be in one machine, and it became a competition over which component motivated the purchase.

The computer/tech supply chain is odd. The most important and difficult component to replace is the hard drive, and yet most of us don’t know who makes the drives in our computers. It makes a huge difference in profit when your name is out front.

Until the mid-1800s, wine was sold under the retailer's name, not the vineyard's. Eventually that shifted, and then it shifted again to selling under the name of the varietal.

In the book supply chain there are many links, and the reader who buys the book may not see all of the names involved; at different times in history, different links have been the brand that sold the book. Mark Twain and Rudyard Kipling tried to trademark their names so that publishers could not abuse them.

In academia, degrees are an indication of competency, and the institution behind the degree is a part of the brand. Certification marks began with unions in the US, and business schools were among the first to go out and register their names. However, it gets tricky when the institution conferring the degrees is also taking in fees from students. Is it certification or simply selling the credentials?

Who brands in publishing? We think the author, but outside of fiction, that starts to break down. Reference works are generally branded by the publisher. Reprint series are branded by the series. Romances are similar. Do we pay attention to who wrote the movie, TV series, or even newspaper article?

What happens when we go digital? The idealist’s view is that information wants to be free. The pragmatic view is that information needs to be constrained. Many things that are constraints are also resources. The structure and organization of a newspaper has much to do with the paper it is on. Also, by limiting to what fits on the paper, it conveys an indication of importance if it makes it into print. Free information suffers from a lack of filters to make the important bits rise to the top.

We think of technologies replacing each other, but in fact they tend to create new niches by taking away some but not all of the roles of the old tech. What goes and what stays is what you see as integral.

get off my lawn…er…library


The librarian community (at least, those in higher education) is all abuzz over a recent article in The Chronicle by social science and humanities librarian Daniel Goldstein. He makes several damning statements about the trend in libraries towards access over ownership and “good enough” over perfect.

Before reading the byline at the end of the article, I had a sense that the author was a well-meaning if ill-informed professor standing up for what he thinks libraries should be. Needless to say, I was surprised to learn that he is a librarian who ought to know better.

Yes, librarians should be making careful decisions about collections that guide users to the best resources, but at the same time we are facing increasing demand for more, and more expensive, content than we already provide. And yes, we should be instructing users in how to carefully construct searches in specialized bibliographic databases, but we are also facing increased class sizes with decreased staff.

There is no easy answer, and going back to some idealized vision of the way things were won’t solve the problem, either. If you do go read this article, I highly recommend reading the comments as well. At least the first few do an excellent job of pointing out the flaws in Goldstein’s either-or argument.

WordCamp Richmond: Starting From Scratch – Introduction to Building Custom Themes

Presenter: Wren Lanier

Why use WordPress as a CMS for a small website? It's flexible enough to build all sorts of sites. It's free as in beer, and there is a huge support community. It has a beautiful admin interface (particularly compared to other CMSes like Drupal) that clients like to use, which means a site is more likely to succeed and make them happy repeat clients.

First things first. Set up a local development server (MAMP or XAMPP) or use a web host. This allows you to develop on a desktop machine as if it were a web server.

Next, load dummy content like posts and comments. There are plugins for this (WP Dummy Content, Demo Data Creator), or you can import it in XML form.

Start with a blank theme. You could start from scratch, but nobody needs to reinvent the wheel. Really good ones: Starkers (semantic, thorough, and functional), Naked (created for adding your own XHTML), Blank (now with HTML5), and more.

A blank theme will come with several PHP files for pages/components and a CSS file. To create a theme, you really only need index.php, screenshot.png, and style.css. Lanier begs you to name your theme (i.e. sign your work).

Now that you have a theme name, start with the header and navigation. Next, take advantage of WP's dynamic template tags: don't use an absolute path to your style sheet, home page, or anywhere else on your site if you can avoid it.
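A minimal sketch of what that looks like in a header.php fragment, assuming a standard theme setup:

```php
<?php // Sketch of a header.php fragment: template tags, not hardcoded URLs ?>
<link rel="stylesheet" href="<?php bloginfo('stylesheet_url'); ?>" />
<h1><a href="<?php echo home_url('/'); ?>"><?php bloginfo('name'); ?></a></h1>
```

If the site ever moves to a new domain, these tags keep working where hardcoded URLs would break.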

Make things even more awesome with some if/then statements; you can do that in PHP. [I should probably dig out my PHP for Dummies-type reference books and read up on this.] This lets you code elements differently depending on what type of page is being displayed.
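For example, here is a hedged sketch using WP's conditional tags to swap sidebars by page type (the sidebar names are hypothetical):

```php
<?php
// Sketch: conditional tags let one template branch by page type.
if (is_front_page()) {
    get_sidebar('home');  // loads sidebar-home.php on the homepage
} elseif (is_single()) {
    get_sidebar('blog');  // loads sidebar-blog.php on single posts
} else {
    get_sidebar();        // falls back to sidebar.php everywhere else
}
```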

Once you have your header file, build your footer file, making sure to close any tags you opened in your header. Code the copyright year to be dynamic.
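A sketch of a minimal footer.php, assuming a wrapper div was opened in header.php:

```php
    </div><!-- closes the wrapper div assumed to be opened in header.php -->
    <footer>
      &copy; <?php echo date('Y'); ?> <?php bloginfo('name'); ?>
    </footer>
    <?php wp_footer(); // standard hook: lets plugins output their scripts ?>
  </body>
</html>
```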

It doesn’t have to be a blog!

If you’re going to create a static homepage, make sure you name the custom template. If you don’t do this, the WP admin can’t see it. Go into Reading Settings to select the page you created using the homepage template.
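A minimal sketch of such a template file (the template name is a placeholder):

```php
<?php
/*
Template Name: Homepage
*/
// The comment header above is what makes this template appear in the WP
// admin's Page Attributes dropdown; without it, editors can't select it.
get_header();
// ... homepage-specific markup and THE LOOP go here ...
get_footer();
```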

Now that you have all that, what goes into the custom template? Well, you have the header and footer already, so now you put THE LOOP in between a div wrapper. The loop is where the WP magic happens: it displays the content depending on the template of the page type, and it can limit the number of posts shown on a page, include/exclude categories, list posts by author/category/tag, offset posts, order posts, and more.
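A minimal version of the loop, wrapped in a div as described:

```php
<div id="content">
  <?php if (have_posts()) : while (have_posts()) : the_post(); ?>
    <h2><a href="<?php the_permalink(); ?>"><?php the_title(); ?></a></h2>
    <?php the_content(); ?>
  <?php endwhile; else : ?>
    <p>No posts found.</p>
  <?php endif; ?>
</div>
```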

Once you have your home page, you'll want to build the interior pages. There are several strategies. You could let page.php power them all, but if you have different interior page designs, then you'll want to create custom page templates for each. That can become inefficient, so Lanier recommends using if/then statements for things like custom sidebars. A technique of awesomeness is using dynamic body IDs, which lets you target content to specific pages with the body_class tag depending on any number of variables; or, once again, you can use an if/then statement. There are other options for body classes as well.
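A sketch of the body_class technique:

```php
<?php // body_class() prints context classes on the body element
      // ("home", "single", "page page-id-42", ...) for CSS targeting. ?>
<body <?php body_class(); ?>>
```

With those classes in place, style.css can do things like `.single #sidebar { display: none; }` without any extra template files.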

Finish off your theme with the power of plugins. Basics: Akismet, All-In-One SEO, Google XML Sitemaps, Fast Secure Contact Form (or another contact form plugin), and the WPtouch iPhone theme. For blogs, you'll want plugins like Author Highlight, Comment Timeout, SEO Slugs (shortens URLs to be SEO-friendly), Thank Me Later (first-time commenters get a thank-you email with links to other content), and WordPress Related Posts. For a CMS, these are good: Custom Excerpts, Search Permalink, Search Unleashed (or Better Search, since the default search is a bit lacking), WP-PageNavi (replaces older/newer links with page numbering), and WP Super Cache (caches content pages as static HTML and reduces server load).

Questions:

What about multi-user installations? She used Darren Hoyt's Mimbo theme because it was primarily a magazine site.

At what point do you have too many conditional statements in a template? It’s a balancing act between which is more efficient: conditional statements or lots of PHP files.

How do you keep track of new plugins and the reliability of their developers? Darren Hoyt and Elliot Jay Stocks are two designers she follows, and she will check out their recommendations.

What is your opinion of premium themes? For most people, that's all they need. She would rather spend her time developing niche things that can't be handled by standard themes.

How do you know when plugins don't mesh well with each other? It's hard to keep up with this as plugin patches and updates to the WP code are released.

Where can you find out how to do what you want to do? The codex can be confusing. It's often easier to find a theme that does the element you want, and then figure out how they designed the loop to handle it.

Are parent templates still necessary? Lanier hasn’t really used them.

Leave WP auto-P on or off? She turns it off. Essentially, WP automatically wraps paragraphs with a p tag, which can mess with your theme.
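A sketch of what turning it off looks like in a theme's functions.php:

```php
<?php
// wpautop is the filter that wraps content and excerpts in <p> tags;
// removing it stops WP from injecting paragraph markup into your theme.
remove_filter('the_content', 'wpautop');
remove_filter('the_excerpt', 'wpautop');
```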

February reading

That's right. Reading. Not plural. I finished only one book last month, and it was just the last few chapters I didn't finish in January. I have a good excuse, though: my limited spare time last month was consumed with packing and moving and unpacking.

The book I finished was for the semi-annual book discussion group at work. We selected Nicholas Carr's The Big Switch last fall, but we weren't able to meet to talk about it until early January. Here are my final thoughts on the book:

I found the parallels between the evolution of electricity delivery, from self-contained generator systems to the modern-day grid, and the evolution of personal computing applications from desktop to the cloud to be fascinating, and a good argument for cloud computing. However, having made that argument, the author proceeds to show his true colors as an anti-technology, privacy-focused, Matrix-fearing Luddite. Disappointing.
