NASIG 2013: Collaboration in a Time of Change

2013-06-10
[Image: “soccer practice” by woodleywonderworks, CC BY 2.0]

Speaker: Daryl Yang

Why collaborate?

Despite how popular Apple products are today, the company almost went bankrupt in the 1990s. Experts believe that, for all its innovation, a lack of collaboration led to this near-downfall. iTunes, the iPod, the iPad — these all require working with many outside developers, and that collaboration is a big part of why Apple came back.

Microsoft started off as very open to collaboration and innovation from outside the company, but that is no longer the case. To get back into the groove, they have partnered with Nokia to enter the mobile phone market.

Collaboration can create commercial success, innovation, synergies, and efficiencies.

What change?

The amount of information generated now vastly exceeds everything collected in the past. It is beyond our imagination.

How has library work changed? We still manage collections and access to information, but the way we do so has evolved with the ways information is delivered. We have had to sharpen our negotiation skills, as every transaction is unique to the customer profile. We have also needed to reorganize our structures and workflows to meet the changing needs of our institutions and the information environment.

Deloitte identified ten key challenges faced by higher education: funding (public, endowment, and tuition), rivalry (competing globally for the best students), setting priorities (appropriate use of resources), technology (infrastructure & training), infrastructure (classroom design, offices), links to outcomes (graduation to employment), attracting talent (and retaining them), sustainability (practicing what we preach), widening access (MOOC, open access), and regulation (under increasing pressure to show how public funding is being used, but also maintaining student data privacy).

Libraries say they have too much stuff on shelves, more of it is available electronically, and it keeps coming. Do we really need to keep both print and digital when there is a growing pressure on space for users?

The British Library Document Supply Centre plays an essential role in delivering physical content on demand, but the demand is falling as more information is available online. And, their IT infrastructure needs modernization.

These concerns sparked conversations that created UK Research Reserve (UKRR) and prompted an evaluation of print journal usage. Users prefer print for in-depth reading, and the humanities and social sciences still show high usage of print materials compared to the sciences. At least, that was the case 5-6 years ago when UKRR was created.

Ithaka S+R, JISC, and RLUK sent out a survey to faculty about print journal use, and they found that this is still fairly true. They also discovered that even those who are comfortable with electronic journal collections would not be happy to see print collections discarded. There was clearly a demand that some library, if not their own, maintain a collection of hard copies of journals. Not every library has to keep them, but SOMEONE has to.

It is hard to predict research needs in the future, so it is important to preserve content for that future demand, and make sure that you still own it.

UKRR’s initial objectives were to de-duplicate low-use journals, allowing members to release space and realize savings and efficiencies, and to preserve research material and provide access for researchers. They also want to achieve cultural change — librarians and academics don’t like to throw things away.

So far, they have examined 60,700 holdings, and of those, only 16% have been retained. They intend to keep at least three copies of each title among the membership, so the low retention rate reflects a significant amount of overlap in holdings across the member schools.

NASIG 2013: Adopting and Implementing an Open Access Policy — The Library’s Role

2013-06-10
[Image: “Open Access promomateriaal” by biblioteekje, CC BY-NC-SA 2.0]

Speaker: Brian Kern

The open access policy was developed late last year and adopted/implemented in March. It has been live for 86 days, so he’s not an expert, but he has learned a lot in the process.

His college is small, and he expects fewer than 40 publications submitted per year; they are using the institutional repository to manage this.

They have cut about 2/3 of their journal collections over the past decade, preferring publisher package deals and open access publications. They have identified the need to advocate for open access as a goal of the library. They are using open source software where they can, hosted and managed by a third party.

The policy borrowed heavily from others, and it is a rights-retention mandate in the style of Harvard’s. One piece of advice they received: don’t focus on the specifics of implementation within the policy itself.

The policy states that the license will be granted automatically, but waivers are available for embargoes or publisher prohibitions. There are no restrictions on where faculty can publish, and they are encouraged to remove restrictive language from contracts via an author addendum. Even with waivers, all articles are deposited to at least a “closed” archive. The policy stipulates that they are only interested in peer-reviewed articles and are not concerned with which version of the article is deposited. Anything published or contracted to be published before the adoption date is not required to comply, but it can if the author wishes.

The funding, as one may expect, was left out. The library is going to cover the open access fees, with matching funds from the provost. Unused funds will be carried over year to year.

This was presented to the faculty as a way to ensure that their rights were being respected when they publish their work. Nothing was said about the library’s traditional concerns: saving money and opening access to local research output.

The web hub will include the policy, a FAQ, recommended author addenda based on publisher, funding information, and other material related to the process. The faculty will be self-depositing, with review/editing by Kern.

They have a monthly newsletter/blog to let the campus know about faculty and student publications, so they are using it to identify materials that should be submitted to the collection. He’s also using Stephen X. Flynn’s code to identify OA articles via SHERPA/RoMEO, finding ones already published that can be used to populate the repository.
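Flynn’s scripts aren’t reproduced here, but a minimal sketch of that kind of lookup, assuming the legacy SHERPA/RoMEO v2.9 API that was current at the time and a hypothetical romeo_colour helper, might look like this:

```python
# A sketch, not Flynn's code: look up a journal's RoMEO archiving
# "colour" by ISSN using the legacy SHERPA/RoMEO v2.9 API (an
# assumption; SHERPA has since replaced this service with a v2 API).
import requests
import xml.etree.ElementTree as ET

ROMEO_API = "http://www.sherpa.ac.uk/romeo/api29.php"

def romeo_colour(issn):
    """Return the RoMEO colour (green, blue, yellow, white) or None."""
    resp = requests.get(ROMEO_API, params={"issn": issn}, timeout=30)
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    colour = root.find(".//romeocolour")
    return colour.text if colour is not None else None

# Flag journals whose policies allow archiving some version.
if romeo_colour("0028-0836") in ("green", "blue"):
    print("Archiving permitted; candidate for the repository.")
```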

They are keeping the senior projects closed in order to keep faculty/student collaborations private (and faculty research data offline until they publish).

They have learned that the policy depends on faculty seeing open access as a reality and on the library keeping faculty informed of the issues. They were not prepared for how quickly the policy would pass and submissions would begin. Don’t expect faculty to be copyright lawyers. Keep the submission process as simple as possible, and allow alternatives like email or paper.

NASIG 2012: Managing E-Publishing — Perfect Harmony for Serialists

Presenters: Char Simser (Kansas State University) & Wendy Robertson (University of Iowa)

Iowa looks at e-publishing as an extension of the central mission of the library. This covers not only text, but also multimedia content. After many years of ad-hoc work, they formed a department to be more comprehensive and intentional.

Kansas really didn’t do much with this until they had a strategic plan that included establishing an open access press (New Prairie Press). This also involved reorganizing personnel to create a new department to manage the process, which includes the institutional repository. The press includes not only their own publications, but also hosts publications from a few other sources.

Iowa went with bepress’ Digital Commons to provide both the repository and the journal hosting. Part of why they went this route for their journals was that they already had the platform for their repository, and they approach it more as a hosting platform than as a press/publisher. This means they did not need to add staff to support it, although they did add responsibilities to existing staff on top of their other work.

Kansas is using Open Journal Systems, hosted on a commercial server due to internal politics that prevented it from being hosted on the university’s own servers. All of their publications are Gold OA, and the university/library pays all of the costs (~$1,700/year, not including the 0.6 FTE of staff time).

Day in the life of New Prairie Press — most of the routine stuff at Kansas involves processing DOI information for articles and works-cited, and working with DOAJ for article metadata. The rest is less routine, usually involving journal setups, training, consultation, meetings, documentation, troubleshooting, etc.

The admin back-end of OJS allows Char to view the site as different types of users (editor, author, etc.) so she can troubleshoot issues for users. Rather than maintaining a test site, they have a “hidden” journal on the live site that they use to test functions.

A big part of her daily work is submitting DOIs to CrossRef and going through the backfile of previously published content to identify and add DOIs to the works-cited. The process is very manual, and the error rate is high enough that automation would be challenging.
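For the works-cited lookups, a semi-automated first pass is possible with CrossRef’s public REST API (which postdates this session). A hedged sketch, with a hypothetical guess_doi helper and a made-up citation string, shows why a human still has to verify each match:

```python
# A sketch of semi-automating citation-to-DOI matching with CrossRef's
# REST API (api.crossref.org). The top hit is only a guess, which is
# why the manual review described above is still needed.
import requests

def guess_doi(citation_text):
    """Return (doi, title, score) for the best CrossRef match, or None."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation_text, "rows": 1},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items:
        return None
    top = items[0]
    return top["DOI"], (top.get("title") or [""])[0], top.get("score", 0.0)

# Hypothetical citation string pulled from a works-cited list.
match = guess_doi("Smith, J. (2007). Rural extension programs. J. Ext. 45(2).")
if match:
    doi, title, score = match
    print(f"{doi}  (score {score:.1f})  {title}")  # verify before adding
```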

Iowa does have some subscription-based titles, so part of the management involves keeping up with a subscriber list and IP addresses. All of the titles eventually become open access.

Most of the work at Iowa has been with retrospective content — taking past print publications and digitizing them. They are also concerned with making sure the content follows current standards that are used by both library systems and Google Scholar.

There is more. I couldn’t take notes and keep time towards the end.

reason #237 why JSTOR rocks

For almost two decades, JSTOR has been digitizing and hosting core scholarly journals across many disciplines. Currently, their servers store more than 1,400 journals, from the first issue up to a moving wall of three to five years ago (for most titles). Some of these journals date back several centuries.

They have backups, both digital and virtual, and they’re preserving metadata in the most convertible/portable formats possible. I can’t even imagine how many servers it takes to store all of this data. Much less how much it costs to do so.

And yet, in the spirit of “information wants to be free,” they are making the pre-copyright content open and available to anyone who wants it. That’s stuff from before 1923 that was published in the United States, and 1870 for everything else. Sure, it’s not going to be very useful for some researchers who need more current scholarship, but JSTOR hasn’t been about new stuff so much as preserving and making accessible the old stuff.

So, yeah, that’s yet another reason why I think JSTOR rocks. They’re doing what they can with an economic model that is responsible, making information available to those who can’t afford it or are not affiliated with institutions that can purchase it. Scholarship doesn’t happen in a vacuum, and innovators and great minds aren’t found solely in wealthy institutions. This is one step towards bridging the economic divide.

ER&L: Here Comes Everybody (a fishbowl conversation)

Organizers: Robb Waltner, Teresa Abaid, Rita Cauce, & Alice Eng

Usability of ERMS
Is a unified product better than several products that each do one aspect well? Maybe we are trying to do too much with our data? Theoretically, products from the same vendor should talk to each other, but they don’t.

Ex Libris is folding ERMS tools into their new ILS. Interesting.

ERM is an evolving thing. You’ll always wish that there was more to your system. (Too true.)

Usefulness of Web-Scale Discovery
Some of the discovery layers don’t talk to the underlying databases or the ILS very well. In many cases, the instruction librarians refuse to show them to users. They forget that the whole point of having these tools is so we don’t have to teach users how to use them.

One institution did a wholesale replacement of the OPAC with the discovery tool, and they are now being invited to more classes and have a great deal of excitement about it around the campus.

Reality of Open Access
Some OA publishers are seeing huge increases in submissions from authors. Not the story that has been told in the past, but good to hear.

Librarians should be advocating for faculty to retain their own copyright, which is a good argument for OA. We can also be a resource for faculty who are creating content that can’t be contained by traditional publishing.

Integrating SERU
One publisher was willing to use it rather than have no license at all.

Librarians need to keep asking for it to keep it in the minds of publishers and vendors. Look for the vendors in the registry.

Lawyers want to protect the institution. It’s what they do. Educate them about the opportunities and about the unnecessary expense wasted on license negotiations for low-risk items.

One limitation of SERU is that it references US law and terms.

NASIG 2010: It’s Time to Join Forces: New Approaches and Models that Support Sustainable Scholarship

Presenters: David Fritsch (JSTOR) and Rachel Lee (University of California Press)

JSTOR has started working with several university presses and other small scholarly publishers to develop sustainable options.

UC Press is one of the largest university presses in the US (36 journals in the humanities and the biological and social sciences), publishing both UC titles and society titles. Their prices range from $97 to $422 for annual subscriptions, and they are SHERPA Green. One of the challenges they face on their own platform is keeping up with libraries’ expectations.

ITHAKA is a merger of JSTOR, Ithaka, Portico, and Aluka, so JSTOR is now a service rather than a separate company. Most everyone here knows what the JSTOR product/service is, and that hasn’t changed much with the merger.

Scholars’ use of information is moving online, and if something isn’t online, they’ll use a different resource, even if it’s not as good. And if things aren’t discoverable by Google, they are often overlooked. More complex content is emerging, including multimedia and user-generated content. Mergers and acquisitions in publishing are consolidating content under a few umbrellas, and this threatens smaller publishers and university presses that can’t keep up with the costs on a smaller scale.

The serials crisis has impacted smaller presses more than larger ones. Despite good relationships with societies, it is difficult to retain popular society publications when larger publishers can offer them more. It’s also harder to offer the deep discounts expected by libraries in consortial arrangements. University presses and small publishers are in danger of becoming the publisher of last resort.

UC Press and JSTOR have had a long relationship, with JSTOR providing long-term archiving that UC Press could not have afforded to maintain on their own. Not all of the titles are included (only 22), but they are the most popular. They’ve also participated in Portico. JSTOR is also partnering with 18 other publishers that are mission-driven rather than profit-driven, with experience at balancing the needs of academia and publishing.

By partnering with JSTOR for their new content, UC Press will be able to take advantage of the expanded digital platform, sales teams, customer service, and seamless access to both archive and current content. There are some risks, including the potential loss of identity, autonomy, and direct communication with libraries. And then there is the bureaucracy of working within a larger company.

The Current Scholarship Program seeks to provide a solution to the problems outlined above that university presses and small scholarly publishers are facing. The shared technology platform, Portico preservation, sustainable business model, and administrative services potentially free up these small publishers to focus on generating high-quality content and furthering their scholarly communication missions.

Libraries will be able to purchase current subscriptions either through their agents or through JSTOR (which will not charge a service fee). However, archive content will be purchased directly from JSTOR. JSTOR will handle all of the licensing, and current JSTOR subscribers will simply have a rider adding titles to their existing licenses. For libraries that purchase JSTOR collections through consortial arrangements, it will be possible to add title-by-title subscriptions without going through the consortium if a consortial agreement doesn’t make sense for purchase decisions. They will be offering both single-title purchases and collections, with the latter being more useful for large libraries, consortia, and those who want current content for titles in their JSTOR collections.

They still don’t know what they will do about post-cancellation access. Big red flag here for potential early adopters, but hopefully this will be sorted out before the program really kicks in.

Benefits for libraries: reasonable pricing, more efficient discovery, single license, and meaningful COUNTER-compliant statistics for the full run of a title. Renewal subscriptions will maintain access to what they have already, and new subscriptions will come with access to the first online year provided by the publisher, which may not be volume one, but certainly as comprehensive as what most publishers offer now.

UC Press plans to start transitioning in January 2011. New orders, claims, etc. will be handled by JSTOR (including print subscriptions), but UC Press will be setting their own prices. Their platform, Caliber, will remain open until June 30, 2011; after that, content will be only on the JSTOR platform. UC Press expects to move to online-only in the next few years, particularly as the number of print subscriptions dwindles to the point where producing print issues is cost-prohibitive.

There is some interest from the publishers to add monographic content as well, but JSTOR isn’t ready to do that yet. They will need to develop some significant infrastructure in order to handle the order processing of monographs.

Some in the audience are concerned about the cost of developing platform enhancements and other tools, mostly that these costs will be passed on in subscription prices. To a certain extent they will be, in that the publishers contribute to development and set the prices, but because it is a shared system, the costs will be spread out and will likely impact libraries no more than they do already.

One big challenge all will face is unlearning the mindset that JSTOR is only archive content and not current content.

ER&L 2010: Step Right Up! Planning, Pitfalls, and Performance of an E-Resources Fair

Speakers: Noelle Marie Egan & Nancy G. Eagan

This got started because they had some vendors come in to demonstrate their resources. Elsevier offered to do a demo for students, with food. The library saw that several good resources were being under-used, so they decided to put together an e-resources demo with Elsevier and others. It was also a good opportunity to get usability feedback about the new website.

They decided to have ten tables total for the fair. They polled the reference librarians for suggestions on who to invite, and they ended up with resources that crossed most of the major disciplines at the school. The fair was held in a high-traffic location of the library (so they could get walk-in participation) and publicized in the student paper and on the library blog, and the librarians shared it on Facebook with student and faculty friends.

They had a raffle to gather information about the participants, and in the end, 64 undergraduates, 19 graduate students, 6 faculty, 5 staff, and 2 alumni attended the fair over the four hours. Filling out the raffle information let users interact with library staff in a different way, one that wasn’t just about coming in for information or help.

After the fair, they looked at the sessions and searches for the resources that were represented at the fair, and compared them against the monthly stats from the previous year. However, there is no way to determine whether the fair had a direct impact on the increases (and the few decreases).
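As a hedged illustration of that comparison, with made-up numbers rather than the library’s actual data, the year-over-year check amounts to something like:

```python
# Hypothetical monthly search counts for one database; the real
# comparison used vendor-supplied usage statistics.
searches_prev = {"Feb": 410, "Mar": 385, "Apr": 442}
searches_curr = {"Feb": 455, "Mar": 510, "Apr": 468}

for month, prev in searches_prev.items():
    curr = searches_curr[month]
    change = 100 * (curr - prev) / prev
    print(f"{month}: {prev} -> {curr} ({change:+.1f}%)")
```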

In and of itself, the event created publicity for the library. And, because it was free (minus staff time), they don’t really need to provide hard evidence of the event’s success (or failure).

Some of the vendors didn’t take it seriously and showed up late. They thought it was a waste of their time to talk about only the resources the library already purchases rather than pushing new sales, and it’s doubtful those vendors will be invited back. It may be better to schedule the fair around the time of your state library conference, if that happens nearby, so the vendors may already be close and not making a special trip.

CIL 2009: Open Access: Green and Gold

Presenter: Shane Beers

Green open access (OA) is the practice of depositing a document and making it available on the web. Most frequently, these are peer-reviewed research and conference articles. This is not self-publishing! OA repositories allow institutions to store and showcase their research output, thus increasing its visibility within the academic community.

Institutional repositories are usually managed with DSpace, Fedora, or EPrints, and there are third-party hosted options built on these systems. There are also a few subject-specific repositories not affiliated with any particular institution.

The "serials crisis" results in most libraries not subscribing to every journal out there that their researchers need. OA eliminates this problem by making relevant research available to anyone who needs it, regardless of their economic barriers.

A 2008 study showed that less than 20% of all scientific articles published were made available in a green or gold OA repository. Self-archiving is at a low 15%, and incentives raise it only to about 30%. Researchers and their work habits are the greatest barriers that OA repository managers encounter. The only way to guarantee 100% self-archiving is with an institutional mandate.

Copyright complications are also barriers to adoption. Post-print archiving is the most problematic, particularly as publishers continue to resist OA and prohibit it in author contracts.

OA repositories are not self-sustaining. They require top-down dedication and support, not only for the project as a whole, but also the equipment/service and staff costs. A single "repository rat" model is rarely successful.

The future? More mandates, peer-reviewed green OA repositories, expanding repositories to encompass services, and integration of OA repositories into the workflow of researchers.

Presenter: Amy Buckland

Gold open access is about having neither price nor permission barriers. No embargoes: post-print archiving is immediate.

The Public Knowledge Project’s Open Journal Systems is an easy tool for creating an open journal that includes all the capabilities of online multimedia. First Monday, for example, uses it.

Buckland wants libraries to become publishers of content by making the platforms available to the researchers. Editors and editorial boards can come from volunteers within the institution, and authors just need to do what they do.

Publication models are changing. Many granting agencies are requiring OA components tied to funding. The best part: everyone in the world can see your institution’s output immediately!

Installation of the product is easy — it’s getting the word out that’s hard.

Libraries can make the MARC records freely available, and ensure that the journals are indexed in the Directory of Open Access Journals.
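As a small illustration of the MARC piece, here is a sketch using pymarc (assuming the list-style subfields of pymarc 4.x and earlier); the record is a minimal example, not any particular library’s cataloging:

```python
# A sketch: build a brief MARC record for an OA journal with pymarc
# (list-style subfields, as in pymarc 4.x and earlier) and write it
# to a file that can be shared freely and loaded into any ILS.
from pymarc import Record, Field

record = Record()
record.add_field(
    Field(tag="022", indicators=[" ", " "], subfields=["a", "1396-0466"]),
    Field(tag="245", indicators=["0", "0"], subfields=["a", "First Monday"]),
    Field(
        tag="856",
        indicators=["4", "0"],
        subfields=["u", "https://firstmonday.org/", "z", "Open access"],
    ),
)

with open("oa_journals.mrc", "wb") as out:
    out.write(record.as_marc())
```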

Doing this will build relationships between faculty and the library. Libraries become directly involved in the research output of faculty, which makes libraries more visible to administrators and budget decision-makers. University presses are struggling, and even though they are focused on revenue, OA journal publishing could enhance their visibility and status. Also, if you publish OA, the big G (and other search engines) will find it.

more degrees for the same pay

In a recent Chronicle article, Todd Gilman complains about the lack of job postings for librarian subject specialists who have secondary master’s or doctoral degrees. While I think he makes valid points for why subject specialists should have post-graduate education in their fields of study, particularly if they are in tenure-track positions, I think he misses the mark as to why libraries are hiring folks without those degrees.

In that job posting and many others, the most attention paid to subject expertise (in the form of a master’s or Ph.D.) is a brief mention in the list of “preferred” qualifications. That is a strong indication that the hiring institution will settle for less — much less. In fact, I’m told that in a number of recent hires, Ph.D.’s and M.A.’s — some with years of professional experience working in top academic libraries in addition to having an MLIS — have been passed over in favor of candidates straight out of library school whose only previous degree was a bachelor’s.

Were they passed over because they asked for more compensation than what the institution was willing to pay? I suspect that may play a much larger role than what Mr. Gilman is giving it.

Libraries are usually the first target for budget cuts, and one of the biggest expenses in a library is staff salaries. Someone who has post-graduate degrees beyond the MLS will likely expect to be compensated for the additional skills and knowledge they bring to the job. University administrators either don’t understand or don’t care about the value that these folks add to collections and instruction, and as a result, they are unwilling to meet the compensation demands of these “better qualified” candidates. Recent graduates in any field will cost the university less in the salary department, and that short-term benefit is the only one that (mostly short-timer) administrators care about.

Given all that, would you go through the trouble of getting a second master’s degree or a doctoral degree, knowing that unless you are already in a tenure-track position with fair compensation, it is unlikely that you’ll be paid any more than you are already? Probably not, unless you were particularly passionate about research in your field of study.

Even so, that research might not help you with tenure, as some colleagues of mine discovered when their institution’s tenure requirements changed so that only scholarship in their primary field (read: library science) counted towards tenure and post-tenure review. Never mind that they had focused most of their scholarly research on their secondary subject specialties.

All of the above is why I took myself out of the tenure-track world. I have no interest (at this time) in becoming a subject specialist in anything but what I do every day: librarianship. I’m happy to let others make decisions about content, so long as they let me focus on my areas of expertise, such as delivery platforms, access, and licensing issues.

NASIG 2008: Managing Divergence of Print and Online Journals

Presenters: Beth Weston and Deena Acton

The National Library of Medicine spent 2007 examining the impact of content differences between print and online journals on library operations and services, and followed up on this in 2008. In evaluating the situation, the NLM team working on this project was tasked with locating the differences between print and online, noting them, and then determining their impact.

One thing that is worth noting here is that the NLM is an archival library, by which I mean they consider it a part of their mission to retain copies of everything they collect. And, their ILL service to other libraries is considered an essential function.

Because NLM is responsible for indexing content for MEDLINE, they were able to locate the differences through the indexing workflow. They have noticed that there is anecdotal evidence of an increase in online-only content. Aside from the indexing, which will be decreasing over time, differences between print and online are discovered by patrons and reference librarians, as well as interlibrary loan staff.

The working group recommends that publishers take responsibility for identifying the version of record, and develop and implement a standard for communicating that version to subscribers. However, that’s only a start. Libraries will then need to determine how they will note that in their records, as well as workflows for following up on it.

The set that the working group looked at included 149 titles from 58 publishers, available in both print and online formats but with additional online-only content. For a specific subset of these journals, data was collected on the number of complete articles in each edition, editorials, commentary/letters, book/media reviews, advertisements, announcements/calendar items, and continuing education materials. Notifications about new issues, author correspondence information, and other extraneous format-specific content were not considered.

Approximately 13% of the articles were online-only, and 18% of the articles contained article-level online-only supplementary materials. Based on the one year sampling, they estimate that 12,739 articles from these 149 titles could be online-only.

One reason for the increase in divergence may be the volume of content publishers want to provide versus the cost of printing all of it. As the cost of publishing e-journals decreases relative to the cost of print publishing, we will likely see more of this divergence.

[Side note: I really wish we would move away from the “presenting the data from my study” sessions to “here’s how I applied the data from my study” sessions.]
