NASIG 2012: Managing E-Publishing — Perfect Harmony for Serialists

Presenters: Char Simser (Kansas State University) & Wendy Robertson (University of Iowa)

Iowa looks at e-publishing as an extension of the central mission of the library. This covers not only text, but also multimedia content. After many years of ad-hoc work, they formed a department to be more comprehensive and intentional.

Kansas really didn’t do much with this until they had a strategic plan that included establishing an open access press (New Prairie Press). This also involved reorganizing personnel to create a new department to manage the process, which includes the institutional repository. The press includes not only their own publications, but also hosts publications from a few other sources.

Iowa went with bepress’ Digital Commons to provide both the repository and the journal hosting. Part of why they went this route for their journals was because they already had the platform for their repository, and they approach it more as a hosting platform than as a press/publisher. This means they did not need to add staff to support it, although they did add responsibilities to existing staff on top of their other work.

Kansas is using Open Journal Systems hosted on a commercial server due to internal politics that prevented it from being hosted on the university server. All of their publications are Gold OA, and the university/library is paying all of the costs (~$1700/year, not including the 0.6 FTE of staff time).

Day in the life of New Prairie Press — most of the routine stuff at Kansas involves processing DOI information for articles and works-cited, and working with DOAJ for article metadata. The rest is less routine, usually involving journal setups, training, consultation, meetings, documentation, troubleshooting, etc.

The admin back-end of OJS allows Char to view it as if she is different types of users (editor, author, etc.) to be able to trouble-shoot issues for users. Rather than maintaining a test site, they have a “hidden” journal on the live site that they use to test functions.

A big part of her daily work is submitting DOIs to CrossRef and going through the backfile of previously published content to identify and add DOIs to the works-cited. The process is very manual, and the error rate is high enough that automation would be challenging.
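Even with a high error rate, a semi-automated first pass is plausible: query the CrossRef REST API for each free-form citation and accept only high-confidence matches, routing everything else to a person. This is only a sketch under assumptions — the score threshold of 80 is arbitrary, and this is not New Prairie Press’s actual workflow:

```python
import json
import urllib.parse
import urllib.request

def best_doi(message, min_score=80.0):
    """Given the 'message' object from a CrossRef /works response,
    return the top item's DOI only when its relevance score clears
    the threshold; otherwise None, so a human reviews the match."""
    items = message.get("items", [])
    if items and items[0].get("score", 0.0) >= min_score:
        return items[0]["DOI"]
    return None

def lookup_citation(citation, min_score=80.0):
    """Query the CrossRef REST API with a free-form citation string
    and return a high-confidence DOI, or None for manual review."""
    url = ("https://api.crossref.org/works?rows=1&query.bibliographic="
           + urllib.parse.quote(citation))
    with urllib.request.urlopen(url, timeout=30) as resp:
        message = json.load(resp)["message"]
    return best_doi(message, min_score)
```

In practice the threshold would be tuned against a hand-checked sample, since accepting a wrong DOI is worse than leaving a citation for manual lookup.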

Iowa does have some subscription-based titles, so part of the management involves keeping up with a subscriber list and IP addresses. All of the titles eventually become open access.

Most of the work at Iowa has been with retrospective content — taking past print publications and digitizing them. They are also concerned with making sure the content follows current standards that are used by both library systems and Google Scholar.

There was more, but I couldn’t take notes and keep time toward the end.

ER&L 2012: Lightning Talks

Due to a phone meeting, I spent the first 10 min snarfing down my lunch, so I missed the first presenters.

Jason Price: Libraries spend a lot of time trying to get accurate lists of the things we’re supposed to have access to. Publisher lists are marketing lists, and they don’t always include former titles. Do we even need these lists anymore? Should we be pushing harder to get them? Can we capture the loss from inaccurate access information and use that to make our case? Question: Isn’t it up to the link resolver vendors? No, they rely on the publishers/sources like we do. Question: Don’t you think something is wrong with the market when the publisher is so sure of sales that they don’t have to provide the information we want? Question: Haven’t we already done most of this work in OCLC, shouldn’t we use that?

Todd Carpenter: NISO recently launched the Open Discovery Initiative, which is trying to address the problems with indexed discovery services. How do you know what is being indexed in a discovery service? What do things like relevance ranking mean? What about the relationships between organizations that may impact ranking? The project is ongoing and expect to hear more in the fall (LITA, ALA Midwinter, and beyond).

Title change problem — uses the xISSN service from OCLC to identify title changes through a Python script. If the data in OCLC isn’t good enough, and librarians are creating it, then how can we expect publishers to do better?
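The core of such a script is just walking the JSON that xISSN returned for an ISSN’s title history. The payload layout below is a simplified assumption about the (since-retired) service’s response, not OCLC’s exact schema:

```python
def title_changes(history):
    """Collect (ISSN, title) pairs from an xISSN-style getHistory
    payload. The layout assumed here -- a 'group' whose 'list'
    holds records with 'issn' and 'title' -- is illustrative only."""
    changes = []
    for group in history.get("group", []):
        for record in group.get("list", []):
            changes.append((record.get("issn"), record.get("title")))
    return changes

# Hypothetical payload for a journal that changed title once:
sample = {
    "group": [{
        "list": [
            {"issn": "0000-0001", "title": "Journal of Examples"},
            {"issn": "0000-0002", "title": "Example Studies"},
        ]
    }]
}
```

A full script would loop over a holdings list of ISSNs and flag any that return more than one title form for review.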

Dani Roach: Anyone seeing an unusual spike in use for 2011? Have you worked with the vendor about it? Do you expect a resolution? They believe our users are doing group searches across the databases, even though we are sending them to specific databases, so users would need to actively choose to search more than one. She cautioned everyone to check their stats. And how is the vendor’s explanation still COUNTER compliant?

Angel Black: Was given a mission at ER&L to find out what everyone is doing with OA journals, particularly those that come with traditional paid packages. They are manually adding links to MARC records, and use series fields (830) to keep track of them. But they’re not sure how to handle the OA stuff, particularly when you’re using a single record. Audience suggestion to use 856 subfield x. “Artisanal, handcrafted serials cataloging”
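The 856 suggestion can be illustrated with a small sketch that formats the field as a display string — the indicators and the $x note wording here are hypothetical examples, not a cataloging standard:

```python
def build_856(url, note):
    """Format an 856 (electronic location) field with $u for the
    URL and $x for a nonpublic note flagging open access status.
    Indicator values '40' and the note text are illustrative only."""
    return f"856 40 $u{url} $x{note}"

# Hypothetical single-record scenario: flag the OA portion in $x
# so staff can distinguish it from licensed access later.
field = build_856("https://example.org/journal", "Open access content only")
```

Because $x is a nonpublic note, the flag stays visible to staff without cluttering the public display.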

Todd Carpenter part 2: How many of you think your patrons are having trouble finding the OA content in a mixed-access journal when it is not exposed/labeled? Knowledge bases track access at the journal or volume/issue level. About 1/3 of the room thinks it is a problem.

Has anyone developed their own local mobile app? Yes, there are ways to do that, but it’s more important to create a mobile-friendly website. PhoneGap will wrap your web app in a native app for each mobile OS, and can include some location services. Maybe look to include the library in a university-wide app?

Adam Traub: Really into PPV/demand-driven acquisition. Some publishers do an advance purchase model with tokens, and some of the tokens will expire. He really wants to make it an unmediated process, but that opens up the library to increasing and spiraling costs. They went unmediated for a quarter, and the use skyrocketed. What’s a good way to do this without spending a ton of money? CCC’s Get It Now drives PPV usage through the link resolver. Another library uses a note to indicate that the journal is being purchased by the library.

Kristin Martin: Temporarily has two discovery services, and they don’t know how to display this to users. Prime for some usability testing. Have results from both display side by side and let users “grade” them.

Michael Edwards: Part of a NE consortium, and thinks they should be able to bring consortial pressure on vendors, but vendors are basically telling them to take a leap. Are any of the smaller groups having success pressuring vendors into making concessions for consortial acquisitions? Orbis-Cascade and Connect NY have both been doing good things for ebook pricing and reducing the simultaneous-user (SU) multiplier. Do some collection analysis on the joint borrowing/purchasing policies? The selectors will buy what they buy.

reason #237 why JSTOR rocks

For almost two decades, JSTOR has been digitizing and hosting core scholarly journals across many disciplines. Currently, their servers store more than 1,400 journals from the first issue to a moving wall of anywhere from 3-5 years ago (for most titles). Some of these journals date back several centuries.

They have backups, both digital and virtual, and they’re preserving metadata in the most convertible/portable formats possible. I can’t even imagine how many servers it takes to store all of this data. Much less how much it costs to do so.

And yet, in the spirit of “information wants to be free,” they are making the out-of-copyright content open and available to anyone who wants it. That’s stuff published in the United States before 1923, and before 1870 for everything else. Sure, it’s not going to be very useful for some researchers who need more current scholarship, but JSTOR hasn’t been about new stuff so much as preserving and making accessible the old stuff.

So, yeah, that’s yet another reason why I think JSTOR rocks. They’re doing what they can with an economic model that is responsible, and making information available to those who can’t afford it or are not affiliated with institutions that can purchase it. Scholarship doesn’t happen in a vacuum, and innovators and great minds aren’t always found solely in wealthy institutions. This is one step towards bridging the economic divide.

ER&L: Here Comes Everybody (a fishbowl conversation)

Organizers: Robb Waltner, Teresa Abaid, Rita Cauce, & Alice Eng

Usability of ERMS
Is a unified product better than several products that each do one aspect well? Maybe we are trying to do too much with our data? Theoretically, products from the same vendor should talk to each other, but they don’t.

Ex Libris is folding the ERMS tools into their new ILS. Interesting.

ERM is an evolving thing. You’ll always wish that there was more to your system. (Too true.)

Usefulness of Web-Scale Discovery
Some of the discovery layers don’t talk to the underlying databases or ILS very well. In many cases, the instruction librarians refuse to show it to users. They forget that the whole point of having these tools is so we don’t have to teach the users how to use them.

One institution did a wholesale replacement of the OPAC with the discovery tool, and they are now being invited to more classes and have a great deal of excitement about it around the campus.

Reality of Open Access
Some OA publishers are seeing huge increases in submissions from authors. Not the story that has been told in the past, but good to hear.

Librarians should be advocating for faculty to retain their own copyright, which is a good argument for OA. We can also be a resource for faculty who are creating content that can’t be contained by traditional publishing.

Integrating SERU
One publisher was willing to use it in lieu of having no license at all.

Librarians need to keep asking for it to keep it in the minds of publishers and vendors. Look for vendors in the SERU registry.

Lawyers want to protect the institution. It’s what they do. Educate them about the opportunities and the unnecessary expense wasted on license negotiations for low risk items.

One limitation of SERU is that it references US law and terms.

NASIG 2009: Registration Ruminations

Presenters: Kristina Krusmark and Mary Throumoulos

More than 60% of all content purchased has an electronic component. This is continually increasing, requiring more things that need to be registered.

Last summer, EBSCO commissioned a study to identify challenges in online content purchases. There were about 455 participants, mostly from North America, and they identified registration and activation as the primary issue. The survey found that the process is too complicated: there isn’t a standard model, and the instructions/information are often incomplete. Another challenge the survey found was a lack of sufficient staffing to properly manage the process. This results in delays in access, or in titles not being registered at all.

If users don’t have access to content, then they won’t use the content, even if it had been paid for. When librarians look at usage to make collection development decisions, the lack or delay in activation could have a huge impact on whether or not to retain the subscription. And, as one audience member noted, after having bad or frustrating experiences with registering for access, librarians might be hesitant to subscribe to online journals that are difficult to “turn on.”

Recently, Throumoulos’s library decided to convert as much as possible to online-only. They canceled print journals that were also available through aggregators like Project MUSE, and made decisions about whether to retain print-only titles. Then they began the long process of activating those online subscriptions.

For online-only titles, most of the time the license process results in access without registration. For print+online titles, the registration process can be more complicated, and sometimes involves information from mailing labels, which may or may not be retained in processing.

Agents would like to be able to register on behalf of libraries, and most do so when they are able to. However, many publishers want the customer, not the agent, to register access. When agents can’t register for the customer, they do try to provide as much information about the process (links, instructions, customer numbers, basic license terms, etc.).

Opportunities for improvement: standardization of registration models, greater efficiencies between agents and publishers, and industry initiatives like SERU.

CIL 2009: Open Access: Green and Gold

Presenter: Shane Beers

Green open access (OA) is the practice of depositing a document and making it available on the web. Most frequently, these are peer-reviewed research and conference articles. This is not self-publishing! OA repositories allow institutions to store and showcase their research output, thus increasing its visibility within the academic community.

Institutional repositories are usually managed by either DSpace, Fedora, or EPrints, and there are third-party external options using these systems. There are also a few subject-specific repositories not affiliated with any particular institution.

The "serials crisis" results in most libraries not subscribing to every journal out there that their researchers need. OA eliminates this problem by making relevant research available to anyone who needs it, regardless of their economic barriers.

A 2008 study showed that less than 20% of all scientific articles published were made available via green or gold OA. Self-archiving is at a low 15%, and incentives increase it by only 30%. Researchers and their work habits are the greatest barriers that OA repository managers encounter. The only way to guarantee 100% self-archiving is with an institutional mandate.

Copyright complications are also barriers to adoption. Post-print archiving is the most problematic, particularly as publishers continue to resist OA and prohibit it in author contracts.

OA repositories are not self-sustaining. They require top-down dedication and support, not only for the project as a whole, but also the equipment/service and staff costs. A single "repository rat" model is rarely successful.

The future? More mandates, peer-reviewed green OA repositories, expanding repositories to encompass services, and integration of OA repositories into the workflow of researchers.

Presenter: Amy Buckland

Gold open access is about not having price or permission barriers. No embargoes, with immediate post-print archiving.

The Public Knowledge Project’s Open Journal Systems is an easy tool for creating an open journal that includes all the capabilities of online multimedia. For example, First Monday uses it.

Buckland wants libraries to become publishers of content by making the platforms available to the researchers. Editors and editorial boards can come from volunteers within the institution, and authors just need to do what they do.

Publication models are changing. Many granting agencies are requiring OA components tied to funding. The best part: everyone in the world can see your institution’s output immediately!

Installation of the product is easy — it’s getting the word out that’s hard.

Libraries can make the MARC records freely available, and ensure that the journals are indexed in the Directory of Open Access Journals.

Doing this will build relationships between faculty and the library. Libraries become directly involved in the research output of faculty, which makes libraries more visible to administrators and budget decision-makers. University presses are struggling, but even though they are focused on revenue, OA journal publishing could enhance their visibility and status. Also, if you publish OA, the big G (and other search engines) will find it.

Harvard & the Open Access movement

A colleague called the Harvard faculty’s decision on making all of their works available in an institutional repository a “bold step towards online scholarship and open access.” I thought about this for a bit, and I’m not so sure it’s the right step, depending on how this process is done. Initially, I thought the resolution called for depositing articles before they are published, which would be difficult to enforce and likely result in the non-publication of said articles. However, upon further reflection and investigation, it seems that the resolution simply limits the outlets for faculty publication to those journals that allow for pre- or post-publication versions to be deposited in institutional repositories. Many publishers are moving in that direction, but it’s still not universal, and is unlikely to be so in the near future.

I am concerned that the short-term consequences will be increased difficulty in junior faculty getting their work published, thus creating another unnecessary barrier to tenure. I like the idea of a system that retains the scholarship generated at an institution, but I’m not sure if this is the right way to do it. Don’t get me wrong — repositories are a great way to collect the knowledge of an institution’s researchers, but they aren’t the holy grail solution to the scholarly communication crisis. Unless faculty put more of a priority on making their scholarship readily available to the world than on the prestige of the journal in which it is published, there will be little incentive to exclusively submit articles to publishers that allow them to be deposited in institutional repositories beyond mandatory participation. There are enough hungry junior faculty in the world to keep the top-shelf journal publishers in the black for years to come.

recent articles read

I’ve been catching up on some professional reading.

I’ve read a few articles recently that I’ve found quite interesting and would like to share some thoughts on them.

Van de Sompel, Herbert, et al. “Rethinking Scholarly Communication: Building the System that Scholars Deserve.” D-Lib Magazine 10:9 (2004). doi:10.1045/september2004-vandesompel [open access]

I was immediately intrigued by what the creator of OpenURL (and his co-authors) might suggest as a technological solution to the current problems with scholarly communication. I couldn’t follow all of the technological details (they lost me at the flow charts and diagrams), but I was pleased to read this in the conclusion: “The NSF has recently recommended funding the authors of this paper to investigate these problems, building on our collective research and development. In a future article we will discuss our current work in moving toward a network overlay that promotes interoperability among heterogeneous data models and system implementations. We will describe our architectural vision for addressing the fundamental technical requirements of a next generation system for scholarly communication.”

Antelman, Kristin. “Do Open-Access Articles Have a Greater Research Impact?” College & Research Libraries 65:5 (2004), 372-382. [open access]

The author set out to find data to confirm or debunk the common assumption that open access articles have a greater research impact than those which are not open access. She looks at four disciplines in different stages of open access development, and all of them have had a history with the use of pre-print articles. The data she gathers leads her to conclude that open access articles do have a greater research impact than those which are not freely available. I would like to see these types of studies extended to other disciplines, but I am pleased to see that someone out there is gathering data for the rest of us to share with the teaching/research faculty in the discussions about scholarly communication we should all be having.

Siebenberg, Tammy R., Betty Galbraith, and Eileen E. Brady. “Print versus Electronic Journal Use in Three Sci/Tech Disciplines: What

nasig day 3 & 4

I ended up having only one more chance to get online while at the conference, and that was during the closing session at Centennial Hall. It didn’t seem to be appropriate to blog while listening to the final vision session, so I decided to wait until I returned home.

