CIL 2011: EBook Publishing – Practices & Challenges

Speaker: Ken Breen (EBSCO)

In 1997, ebooks were on CD-ROM and came with large paper books to explain how to use them, along with the same concerns about platforms we have today.

Current sales models involve purchase by individual libraries or consortia, patron-driven acquisition models, and subscriptions. Most of this presentation is a sales pitch for EBSCO and nothing you don’t already know.

Speaker: Leslie Lees (ebrary)

Ebrary was founded a year after NetLibrary and was acquired by ProQuest last year. They have similar models, with one slight difference: short-term loans, which will be available later this spring.

Now that books no longer need to be acquired out of fear they may be hard to get later, do we need to be building collections at all, or can we move to an on-demand model?

He thinks that platforms will move towards focusing more on access needs than on reselling content.

Speaker: Bob Nardini (Coutts)

They are working with a variety of incoming files and outputting them in any format needed by the distributors they work with, both ebook and print on demand.

A recent study found that academic libraries have a significant amount of overlap between their ebook and print collections.

They are working on approval plans for print and ebooks. The timing of the releases of each format can complicate things, and he thinks their model mediates that better. They are also working on interlibrary loan of ebooks and local POD.

Because they work primarily with academic libraries, they are interested in models for archiving ebooks. They are also looking into download models.

Speaker: Mike (OverDrive)

He sees the company as an advocate for libraries. Promises that there will be more DRM-free books and options for self-published authors. He recommends their resource for sharing best practices among librarians.

Questions:

What is going on with DRM and ebooks? What mechanisms do your products use?

Adobe Digital Editions is the main mechanism for OverDrive. Policies are set by the publishers, so all they can do is advocate for libraries. Ebrary and NetLibrary have proprietary software to manage DRM. Publishers are willing to give DRM-free access, but not consistently, and not for their “best” content.

It is hard to get content onto devices. Can you agree on a single standard content format?

No response, except to ask if they can set prices, too.

Adobe became the de facto solution, but it doesn’t work with all devices. Should we be looking for a better solution?

That’s why some of them are working on their own platforms and formats. EPUB has helped the growth of ebook publishing and may be the direction things are heading.

Public libraries need full support for these platforms – can you do that?

They try the best they can. OverDrive offers secondary support. They are working on front-line tech support and hope to offer it soon.

Do publishers work with all platforms or are there exclusive arrangements?

It varies.

Do you offer more than 10 pages at a time for downloads of purchased titles?

Ebrary tries to do it at the chapter level, and the same is probably true of the rest. EBSCO is asking for the right to print up to 60 pages at a time.

When will we be able to loan ebooks?

Coutts is working on ILL.

ER&L: Buzz Session – Usage Data and Assessment

What are the kinds of problems with collecting COUNTER and other reports? What do you do with them when you have them?

What is a good cost per use? Compare it to alternatives like ILL. For databases, trends are important.
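A back-of-the-envelope sketch of that comparison; every number here is a made-up example:

```python
# Cost-per-use arithmetic; all figures are hypothetical examples.
annual_cost = 4500.00          # assumed subscription price
annual_uses = 380              # assumed COUNTER full-text requests for the year
ill_cost_per_request = 17.50   # assumed average cost to fill one ILL request

cost_per_use = annual_cost / annual_uses
print(f"Cost per use: ${cost_per_use:.2f}")

if cost_per_use > ill_cost_per_request:
    print("ILL would be cheaper per article, all else being equal.")
else:
    print("The subscription is the better deal per use.")
```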

Non-COUNTER stats can be useful to see trends, so don’t discount them.

Do you incorporate data about the university in making decisions? Rankings of value from faculty or students (using star ratings in LibGuides or something else)?

When usage is low and cost is high, that may be the best thing to cancel in budget cuts, even if everyone thinks it’s important to have the resource just in case.

How about using stats for low-use titles to get out of a big deal package? Compare the cost per use of core titles versus the rest, then use that to reconfigure the package as needed.

How about calculating the cost per use from month to month?
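Here is a sketch of both of those calculations (core titles vs. the rest, and the month-to-month trend), using entirely hypothetical titles, prices, and use counts:

```python
# A "big deal" package review with hypothetical numbers: how much of
# the use do core titles carry, and what would the core cost a la carte?
package_cost = 50000.00
core_list_prices = {"Journal A": 1200.00, "Journal B": 950.00}
uses = {"Journal A": 900, "Journal B": 750, "Journal C": 12, "Journal D": 3}

total_uses = sum(uses.values())
core_uses = sum(uses[t] for t in core_list_prices)

print(f"Package cost per use:   ${package_cost / total_uses:.2f}")
print(f"Core share of all use:  {core_uses / total_uses:.0%}")
print(f"Core-only cost per use: ${sum(core_list_prices.values()) / core_uses:.2f}")

# Month-to-month trend: a twelfth of the cost against each month's uses
# makes semester-break dips visible.
monthly_uses = [160, 150, 90, 30, 110, 140]  # assumed first six months
for month, u in enumerate(monthly_uses, start=1):
    print(f"Month {month}: ${package_cost / 12 / u:.2f} per use")
```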

ER&L 2010: Adventures at the Article Level

Speaker: Jamene Brooks-Kieffer

For those familiar with link resolvers, “article level” means the best type of link to give to users. The article is the object of pursuit, and the library and the user collaborate on identifying it, locating it, and acquiring it.

In 1980, the only good article-level identifier was the Medline ID. Users would need to go through a qualified Medline search to track down relevant articles, and the library would need the article-level identifier to make a fast request from another library. Today, users can search Medline on their own; use OpenURL linking to get to the full text, print, or an ILL request; and obtain the article from the source or through ILL. Unlike in 1980, the user no longer needs to find the journal first to get to the article. Also, the librarian’s role is now more about maintaining the relevant metadata that gives users the tools to locate articles themselves.

In thirty years, the library has moved from being a partner with the user in pursuit of the article to being the magician behind the curtain. Our magic is made possible by the technology we know but that our users do not know.

Unique identifiers solve the problem of making sure that you are retrieving the correct article. CrossRef can link to specific instances of items, but not necessarily the one the user has access to. The link resolver will use that DOI to find other instances of the article available to users of the library. Easy user authentication at the point of need is the final key to implementing article-level services.
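As a concrete sketch of the mechanics described above, this builds an OpenURL 1.0 (Z39.88-2004) request that hands a DOI to a link resolver; the resolver base URL, source ID, and DOI are all hypothetical:

```python
from urllib.parse import urlencode

# An OpenURL request passing a DOI to a library's link resolver.
resolver_base = "https://resolver.example.edu/openurl"  # hypothetical
doi = "10.1000/example.12345"                           # hypothetical

params = {
    "url_ver": "Z39.88-2004",
    "rft_id": f"info:doi/{doi}",            # the article's identifier
    "rfr_id": "info:sid/example.edu:notes", # who is making the request
}
print(f"{resolver_base}?{urlencode(params)}")
# The resolver takes it from here: it checks the library's holdings and
# routes the user to an accessible copy, the print record, or an ILL form.
```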

One of the library’s biggest roles is facilitating access. It’s not as simple as setting up a link resolver: it must be maintained or the system will break down. Also, document delivery service provides an opportunity to generate goodwill between libraries and users. The next step is supporting the user’s preferred interface, through tools like LibX, Papers, Google Scholar link resolver integration, and mobile devices. The latter is the most difficult because much of the content comes from outside service providers, and institutional support for developing applications or web interfaces is limited.

We also need to consider how we deliver the articles users need. We need to evolve our acquisitions process. We need to be ready for article-level usage data, so we need to stop thinking about it as a single-institutional data problem. Aggregated data will help spot trends. Perhaps we could look at the ebook pay-as-you-use model for article-level acquisitions as well?

PIRUS & PIRUS 2 are projects to develop COUNTER-compliant article usage data for all article-hosting entities (both traditional publishers and institutional repositories). Projects like MESUR will inform these kinds of ventures.

Libraries need to be working on recommendation services. Amazon and Netflix are not flukes. Demand, adopt, and promote recommendation tools like bX or LibraryThing for Libraries.

Users are going beyond locating and acquiring the article to storing, discussing, and synthesizing the information. The library could facilitate that. We need something that lets the user connect with others, store articles, and review recommendations that the system provides. We have the technology (magic) to make it available right now: data storage, cloud applications, targeted recommendations, social networks, and pay-per-download.

How do we get there? Cover the basics of identify>locate>acquire. Demand tools that offer services beyond that, or sponsor the creation of desired tools and services. We also need to stay informed of relevant standards and recommendations.

Publishers will need to be a part of this conversation as well, of course. They need to develop models that allow us to retain access to purchased articles. If we are buying on the article level, what incentive is there to have a journal in the first place?

For tenure and promotion purposes, we need to start looking more at the impact factor of the article, not so much the journal-level impact. PLOS provides individual article metrics.

ER&L 2010: Patron-driven Selection of eBooks – three perspectives on an emerging model of acquisitions

Speaker: Lee Hisle

They have the standard patron-driven acquisitions (PDA) model through Coutts’ MyiLibrary service. What’s slightly different is that they are also working on a pilot program with a three-college consortium with a shared collection of PDA titles. After the second use of a book, they are charged 1.2-1.6 times the list price of the book for a four-simultaneous-user (4-SU), perpetual access license.

Issues with ebooks: fair use is replaced by the license terms and software restrictions; ownership has been replaced by licenses, so if Coutts/MyiLibrary were to go away, they would have to renegotiate with the publishers; there is a need for an archiving solution for ebooks, much like Portico for ejournals; ILL is not feasible or permissible; there is potential for exclusive distribution deals; and there are device limitations (computer screens vs. ebook readers).

Speaker: Ellen Safley

Her library has been using EBL on Demand. They are only buying 2008-current content within specific subjects/LC classes (history and technology). They purchase on the second view. Because they only purchase a small subset of what they could, the number of records they load fluctuates, but isn’t overwhelming.

After a book has been browsed for more than 10 minutes, the pay-per-view purchase is initiated. After eight months, they found that more books were used at the pay-per-view level than went on to the purchase level (i.e. a second use).
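A minimal sketch of that trigger logic; the ten-minute browse threshold and the purchase-on-second-use rule are from the talk, while the function shape and labels are illustrative assumptions:

```python
# Sketch of the PDA trigger rules described above.
BROWSE_THRESHOLD_MINUTES = 10

def record_session(uses_so_far: int, browse_minutes: float):
    """Return (updated use count, action this session triggers)."""
    if browse_minutes <= BROWSE_THRESHOLD_MINUTES:
        return uses_so_far, "free browse"   # too short to count as a use
    uses = uses_so_far + 1
    if uses == 1:
        return uses, "pay-per-view fee"     # first real use is rented
    if uses == 2:
        return uses, "purchase triggered"   # second use buys the book
    return uses, "already owned"

print(record_session(0, 4))    # (0, 'free browse')
print(record_session(0, 25))   # (1, 'pay-per-view fee')
print(record_session(1, 12))   # (2, 'purchase triggered')
```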

They’re also piloting an Ebrary program. They had to deposit $25,000 for the 6-month pilot, then select from over 100,000 titles. They found that the sciences used the books heavily, but there were indications that humanities titles were popular as well.

The difficulty with this program is an overlap between selector print order requests and PDA purchases. It’s caused a slight modification of their acquisitions flow.

Speaker: Nancy Gibbs

Her library had a pilot with Ebrary. They were cautious about jumping into this, but because it was coming from their approval plan vendor, it was easier to match it up. They culled the title list of 50,000 titles down to 21,408, loaded the records, and enabled them in SFX. But they did not advertise it at all, and gave no indication on the user end when a book was purchased.

Within 14 days of starting the project, they had spent all $25,000 of the pilot money. Of the 347 titles purchased, 179 were also owned in print, but those print copies had only 420 circulations between them. The most heavily used ebook is also owned in print and has had only two circulations there. The purchases leaned more towards STM, political science, and business/economics, with some humanities.

The library’s technical services staff were a bit overwhelmed by the number of records in the load. The MARC records lacked OCLC numbers, which they would need in the future. They did not remove the records after the trial ended because of other more pressing needs, but that caused frustration for users, and they do not recommend it.

They were surprised by how quickly they went through the money. If they had advertised, she thinks they might have spent the money even faster. The biggest challenge was culling through the list, so in the future running the list through the approval plan might save some time. They also need better match routines for the title loads, because they ended up buying five books they already had in electronic format from other vendors.

Ebrary needs to refine circulation models to narrow down subject areas. YBP needs to refine some BISAC subjects, as well. Publishers need to communicate better about when books will be made available in electronic format as well as print. The library needs to revise their funding models to handle this sort of purchasing process.

They added the records to their holdings on OCLC so that they would appear in Google Scholar search results. So, even though they couldn’t loan the books through ILL, there is value in adding the holdings.

They attempted to make sure that the books in the list were not textbooks, but there could have been some, and professors might have used some of the books as supplementary course readings.

One area of concern is the potential for compromised accounts to let ebook pirates blow through funds very quickly. One of the vendors in the room assured us they have safety valves for that in order to protect the publishers’ content. This has happened, and the vendor reset the download count to remove the fraudulent downloads from the library’s account.

IL2009: Cloud Computing in Practice: Creating Digital Services & Collections

Speakers: Amy Buckland, Kendra K. Levine, & Laura Harris (icanhaz.com/cloudylibs)

Cloud computing is a slightly complicated concept, and everyone approaches defining it from a different perspective. It’s about data and storage. For the purposes of this session, they mean any service that offers on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service.

Cloud computing frees people to collaborate in many ways. Infrastructure is messy, so let someone else take care of that so you can focus on what you really need to do. USB sticks can do a lot of that, but they’re easy to lose, and data in the cloud will hopefully be migrated to new formats.

The downside of cloud computing is that it is so dependent upon constant connection and uptime. If your cloud computing source or network goes down, you’re SOL until it gets fixed. Privacy can also be a legitimate concern, and the data could be vulnerable to hacking or leaks. Nothing lasts forever: Geocities, for example, closes today.

Libraries are already in the cloud. We often store our ILS data, ILL, citation management, resource guides, institutional repositories, and electronic resource management tools on servers and services that do not live in the library. Should we be concerned about our vendors making money from us on a "recurring, perpetual basis" (Cory Doctorow)? Should we be concerned about losing the "face" of the library in all of these cloud services? Should we be concerned about the reliability of the services we are paying for?

Libraries can use the cloud for data storage (e.g. DuraSpace, Dropbox). They could also replace OS services and programs, allowing patron-access computers to be run using cloud applications.

Presentation slides are available at icanhaz.com/cloudylibs.

Speaker: Jason Clark

His library is using four applications to serve video from the library, and one of them is TerraPod, which is for students to create, upload, and distribute videos. They outsourced the player to Blip.tv. This way, they don’t have to encode files or develop a player.

The way you can mash up cloud applications and locally developed applications is through APIs, which define the rules for talking to the remote server. The cloud becomes the infrastructure that enables web-scaling of projects. Request the data, receive it in some sort of structured format, and then parse it out into whatever you want to do with it.
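A minimal sketch of that request/receive/parse pattern, against a hypothetical video API that returns JSON (the URL and field names are made up):

```python
import json
from urllib.request import urlopen

# Request data from a remote API, receive structured JSON, parse it out.
url = "https://api.example.com/videos?collection=terrapod&format=json"

with urlopen(url) as response:      # 1. request the data
    data = json.load(response)      # 2. receive it in a structured format

for video in data["videos"]:        # 3. parse out what you need
    print(video["title"], video["stream_url"])
```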

Best practices for cloud computing: use the cloud architecture to do the heavy lifting (file conversion, storage, distribution, etc.), archive locally if you must, and outsource conversion. Don’t be afraid. This is the future.

Presentation slides will be available later on his website.