ER&L 2013: Ebooks — Their Use and Acceptance by Undergraduates and Faculty

“Kali, Avatar of the eBook” by Javier Candeira

Speaker: Deborah Lenares, Wellesley College

Libraries have been relatively quietly collecting ebooks for years, but it wasn’t until the Kindle came out that public interest in ebooks was aroused. Users’ exposure to and expectations for ebooks have been raised, with notable impact on academic libraries. From 2010 to 2011, the number of ebooks in academic libraries doubled.

Wellesley is platform agnostic — they look for the best deal with the best content. Locally, they have seen an overall increase in unique titles viewed, a dramatic increase in pages viewed, a modest decrease in pages printed, and a dramatic increase in downloads.

In February 2012, they sent a survey to all of their users, with incentives (iPad, gift cards, etc.) and a platform (Zoomerang) provided by Springer. They had a 57% response rate (likely iPad-influenced), and 71% had used ebooks (51% had used ebooks from the Wellesley College Library). Respondents who had not used ebooks were skipped to the end of the survey, since the library was only interested in data from those who had used ebooks.

A high percentage of the non-library ebooks were from free sources like Google Books, Project Gutenberg, Internet Archive, etc. Most of the respondents ranked search within the text and offline reading or downloading to a device among the most important functionality, even higher than printing.

Most of the faculty respondents found ebooks to be an acceptable option, but preferred to use print. Fewer students found ebooks an acceptable option, and they preferred print even more strongly than faculty did. There is a reason for this that will become apparent later in the talk.

The sciences preferred ebooks more than other disciplines, and found them generally more acceptable, but the difference is slight. Nearly all faculty who used ebooks would continue to do so, ranging from preferring them to reluctant acceptance.

Whether they love or hate ebooks, most users skimmed/searched and read a small number of consecutive pages or a full chapter. However, ebook haters almost never read an entire book, and most of the others did so infrequently. Nearly everyone read ebooks on a computer/laptop. Ebook lovers used devices, and ebook haters were more likely to have printed out the content. Most would prefer not to use their computer/laptop, and the ebook lovers would rather use their devices.

Faculty are more likely than students to own or plan to purchase a device, which may be why faculty find ebooks more acceptable than students do. Perhaps providing devices to students would be helpful?

For further research:

  • How does the robustness of ebook collections affect use and attitudes?
  • Is there a correlation between tablet/device use and attitudes?
  • Are attitudes toward shared ebooks (library) different from attitudes toward personal ebooks?

The full text of the white paper is available from Springer.

NASIG 2010: What Counts? Assessing the Value of Non-Text Resources

Presenters: Stephanie Krueger, ARTstor and Tammy S. Sugarman, Georgia State University

Anyone who does anything with use statistics or assessment knows why use statistics are important and the value of standards like COUNTER. But, how do we count the use of non-text content that doesn’t fit in the categories of download, search, session, etc.? What does it mean to “use” these resources?

Of the libraries surveyed that collect use stats for non-text resources, most use them to report to administrators and to determine renewals. A few use them to evaluate the success of training or to promote the resource to the user community. More than a third of the respondents indicated that the stats they have do not adequately meet their needs for the data.

ARTstor approached COUNTER and asked that the technical advisory group include representatives from vendors that provide non-text content such as images, video, etc. Currently, the COUNTER reports are either about Journals or Databases, and do not consider primary source materials. One might think that “search” and “sessions” would be easy to track, but there are complexities that are not apparent.

Consider the COUNTER Database Report 1. With a primary source aggregator like ARTstor, who is the “publisher” of the content? For ARTstor, search is only 27% of the use of the resource. 47% comes from image requests (including thumbnail, full-size, printing, download, etc.), and the rest comes from software utilities within the resource (creation of course folders, password creation, organizing folders, annotation of images, emailing content/URLs, sending information to bibliographic management tools, etc.).

The missing metric is the non-text full content unit request (i.e. view, download, print, email, stream, etc.). There needs to be some way of measuring this that is equivalent to the full-text download of a journal article. Otherwise, cost per use analysis is skewed.
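To make the cost-per-use point concrete, here is a rough sketch with invented numbers (these are not figures from the talk): if the denominator only counts searches, the resource looks far more expensive per use than it does once content unit requests are included.

    # Rough cost-per-use sketch; all numbers are hypothetical, not from the talk.
    # Counting only searches understates use and inflates the apparent cost per use.

    annual_cost = 12000.00          # hypothetical subscription cost
    searches = 2700                 # the only "use" a database-style report captures
    content_unit_requests = 4700    # image views/downloads/prints that currently go uncounted

    cpu_searches_only = annual_cost / searches
    cpu_with_content = annual_cost / (searches + content_unit_requests)

    print(f"Cost per use, searches only:          ${cpu_searches_only:.2f}")
    print(f"Cost per use, with content requests:  ${cpu_with_content:.2f}")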

What is the equivalent of the ISSN? Non-text resources don’t even have DOIs assigned to them.

On top of all of that, how do you measure the use of these resources beyond the measurable environment? For example, once an image is downloaded, it can be included in slides and webpages for classroom use more than once, but those uses are not counted. ARTstor doesn’t use DRM, so they can’t track that way.

No one is really talking about how to assess this kind of usage, at least not in the professional library literature. However, the IT community is thinking about this as well, so we may be able to find some ideas/solutions there. They are being asked to justify software usage, and they have the same lack of data and limitations. So, instead of going with the traditional journal/database counting methods, they are attempting to measure the value of the services provided by the software. The IT folk identify services, determine the cost of those services, and identify benchmarks for those costs.

A potential report could have the following columns: collection (e.g. an art collection within ARTstor, or a university collection developed locally), content provider, platform, and then the use numbers. This is basic, and could increase in granularity over time.
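As a very rough sketch of what rows of such a report might look like (the column names follow the ones listed above; the collection names, providers, and numbers are invented for illustration):

    import csv
    import io

    # Hypothetical report rows: collection, content provider, platform, then use numbers.
    # All names and figures below are invented for illustration only.
    rows = [
        {"collection": "Sample Art Collection", "content_provider": "Sample Museum",
         "platform": "ARTstor", "searches": 310, "content_unit_requests": 880},
        {"collection": "Local Campus Collection", "content_provider": "Sample University",
         "platform": "ARTstor", "searches": 120, "content_unit_requests": 260},
    ]

    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    print(out.getvalue())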

There are still challenges, even with this report. Time-based objects need to have a defined value of use. Resources like data sets and software-like things (e.g. SciFinder Scholar) are hard to define as well. And it will be difficult to define a one-size-fits-all report.

english major wannabe

The route of my walk to and from home (for the time being) passes by a printing business. Before, I had only glimpsed the inner operations through a window, but today when I passed by on my way home from lunch, the garage door entrance was open. A distinctive scent wafted out through the opening, and for some reason, my mind started coming up with a description for it. Here it is:

the metallic tang of ink mingling with the pungent bleach scent of new paper

Perhaps some latent English major talent has finally decided to surface, and it took six years away from the Mathematics textbooks for it to bubble up? Or maybe I’ve been listening to A Prairie Home Companion too much.
