reason #237 why JSTOR rocks

For almost two decades, JSTOR has been digitizing and hosting core scholarly journals across many disciplines. Currently, their servers store more than 1,400 journals, from the first issue up to a moving wall of three to five years ago (for most titles). Some of these journals date back several centuries.

They have backups, both digital and virtual, and they’re preserving metadata in the most convertible/portable formats possible. I can’t even imagine how many servers it takes to store all of this data. Much less how much it costs to do so.

And yet, in the spirit of “information wants to be free,” they are making the pre-copyright content open and available to anyone who wants it. That’s content published in the United States before 1923, and before 1870 for everything else. Sure, it’s not going to be very useful for some researchers who need more current scholarship, but JSTOR hasn’t been about new stuff so much as preserving and making accessible the old stuff.

So, yeah, that’s yet another reason why I think JSTOR rocks. They’re doing what they can with an economic model that is responsible, and making information available to those who can’t afford it or are not affiliated with institutions that can purchase it. Scholarship doesn’t happen in a vacuum, and innovators and great minds aren’t always found solely in wealthy institutions. This is one step towards bridging the economic divide.

NASIG 2010: What Counts? Assessing the Value of Non-Text Resources

Presenters: Stephanie Krueger, ARTstor, and Tammy S. Sugarman, Georgia State University

Anyone who does anything with use statistics or assessment knows why use statistics are important and the value of standards like COUNTER. But, how do we count the use of non-text content that doesn’t fit in the categories of download, search, session, etc.? What does it mean to “use” these resources?

Of the libraries surveyed that collect use stats for non-text resources, most use them to report to administrators and to determine renewals. A few use them to evaluate the success of training or to promote the resource to the user community. More than a third of the respondents indicated that the stats they have do not adequately meet their needs for the data.

ARTstor approached COUNTER and asked that the technical advisory group include representatives from vendors that provide non-text content such as images, video, etc. Currently, the COUNTER reports are either about Journals or Databases, and do not consider primary source materials. One might think that “search” and “sessions” would be easy to track, but there are complexities that are not apparent.

Consider the Database 1 report. With a primary source aggregator like ARTstor, who is the “publisher” of the content? For ARTstor, search is only 27% of the use of the resource. Image requests account for 47% (including thumbnails, full-size views, printing, downloads, etc.), and the rest comes from software utilities within the resource (creation of course folders, password creation, organizing folders, annotation of images, emailing content/URLs, sending information to bibliographic management tools, etc.).

The missing metric is the non-text full content unit request (e.g. view, download, print, email, stream, etc.). There needs to be some way of measuring this that is equivalent to the full-text download of a journal article. Otherwise, cost-per-use analysis is skewed.
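To see how much this skews the numbers, here is a minimal sketch of a cost-per-use calculation. The usage split mirrors the ARTstor percentages above; the subscription price and raw counts are invented for illustration only.

```python
# Hypothetical cost-per-use calculation for a non-text resource.
# The dollar figure and counts are invented; only the 27%/47% split
# mirrors the ARTstor breakdown described above.

annual_cost = 10_000       # hypothetical subscription price
searches = 2_700           # the only "use" a journal/database report captures
image_requests = 4_700     # thumbnails, full-size views, prints, downloads
utility_actions = 2_600    # folders, annotations, exports, etc.

# Counting only searches, the way a Database 1 report would:
cpu_searches_only = annual_cost / searches

# Counting content unit requests as the equivalent of full-text downloads:
cpu_with_content = annual_cost / (searches + image_requests)

print(f"Cost per use, searches only:       ${cpu_searches_only:.2f}")
print(f"Cost per use, with image requests: ${cpu_with_content:.2f}")
```

With these made-up numbers, the resource looks almost three times more expensive per use when only searches are counted, which is exactly the distortion a missing content-unit metric produces.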

What is the equivalent of the ISSN? Non-text resources don’t even have DOIs assigned to them.

On top of all of that, how do you measure the use of these resources beyond the measurable environment? For example, once an image is downloaded, it can be included in slides and webpages for classroom use more than once, but those uses are not counted. ARTstor doesn’t use DRM, so they can’t track that way.

No one is really talking about how to assess this kind of usage, at least not in the professional library literature. However, the IT community is thinking about this as well, so we may be able to find some ideas/solutions there. They are being asked to justify software usage, and they have the same lack of data and limitations. So, instead of going with the traditional journal/database counting methods, they are attempting to measure the value of the services provided by the software. The IT folk identify services, determine the cost of those services, and identify benchmarks for those costs.

A potential report could have the following columns: collection (e.g. an art collection within ARTstor, or a university collection developed locally), content provider, platform, and then the use numbers. This is basic, and can increase in granularity over time.

There are still challenges, even with this report. Time-based objects need to have a defined value of use. Resources like data sets and software-like things (e.g. SciFinder Scholar) are hard to define as well. And it will be difficult to define a one-size-fits-all report.

NASIG 2010: It’s Time to Join Forces: New Approaches and Models that Support Sustainable Scholarship

Presenters: David Fritsch, JSTOR, and Rachel Lee, University of California Press

JSTOR has started working with several university presses and other small scholarly publishers to develop sustainable options.

UC Press is one of the largest university presses in the US (36 journals in the humanities, biological & social sciences), publishing both UC titles and society titles. Their prices range from $97 to $422 for annual subscriptions, and they are SHERPA Green. One of the challenges they face on their own platform is keeping up with libraries’ expectations.

ITHAKA is the product of a merger of JSTOR, Ithaka, Portico, and Aluka, so JSTOR is now a service rather than a separate company. Most everyone here knows what the JSTOR product/service is, and that hasn’t changed much with the merger.

Scholars’ use of information is moving online, and if it’s not online, they’ll use a different resource, even if it’s not as good. And, if things aren’t discoverable by Google, they are often overlooked. More complex content is emerging, including multimedia and user-generated content. Mergers and acquisitions in publishing are consolidating content under a few umbrellas, and this threatens smaller publishers and university presses that can’t keep up with the costs on a smaller scale.

The serials crisis has impacted smaller presses more than larger ones. Despite good relationships with societies, it is difficult to retain popular society publications when larger publishers can offer them more. It’s also harder to offer the deep discounts expected by libraries in consortial arrangements. University presses and small publishers are in danger of becoming publishers of last resort.

UC Press and JSTOR have had a long relationship, with JSTOR providing long-term archiving that UC Press could not have afforded to maintain on their own. Not all of the titles are included (only 22), but they are the most popular. They’ve also participated in Portico. JSTOR is also partnering with 18 other publishers that are mission-driven rather than profit-driven, with experience at balancing the needs of academia and publishing.

By partnering with JSTOR for their new content, UC Press will be able to take advantage of the expanded digital platform, sales teams, customer service, and seamless access to both archive and current content. There are some risks, including the potential loss of identity, autonomy, and direct communication with libraries. And then there is the bureaucracy of working within a larger company.

The Current Scholarship Program seeks to provide a solution to the problems outlined above that university presses and small scholarly publishers are facing. The shared technology platform, Portico preservation, sustainable business model, and administrative services potentially free up these small publishers to focus on generating high-quality content and furthering their scholarly communication missions.

Libraries will be able to purchase current subscriptions either through their agents or through JSTOR (which will not charge a service fee). However, archive content will be purchased directly from JSTOR. JSTOR will handle all of the licensing, and current JSTOR subscribers will simply have a rider adding titles to their existing licenses. For libraries that purchase JSTOR collections through consortial arrangements, it will be possible to add title-by-title subscriptions without going through the consortium if a consortial agreement doesn’t make sense for purchase decisions. They will be offering both single-title purchases and collections, with the latter being more useful for large libraries, consortia, and those who want current content for titles in their JSTOR collections.

They still don’t know what they will do about post-cancellation access. Big red flag here for potential early adopters, but hopefully this will be sorted out before the program really kicks in.

Benefits for libraries: reasonable pricing, more efficient discovery, a single license, and meaningful COUNTER-compliant statistics for the full run of a title. Renewal subscriptions will maintain access to what libraries already have, and new subscriptions will come with access back to the first online year provided by the publisher, which may not be volume one but is certainly as comprehensive as what most publishers offer now.

UC Press plans to start transitioning in January 2011. New orders, claims, etc. will be handled by JSTOR (including print subscriptions), but UC Press will be setting their own prices. Their platform, Caliber, will remain open until June 30, 2011, but after that content will be only on the JSTOR platform. UC Press expects to move to online-only in the next few years, particularly as the number of print subscriptions is dwindling to the point where producing the print issues is cost-prohibitive.

There is some interest from the publishers to add monographic content as well, but JSTOR isn’t ready to do that yet. They will need to develop some significant infrastructure in order to handle the order processing of monographs.

Some in the audience were concerned about the cost of developing platform enhancements and other tools, mostly that these costs will be passed on in subscription prices. They will be, to a certain extent, but only in that the publishers will be contributing to the development and they set the prices; because it is a shared system, the costs will be spread out and will likely impact libraries no more than they already have.

One big challenge all will face is unlearning the mindset that JSTOR is only archive content and not current content.

Learning 2009: Image Resources for Teaching

Presenters: Jeannine Keefer and Crista LaPrade

Keefer provided the attendees with a brief overview of licensed images, specifically ARTstor and why it would be used in the classroom (mainly by art historians). There are also many free or Creative Commons-licensed resources for images:

  • Flickr – range from amateur to professional, free to fully copyrighted
  • Picasa – similar to Flickr, but less communal
  • Google Images – search across the web
  • Google Earth – geotagged photos for specific locations
  • Creative Commons – search across several sites
  • MorgueFile – stock photography
  • OpenPhoto – stock photography
  • TinEye – reverse image search engine for finding more like the one you have
  • Cooliris – browser plugin for quickly flipping through images on various sites
  • Social networks like Facebook & MySpace

LaPrade described Boatwright Library’s Digital Production Services, which does all of the digitization and scanning for the library, as well as scanning for faculty who need to convert analog images to digital for non-art classroom purposes. Non-presentation uses for this service (ideas beyond PowerPoint) include creating reference posters for students and images supplementing faculty publications (within copyright). Unfortunately, faculty will have to find their own storage (DVDs, flash drives, etc.) and delivery options, as DPS currently does not have a server for storage and delivery.

There are many resources you could use to share images in the classroom, including Blackboard and ARTstor, but also free image storage/sharing resources or your own web pages or blog. However, there are several factors to consider, since these can also be tools for managing the images: purpose, platform, ownership, collection size, image manipulation, metadata, budget, tech support, data integrity, file types, and presentation tool. Some possible solutions include Adobe Bridge (with the full version of Photoshop), Extensis Portfolio, Flickr, and Picasa.

(Side note: I think that many of the folks in the room were expecting to have a discussion of how faculty are actually using images in their teaching, and perhaps less about the tools that can be used to do so.)


One of the big projects I’ve been working on at MPOW is preparing to shift the bound journal collection, which also includes some systematic deselection. I don’t mean cancelling subscriptions. I’m talking about weeding the journals.

We’re about to run out of space in the building with no prospects of anything new on the horizon, so for the first time in forty years, the books are being weeded. The same thing has to happen to the journals, or we’ll be out of room for them, too. As it is, some areas are so tight that several sections of a range need to be shifted in order to add a new bound volume.

We started by pulling everything that is in JSTOR. This has freed up some significant space, but there is still a bit of dead wood in the collection. With online access, we’ve noticed a precipitous drop in print usage. Whereas we used to have an entire range of shelving for reshelving prep, we now use a single book truck, which is rarely filled. Sure, we still need the journals that are not online in some fashion, but our students would prefer to use the electronic journals with free printing than get up from the computer, find the volume, and make a not-free photocopy of an article.

Sometimes I wonder why we continue to buy print journals at all, and the answer usually is that the publisher doesn’t have a good platform for their ejournals (if they have them), or for whatever reason, they seem kind of sketchy. Still, we’ve made a lot of transitions to online only in the past couple of years, and I think that will pan out well for slowing the collection’s growth before we hit maximum capacity.