NASIG 2010 reflections

When I was booking my flights and sending in my registration during the snow storms earlier this year, Palm Springs sounded like a dream. Sunny, warm, dry — all the things that Richmond was not. This would also be my first visit to Southern California, so I may be excused for my ignorance of the reality, and more specifically, the reality in early June. Palm Springs was indeed sunny, but not as dry and far hotter than I expected.

Despite the weather, or perhaps because of the weather, NASIGers came together for one of the best conferences we’ve had in recent years. All of the sessions were held in rooms that emptied out into the same common area, which also held the coffee and snacks during breaks. The place was constantly buzzing with conversations between sessions, and many folks hung back in the rooms, chatting with their neighbors about the session topics. Not many were eager to skip the sessions and the conversations in favor of drinks/books by the pools, particularly when temperatures peaked over 100°F by noon and stayed up there until well after dark.

As always, it was wonderful to spend time with colleagues from all over the country (and elsewhere) that I see once a year, at best. I’ve been attending NASIG since I was a wee serials librarian in 2002, and this conference/organization has been hugely instrumental in my growth as a librarian. Being there again this year felt like a combination of family reunion and summer camp. At one point, I choked up a little over how much I love being with all of them, and how much I was going to miss them until we come together again next year.

I’ve already blogged about the sessions I attended, so I won’t go into those details so much here. However, there were a few things that stood out to me and came up several times in conversations over the weekend.

One of the big things is a general trend towards publishers handling subscriptions directly, and in some cases, refusing to work with subscription agents. This is more prevalent in the electronic journal subscription world than in print, but that distinction is less significant now that so many libraries are moving to online-only subscriptions. I heard several librarians express concern over the potential increase in their workload if we go back to the era of ordering directly from hundreds of publishers rather than from one (or a handful) of subscription agents.

And then there’s the issue of invoicing. Electronic invoices that dump directly into a library acquisition system have been the industry standard with subscription agents for a long time, but few publishers (and I can’t think of any) are set up to deliver invoices to libraries this way. In fact, every year my assistant who processes invoices must manually enter each line item of a large invoice for one of our electronic subscription collections, because the publisher refuses to invoice through our agent (or will only do so in a way that increases our fees to the point that my assistant would rather just key it in himself). I’m not talking about a mom & pop society publisher; this is one of the major players. If they aren’t doing EDI, it’s understandable that librarians are concerned about other publishers following suit.
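For context, “dumping directly into” the acquisition system generally means loading an EDI invoice file (in the library world, usually an EDIFACT INVOIC message) rather than keying line items by hand. Here is a minimal Python sketch of the kind of line-item extraction an ILS load routine performs; the segment layout and qualifier values are simplified illustrations, not any particular vendor’s or system’s actual format:

```python
# Minimal sketch: pulling line items out of an EDIFACT-style INVOIC message.
# The segments and qualifiers below are simplified for illustration; real EDI
# loads are handled inside the ILS, not by hand-rolled scripts like this.

def parse_invoice(raw: str):
    """Return a list of {line, title, price} dicts from an EDIFACT-style message."""
    items, current = [], None
    for segment in raw.strip().split("'"):           # segments end with an apostrophe
        fields = segment.strip().split("+")           # data elements are separated by +
        tag = fields[0]
        if tag == "LIN":                              # start of a new invoice line
            current = {"line": fields[1]}
            items.append(current)
        elif tag == "IMD" and current is not None:    # item description (title)
            current["title"] = fields[-1].split(":")[-1]
        elif tag == "MOA" and current is not None:    # monetary amount for the line
            qualifier, amount = fields[1].split(":")
            if qualifier == "203":                    # 203 used here as the line-amount qualifier
                current["price"] = float(amount)
    return items

sample = "LIN+1'IMD+F++:::Journal of Example Studies'MOA+203:1250.00'"
print(parse_invoice(sample))
# [{'line': '1', 'title': 'Journal of Example Studies', 'price': 1250.0}]
```

The point isn’t the parsing itself; it’s that an agent’s EDI invoice turns a day of typing into a single load, which is exactly what gets lost when a publisher won’t invoice this way.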

Related to this, JSTOR and UC Press, along with several other society and small-press publishers, have announced a new partnership that will allow those publishers to distribute their electronic journals on the JSTOR platform, from the first issue to the current one. JSTOR will handle all of the hosting, payments, and library technical support, leaving the publishers to focus on generating content. Here’s the kicker: JSTOR will also handle billing for print subscriptions of these titles.

That’s right – JSTOR is taking on the role of subscription agent for a certain subset of publishers. They say, of course, that they will continue to accept orders through existing agents, but if libraries and consortia are offered discounts for going directly to JSTOR, with whom they are already used to working directly for the archive collections, then eventually there will be little incentive to use a traditional subscription agent for titles from these publishers. On the one hand, I’m pleased to see some competition emerging in this aspect of the serials industry, particularly as the number of players has been shrinking in recent years, but on the other hand I worry about the future of traditional agents.

In addition to the big picture topics addressed above, I picked up a few ideas to add to my future projects list:

  • Evaluate the “one-click” rankings for our link resolver and bump publisher sites up the list. These sources “count” more when I’m doing statistical reports, and right now our aggregator databases garner more article downloads than the sources we pay for specifically. If this doesn’t improve the stats, then maybe we need to consider whether access via the aggregator is sufficient. Sometimes the publisher site interface is a deterrent for users.
  • Assess the information I currently provide to liaisons regarding our subscriptions and discuss with them what additional data I could incorporate to make the reports more useful for collection development decisions. Related to this is my ongoing project of simplifying the export/import process that moves acquisitions data from our ILS into our ERMS for cost per use reports (a rough sketch of what I mean appears after this list). Once I no longer have to do that manually, I can put that time/energy into adding more value to the reports.
  • Do an inventory of our holdings in our ERMS to make sure that we have turned on everything that should be turned on and nothing that shouldn’t. I plan to start with the publishers that are KBART participants and move on from there (and yes, Jason Price, I will be sure to push for KBART compliance from those who are not already in the program).
  • Begin documenting and sharing workflows, SQL, and anything else that might help other electronic resource librarians who use our ILS or our ERMS, and make myself available as a resource. This stood out to me during the user group meeting for our ERMS, where I and a handful of others were treated as the experts of the group. I by no means feel like an expert, but clearly there are quite a few people who could learn from my experience the way I learned from others before me.
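Since a couple of the items above hinge on cost per use, here is the rough sketch I have in mind, assuming two hypothetical CSV exports (the file names and column layouts are invented for illustration, not taken from our actual ILS or ERMS): an acquisitions export with the annual cost per title, and a COUNTER-style usage export with full-text downloads.

```python
import csv
from collections import defaultdict

# Hypothetical inputs: costs.csv with columns title,issn,annual_cost and
# usage.csv with columns issn,downloads. Neither layout comes from a real system.

def cost_per_use(costs_path="costs.csv", usage_path="usage.csv"):
    downloads = defaultdict(int)
    with open(usage_path, newline="") as f:
        for row in csv.DictReader(f):
            downloads[row["issn"]] += int(row["downloads"])

    report = []
    with open(costs_path, newline="") as f:
        for row in csv.DictReader(f):
            uses = downloads.get(row["issn"], 0)
            cpu = float(row["annual_cost"]) / uses if uses else None
            report.append((row["title"], row["issn"], uses, cpu))

    # Unused titles first, then highest cost per use, for renewal review.
    report.sort(key=lambda r: (r[3] is not None, -(r[3] or 0)))
    return report
```

Nothing fancy, but once the exports stop requiring manual cleanup, a report like this could go to liaisons on a schedule rather than on request.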

I’m probably forgetting something, but I think those are big enough to keep me busy for quite a while.

If you managed to make it this far, thanks for letting me yammer on. To everyone who attended this year and everyone who couldn’t, I hope to see you next year in St. Louis!

NASIG 2010: Serials Management in the Next-Generation Library Environment

Panelists: Jonathan Blackburn, OCLC; Bob Bloom (?), Innovative Interfaces, Inc.; Robert McDonald, Kuali OLE Project/Indiana University

Moderator: Clint Chamberlain, University of Texas, Arlington

What do we really mean when we are talking about a “next-generation ILS”?

It is a system that will need to be flexible enough to accommodate rapidly changing and increasingly complex workflows. Things are changing so fast that systems can’t wait several years to release updates.

It also means different things to different stakeholders. The common thread is being flexible enough to manage both print and electronic resources, along with providing better reporting tools.

How is the “next-generation ILS” related to cloud computing?

Most of them have components in the cloud, and traditional ILS systems are partially there, too. Networking brings benefits (shared workloads).

What challenges are facing libraries today that could be helped by the emerging products you are working on?

Serials is one of the more mature areas of the ILS. Automation, driven by the standardization of data from all information sources, is going to keep improving.

One of the key challenges is to deal with things holistically. We get bogged down in the details sometimes. We need to be looking at things on the collection/consortia level.

We are all trying to do more with less funding. Improving flexibility and automation will offer better services for the users and allow libraries to shift their staff assets to more important (less repetitive) work.

We need better tools to demonstrate the value of the library to our stakeholders. We need ways of assessing resources that go beyond comparing costs.

Any examples of how next-gen ILS will improve workflow?

Libraries are increasing spending on electronic resources, and many are nearly eliminating their print serials spending. Next gen systems need reporting tools that not only provide data about electronic use/cost, but also print formats, all in one place.

A lot of workflow comes from a print-centric perspective. Many libraries still haven’t figured out how to adjust that to include electronic without saddling all of that on one person (or a handful). [One of the issues is that the staff may not be ready/willing/able to handle the complexities of electronic.]

Every purchase should be evaluated less on its format and more on the cost and process of acquiring it and making it available to stakeholders.

[Not taking as many notes from this point on. Listening for something that isn’t fluffy pie in the sky. Want some solid direction that isn’t just pretty words to make librarians happy.]

NASIG 2010: What Counts? Assessing the Value of Non-Text Resources

Presenters: Stephanie Krueger, ARTstor and Tammy S. Sugarman, Georgia State University

Anyone who does anything with use statistics or assessment knows why use statistics are important and the value of standards like COUNTER. But, how do we count the use of non-text content that doesn’t fit in the categories of download, search, session, etc.? What does it mean to “use” these resources?

Of the libraries surveyed that collect use stats for non-text resources, most use them mainly to report to administrators and to inform renewal decisions. A few use them to evaluate the success of training or to promote the resource to the user community. More than a third of respondents indicated that the stats they have do not adequately meet their needs for the data.

ARTstor approached COUNTER and asked that the technical advisory group include representatives from vendors that provide non-text content such as images, video, etc. Currently, the COUNTER reports are either about Journals or Databases, and do not consider primary source materials. One might think that “search” and “sessions” would be easy to track, but there are complexities that are not apparent.

Consider the Database 1 report. With a primary source aggregator like ARTstor, who is the “publisher” of the content? For ARTstor, search accounts for only 27% of the use of the resource; 47% comes from image requests (thumbnail, full-size, printing, download, etc.), and the rest comes from software utilities within the resource (creating course folders, creating passwords, organizing folders, annotating images, emailing content/URLs, sending information to bibliographic management tools, etc.).

The missing metric is the non-text full content unit request (i.e. view, download, print, email, stream, etc.). There needs to be some way of measuring this that is equivalent to the full-text download of a journal article. Otherwise, cost per use analysis is skewed.
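To make the skew concrete (the numbers here are invented purely for illustration):

```python
annual_cost = 12_000       # hypothetical subscription price
searches = 1_000           # what a search-centric report would count as "use"
content_requests = 5_000   # image views, downloads, prints, etc. that go uncounted

print(annual_cost / searches)                       # 12.0 per "use", counting searches only
print(annual_cost / (searches + content_requests))  # 2.0 per use, counting content unit requests too
```

Same resource, same year, a sixfold difference in the number that lands in front of administrators.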

What is the equivalent of the ISSN? Non-text resources don’t even have DOIs assigned to them.

On top of all of that, how do you measure the use of these resources beyond the measurable environment? For example, once an image is downloaded, it can be included in slides and webpages for classroom use more than once, but those uses are not counted. ARTstor doesn’t use DRM, so it has no way to track those uses.

No one is really talking about how to assess this kind of usage, at least not in the professional library literature. However, the IT community is thinking about this as well, so we may be able to find some ideas/solutions there. They are being asked to justify software usage, and they have the same lack of data and limitations. So, instead of going with the traditional journal/database counting methods, they are attempting to measure the value of the services provided by the software. The IT folk identify services, determine the cost of those services, and identify benchmarks for those costs.

A potential report could have the following columns: collection (e.g. an art collection within ARTstor, or a university collection developed locally), content provider, platform, and then the use numbers. This is basic, and granularity can increase over time.
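A few invented rows, just to make the shape concrete (collection names, providers, and counts are all hypothetical):

```python
# Hypothetical rows for the proposed report; every value here is made up.
sample_report = [
    # (collection, content_provider, platform, content_unit_requests)
    ("Example Museum of Art Collection", "Example Museum of Art", "ARTstor", 4210),
    ("Local University Image Collection", "Local University Library", "ARTstor", 1875),
]
```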

There are still challenges, even with this report. Time-based objects need a defined unit of use. Resources like data sets and software-like tools (e.g. SciFinder Scholar) are hard to define as well. And it will be difficult to design a one-size-fits-all report.

NASIG 2010: Publishing 2.0: How the Internet Changes Publications in Society

Presenter: Kent Anderson, JBJS, Inc.

Medicine 0.1: in dealing with the influenza outbreak of 1837, a physician administered leeches to the chest, James’s powder, and mucilaginous drinks, and it worked (much like “take two aspirin and call me in the morning”). All of this was written up in a medical journal as a way to share information with peers. Journals have been the primary means of communicating scholarship, but what the journal is has become more abstract with the addition of non-text content and metadata. Add in indexes and other portals for accessing the information, and readers have changed the way they access and share information in journals. “Non-linear” access to information is increasing exponentially.

Even as technology made publishing easier and more widespread, it was still producers delivering content to consumers. But, with the advent of Web 2.0 tools, consumers now have tools that in many cases are more nimble and accessible than the communication tools that producers are using.

Web 1.0 was a destination. Documents simply moved to a new home, and “going online” was a process separate from anything else you did. However, as broadband access increases, the web becomes more pervasive and less a destination. The web becomes a platform that brings people, not documents, online to share information, consume information, and use it like any other tool.

Heterarchy: a system of organization replete with overlap, multiplicity, mixed ascendancy, and/or divergent but coexistent patterns of relation

Apomediation: mediation by agents who are not interposed between users and resources but who stand by to guide consumers to high-quality information without playing a role in acquiring the resources (e.g. Amazon product reviewers)

NEJM uses terms entered by users to add related searches to article search results. They also bump popular articles up in the results as more people click on them. These tools improved their search results and their reputation, all by harnessing the people power of experts. In addition, they created a series of “results in” publications that highlight the popular articles.
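As I understood it, the “bump popular articles up” piece is essentially click-weighted re-ranking. A toy sketch of the idea (my own illustration, not NEJM’s actual implementation, which they did not describe in detail):

```python
import math

def rerank(results, click_counts):
    """Re-order search results by relevance score plus a click-popularity boost.

    `results` is a list of (article_id, relevance_score) pairs and `click_counts`
    maps article_id -> historical clicks; both inputs are hypothetical.
    """
    def boosted(item):
        article_id, score = item
        return score + math.log1p(click_counts.get(article_id, 0))
    return sorted(results, key=boosted, reverse=True)

results = [("a1", 2.0), ("a2", 1.5), ("a3", 1.4)]
clicks = {"a3": 400, "a2": 30}
print(rerank(results, clicks))   # a3's popularity pushes it to the top
```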

It took a little over a year to reach a million Twitter authors, and about 600 years to reach the same number of book authors. And these are literate, savvy users. Twitter and Facebook account for 1.45 million views of the New York Times (and that number is from several years ago); imagine what they could do for your scholarly publication. Oh, and the NYT has a social media editor now.

Blogs are growing four times as fast as traditional media. The top ten media sites include blogs, and traditional media sources now use blogs as well. Blogs can be diverse or narrow, their coverage varies (and does not have to be immediate), they are verifiably accurate, and they are interactive. Blogs level the media playing field, in part by watching the watchdogs. Blogs tend to investigate more than the mainstream media does.

It took AOL five times as long to reach twenty million users as it took the iPhone. Consumers are increasingly adding “toys” to their collection of ways to get to digital/online content. When the NEJM went onto the Kindle, more than just physicians subscribed. Getting content into easy-to-access places and onto the “toys” that consumers use will increase your reach.

Print digests are struggling because they teeter on the brink of the daily divide. Why wait for the news to get stale, collected, and delivered a week/month/quarter/year later? People are transforming. Our audiences don’t think of information as analogue, delayed, isolated, tethered, etc. It has to evolve into something digital, immediate, integrated, and mobile.

From the Q&A session:

The article container will be here for a long time. Academics use the HTML version of the article, but the PDF (static) version is their security blanket and archival copy.

Where does the library fit as a source of funds when the focus shifts to end users? Publishers are looking for other sources of income as library budgets decrease (e.g. Kindle, product differentiation, etc.). They are looking to other purchasing centers at institutions.

How do publishers establish the cost of these 2.0 products? It’s essentially what the market will bear, with some adjustments. Sustainability is a grim perspective. Flourishing is much more positive, and not necessarily any less realistic. Equity is not a concept that comes into pricing.

The people who bring the tremendous flow of information under control (i.e. offer filters) will be successful. One of our tasks is to make filters to help our users manage the flow of information.