NASIG 2009: Managing Electronic Resource Statistics

Presenter: Nancy Beals

We have the tools and the data, now we need to use them to the best advantage. Statistics, along with other data, can create a picture of how our online resources are being used.

Traditionally, we have gathered statistics by counting re-shelved items, interlibrary loans, gate traffic, circulation, and so on. Do these counts really tell us anything? Statistics from electronic resources can tell us much more, particularly in conjunction with information about the paths we create to them.

Even with standards, we can run into issues with collecting data. Data can be “unclean,” incorrectly reported, or late. And not all publishers follow the standards (e.g., COUNTER).

After identifying existing performance indicators and applying them to electronic resources, we can look at usage trends, which can help us determine the return on investment in these resources.

Keep a master list of stats in order to plan out how and when to gather them. Keep the data in a shared location. Be prepared to supply data in a timely fashion for collection development decision-making.
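One way to structure such a master list is as a simple spreadsheet tracking each platform, where its reports live, and when to harvest them. The sketch below is purely illustrative; the field names and platform entries are my own assumptions, not anything from the presentation.

```python
import csv
from io import StringIO

# A hypothetical master list of statistics sources. Fields are
# illustrative: which platform, where the admin report lives, whether
# it is COUNTER-compliant, and how often to harvest it.
MASTER_LIST_CSV = """platform,admin_url,counter_compliant,harvest_schedule
Publisher A,https://stats.example.com/a,yes,quarterly
Publisher B,https://stats.example.com/b,no,annually
"""

def load_master_list(text):
    """Parse the master list into a list of dicts, one per platform."""
    return list(csv.DictReader(StringIO(text)))

def due_for_harvest(entries, schedule):
    """Return the platforms whose reports are gathered on this schedule."""
    return [e["platform"] for e in entries if e["harvest_schedule"] == schedule]

entries = load_master_list(MASTER_LIST_CSV)
print(due_for_harvest(entries, "quarterly"))  # → ['Publisher A']
```

Keeping the list in a shared location (a network drive or shared spreadsheet) means anyone on the team can see what has been gathered and what is overdue.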

When you are comparing resources, it’s up to individual institutions to determine what counts as low or high use. Look at how the resources stack up within the overall collection.

When assessing the value of a resource, Beals and her colleagues are looking at 2-3 years of use data, 10% cost inflation, and the cost of ILL. In addition, they make use of overlap analysis tools to determine where they have multiple formats or sources that could be eliminated based on which platforms are being used.
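The kind of comparison Beals describes (multi-year use, projected inflation, and the alternative cost of ILL) can be sketched as a cost-per-use calculation. The dollar figures and function below are my own illustrative assumptions, not numbers from the presentation.

```python
def cost_per_use(annual_cost, uses_per_year, years=3, inflation=0.10):
    """Project total subscription cost over several years, assuming a
    fixed annual inflation rate, and divide by projected total use."""
    total_cost = sum(annual_cost * (1 + inflation) ** y for y in range(years))
    total_uses = uses_per_year * years
    return total_cost / total_uses

# Illustrative numbers only: a $1,200/year journal used 40 times a year,
# compared against an assumed $30 cost per ILL request.
cpu = cost_per_use(1200, 40)
ill_cost = 30
print(f"cost per use: ${cpu:.2f}, ILL: ${ill_cost:.2f}")
```

If the projected cost per use exceeds the per-request ILL cost, borrowing on demand may be cheaper than maintaining the subscription, which is exactly the kind of return-on-investment question this data supports.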

Providing readily accessible data in a user-friendly format empowers selectors to do analysis and make decisions.

NASIG 2009: Ambient Findability

Libraries, Serials, and the Internet of Things

Presenter: Peter Morville

He’s a librarian who fell in love with the web and moved into information architecture. When he first wrote the book Information Architecture, he and his co-author didn’t include a definition of information architecture. By the second edition, they had four definitions: the structural design of shared information environments; the combination of organization, labeling, search, and navigation systems in web sites and intranets; the art and science of shaping information products and experiences to support usability and findability; and an emerging discipline and community of practice focused on bringing principles of design and architecture to the digital landscape.

[at this point, my computer crashed, losing all the lovely notes I had taken so far]

Information systems need to use a combination of categories (paying attention to audience and taxonomy), in-text linking, and alphabetical indexes in order to make information findable. We need to start thinking about the information systems of the future. If we examine the trends through findability, we might have a different perspective. What are all the different ways someone might find ____? How do we describe it to make it more findable?

We are drowning in information. We are suffering from information anxiety. Nobel Laureate Economist Herbert Simon said, “A wealth of information creates a poverty of attention.”

Ambient devices are alternate interfaces that bring information to our attention, and Morville thinks this is a direction our information systems are moving towards. What can we now do when our devices know where we are? Now that we can do it, how do we want to use it, and in what contexts?

What are our high-value objects, and is it important to make them more findable? RFID can be used to track important but easily hidden physical items, such as wheelchairs in a hospital. What else can we do with it besides inventory books?

In a world where for every object there are thousands of similar objects, how do we describe the uniqueness of each one? Who’s going to do it? Not Microsoft, and not Donald Rumsfeld and his unknown unknowns. It’s librarians, of course. Nowadays, metadata is everywhere, turning everyone who creates it into librarians and information architects of sorts.

One of the challenges we have is determining which aspects of our information systems can evolve quickly and which need more time.

Five to ten years from now, we’ll still be starting by entering a keyword or two into a box and hitting “go.” This model is ubiquitous, and it works because it acknowledges the human psychology of just wanting to get started. Search is not just about the software. It’s a complex, adaptive system that requires us to understand our users so that they not only get started, but also know how to take the next step once they get there.

Some examples of best and worst practices for search are on his Flickr. Some user-suggested improvements to search are auto-complete search terms, suggested links or “best bets,” and, for libraries, federated search to help users know where to begin. Faceted navigation goes hand in hand with federated search, allowing users to formulate what in the past would have been very sophisticated Boolean queries. It also helps them understand the information space they are in by presenting a visual representation of the subset of information.

Morville referenced last year’s presentation by Mike Kuniavsky regarding ubiquitous computing, and he hoped that his presentation has complemented what Kuniavsky had to say.

Libraries are more than just warehouses of materials — they are cathedrals of information that inspire us.

PDF of his slides

CIL 2009: What Have We Learned Lately About Academic Library Users

Speakers: Daniel Wendling & Neal K. Kaske, University of Maryland

How should we describe information-seeking behavior?

A little over a third of the students interviewed reported that they used Google in their last course-related search, and it’s about the same across all classes and academic areas. A little over half of the same students surveyed used ResearchPort (federated search – MetaLib), with a similar spread across classes and academic areas, although the social sciences clearly use it more than the other areas. (survey tool: PonderMatic – copy of survey form in the conference book)

Their methodology was a combination of focus-group interviews and individual interviews, conducted away from the library to avoid bias. They used a coding sheet to standardize the responses for input into a database.

This survey gathering & analysis tool is pretty cool – I’m beginning to suspect that the presentation is more about it than about the results, which are also rather interesting.

 

Speaker: Ken Varnum

Will students use social bookmarking on a library website?

MTagger is a library-based tagging tool, borrowing concepts from resources like Delicious or social networking sites, and intended to be used to organize academic bookmarks. In the long term, the hope is that this will create research guides in addition to those supported by the librarians, and to improve the findability of the library’s resources.

Behind the scenes, they have preserved the concept of collections, which results in users finding similar items more easily. This is different from the commercial tagging tools that are not library-focused. Most tagging systems are tagger-centric (librarians are the exception). As a result, tag clouds are less informative, since most of the tags are individualized and there isn’t enough overlap to make them more visible.

From usability interviews, they found that personal motivations are stronger than social motivations, and that users wanted the tags displayed alongside traditional search results. They don’t know why, but many users perceived tagging to be a librarian thing and not something they could do themselves.

One other thing that stood out in the usability interviews was the issue of privacy. Access is limited by network login, which has its benefits (your tags are tied to you) and its problems (inappropriate terminology, information living in the system beyond your tenure, etc.).

They are redesigning the website to focus on outcomes (personal motivation) rather than on tagging as such.

gathering statistics

For the past couple of weeks, the majority of my work day has been spent tracking down and massaging usage statistics reports from the publishers of the online products we purchase. I am nearly halfway through the list, and I have a few observations based on this experience:

1. More publishers are not following the COUNTER Code of Practice than are following it. Publishers in traditionally library-dominated (and in particular, academic library-dominated) markets are more likely to provide COUNTER-compliant statistics, but even that is not a guarantee.

2. Some publishers provide usage statistics, even COUNTER-compliant usage statistics, but only for the past twelve months or some other short window. This is acceptable only if the library has been saving the reports locally. Otherwise, a twelve-month window is not long enough to support informed decisions.

3. We are not trying to use these statistics to find out which resources to cancel. On the contrary, if I can find data that shows an increase in use over time, then my boss can use it to justify our annual budget request and maybe even ask for more money.
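The local-archiving point in (2) boils down to merging each year’s saved report into one longitudinal series. The sketch below uses a deliberately simplified report layout (journal, month, downloads); real COUNTER reports have more columns and different headers, so treat this as an illustration of the idea, not the format.

```python
import csv
from io import StringIO
from collections import defaultdict

# Two hypothetical report extracts saved locally in different years.
# The three-column layout is simplified; real COUNTER reports differ.
report_2007 = """journal,month,downloads
Journal of Examples,2007-11,12
Journal of Examples,2007-12,9
"""
report_2008 = """journal,month,downloads
Journal of Examples,2008-01,15
"""

def merge_reports(*reports):
    """Combine locally saved report files into one longitudinal series,
    so use can be tracked beyond any single twelve-month window."""
    totals = defaultdict(int)
    for text in reports:
        for row in csv.DictReader(StringIO(text)):
            totals[(row["journal"], row["month"])] += int(row["downloads"])
    return dict(totals)

merged = merge_reports(report_2007, report_2008)
print(sum(merged.values()))  # total downloads across both saved reports
```

With the merged series in hand, a year-over-year increase in use is easy to demonstrate, which is exactly the budget-justification case described in (3).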

Update: It seems that the conversation regarding my observations is happening over on FriendFeed. Please feel free to join in there or leave your thoughts here.

acrl northwest 2006 – photos


I didn’t take any pictures at ACRL Northwest because my camera is currently being fixed by Canon. However, there is a Flickr tag for the photos other people took. Right now Jessamyn is the only one who has uploaded and tagged photos from the conference, but hopefully the other photographers I saw there will add theirs soon.

usage statistics


The following is an email conversation between myself and the representative of a society publisher who is hosting their journals on their own website.


Can I access the usage information for my institution? We subscribe to both the print and online [Journal Name].

Anna Creech


Dear Ms. Creech,

At the most recent meeting of the [Society] Board of Directors, the topic of usage statistics was discussed at length. As I am sure you are aware, usage statistics are a very coarse measure of the use of a web resource. As just one example, there is no particular relationship between the number of downloads of an article and the number of times it is read or the number of times it is cited. An article download could represent anything from glancing at the abstract, to careful reading. Once downloaded, articles can be saved locally, re-read and redistributed to others. Given the lack of any evidence that downloads of professional articles have any relationship to their effective audience size or their value to readers, the Board decided that [Society] will not provide potentially misleading usage statistics. We do periodically publish the overall usage of the [Society] website, about 10 million hits per year.

Regards,

[Name Removed]
[Society] Web Editor


Dear Mr. [Name Removed],

Your Board of Directors is certainly a group of mavericks in this case. Whether they think the data is valuable or not, libraries around the world use it to aid in collection development decisions. Without usage data, we have no idea if an online resource is being used by our faculty and students, which makes it an easy target for cancellation in budget crunch times. I suggest they re-think this decision, for their own sakes.

We all know that usage statistics do not fully represent the way an online journal is used by researchers, but that does not mean they are without value. No librarian would ever make decisions based on usage data alone, but it does contribute valuable information to the collection development process.

Hits on a website mean even less than article downloads. Our library website gets millions of hits just from being the home page for all of the browsers in the building. I would never use website hits to make any sort of a decision about an online resource.

Provide the statistics using the COUNTER standard and let the professionals (i.e. librarians) decide if they are misleading.

Anna Creech


UPDATE: The conversation continues….


Dear Ms. Creech,

Curiously, the providers of usage statistics are primarily commercial publishing houses. Few science societies that publish research journals are providing download statistics. In part, this is a matter of resources that the publisher can dedicate to providing statistics-on-demand: commercial publishing houses have the advantage of an economy of scale. They are also happy to provide COUNTER-compliant statistics in part because they are relatively immune to journal cancellation, as a result of mandatory journal bundling.

In any event, after careful consideration and lengthy discussion with a librarian-consultant, the Board concluded that usage statistics are easy to acquire and tempting to use, but are in effect “bad data”. I certainly respect your desire to make the most of a tight library budget, but also respectfully disagree that download statistics are an appropriate tool to make critical judgements about journals. Other methods to learn about the use of a particular journal are available; for example, asking faculty and students to rate the importance of journals to their work, or using impact factors. I am sure you take these into account as well.

I will copy this reply to the [Society] Board so that they are aware of your response. No doubt the Board will revisit the topic of usage statistics in future meetings.

Regards,

[Name Removed]


Dear Mr. [Name Removed],

I never meant to imply that we use statistics exclusively for collection development decisions. We also talk with faculty and students about their needs. However, the numbers are often a good place to begin the discussions. As in, “I see that no one has downloaded any articles from this journal in the past year. Are you still finding it relevant to your research?” Even prior to online subscriptions, librarians looked at re-shelving counts and the layer of dust on the tops of materials as indicators that a conversation was warranted.

I suggest your Board take a look at the American Chemical Society. They provide COUNTER statistics and are doing quite well despite the “bad data.”

Anna Creech

tagging


So, I’m finally hopping on the blog tag bandwagon. I thought my categories were enough, and I didn’t know how to make the keyword field show in the entry creation process. But now that I have a brand new plugin, I’ve started adding keywords to my posts with Technorati links. I tagged the last however many entries just now and I will tag future posts, but at 457 entries, I don’t plan to do any retrospective tagging. Heck, I think some of my earlier entries aren’t categorized, either. Probably for the best. There are some things I’d like to forget.

Oh! I had a brainstorm yesterday evening for an article topic, so maybe I’ll get cracking on that soon. After all, I just have to have stuff submitted. If it gets published, well, so much the better.