NASIG 2012: Results of Web-scale discovery — Data, discussions and decisions

Speaker: Jeff Daniels, Grand Valley State University

GVSU has had Summon for almost three years — longer than almost any other library.

Whether you have a web-scale discovery system or are looking at getting one, you need to keep asking questions about it to make sure you’re moving in the right direction.

1. Do we want web-scale discovery?
Federated searching never panned out, and we’ve been looking for an alternative ever since. Web-scale discovery offers that alternative, to varying degrees.

2. Where do we want it?
Searching at GVSU before Summon — keyword (Encore), keyword (classic), title, author, subject, journal title
Searching after Summon — the search box is the only search offered on their website now, so users don’t have to decide up front which tool they are searching.
A heat map of clicks indicated the search box was the most used part of the home page, but users were still somewhat confused, so they made the search box even more prominent.

3. Who is your audience?
GVSU focused on 1st and 2nd year students as well as anyone doing research outside their discipline — i.e. people who don’t know what they are looking for.

4. Should we teach it? If so, how?
What type of class is it? If it’s a one-off instruction session with the audience you are directing to your web-scale discovery tool, then teach it. If not, then maybe don’t. You’re teaching the skill set more than the resource.

5. Is it working?
People are worried that known-item searches (i.e. catalog items) will get lost. GVSU found that known items make up less than 1% of the Summon index, but over 15% of items selected from searches come from that pool.
Usage statistics from publisher-supplied sources might be skewed, so look at your link resolver stats for a better picture of what is happening.

GVSU measured use before and after Summon. They expected searches of A&I resources to go down, and they did, but GVSU ultimately decided to keep them: they were needed for accreditation, Summon had been driving advanced users to them, and publishers were offering bundles and lower pricing. For the full-text aggregator databases, they saw a decrease in searching but an increase in full-text use, so they decided to keep those as well.

Speaker: Laura Robinson, Serials Solutions

Libraries need information that will help them make smart decisions, much like the information they provide to their users.

Carol Tenopir looked at the value gap between the amount libraries spend on materials and the perceived value of the library. Collection size matters less these days — it’s really about access. Traditional library metrics fail to capture the value of the library.

tl;dr — Web-scale discovery is pretty awesome and will help your users find more of your stuff, but you need to know why you are implementing it and who you are doing it for, and ask those questions regularly even after you’ve done so.

ER&L 2010: Electronic Access and Research Efficiencies – Some Preliminary Findings from the U of TN Library’s ROI Analysis

Speaker: Gayle Baker, University of Tennessee – Knoxville

Phase one: Demonstrate the role of library information in generating grant income for the institution (i.e. the university spends X amount of money on the library, which generates Y amount of money in grant research and support).

To do this, they emailed faculty surveys (with quantitative and qualitative questions) that included incentives to respond. They gathered university-supplied data about grant proposals and income, and included library budget information. They also interviewed administrators to get a better picture of the university’s priorities.

UIUC’s model: the number of faculty with grant proposals who used the library, times the grant award success rate, times the average grant income, then multiplied by the grants expended and divided by the total library budget. The end result was that the model showed $4.38 in grant income for every dollar invested in the library.
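A rough rendering of that description as a formula (the terms below are my own shorthand for the quantities mentioned in the session, not the study’s official variables, so the published model may differ in its exact factors):

\[
\text{ROI} \;=\; \frac{\bigl(\text{faculty with grant proposals using the library}\bigr) \times \bigl(\text{award success rate}\bigr) \times \bigl(\text{average grant income}\bigr) \times \bigl(\text{grants expended}\bigr)}{\text{total library budget}}
\]

With UIUC’s numbers, this ratio worked out to roughly $4.38 in grant income per dollar of library budget.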

Phase two: Started by testing UIUC’s methodology across eight institutions in eight countries. The speaker didn’t elaborate, but went on to describe the survey they used and examples of survey responses. Interesting, but hard to convey the relevance in this format, particularly since it’s so dependent on individual institutions. (On the upside, she has amusing anecdotes.) They used the ROI formula suggested by Carol Tenopir, which is slightly different from the one described above.

Phase three: An IMLS grant for the next three years, headed by Tenopir and Paula Kaufman, with ARL and Syracuse also involved. They are trying to put a dollar value on things that are hard to quantify, such as student retention and success.
