ER&L 2013: Truth or Dare — Discovery Systems Secrets Unveiled
Moderator: Dan Tonkery
Panel: Roger Schonfeld (Ithaka S+R), Jon Law (ProQuest), Amira Aaron (Northeastern University), Brian Duncan (EBSCO), & Susan Stearns (Ex Libris)
What features of discovery services do students prefer? What ones do they dislike?
Law: The search box is intuitive and familiar, and their expectations of speed are set by web search engines. They like being able to quickly scan an abstract to judge relevance, then quickly retrieve the content when they want it.
Stearns: Needs to be flexible and reflective of different user types and the environment they are in. Contextual searching based on who they are and how they look for information. Students also expect access to information about their relationship with the library (e.g. materials checked out, notices).
Duncan: Finding relevant results on the first page, or at worst the second. Metadata and relevancy are important.
What impact is open access having on discovery?
Aaron: Depends on the model of OA. Not really sure if it has an impact on discovery systems yet. It has and will have an impact on discovery in general, but not sure if it’s impacting library discovery systems any more or less than open web searches.
Law: Our customers are turning OA links on in the discovery service.
Stearns: It’s easy to make the OA content available, but are you managing it? How does this impact back-office workflows?
Will discovery services replace the online catalog?
Stearns: It’s been painful for some libraries, but yes. There is no OPAC in next-generation library systems; it’s all about discovery. And we need to get over it. Discovery services need to have the functionality of the OPAC (things librarians like). This is an opportunity to rethink workflows and what you do with metadata in a discovery environment.
What are the advantages of selling both a family of databases and a discovery service?
Duncan: Users have automatic full-text because it’s built into the system and doesn’t need to go through OpenURL. Thinking a lot about how to make this simpler for students and integrating high-quality metadata from A&I sources along with the full-text.
Aaron: That’s fine for the vendor, but it takes away the librarian’s choice of where to send the user.
Law: We want our discovery service to be content-provider neutral.
What impact can libraries reasonably expect discovery services to have on traffic patterns?
Schonfeld: We see the majority of traffic coming from Google and Google Scholar, at least for JSTOR. If the objective is to change where users are starting their research, then we need different ways of measuring that and determining success.
Stearns: Our customers are thinking about not only having the one search box on the web page, but also where else can you embed linking and making sure the connections work, particularly when users come in from different sources.
Aaron: Success is not measured by how many people come to your website and start there, it’s how they get to the content from wherever they go.
What metrics do librarians expect from discovery services?
Aaron: Search statistics aren’t very meaningful in the context of discovery services; click-throughs and content sources are the important metrics.
Schonfeld: This is not just a new product; it replaces old products, so we need to think about it differently. Libraries might want to know what share of their users is coming from what sources (e.g. discovery services, Wikipedia, Google). It’s still early days to be able to come to any strong conclusions.
Duncan: Need to measure searches that don’t result in any click-throughs as well.
Does your discovery product provide title-level information to the user community and how often is it updated?
Law: How do you measure your collection? We need some definition around this in order to know how to tell libraries how much of it is indexed in our discovery service. We are starting to do more collection analysis for libraries.
Duncan: The title list doesn’t equate to the deep metadata of an A&I database. If we don’t have the deep metadata, we don’t say we have the same coverage as that database. Full text searching is not a replacement for controlled vocabulary and metadata, it’s just a component of it.
Stearns: We also want to make sure the collections we expose are actually the ones the users access, by looking at historical usage information.
Aaron: It’s important to have the deep metadata, and it’s troubling that the content providers aren’t playing well together. I should be able to display content we purchase to our users in whatever interface I want. If I can’t, I may not continue to purchase or lease that content. It’s the same problem we had with link resolvers years ago. If you really care about the user and libraries, then start playing together.
[Missed the last question because I was still flying high from Aaron’s call-out, but it was something dull about how much customization is available in the discovery system, or something like that. Couldn’t tell from the responses. Go read product information for the answers.]