ER&L 2012: Trials by Juries — Suggested Practices for Database Trials

[Photo: IBM System/370 Model 145, by John Keogh]

Speakers: Annis Lee Adams, Jon Ritterbush, & Christine E. Ryan

The discussion topic began as an innocent question on the ERIL-L listserv about tools and techniques for gathering feedback on database trials, whether from librarians or library users.

Trial requests can come from many sources: subject librarians, faculty, students, and the electronic resources or acquisitions librarian. Adams evaluates the source of the request and the access points involved. Her library also tries to trial products they are seriously interested in early enough in the year to request funding for the next year, should they decide to purchase. She says they don't include faculty in the evaluation unless they think they can afford the product.

Criteria for evaluation: content, ease of use/functionality, cost, and whether or not a faculty member requested it. One challenge is tracking the outcome and keeping an institutional memory of it. They use an internal WordPress blog to house the information (access, cost, description, and evaluation comments), with password protection on each entry. After the trial ends, the blog entry is returned to draft status so it is no longer visible, and a note with the final decision is added.
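WordPress handles the mechanics here (post passwords, draft status), but the lifecycle Adams described can be sketched in a few lines of Python. Every field name and value below is hypothetical, not her actual template:

```python
from dataclasses import dataclass, field

@dataclass
class TrialEntry:
    """Models one password-protected blog entry per trial (hypothetical fields)."""
    product: str
    access_url: str
    cost: str
    description: str
    comments: list[str] = field(default_factory=list)
    status: str = "published"   # entry is visible to staff while the trial runs
    decision: str = ""

def close_trial(entry: TrialEntry, decision: str) -> None:
    """End-of-trial step: hide the entry and record the final decision."""
    entry.status = "draft"      # returned to draft so it no longer displays
    entry.decision = decision

entry = TrialEntry(
    product="Example Database",  # all values here are made up
    access_url="http://proxy.example.edu/login?url=...",
    cost="$5,000/yr (quote)",
    description="Full-text coverage of ...",
)
entry.comments.append("Interface is clunky, but the content is strong.")
close_trial(entry, "Declined; will request funding next fiscal year.")
```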

Finally, Adams keeps a spreadsheet that tracks every trial over the course of a year; it also includes some renewals of existing subscriptions.
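The talk didn't specify the spreadsheet's layout, so the columns below are an assumption. A minimal sketch of such a tracker as a CSV, using only the Python standard library:

```python
import csv

# Hypothetical columns; Adams's actual layout wasn't shown in the session.
FIELDS = ["product", "vendor", "trial_start", "trial_end",
          "requested_by", "quoted_cost", "decision"]

rows = [
    {"product": "Example Database", "vendor": "Example Vendor",
     "trial_start": "2012-02-01", "trial_end": "2012-03-01",
     "requested_by": "History faculty", "quoted_cost": "$5,000/yr",
     "decision": "declined"},
]

with open("trials_2012.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```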

Ritterbush covered the no-brainer questions: Is it relevant to your collection development policy? Can you afford it? Who is requesting it? And so on.

Ritterbush recommends scheduling no more than three trials at once to prevent "trial fatigue." He says they publicize only extended trials (longer than three months); the rest are kept internal or shared only with targeted faculty.

For feedback, they found email to be a mediocre solution, in part because the responses weren't very helpful. Short web forms have worked better, incorporating a mix of Likert-scale and free-text questions. The tool they use is Qualtrics, but most survey products would be fine.
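The speakers didn't share their actual instrument, so the question mix below is purely illustrative. A rough sketch of what a short form and its tallying might look like:

```python
# Hypothetical question mix; the actual Qualtrics form was not shared.
QUESTIONS = [
    ("likert", "The content is relevant to my research or teaching."),
    ("likert", "The interface was easy to search and navigate."),
    ("text",   "What did you like or dislike about this resource?"),
]

def summarize(responses):
    """Tally Likert answers (1-5) and collect free-text comments."""
    tallies = {q: [0] * 5 for kind, q in QUESTIONS if kind == "likert"}
    comments = []
    for response in responses:                # one list of answers per user
        for (kind, q), answer in zip(QUESTIONS, response):
            if kind == "likert":
                tallies[q][answer - 1] += 1   # answer is an integer 1-5
            else:
                comments.append(answer)
    return tallies, comments

tallies, comments = summarize([
    [4, 2, "Great coverage, clunky interface."],  # made-up sample responses
    [5, 3, "Please buy this!"],
])
```

The advantage over email is exactly this: structured answers you can tally across respondents, plus a dedicated spot for comments.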

Ritterbush tries to write the trial announcement like a press release, making it easy for librarians and faculty to share with colleagues. A webinar or live demonstration of the product can also increase interest and participation in the evaluation.

Ryan says you need to know why you are doing the trial, because that tells you who it will affect and, in turn, what approach you'll need to take. Understand your audience in order to reach them.

Regardless of who sets up the trials, it helps to have a written set of guidelines that spells out responsibilities.

Kind of tuning out here, since it seems like Ryan doesn't do anything directly with trials; she hands it all over to the subject liaisons. That would be disastrous at my library. I'm also really not happy about her negative attitude toward public trials. If access is IP-based, then who cares if you post the trial on your website? I've received invaluable feedback from users who would never have seen the trials if I had followed Ryan's method.

Questions:
What about using trials to avoid paying for expensive subscriptions? Some libraries will do it, but others have policies that prohibit it. [We have had sales agents recommend this to us, which I've never understood.]

How do you run trials for products when you don't know whether you'll have funding for them? Manage expectations and keep a healthy wishlist. [We also use trials to justify funding increases or to replace existing subscriptions with something new.]
