IL 2012: Sensible Library Website Development

[Image: “Jakob Lodwick” by Zach Klein]

Speaker: Amanda Etches

Asked some folks on Twitter why their library has a website. A few of the responses: to link to online resources, to allow access to the catalog, to support research needs, to provide access to resources & services, to teach, to help, to provide access to account functions, to post events, to post policies & hours, because it’s the primary way our patrons interact with us, and to serve as a two-way communication tool between the library and the community it serves. An audience member noted that marketing your library is missing from the list.

While we are all unique little snowflakes, we aren’t all that unique in our motivations for having a library website. So, how can we learn from each other?

Website planning needs to have a clear understanding of scope. Since most of us already have a website, this talk focused more on redesign than on building from scratch. Most people tend to skip the scoping step when doing a redesign because we assume it will cover the same stuff we already have.

Sadly, most library websites are like a big, messy junk drawer of stuff. We tend to take a “just in case” approach to designing sites. Less is not more; less is actually less, and that’s a good thing. Consider the signal-to-noise ratio of your website. What users don’t need is too much noise drowning out the signal. Pay attention to how much you are putting on the site that meets your needs rather than your users’ needs. It’s better for half of your website to be amazing than for all of it to be bland.

Think about your website like a pyramid, where the bottom half is the basics, followed by destination information, then participatory components, and finally a community portal. Think of it like Maslow’s hierarchy of needs — the basic stuff has to be good or you can’t get to the participatory level.

Etches and some colleagues created a website experiment called the One-Pager, an entire library site on one page. Freehold Public Library has taken this and run with it, if you want to see it working in the real world.

Designing for mobility requires you to pare back to what you consider essential functionality, which makes it a great way to help scope your website. If you wouldn’t put something on your mobile version, think hard about why it should be on your desktop site. She recommends the book Mobile First as inspiration for scoping.

How do you determine the critical tasks of a website? Ask your users. Use a simple one-page survey, interviews, focus groups, and heat maps. Asking staff is the least useful way to do it.

Web users don’t read content, they skim/scan it. People don’t want to read your website; they want to find information on it. When writing copy for your website, pare it down, and then pare it down again. Your website should be your FAQ, not your junk drawer. Think about your website as bite-sized chunks of information, not documentation. Adopt the inverted pyramid style for writing copy. If you have a lot of text, bold key concepts to catch skimming eyes. Eye-catching headers work well in conjunction with the inverted pyramid and bolded key concepts.

Treat your website like a conversation between you and your users/audience. Pages should not be written in the passive voice. Write in the active voice, all of the time, every time. Library = we; user = you.

It is not easy to redo the navigation on a website. Bad navigation makes you think; good navigation is virtually invisible. Navigation needs to tell the user: the site name, the page name, where they are, where they can go, and how they can search. Salt Lake City Public Library and Vancouver Public Library do this very well, if you want some real-world examples.

It’s very important to match navigation labels to page names. Also keep in mind that your navigation is not your org chart, so don’t structure it that way. Do not, ever (and I’m surprised we still have to talk about this 15 years after I learned it), use “click here”. Links should be descriptive.

Why test websites at all? A lack of information is at the root of all bad design decisions. Usability testing runs the gamut from short & easy to long & hard. Watch people use your site. It can take just five minutes to do that.

We are not our patrons, so don’t test librarians and library staff. They are also not your primary user group and not the ones you need to worry about the most. Five testers are usually enough for any given test; more than that and you’ll get repetition. No test is too small, and don’t test more than three things at once. Make iterative changes as you go along. Test early and often. The best websites make iterative changes over time based on constant testing.

Have a script when you are testing. You want to ensure that all testers receive the same instructions, and it makes things a little more comfortable for the test giver. Provide testers with an outline of what they will be doing, and also give them a paper list of their tasks. Remind them that they aren’t the ones being tested; the website is. Don’t tell them where to go or exactly what to do (e.g., don’t phrase a task as “search a library database for an article on x topic”).

From Q&A section:
All of your navigation items should be in one place and consistent across the site.

What do you do when use and usability testing say that you should remove a page a librarian is keen to keep? One suggestion is to put it in a LibGuide. Then LibGuides become the junk drawer. One way to keep that from happening is to standardize the look and feel of LibGuides.

For policies, you could put a summary on the website and then link to the full document.

ER&L: You’ve Flipped – the implications of ejournals as your primary format

Speaker: Kate Seago

In 2005, her institution’s journals were primarily print-based, but now they are mostly electronic. As a graduate of the University of Kentucky’s MLIS program, I can say this explains so much. I stopped paying attention when I realized this presentation was all about what changed in the weird world of the UK Serials Dept, which has little relevance to my library’s workflows/decisions. I wish she had made this more relatable for others, as this is a timely and important topic.

ER&L: Buzz Session – Usage Data and Assessment

What are the kinds of problems with collecting COUNTER and other reports? What do you do with them when you have them?

What is a good cost per use? Compare it to the alternative like ILL. For databases, trends are important.
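For what it’s worth, here’s a minimal sketch of that kind of comparison; the dollar amounts and use counts are made up for illustration and didn’t come from the session.

```python
# A rough, back-of-the-envelope comparison of subscription cost per use
# against the cost of filling the same requests through ILL.
# All numbers here are hypothetical.

def cost_per_use(annual_cost: float, annual_uses: int) -> float:
    """Annual cost divided by reported uses (e.g., COUNTER full-text requests)."""
    return annual_cost / annual_uses if annual_uses else float("inf")

subscription_cost = 4500.00    # hypothetical annual subscription price
counter_uses = 1200            # hypothetical full-text requests for the year
ill_cost_per_request = 17.50   # hypothetical average cost to fill one ILL request

cpu = cost_per_use(subscription_cost, counter_uses)
print(f"Cost per use: ${cpu:.2f}")
if cpu < ill_cost_per_request:
    print("Subscription is cheaper per use than ILL at current usage.")
else:
    print("ILL may be the cheaper option at current usage.")
```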

Non-COUNTER stats can be useful to see trends, so don’t discount them.

Do you incorporate data about the university in making decisions? Rankings of value from faculty or students (using star ratings in LibGuides or something else)?

When usage is low and cost is high, that may be the best thing to cancel in budget cuts, even if everyone thinks it’s important to have the resource just in case.

How about using stats for low-use titles to get out of a big deal package? Compare the cost per use of core titles versus the rest, then use that to reconfigure the package as needed.
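Here’s a rough sketch of what that core-versus-rest comparison might look like, assuming you have title-level list prices and use counts to work with; the titles, prices, and threshold below are all hypothetical, not anything from the session.

```python
# Hypothetical comparison of "core" titles vs. the rest of a big-deal package.
# Title names, list prices, and use counts are made-up illustrations.

package_price = 50000.00  # hypothetical single line-item cost for the whole package

# title: (individual list price, annual uses) -- all hypothetical
titles = {
    "Journal A": (2400.00, 900),
    "Journal B": (1800.00, 650),
    "Journal C": (2100.00, 40),
    "Journal D": (1500.00, 12),
}

CORE_USE_THRESHOLD = 500  # assumed cutoff for calling a title "core"

core = {t: v for t, v in titles.items() if v[1] >= CORE_USE_THRESHOLD}
rest = {t: v for t, v in titles.items() if v[1] < CORE_USE_THRESHOLD}

def summarize(label: str, group: dict) -> None:
    """Print total list price, total uses, and cost per use for a group of titles."""
    list_total = sum(price for price, _ in group.values())
    uses = sum(u for _, u in group.values())
    print(f"{label}: list price ${list_total:,.2f}, {uses} uses, "
          f"cost per use ${list_total / uses:.2f}")

summarize("core", core)
summarize("rest", rest)

total_uses = sum(u for _, u in titles.values())
print(f"whole package: ${package_price:,.2f}, {total_uses} uses, "
      f"cost per use ${package_price / total_uses:.2f}")
```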

How about calculating the cost per use from month to month?

Recommended reading: The Loris in the Library

No, it’s not a new children’s book. Rather, it’s a wonderful essay by Sarah Glassmeyer that was recently published in VoxPopuLII. Here are a few tasty quotes that I quite enjoyed:

…if an overly cautious, slow moving, non-evolving primate that responds to threats by a poison tongue or hiding and pretending the threat isn’t there didn’t remind you of anything, well then I guess you haven’t spent much time around librarians.

and

…librarians don’t cling to print materials out of some romantic notion of the superiority of books, nor do they make repeated demands for stable, authenticated archives of electronic materials just to make you crazy. When one is tasked with the preservation of information – on behalf not just of those looking for it ten years from now, but also of those looking hundreds if not thousands of years from now – and no one else is really in the information distribution or storage business, it pays to take one’s time and be cautious when determining what container to put that information in, especially when what you’ve been doing for the past 1,000 or so years has been working for you.

and

…with librarians this risk aversion has grown like a cancer and now manifests itself as a fear of failure. This fear has become so ingrained in the culture that innovation and progress are inhibited.

and

As it stands now, librarian participation in a multidisciplinary project is often regarded as more of a hindrance than a help. If librarians don’t change, they will eventually stop being invited to the conversation.

ER&L 2010: Comparison Complexities – the challenges of automating cost-per-use data management

Speakers: Jesse Koennecke & Bill Kara

We have the use reports, but it’s harder to pull in the acquisitions information because of the systems it lives in and the different subscription/purchase models. Cornell had a cut in staffing and an immediate need to assess their resources, so they began to triage requests for cost/use statistics. They are not doing systematic or comprehensive reviews of all usage and cost per use.

In the past, they have tried doing manual preparation of reports (merging files, adding data), but that’s time-consuming. They’ve had to set up processes to consistently record data from year to year. Some vendor solutions have been partially successful, and they are looking at emerging options as well. Non-publisher data such as link resolver use data and proxy logs might be sufficient for some resources, or for adding a layer to the COUNTER information to possibly explain some use. All of this has required certain skill sets (databases, spreadsheets, etc.).

Currently, they are working on managing expectations. They need to define the product that their users (selectors, administrators) can expect on a regular basis, what they can handle on request, and what might need a cost/benefit decision. In order to get accurate time estimates for the work, they looked at 17 of their larger publisher-based accounts (not aggregated collections) to get an idea of patterns and unique issues. As an unfortunate side effect, every time they look at something, they get an idea of even more projects they will need to do.

The matrix they use includes: paid titles v. total titles, differences among publishers/accounts, license period, cancellations/swaps allowed, frontfile/backfile, payment data location (package, title, membership), and use data location and standard. Some of the challenges with usage data include non-COUNTER compliance or no data at all, multiple platforms for the same title, combined subscriptions and/or title changes, titles transferred between publishers, and subscribed content v. purchased content. Cost data depends on the nature of the account and the nature of the package.

For packages, you can divide the single line item by the total use, but that doesn’t help the selectors assess the individual subset of titles relevant to their areas/budgets. This gets more complicated when you have packages and individual titles from a single publisher.
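To make the problem concrete, here’s a small sketch of the package-level math plus one possible way to apportion the single line item to selector-relevant subsets by use share. The fund codes, titles, and figures are all hypothetical, not Cornell’s data or method.

```python
# Package-level cost per use, plus a use-based allocation of the single
# line-item cost to selector funds. All data here is hypothetical.

package_cost = 80000.00  # hypothetical single invoice line for the package

# COUNTER-style annual use per title, tagged with a hypothetical selector fund
title_use = {
    "Journal of X": ("CHEM", 3200),
    "Journal of Y": ("CHEM", 450),
    "Review of Z":  ("HIST", 600),
    "Annals of Q":  ("HIST", 150),
}

total_use = sum(uses for _, uses in title_use.values())
print(f"Package-level cost per use: ${package_cost / total_use:.2f}")

# Roll up use by fund so each selector can see their slice of the package.
fund_use: dict[str, int] = {}
for fund, uses in title_use.values():
    fund_use[fund] = fund_use.get(fund, 0) + uses

for fund, uses in sorted(fund_use.items()):
    share = uses / total_use
    allocated = package_cost * share
    print(f"{fund}: {uses} uses ({share:.0%} of total), "
          f"allocated cost ${allocated:,.2f}")
```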

Future possibilities: better automated matching of cost and use data, with some useful data elements such as multiple cost or price points, and formulas for various subscription models. They would also like to consolidate accounts within a single publisher to reduce confusion. Also, they need more documentation so that it’s not just in the minds of long-term staff. 
