Charleston 2014 – Crowd Sourcing of Library Services

Speaker: John Dove, Credo Reference

Lots of words about what crowd sourcing is and why we should care. This is why I’m not a scholar. Just get to the point and don’t spend so much time convincing me that the point is the point.


Speaker: Tim Spalding, LibraryThing

It’s a personal cataloging tool that becomes social as more people use it. Personal cataloging is the foundation, and the site was started with the idea that it would only ever be that.

Users can add tags to categorize their books, and there are over 112 million tags from users. Users can also add cover images for their own books, creating a vast collection of book covers.

The next level of engagement is exhibitionism and voyeurism, followed by self-expression via reviews. Reviews happen after a person reads the book, not when they are looking it up in the library catalog.

Users can add their own series information, including sub-series, which is often more information than what librarians are able to add. Other common knowledge content includes characters, author information, etc. Members also manage the “authority control” — FRBRization, author disambiguation, tag disambiguation.

Policing (get off my lawn) and helping (here’s how to be on my lawn) — dealing with spam, trolls, etc. and also assisting newer users via forums.

The final level of engagement comes with collaborative cataloging of the libraries of dead people or of books that have shown up in mass media (e.g. Dr. Horrible’s Sing-Along Blog).

Lessons: secure the bottom of the ladder, build it rung by rung or at least think about it that way, and finally, crowd sourcing is not a feature. It’s not about what you get, it’s about what you give.


Speaker: Scott Johnson, ChiliFresh

If the internet had existed in the 70s and 80s, waterbeds probably would not have reached 20% market penetration, thanks to online reviews.

The wisdom of crowd-sourced information is also its madness.

Rather than having a closed database of reviews from local patrons only, they have a collaborative database of reviews from library users across the world, which local libraries can choose to participate in or not. The reviews themselves are written by patrons, but they are moderated by librarians.


Speaker: Ilana Stonebraker, Purdue University

How is my library like the Vlog Brothers 54 jokes video? There is a huge network and community doing important things that are not visible in just that video. We are icebergs.

Most reference questions are lower-level, even online questions. What’s supposed to happen is at a much higher level, but the reality is that isn’t most of what happens. The traditional reference service model also assumes that the librarian is the only one who can give the answer. In reality, students who have had a similar problem and found a solution can often help each other.

CrowdAsk is similar to StackOverflow, with gamification and badging. It’s open source on GitHub. You can ask a question and assign a bounty from your points to get a faster answer. They use it in lower-level courses to let students work together. Users can vote on questions and answers. Students who are really good at answering each other’s questions gain more power/authority in the system.
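The point mechanics described here (bounties drawn from a user’s points, votes rewarding answerers, accumulated points conferring authority) can be sketched roughly as follows. This is a minimal illustration, not CrowdAsk’s actual implementation; all class names, point values, and thresholds are invented for the example.

```python
class User:
    """A participant with a spendable/earnable point balance."""
    def __init__(self, name, points=50):
        self.name = name
        self.points = points

    def can_moderate(self, threshold=100):
        # Users with enough points gain more power/authority in the system.
        return self.points >= threshold


class Question:
    """A question whose asker can escrow a bounty from their own points."""
    def __init__(self, asker, text, bounty=0):
        if bounty > asker.points:
            raise ValueError("bounty exceeds asker's points")
        asker.points -= bounty  # bounty is escrowed from the asker up front
        self.asker = asker
        self.text = text
        self.bounty = bounty
        self.answers = []

    def add_answer(self, answerer, text):
        self.answers.append({"by": answerer, "text": text, "votes": 0})

    def upvote(self, index, reward=10):
        # An upvote increases the vote count and pays the answerer.
        ans = self.answers[index]
        ans["votes"] += 1
        ans["by"].points += reward

    def accept(self, index):
        # Accepting an answer releases the escrowed bounty to the answerer.
        self.answers[index]["by"].points += self.bounty
        self.bounty = 0
```

For instance, if a user with 50 points asks with a 20-point bounty, and another user’s answer gets one upvote and is accepted, the answerer ends up with 50 + 10 + 20 = 80 points while the asker drops to 30. The escrow-then-release design mirrors how StackOverflow-style sites prevent askers from promising points they don’t have.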

There is a good level of participation so far, and there are quite a number of lurkers, with the average time spent at over 6 minutes. They did some usability tests and found that often the motivation is reciprocity — they were helped and they want to help others.

The goal is to create sustainable user engagement and community involvement as part of the library’s website, not just to triage late-night reference questions.