NASIG 2010: Linked Data and Libraries

Presenter: Eric Miller, Zepheira, LLC

Nowadays, we understand what the web is and the impact it has had on information sharing, but before it was developed, it was in a “vague but exciting” stage and few understood it. When we got started with the web, we really didn’t know what we were doing, but more importantly, the web was being developed so that it was flexible enough for smarter and more creative people to do amazing things.

“What did your website look like when you were in the fourth grade?” Kids are growing up with the web and it’s hard for them to comprehend life without it. [Dang, I’m old.]

This talk will be about linked data, its legacy, and how libraries can take the lead with linked data. We have a huge opportunity to weave libraries into the fabric of the web, and vice versa.

About five years ago, the BBC started making their content available in a service that allowed others to use and remix the delivery of the content in new ways. Rather than developing alternative platforms and creating new spaces, they focused on generating good content and let someone else frame it. Other sources like NPR, the World Bank, and Data.gov are doing the same sorts of things. Within the library community, these things are happening as well. OCLC’s APIs are getting easier to use, and several national libraries are putting their OPACs on the web with APIs.

Obama’s open government initiative is another one of those “vague but exciting” things, and it charged agencies to come up with their own methods of making their content available via the web. Agencies are now struggling with the same issues and desires that libraries have been tackling for years. We need to recognize our potential role in moving this forward.

Linked data is a set of best practices for sharing and connecting data on the web using semantic web technologies. Rather than leaving the data in their current formats, let’s put them together in ways they can be used on the wider web. It’s not the databases that make the web possible, it’s the web that makes the databases usable.

Human computation can be put to use in ways that assist computers to make information more usable. Captcha systems are great for blocking automated programs when needed, and by using human computation to decipher scanned text that computers cannot read on their own, reCAPTCHA has been able to turn unusable scans into a fantastic digital repository of old documents.

LEGOs have been around for decades, and their simple design ensures that new blocks work with old blocks. Most kids end up dumping all of their sets into one bucket, so no matter where the individual building blocks come from, they can be put together and rebuilt in any way you can imagine. We could do this with our blocks of data, if they are designed well enough to fit together universally.

Our current applications, for the most part, are not designed to allow for the portability of data. We need to rethink application design so that the data becomes more portable. Web applications have, by necessity, had to allow some amount of portability. Users are becoming more empowered to use the data provided to them in their own way, and if they don’t get that from your service/product, then they go elsewhere.

Digital preservation repositories are discussing ways to open up their data so that users can remix and mashup data to meet their needs. This requires new ways of archiving, cataloging, and supplying the content. Allow users to select the facets of the data that they are interested in. Provide options for visualizing the raw data in a systematic way.

Linked data platforms create identifiers for every aspect of the data they contain, and these are the primary keys that join data together. Other content that is created can be combined to enhance the data generated by agencies and libraries, but we don’t share the identifiers well enough to allow others to properly link their content.

Web architecture starts with web identifiers. We can use URLs to identify things other than just documents, but we need to be consistent, and we can’t change the URL structures if we want them to be persistent. A lack of trust in identifiers is slowing down linked data. Libraries have the opportunity to leverage our trust and data to provide control points and best practices for identifier curation.
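
[Not from the talk: a minimal sketch of the idea in Python, assuming Flask. The records, identifiers, and domain below are made up; the point is that the public URL pattern stays stable while whatever sits behind it can change.]

```python
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Stand-in for whatever catalog or database actually holds the records.
RECORDS = {
    "b1946001": {"type": "Book", "title": "Example Title", "creator": "Example Author"},
}

@app.route("/id/<record_id>")
def identify(record_id):
    """Resolve a stable identifier URL to a machine-readable description."""
    record = RECORDS.get(record_id)
    if record is None:
        abort(404)
    # The URL is the identifier; the representation behind it can evolve.
    return jsonify({"@id": f"https://example.org/id/{record_id}", **record})

if __name__ == "__main__":
    app.run()
```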

A lot of work is happening at the W3C. Libraries should be more involved in the conversation.

Enable human computation by tying the necessary identifiers back to the data. Empower your users to use your data, and build a community around it. Don’t worry about creating the best system; wrap and expose your data using the web as a platform.

ER&L 2010: Where are we headed? Tools & Technologies for the future

Speakers: Ross Singer & Andrew Nagy

Software as a service saves the institution time and money because the infrastructure is hosted and maintained by someone else. Computing has gone from centralized, mainframe processing, to an even mix of personal computers on a networked enterprise, to once again a very centralized environment with cloud applications and thin clients.

Library resource discovery is, to a certain extent, already in the cloud. We use online databases and open web search, WorldCat, and next-gen catalog interfaces. The next-gen catalog places the focus on the institution’s resources, but it’s not the complete solution. (People see a search box and they want to run queries on it – doesn’t matter where it is or what it is.) The next-gen catalog only provides access to local resources, and while the interface looks modern, the back end is still old-school library indexing that doesn’t work well with keyword searching.

Web-scale discovery is a one-stop shop that provides increased access, enhances research, and provides an increased ROI for the library. Our users don’t use Google because it’s Google; they use it because it’s simple, easy, and fast.

How do we make our data relevant when administration doesn’t think what we do is as important anymore? Linked data might be one solution. Unfortunately, we don’t do that very well. We are really good at identifying things but bad at linking them.

If every component of a record is given identifiers, it’s possible to generate all sorts of combinations and displays and search results via linking the identifiers together. RDF provides a framework for this.

Also, once we start using common identifiers, then we can pull in data from other sources to increase the richness of our metadata. Mashups FTW!
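
[Not from the talk: a minimal sketch, assuming the rdflib package, of what RDF plus shared identifiers buys you. All of the URIs and names below are made up; the point is that two separately produced graphs merge cleanly because they reuse the same identifier.]

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS, FOAF

EX = Namespace("https://example.org/id/")

# Records produced by a library: a work and its creator, each with its own identifier.
library = Graph()
library.add((EX.work123, DCTERMS.title, Literal("An Example Title")))
library.add((EX.work123, DCTERMS.creator, EX.person9))

# Data produced somewhere else entirely, reusing the same person identifier.
elsewhere = Graph()
elsewhere.add((EX.person9, FOAF.name, Literal("Jane Example")))

# Because EX.person9 is shared, the union is one richer graph, not two silos.
combined = library + elsewhere
print(combined.serialize(format="turtle"))
```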

IL2009: Mashups for Library Data

Speaker: Nicole Engard

Mashups are easy ways to provide better services for our patrons. They add value to our websites and catalogs. They promote our services in the places our patrons frequent. And, it’s a learning experience.

We need to ask our vendors for APIs. We’re putting data into our systems, so we should be able to get it out. Take that data and mash it up with popular web services using RSS feeds.
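
[A rough sketch of the RSS half of that, assuming the Python feedparser package; the feed URL is a placeholder for whatever feed your vendor or catalog actually exposes.]

```python
import feedparser

FEED_URL = "https://example.org/catalog/new-titles.rss"  # placeholder feed

feed = feedparser.parse(FEED_URL)
for entry in feed.entries[:10]:
    # Each entry can now be pushed into a map, a widget, another service, etc.
    print(entry.get("title", "(untitled)"), "->", entry.get("link", ""))
```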

Yahoo Pipes allows you to pull in many sources of data and mix them up to create something new, with a clean, flowchart-like interface. Don’t give up after your first try. Jody Fagan wrote an article in Computers in Libraries that inspired Engard to go back and try again.

Reading Radar takes the NYT Bestseller lists and merges them with data from Amazon to display more than just sales information (ratings, summaries, etc.). You could do the same, but instead of sending users off to buy the book, link it to your library catalog. The New York Times has opened up a tremendous amount of content via APIs.
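
[Not from the talk: a hedged sketch of that idea in Python, using the NYT Books API as I understand it (endpoint and field names may differ, and you need your own API key), with a placeholder catalog search URL standing in for your OPAC.]

```python
import requests

NYT_KEY = "YOUR-NYT-API-KEY"  # from the NYT developer site
LIST_URL = "https://api.nytimes.com/svc/books/v3/lists/current/hardcover-fiction.json"
CATALOG_SEARCH = "https://catalog.example.edu/search?q="  # placeholder OPAC search

resp = requests.get(LIST_URL, params={"api-key": NYT_KEY}, timeout=30)
resp.raise_for_status()

for book in resp.json()["results"]["books"]:
    title = book["title"].title()
    isbn = book.get("primary_isbn13", "")
    # Point each bestseller at a catalog search instead of a "buy" page.
    print(f"{title}: {CATALOG_SEARCH}{isbn or title}")
```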

Bike Tours in CA is a mashup of Google Maps and ride data. Trulia, Zillow, and HousingMaps use a variety of sources to map real estate information. This We Know pulls in all sorts of government data about a location. Find more mashups at ProgrammableWeb.

What mashups should libraries be doing? First off, if you have multiple branches, create a Google Maps mashup of library locations. Share images of your collection on Flickr and pull that into your website (see Access Ceramics), letting Flickr do the heavy lifting of resizing the images and pulling content out via machine tags. Delicious provides many options for creating dynamically updating lists with code snippets to embed them in your website.
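
[A rough sketch of the Flickr piece, assuming a Flickr API key and the flickr.photos.search method; the machine tag namespace is made up, and the image URL pattern is Flickr's as I understand it, so check the current docs.]

```python
import requests

FLICKR_KEY = "YOUR-FLICKR-API-KEY"
API_URL = "https://api.flickr.com/services/rest/"

params = {
    "method": "flickr.photos.search",
    "api_key": FLICKR_KEY,
    "machine_tags": "mylibrary:collection=ceramics",  # made-up namespace
    "format": "json",
    "nojsoncallback": 1,
}

photos = requests.get(API_URL, params=params, timeout=30).json()["photos"]["photo"]
for p in photos[:10]:
    # Build a medium-size image URL from the pieces Flickr returns.
    print(f"https://live.staticflickr.com/{p['server']}/{p['id']}_{p['secret']}.jpg")
```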

OPAC mashups require APIs, preferably ones that can generate JavaScript; if you can’t get the information out in a form you can easily use, you’ll need a programmer. LexisNexis Academic, WorldCat, and LibraryThing all have APIs you can use.
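
[Not covered in this much detail in the talk: a hedged example using LibraryThing's thingISBN service as I understand it. Given one ISBN, it returns ISBNs for other editions of the same work as XML, which you could then feed into your own catalog search.]

```python
import requests
import xml.etree.ElementTree as ET

isbn = "0441172717"  # example ISBN to look up

resp = requests.get(f"https://www.librarything.com/api/thingISBN/{isbn}", timeout=30)
resp.raise_for_status()

# The response is a simple XML list of <isbn> elements for the same work.
related = [el.text for el in ET.fromstring(resp.content).iter("isbn")]
print(f"{len(related)} related ISBNs found, e.g. {related[:5]}")
```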

Ideas from Librarians: Mash up travel data from circulation records and various travel sources to provide better patron services. Grab MARC location data to plot information on a map. Pull data about the media collection and combine it with IMDb and other resources. Create subject RSS feeds from all resources for current articles (you could do that already with a collection of journals with RSS feeds and Yahoo Pipes).

Links and more at her book website.

Pandora Town Hall (Richmond, VA)

Open question/answer forum with Tim Westergren, the founder of the Music Genome Project and Pandora Internet Radio.

June 29, 2009
approx 100 attending
free t-shirts! free burritos from Chipotle!

Tim Westergren, founder of Pandora

His original plan was to get in a car & drive across the country to find local music to add to Pandora, but it wasn’t quite as romantic as he thought it would be. On the way home, he planned a meetup on the fly using the Pandora blog, and since then, whenever he visits a new city, he organizes get-togethers like this one.

Tim is a Stanford graduate and a musician, although he didn’t study it specifically. He spent most of his 20s playing in bands, touring around the country, but not necessarily as a huge commercial success. It’s hard to get on the radio, and radio is the key to professional longevity. Eventually, he shifted to film score composition, which required him to analyze music and break it down into components that represent what is happening on the screen. This generated the idea of a musical genome.

The Music Genome Project was launched in 2000 with some seed money that lasted about a year. Eventually, they ran out of money and couldn’t pay their 45 employees. They tried several different ways to raise money, but nothing worked until some venture investors put money into it in 2004. At that point, they took the genome and repurposed it into a radio (Pandora) in 2005.

They have never advertised — it has all been word of mouth. They now add about 65,000 new listeners per day! They can see profitability on the horizon. Pandora is mainly advertising supported. The Amazon commissions provide a little income, but not as much as you might think they would.

There are about 75,000 artists on the site, and about 70% are not on a major label. The song selection is not based on popularity, like most radio, but rather on the elements of the songs and how they relate to what the user has selected.

Playlists are initially created based on the seed song’s or artist’s musical proximity, and then refined as the user thumbs songs up or down. Your thumbs up and down affect only the station you are listening to, changing whatever the rest of the playlist was going to be. They use the overall audience feedback to adjust across the site, but it’s not as immediate or personalized.

They have had some trouble with royalties. They pay both publishing and performer royalties per song. They operate under the DMCA, including its royalty structure. Every five years, a committee determines what the rates will be for the period ahead. In July 2007, the committee decided to triple the rates and made the change retroactive. It essentially bankrupted the company.

Pandora called upon its listeners to help by contacting their congressional representatives to voice opposition to the decision. Congress received 400,000 faxes in three days, jamming the fax machines on the Hill for a week! Their phones were ringing all day long! Eventually, they contacted Pandora to make it stop. They are now finishing up what needs to be done to bring the royalty rates back to something more reasonable. (Virtually all the staffers on Capitol Hill are Pandora users, which made it easy to get appointments with members of Congress.)

Music comes to Pandora from a variety of sources. They get a pile of physical and virtual submissions from artists. They also pay attention to searches that don’t result in anything in their catalog, as well as explicit suggestions from listeners.

They have a plan to offer musicians incentives to participate. For example, if someone thumbs up a song, there would be a pop-up that suggests checking out a similar (or the same) band that is playing locally. Most of the room would opt into emails that let them know when bands they like are coming to town. Musicians could see what songs are being thumbed up or down and where the listeners are located.

Listener suggestion: on the similar artists pages, provide more immediate sampling of recommendations.

What is the cataloging backlog? It takes about 8-10 weeks, and only about 30% of what is submitted makes it in. They select based on quality: for what a song is trying to do, does it do it well? They know when they’ve made a wrong decision if they don’t include something and a bunch of people search for it.

Pandora is not legal outside of the US, but many international users fake US zip codes. However, in order to avoid lawsuits, they started blocking by IP. As soon as they implemented IP blocking, they received a flood of messages, including one from a town that would have “Pandora night” at a local club. (The Department of Defense called up and asked them to block military IP ranges because Pandora was hogging the bandwidth!)

Why are some songs quieter than others? Tell them. They should be correcting for that.

The music genome is used by a lot of scorers and concert promoters to find artists and songs that are similar to the ones they want.

Could the users be allowed more granular ratings rather than thumbing up or down whole songs? About a third of the room would be interested in that.

Mobile device users are seeing fewer advertisements, and one listener is concerned that this will impact revenue. Between the iPhone, the Blackberry, and the Palm Pre, they have about 45,000 listeners on mobile devices. This is important to them, because these devices will be how Pandora gets into listeners’ cars. And, in actuality, mobile listeners interact with advertisements four times as much as web listeners.

Tim thinks that eventually Pandora will host local radio. I’m not so sure how that would work.

Subscription Pandora is 192kbps, which sounds pretty good (and it comes with a desktop application). It’s not likely to get to audiophile level until the pipes are big enough to handle the bandwidth.

Variety and repetition are the areas where they get the most feedback from listeners. The best way to get variety is to add different artists. If you thumb down an artist three times, they should be removed from the station.

They stream about 1/3 of the data that YouTube streams daily, with around 100 servers. Tim is not intimately familiar with the tech that goes into making Pandora work.

[The questions kept coming, but I couldn’t stay any longer, unfortunately. If you have a chance to attend a Pandora Town Hall, do it!]

where I spend my time online

While I was at the reference desk this quiet afternoon, I attempted to catch up on scanning through Lifehacker. Their article about the Geek Chart app caught my eye. Microblogging, or at the very least in-the-moment, stream-of-consciousness sharing, has taken over a good portion of my online presence, leaving this venue for slightly more substantial (and infrequent) commentary. So, I decided to fill out the details needed to build my Geek Chart.


Anna’s Geek Chart

Looks like those of you who want a more regular dose of Anna will need to be following my Twitter and Flickr feeds (with some Delicious thrown in). For the rest of you, enjoy the lighter load on your feed reader.