Speakers: Ross Singer & Andrew Nagy
Software as a service saves the institution time and money because the infrastructure is hosted and maintained by someone else. Computing has gone from centralized mainframe processing, to a mix of personal computers on a networked enterprise, and back once again to a very centralized environment with cloud applications and thin clients.
Library resource discovery is, to a certain extent, already in the cloud. We use online databases and open web search, WorldCat, and next gen catalog interfaces. The next gen catalog places the focus on the institution’s resources, but it’s not the complete solution. (People see a search box and they want to run queries on it – doesn’t matter where it is or what it is.) The next gen catalog only provides access to local resources, and while it has a modern-looking interface, the back end is still old-school library indexing that doesn’t handle keyword searching well.
Web-scale discovery is a one-stop shop that provides increased access, enhances research, and increases the library’s ROI. Our users don’t use Google because it’s Google, they use it because it’s simple, easy, and fast.
How do we make our data relevant when administration doesn’t think what we do is as important anymore? Linked data might be one solution. Unfortunately, we don’t do that very well. We are really good at identifying things but bad at linking them.
If every component of a record is given identifiers, it’s possible to generate all sorts of combinations and displays and search results via linking the identifiers together. RDF provides a framework for this.
Also, once we start using common identifiers, then we can pull in data from other sources to increase the richness of our metadata. Mashups FTW!
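To make the idea concrete, here is a minimal sketch of identifier-based linking, using plain Python tuples as stand-in RDF triples (subject, predicate, object). All of the identifiers and data below are hypothetical examples; in practice you would use a shared scheme like VIAF for authors so that other sources can link to the same identifier.

```python
# Triples from the library's own catalog record. The predicates borrow
# Dublin Core-style names; the URIs are illustrative placeholders.
catalog = [
    ("http://example.org/book/123", "dc:title", "Moby-Dick"),
    ("http://example.org/book/123", "dc:creator", "http://example.org/authority/melville"),
]

# Triples pulled in from an external source that happens to use the
# same author identifier as our catalog record.
external = [
    ("http://example.org/authority/melville", "foaf:name", "Herman Melville"),
    ("http://example.org/authority/melville", "schema:birthDate", "1819-08-01"),
]

# Merging the two sources is just concatenation: triples from anywhere
# combine into one graph as long as the identifiers match.
graph = catalog + external

def describe(subject, graph):
    """Collect every predicate/object pair attached to an identifier."""
    return {p: o for s, p, o in graph if s == subject}

# Because both sources share the author identifier, the book display can
# be enriched with data the catalog record never held (the mashup).
book = describe("http://example.org/book/123", graph)
author = describe(book["dc:creator"], graph)
```

This is the core of the mashup idea: neither source knows about the other, but because they agree on an identifier, a simple lookup joins them.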
In a recent phone/web town hall discussion, Peter Shepherd, Project Director for COUNTER, mused about why publishers (and libraries) have not embraced the COUNTER Code of Practice for Books and Reference Works as quickly as they have the Code of Practice for Journals and Databases. His position is that we are paying customers and should have that information. My perspective: meh.
I would like to see ebook usage for items that we purchase as a subscription, but for items we own (i.e. one-time purchase with perpetual access), it’s less of a concern for collection development. Licensed ebooks with annual subscriptions (like regularly updating encyclopedias or book packages) are more like online databases or ejournals than traditional paper books, so in that regard, it shouldn’t be difficult for publishers to implement the COUNTER Code of Practice for Books and Reference Works and provide use information to customers.
For books that are static and don’t have any annual cost attached to them, there isn’t much of a regular need to know what is being used. We keep track of re-shelving stats for the purposes of managing a physical collection with space limitations, and those problems are not replicated in an online environment. Where the usage of owned ebooks comes into play is when we are justifying:
a. The purchase of those specific ebooks.
b. The purchase of future ebooks from that publisher.
c. The amount of money in the ebook budget.
Hopefully Mr. Shepherd, Project COUNTER, and vocal librarians will be able to convince the publishers of the value of providing usage information. When budgets are as tight as they are these days, having detailed information about the use of your subscription-based collection is essential for making decisions about what must be kept and what can be let go (or should be promoted more to the users). Of course, in less desperate times, knowing this information is also important for making adjustments to the library’s collection emphasis in order to meet the needs of the users.