CIL 2010: Google Wave

Presenters: Rebecca Jones & Bob Keith

Jones was excited to have something that combined chat with cloud applications like Google Docs. Wave is a beginning, but still needs work. Google is not risk-averse, so they put it out and let us bang on it to shape it into something useful.

More people in the room had joined Google Wave and abandoned it than had stuck with it (fewer than 10% were still using it). We needed something that would push us toward incorporating it into our workflows, and we didn't see that happen.

The presenters created a public wave, which you can find by searching “with:public tag:cil2010”. Ironically, they had to close Wave in order to have enough virtual memory to play the video about Wave.

Imagine that! Google Wave works better in Google Chrome than in other browsers (including Firefox with the Gears extension).

Gadgets add functionality to waves. [note: I've also seen waves that get bogged down with too many gadgets, so use them sparingly.] There are also robots that can perform tasks, but they feel more like text-based games: some retro chic, but no real workflow application.

Wave is good for managing a group to-do list or worklog, planning events, taking and sharing meeting notes, and managing projects. However, all participants need to be Wave users. And, it’s next to impossible to print or otherwise archive a Wave.

The thing to keep in mind with Wave is that it’s not a finished product and probably shouldn’t be out for public consumption yet.

The presentation (available at the CIL website and on the wave) also includes links to a pile of resources for Wave.

ER&L 2010: Where are we headed? Tools & Technologies for the future

Speakers: Ross Singer & Andrew Nagy

Software as a service saves the institution time and money because the infrastructure is hosted and maintained by someone else. Computing has gone from centralized, mainframe processing to an even mix of personal computers on a networked enterprise, and back again to a very centralized environment with cloud applications and thin clients.

Library resource discovery is, to a certain extent, already in the cloud. We use online databases and open web search, WorldCat, and next gen catalog interfaces. The next gen catalog places the focus on the institution's resources, but it's not the complete solution. (People see a search box and they want to run queries on it, no matter where it is or what it is.) The next gen catalog provides access only to local resources, and while it looks like a modern interface, the back end is still old-school library indexing that doesn't work well with keyword searching.

Web-scale discovery is a one-stop shop that provides increased access, enhances research, and provides an increased ROI for the library. Our users don't use Google because it's Google; they use it because it's simple, easy, and fast.

How do we make our data relevant when administration doesn’t think what we do is as important anymore? Linked data might be one solution. Unfortunately, we don’t do that very well. We are really good at identifying things but bad at linking them.

If every component of a record is given identifiers, it’s possible to generate all sorts of combinations and displays and search results via linking the identifiers together. RDF provides a framework for this.
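(A rough sketch of my own, not from the presentation: using Python's rdflib, with made-up example.org identifiers, to show how giving each component of a record its own URI lets the pieces be linked, recombined, and merged with outside data.)

```python
from rdflib import Graph, URIRef, Literal, Namespace

# Hypothetical namespace and identifiers, purely for illustration.
DCT = Namespace("http://purl.org/dc/terms/")
EX = Namespace("http://example.org/catalog/")

g = Graph()
record = EX["record/123"]

# Give each component of the record its own identifier so other
# records (and outside datasets) can link to the same thing.
g.add((record, DCT.title, Literal("An Example Title")))
g.add((record, DCT.creator, EX["person/jane-doe"]))      # in practice, a VIAF or ORCID URI
g.add((record, DCT.subject, EX["subject/linked-data"]))  # in practice, an LCSH or FAST URI

# Serialize as Turtle; any RDF-aware tool can merge this graph with
# other data that reuses the same identifiers.
print(g.serialize(format="turtle"))
```

Once the same identifiers show up in more than one dataset, combining and enriching records is mostly a matter of merging graphs.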

Also, once we start using common identifiers, then we can pull in data from other sources to increase the richness of our metadata. Mashups FTW!

ER&L 2010: Adventures at the Article Level

Speaker: Jamene Brooks-Kieffer

Article level, for those familiar with link resolvers, means the best link type to give to users. The article is the object of pursuit, and the library and the user collaborate on identifying it, locating it, and acquiring it.

In 1980, the only good article-level identification was the Medline ID. Users would need to go through a qualified Medline search to track down relevant articles, and the library would need the article-level identifier to make a fast request from another library. Today, the user can search Medline on their own; use OpenURL linking to get to the full text, print, or ILL request; and obtain the article from the source or via ILL. Unlike in 1980, the user no longer needs to find the journal first to get to the article. Also, the librarian's role is now more about maintaining relevant metadata so that users have the tools to locate articles themselves.

In thirty years, the library has moved from being a partner with the user in pursuit of the article to being the magician behind the curtain. Our magic is made possible by the technology we know but that our users do not know.

Unique identifiers solve the problem of making sure that you are retrieving the correct article. CrossRef can link to specific instances of items, but not necessarily the one the user has access to. The link resolver will use that DOI to find other instances of the article available to users of the library. Easy user authentication at the point of need is the final key to implementing article-level services.
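(For those who haven't poked at the plumbing: here is a rough sketch, not from the talk, of how article-level metadata and a DOI get packed into an OpenURL for a link resolver. The resolver base URL and all the metadata values are placeholders.)

```python
from urllib.parse import urlencode

# Placeholder base URL for an institution's link resolver.
RESOLVER_BASE = "https://resolver.example.edu/openurl"

def build_openurl(doi, title, journal, volume, issue, start_page):
    """Assemble a KEV-style OpenURL 1.0 query for a journal article."""
    params = {
        "url_ver": "Z39.88-2004",
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
        "rft.genre": "article",
        "rft_id": f"info:doi/{doi}",  # the DOI pins down the exact article
        "rft.atitle": title,
        "rft.jtitle": journal,
        "rft.volume": volume,
        "rft.issue": issue,
        "rft.spage": start_page,
    }
    return f"{RESOLVER_BASE}?{urlencode(params)}"

# Made-up metadata: the resolver uses these identifiers to point the
# user at a copy the library actually licenses, or at an ILL form.
print(build_openurl("10.1000/example.doi", "An Example Article",
                    "Journal of Examples", "12", "3", "45"))
```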

One of the library’s biggest roles is facilitating access. It’s not as simple as setting up a link resolver; it must be maintained or the system will break down. Also, document delivery service provides an opportunity to generate goodwill between libraries and users. The next step is supporting the user’s preferred interface, through tools like LibX, Papers, Google Scholar link resolver integration, and mobile devices. The latter is the most difficult because much of the content comes from outside service providers, and institutional support for developing applications or mobile web interfaces is often limited.

We also need to consider how we deliver the articles users need. We need to evolve our acquisitions process. We need to be ready for article-level usage data, so we need to stop thinking about it as a single-institution data problem. Aggregated data will help spot trends. Perhaps we could look at the ebook pay-as-you-use model for article-level acquisitions as well?

PIRUS & PIRUS 2 are projects to develop COUNTER-compliant article usage data for all article-hosting entities (both traditional publishers and institutional repositories). Projects like MESUR will inform these kinds of ventures.

Libraries need to be working on recommendation services. Amazon and Netflix are not flukes. Demand, adopt, and promote recommendation tools like bX or LibraryThing for Libraries.

Users are going beyond locating and acquiring the article to storing, discussing, and synthesizing the information. The library could facilitate that. We need something that lets the user connect with others, store articles, and review recommendations that the system provides. We have the technology (magic) to make it available right now: data storage, cloud applications, targeted recommendations, social networks, and pay-per-download.

How do we get there? Cover the basics of identify>locate>acquire. Demand tools that offer services beyond that, or sponsor the creation of desired tools and services. We also need to stay informed of relevant standards and recommendations.

Publishers will need to be a part of this conversation as well, of course. They need to develop models that allow us to retain access to purchased articles. If we are buying on the article level, what incentive is there to have a journal in the first place?

For tenure and promotion purposes, we need to start looking more at the impact factor of the article, not so much the journal-level impact. PLOS provides individual article metrics.

IL2009: Cloud Computing in Practice: Creating Digital Services & Collections

Speakers: Amy Buckland, Kendra K. Levine, & Laura Harris (icanhaz.com/cloudylibs)

Cloud computing is a slightly complicated concept, and everyone approaches defining it from a different perspective. It’s about data and storage. For the purposes of this session, they mean any service characterized by on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service.

Cloud computing frees people to collaborate in many ways. Infrastructure is messy, so let someone else take care of that so you can focus on what you really need to do. USB sticks can do a lot of that, but they’re easy to lose, and data in the cloud will hopefully be migrated to new formats.

The downside of cloud computing is that it is so dependent on constant connection and uptime. If your cloud computing source or network goes down, you’re SOL until it gets fixed. Privacy can also be a legitimate concern, and the data could be vulnerable to hacking or leaks. Nothing lasts forever; Geocities, for example, is closing today.

Libraries are already in the cloud. We often store our ILS data, ILL, citation management, resource guides, institutional repositories, and electronic resource management tools on servers and services that do not live in the library. Should we be concerned about our vendors making money from us on a "recurring, perpetual basis" (Cory Doctorow)? Should we be concerned about losing the "face" of the library in all of these cloud services? Should we be concerned about the reliability of the services we are paying for?

Libraries can use the cloud for data storage (e.g., DuraSpace, Dropbox). They could also replace OS services and programs, allowing patron-access computers to be run using cloud applications.

Presentation slides are available at icanhaz.com/cloudylibs.

Speaker: Jason Clark

His library is using four applications to serve video from the library, and one of them is TerraPod, which is for students to create, upload, and distribute videos. They outsourced the player to Blip.tv. This way, they don’t have to encode files or develop a player.

The way you can do mashups of cloud applications and locally developed applications is through the APIs that define the rules for talking to the remote server. The cloud becomes the infrastructure that enables webscaling of projects. Request the data, receive it in some sort of structured format, and then parse it into whatever you want to do with it.
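(A minimal sketch of that request/receive/parse pattern in Python; the video API endpoint and field names below are hypothetical stand-ins, not Blip.tv's actual API.)

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Hypothetical video-hosting API endpoint; substitute the real
# provider's documented URL and parameters.
API_BASE = "https://api.example-video-host.com/videos"

def fetch_videos(user, limit=10):
    """Request data from the remote API and parse the JSON response."""
    url = f"{API_BASE}?{urlencode({'user': user, 'limit': limit})}"
    with urlopen(url) as response:
        data = json.load(response)
    # Keep only the fields the local application needs.
    return [(item["title"], item["embed_url"]) for item in data.get("items", [])]

# The parsed results can then be dropped into a local page template,
# combined with catalog data, and so on.
for title, embed in fetch_videos("librarydemo"):
    print(title, embed)
```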

Best practices for cloud computing: use the cloud architecture to do the heavy lifting (file conversion, storage, distribution, etc.), archive locally if you must, and outsource conversion. Don’t be afraid. This is the future.

Presentation slides will be available later on his website.

thing 19: best of web 2.0

This assignment asks us to look at the Web 2.0 Awards and pick a site/tool to play with. I looked at both this year’s and last year’s lists and couldn’t find anything that interested me that I hadn’t already tried or am using on a regular basis. I guess that’s one of the benefits (hazards?) of having a lot of twopointopian friends — I may not be on the bleeding edge of shiny new technology, but I can at least see the contrails.

thing 18: web applications

It has been a while since I seriously looked at Zoho Writer, preferring Google Docs mainly for the convenience (I always have Gmail open in a tab, so it’s easy to one-click open Google Docs from there). Zoho Writer seems to have more editing and layout tools, or at least, displays them more like MS Word.

I have been dabbling with web applications like document editors and spreadsheet creators mostly because I don’t like the ones that I purchased with my iMac. I probably would like the Mac versions more if I were more familiar with their quirks, but I’m so used to Microsoft Office products that remembering what I can and can’t do in the Mac environment is too frustrating. While Google Docs isn’t quite the same as Microsoft Office, it’s closer to it than iWork ’08 is.

Playing with Zoho Writer, however, reminded me that I need to work around my Google bias. Particularly since the Zoho products seem to have the productivity functions that make my life easier.
