IL 2010: Dashboards, Data, and Decisions

[I took notes on paper because my netbook power cord was in my checked bag that SFO briefly lost on the way here. This is an edited transfer to electronic.]

presenter: Joseph Baisano

Dashboards pull information together and make it visible in one place. They need to be simple, built on existing data, but expandable.

Baisano is at SUNY Stony Brook, and they opted to go with Microsoft SharePoint 2010 to create their dashboards. The content can be made visible and editable through user permissions. Right now, their data connections include their catalog, proxy server, JCR, ERMS, and web statistics, and they are looking into using the API to pull license information from their ERMS.

In the future, they hope to use APIs from sources that provide them (Google Analytics, their ERMS, etc.) to create mashups and more on-the-fly graphs. They’re also looking at an open source alternative to SharePoint called Pentaho, which already has many of the plugins they want and comes in both free and paid-support flavors.

presenter: Cindi Trainor

[Trainor had significant technical difficulties with her Mac and the projector, which resulted in only 10 minutes of a slightly muddled presentation, but she had some great ideas for visualizations to share, so here’s as much as I captured of them.]

Graphs often tell us what we already know, so look at the data from a different angle to learn something new. Gapminder plots data in three dimensions – comparing two components of each set over time using bubble graphs. Excel can do bubble graphs as well, but with some limitations.

In her example, Trainor showed reference transactions along the x-axis, the gate count along the y-axis, and the size of the circle represented the number of circulation transactions. Each bubble represented a campus library and each graph was for the year’s totals. By doing this, she was able to suss out some interesting trends and quirks to investigate that were hidden in the traditional line graphs.
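If you want to try the same layout without Excel, a minimal sketch in Python with matplotlib looks like this. The library names and numbers here are entirely made up for illustration; plug in your own totals for each campus library and year.

```python
# A minimal bubble-graph sketch in the style Trainor described:
# x = reference transactions, y = gate count, bubble size = circulation.
# All data below is hypothetical.
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

libraries = ["Main", "Science", "Law"]      # hypothetical campus libraries
reference_transactions = [1200, 450, 300]   # x-axis
gate_count = [250000, 90000, 40000]         # y-axis
circulation = [80000, 20000, 9000]          # bubble size

# Scale circulation down so the bubbles fit the plot area.
sizes = [c / 100 for c in circulation]

fig, ax = plt.subplots()
ax.scatter(reference_transactions, gate_count, s=sizes, alpha=0.5)
for x, y, label in zip(reference_transactions, gate_count, libraries):
    ax.annotate(label, (x, y))
ax.set_xlabel("Reference transactions")
ax.set_ylabel("Gate count")
ax.set_title("One year's totals (hypothetical data)")
fig.savefig("bubbles.png")
```

One chart per year, as in her example, lets you flip through them to watch the bubbles move over time.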

openurl, firefox, and google scholar

Peter Binkley of the University of Alberta Libraries has developed a Firefox extension that adds an OpenURL button to Google Scholar search results. [web4lib]

“The purpose is to enable users at an institution that has an OpenURL link-resolver to use that resolver to locate the full text of articles found in Google Scholar, instead of relying on the links to publishers’ websites provided by Google. This is important because it solves the “appropriate copy problem”: the link to a publisher’s site is useless if you don’t have a subscription that lets you into that site, and your library may provide access to the same article in an aggregator’s package or elsewhere.”
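For the curious, the kind of link a resolver consumes can be sketched like this. The resolver base URL below is hypothetical – substitute your own institution's link resolver – but the key/value format follows OpenURL 1.0 (Z39.88-2004).

```python
# Hedged sketch of building an OpenURL 1.0 link for a journal article.
# RESOLVER_BASE is hypothetical; use your library's actual resolver URL.
from urllib.parse import urlencode

RESOLVER_BASE = "https://resolver.example.edu/openurl"

def openurl_for(article):
    """Build an OpenURL 1.0 (Z39.88-2004) query for a journal article."""
    params = {
        "url_ver": "Z39.88-2004",
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
        "rft.genre": "article",
        "rft.atitle": article["title"],
        "rft.jtitle": article["journal"],
        "rft.issn": article["issn"],
        "rft.volume": article["volume"],
        "rft.spage": article["spage"],
    }
    return RESOLVER_BASE + "?" + urlencode(params)
```

The resolver then checks those citation elements against your library's holdings and sends the user to an "appropriate copy," wherever it lives.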

From all appearances, this is a fantastic tool that embraces Google while still providing even more of the useful service librarians offer. If you have an OpenURL link resolver that you are able to tweak, like SFX, go for it! (Next step, educate your users about Firefox….)

Update: One of the library coding gods, Art Rhyno, has developed a bookmarklet that prepends your library’s proxy server URL string to the links in the Google Scholar results. That’s another work-around if you don’t have an OpenURL link resolver. If your library subscribes to the resource, you’ll be authenticated and passed through to the full-text content. If not, you’ll have to obtain access or the content some other way.
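The idea behind a proxy-prefix bookmarklet is simple enough to sketch. The EZproxy base URL below is hypothetical, and this is my illustration of the general technique rather than Rhyno's actual code.

```python
# Sketch of the proxy-prefix idea: stick the library's EZproxy login URL
# in front of the article link. PROXY_PREFIX is hypothetical.
from urllib.parse import quote

PROXY_PREFIX = "https://ezproxy.example.edu/login?url="

def proxify(url):
    """Return the proxied form of a URL, avoiding double-prefixing."""
    if url.startswith(PROXY_PREFIX):
        return url
    return PROXY_PREFIX + quote(url, safe="")

# The bookmarklet itself is a one-line piece of JavaScript doing the same
# thing to the current page, along the lines of:
# javascript:void(location.href='https://ezproxy.example.edu/login?url='
#                 + encodeURIComponent(location.href))
```

The proxy server sees the login request, authenticates the user, and then fetches the original URL on their behalf from inside the library's IP range.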

One snag I see in all of this is that, depending on how your proxy server is set up, this may not work. Some libraries *cough*UofKY*cough* use a proxy server that requires users to modify their web browser settings before it will authenticate them. I’m not sure whether this would cause confusion for users who haven’t made that modification.

not all proxies are the same

No, I don’t know everything there is to know about proxy servers.

A while back, I panned a book on e-serials collection management. One of the contributors found my review and wrote a response, which I will quote here:

As the person who wrote the essay regarding IP versus proxy access for the E-Serials Collection Management book that you reviewed on your website, I feel the need to respond. First of all, I agree that the amount of time it took between the writing of the chapters and actual publication was a serious concern, particularly since the focus of this book was technology. However, I should point out that the problems encountered using proxy servers have not become a moot point because of the presence of EZproxy and similar products. We have had EZproxy access and an alternative proxy method available on our website (the University of South Florida Libraries) for several years. Unfortunately, this has NOT meant the end of proxy-user problems. With multiple campuses and users in several cities, many problems are still reported each week by users having difficulty connecting. The reasons for the problems are as varied as our users. Personally, I prefer this type of IP access to the use of ID/password but, as with most things, ONLY when it works. Keeping this in mind, I now have a second self-created job title – Cyberjanitor.

My apologies to the author. I was not aware of the difficulties with proxy servers and multiple campuses. My former place of work (EKU) has only one IP range for the main campus and all of the extended campuses, so setting up IP access with vendors is very easy. They use the same login and password required for campus email to authenticate our users, and everyone gets an email account, with the possible exception of some adjunct faculty. For that campus, EZproxy works 99.5% of the time, which is far better than having to hand out new passwords to everyone each semester.