a response to rewarding conference speakers

As I sat in a CIL2009 session that could easily have been a blog post with a bunch of annotated links, I wondered why it had been chosen as a session over something else, and why I had chosen to attend it rather than something else. I concluded that sometimes I need something to whack me upside the head before I “get it,” and a good presentation is often the best tool for that.

Kathryn Greenhill writes, “I suspect it’s not that I *know* it all, but that I know how to find out at point of need and that I am more likely to use my human networks than to look back at conference notes or handouts to find out.” I rely heavily on my human networks, both in person and online, to keep me informed of the things I need to know — much more so than professional literature and formal presentations. However, sometimes even those things can spark an idea or clarify something that was previously muddy in my mind. I’m happy to reap the benefits of shared information, regardless of what format is used to deliver it.

That’s all fine and good for me, someone who is only moderately on the side of information creator and more on the side of information consumer, but what about those “shovers and makers” out there who are generating new ideas and, well, shoving and making in libraryland? Greenhill notes that she has “found much, much more value hanging about talking to other presenters than in attending the formal sessions,” and she suggests that rather than cheesy speakers’ gifts, they could instead be given “something to stimulate the presenters’ brains and challenge them.”

I like the idea of this, but I also worry that it has the potential to widen the gap between creators and consumers. I benefit greatly from being able to listen in on the discussions between the speakers in LobbyCon/CarpetCon settings. And, even when I am in sessions that challenge my skill set, I am motivated to expand that skill set, or at the very least, I know more about what I don’t know. I’d rather have that than continue in ignorance.

Greenhill, along with Cindi Trainor and John Blyberg, spent many hours during Computers in Libraries secluded away while crafting The Darien Statements on the Library and Librarians manifesto. The end result is available to us, but I wonder how much more we consumers would have learned by being able to listen in on the process of its creation? Isn’t that part of what the unconference movement is about?

CIL 2009: ERM… What Do You Do With All That Data, Anyway?

This is the session that I co-presented with Cindi Trainor (Eastern Kentucky University). The slides don’t convey all of the points we were trying to make, so I’ve also included a cleaned-up version of my notes for each slide.

  1. Title
  2. In 2004, the Digital Library Federation (DLF) Electronic Resources Management Initiative (ERMI) published their report on the electronic resource management needs of libraries, and provided some guidelines for what data needed to be collected in future systems and how that data might be organized. The report identifies over 340 data elements, ranging from acquisitions to access to assessment.

    Libraries that have implemented commercial electronic resource management systems (ERMS) have spent many staff hours entering data from old storage systems, or recording those data for the first time, and few, if any, have filled out each data element listed in the report. But that is reasonable, since not every resource will have relevant data attached to it that would need to be captured in an ERMS.

    However, since most libraries do not have an infinite number of staff to focus on this level of data entry, the emphasis should instead be placed upon capturing data that is necessary for managing the resources as well as information that will enhance the user experience.

  3. On the staff side, ERM data is useful for: upcoming renewal notifications; collection development reports that calculate cost-per-use from publisher-provided use statistics and library-maintained acquisitions data (see the first sketch after this list); managing trials; noting electronic ILL & reserves rights; and tracking the uptime & downtime of resources.
  4. Most libraries already have access management systems (link resolvers, A-Z lists, MARC records).
  5. User issues have shifted from the multiple-copy problem to a “which copy?” problem. Users have multiple points of access, including: journal packages (JSTOR, Muse); A&I databases, with and without full text (which constitute e-resources in themselves); the library website (particularly “Electronic Resources” or “Databases” lists); the OPAC; the A-Z list (typically populated by an OpenURL link resolver); Google/Google Scholar; article and paper references and footnotes; course reserves; course management systems (Blackboard, Moodle, WebCT, Angel, Sakai); citation management software (RefWorks, EndNote, Zotero); LibGuides/course guides; and bookmarks.
  6. Users want…
  7. Google
  8. Worlds collide! What elements from the DLF ERM spec could enhance the user experience, and how? Information inside an ERMS can enhance access management systems or discovery. Subject categorization within the ERM could group similar resources and present them alongside the resource someone is already using. Statuses could group and display items: a “trial” status, for example, could automatically populate a page of new resources or an RSS feed, making it easy for the library to publicize even a 30-day trial (see the second sketch after this list). ERMSs also need to do a better job of tracking resources through the entire resource lifecycle, so that discovery is updated as a byproduct of resources being managed well, increasing uptime and availability and decreasing the time from identifying a potential new resource to making it accessible to our users.
  9. How about turning ERM data into a discovery tool? Exposing information about which resources work with reference management systems like EndNote, RefWorks, or Zotero, along with key details about using each resource with those tools, could at least enable more sophisticated use of those resources, if not increased discovery.

    (You’ve got your ERM in my discovery interface! No, you got your discovery interface in my ERM! Er… guess that doesn’t quite translate.)

  10. Flickr Mosaic: Phyllotaxy (cc:by-nc-sa); Librarians-Haunted-Love (cc:by-nc-sa); Square Peg (cc:by-nc-sa); The Burden of Thought (cc:by-nc)
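
To make the staff-side uses in slide 3 concrete, here is a minimal sketch in Python of how cost-per-use and renewal notifications fall out of a handful of ERM data elements. The field names and sample figures are entirely hypothetical; they are not the DLF ERMI element names.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class EResource:
    title: str
    annual_cost: float   # library-maintained acquisitions data
    renewal_date: date   # drives upcoming-renewal notifications
    yearly_uses: int     # publisher-provided use statistics (e.g., COUNTER)
    ill_allowed: bool    # electronic ILL & reserves rights

def cost_per_use(resource: EResource) -> float:
    """The collection development metric: what each recorded use costs."""
    if resource.yearly_uses == 0:
        return float("inf")  # paid for but never used: flag for review
    return resource.annual_cost / resource.yearly_uses

def upcoming_renewals(resources, today, days_ahead=90):
    """Resources whose renewal date falls within the notification window."""
    return [r for r in resources
            if 0 <= (r.renewal_date - today).days <= days_ahead]

resources = [
    EResource("Journal Package A", 25000.0, date(2009, 7, 1), 12500, True),
    EResource("A&I Database B", 8000.0, date(2009, 5, 15), 40, False),
]

# Worst value first: the report a selector would actually want to see.
for r in sorted(resources, key=cost_per_use, reverse=True):
    print(f"{r.title}: ${cost_per_use(r):,.2f} per use")

for r in upcoming_renewals(resources, today=date(2009, 4, 1)):
    print(f"Renewal approaching: {r.title} on {r.renewal_date}")
```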
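
And a companion sketch for the trial-status idea in slide 8: if the ERM records a status for each resource, a current-trials RSS feed can be generated automatically rather than hand-edited. The record shape and status values here are invented for illustration.

```python
import xml.etree.ElementTree as ET

# Hypothetical ERM records; only the "status" field drives the feed.
records = [
    {"title": "New A&I Database", "url": "http://example.edu/go/newdb",
     "status": "trial", "note": "Trial ends May 30; send us your feedback."},
    {"title": "Journal Package A", "url": "http://example.edu/go/pkga",
     "status": "active", "note": ""},
]

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Library E-Resource Trials"
ET.SubElement(channel, "link").text = "http://example.edu/trials"
ET.SubElement(channel, "description").text = "Resources currently on trial"

for rec in records:
    if rec["status"] != "trial":
        continue  # publicize only what the ERM says is on trial
    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = rec["title"]
    ET.SubElement(item, "link").text = rec["url"]
    ET.SubElement(item, "description").text = rec["note"]

print(ET.tostring(rss, encoding="unicode"))
```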

CIL 2009: What’s Hot in RSS

Speaker: Steven M. Cohen

  • Zoho – somewhat more popular than Google Docs
  • YouTube RSS search – only gives you the top 10 results of any search; however, if you use youtube/rss/search/<search term>.rss, you will get it all
  • X – nothing
  • WwwhatsNew – Spanish language cool tools
  • Votes Database – Washington Post hosted profiles of congressional members including RSS feed of current voting record
  • JD Supra (it has a U in it) – documents that lawyers are putting up online to be shared by anyone (marketing/social tool)
  • ticTOCs – table of contents RSS feeds for any journal that offers them, aggregated in one location
  • Scribd – YouTube for documents
  • Ravelry – social networking for knitters
  • QuestionPoint
  • Page2RSS – creates a feed of daily changes to web pages
  • Open Congress – feeds of congressional action about people, committees, issues, bills, etc.
  • N – nothing
  • Mashable – top tech trends in social networking
  • LibraryThing – "the future of what catalogs will look like"
  • KillerStartups – description and evaluation of new websites & applications
  • Justia Dockets – Federal District Court Filings & Documents, RSS feeds for search results
  • I Want To – …missed this one…
  • Hunch – new site, hasn’t launched yet, created by Flickr co-founder Caterina Fake
  • Google Reader – taking over from Bloglines; you can share items with comments, but the comments are private (I think), unlike notes, which still exist
  • Facebook – yeah
  • E-Hub – not sure what makes this cool, but he listed it
  • Deepest Sender – Firefox extension for blogging links
  • Compfight – CC/Flickr image search (can limit to CC only)
  • Backup URL – creates a backup of any URL in case it goes down — great for presentations
  • Awesome Highlighter – highlight text on web pages and then get a link to it
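
Most of the tools above are interesting precisely because they expose RSS feeds you can do something with. As a rough illustration (the feed URL below is a placeholder, not one from the talk), here is how you might poll any of them with the Python feedparser library:

```python
# Requires: pip install feedparser
import feedparser

# Placeholder URL; substitute any feed from the list above.
feed = feedparser.parse("http://example.org/changes.rss")

print(feed.feed.get("title", "(untitled feed)"))
for entry in feed.entries[:10]:  # most feeds list newest items first
    print(f"- {entry.title}: {entry.link}")
```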

CIL 2009: CM Tools: Drupal, Joomla, & Rumba

Speaker: Ryan Deschamps

In the end, you will install and play with several different content management systems until you find the right one for your needs. A good CMS will facilitate the division of labor, support the overall development of the site, and ensure best practices/standards. It’s not about the content, it’s about the cockpit. You need something that will make your staff happy so that it’s easy to build the right site for your users.

Joomla was #1 in market share, with good community support, when Halifax went with it. Ultimately, it wasn’t working for them, so they switched to MODx. Joomla, unfortunately, gets in the way of creative coding.

MODx, unlike Joomla, has fine-grained user access controls. Templates are plain HTML, so there is no need to learn code specific to the CMS. The community is smaller, but more engaged.

One feature that Deschamps is excited about is the ability to create a snippet with preset options that can be inserted in a page and changed as needed, for example, to put specific Creative Commons licenses on pages or to display certain images. A rough illustration of the idea follows.
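
MODx snippets are written in PHP, but the concept translates. Here is a loose Python illustration of a parameterized snippet, where the names, defaults, and license table are all made up for the example; this is not MODx syntax:

```python
# A reusable fragment with preset options that page editors can override.
CC_LICENSES = {
    "by": ("CC BY", "http://creativecommons.org/licenses/by/3.0/"),
    "by-nc-sa": ("CC BY-NC-SA",
                 "http://creativecommons.org/licenses/by-nc-sa/3.0/"),
}

def cc_license_snippet(license_key="by", holder="Anytown Public Library"):
    """Render a CC license notice; callers change only what they need."""
    name, url = CC_LICENSES[license_key]
    return f'<a href="{url}">{name}</a>, {holder}'

# Default options on one page, overridden on another:
print(cc_license_snippet())
print(cc_license_snippet(license_key="by-nc-sa"))
```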

The future: "application framework" rather than "content management system"

Speaker: John Blyberg

Drupal has been named open source CMS of the year for the past two years in part due to the community participation. It scales well, so it can go from being a small website to a large and complex one relatively easily. However, it has a steep learning curve. Joomla is kind of like Photoshop Elements, and Drupal is more like the full Photoshop suite.

Everything you put into Drupal is a node, not a page. It associates bits of information with that node to flesh out a full page. Content types can be classified in different ways, with as much diversity as you want. The taxonomies can be used to create the structure of your website.
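
Here is a loose illustration of that model in Python (not Drupal’s actual API), showing how taxonomy terms attached to nodes can drive the listing pages that form a site’s structure; all the content types and terms are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    title: str
    content_type: str  # e.g., "page", "blog_post", "event"
    body: str = ""
    terms: set = field(default_factory=set)  # taxonomy terms

nodes = [
    Node("Summer Reading Kickoff", "event", terms={"children", "programs"}),
    Node("New Databases for Genealogy", "blog_post", terms={"genealogy"}),
    Node("Genealogy Research Guide", "page", terms={"genealogy"}),
]

def by_term(term):
    """A taxonomy listing: the building block of site structure."""
    return [n for n in nodes if term in n.terms]

# A "genealogy" section page falls out of the taxonomy, not hand-built links.
for n in by_term("genealogy"):
    print(f"{n.title} ({n.content_type})")
```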

[Blyberg showed some examples of things that he likes about Drupal, but the detail and significance are beyond me, so I did not record them here. You can probably find out more when/if he posts his presentation.]

CIL 2009: Open Access: Green and Gold

Presenter: Shane Beers

Green open access (OA) is the practice of depositing a document in a repository and making it freely available on the web. Most frequently, these are peer-reviewed research and conference articles. This is not self-publishing! OA repositories allow institutions to store and showcase their research output, thus increasing their visibility within the academic community.

Institutional repositories usually run on DSpace, Fedora, or EPrints, and there are third-party hosted options built on these systems. There are also a few subject-specific repositories not affiliated with any particular institution.

The "serials crisis" means that most libraries cannot subscribe to every journal their researchers need. OA eliminates this problem by making relevant research available to anyone who needs it, regardless of economic barriers.

A 2008 study showed that less than 20% of all scientific articles published were made available in a green or gold OA repository. Self-archiving sits at a low 15%, and incentives to self-archive raise that to only around 30%. Researchers and their work habits are the greatest barriers that OA repository managers encounter. The only way to guarantee 100% self-archiving is an institutional mandate.

Copyright complications are also barriers to adoption. Post-print archiving is the most problematic, particularly as publishers continue to resist OA and prohibit it in author contracts.

OA repositories are not self-sustaining. They require top-down dedication and support, not only for the project as a whole but also for equipment, services, and staff costs. A single "repository rat" model is rarely successful.

The future? More mandates, peer-reviewed green OA repositories, expanding repositories to encompass services, and integration of OA repositories into the workflow of researchers.

Presenter: Amy Buckland

Gold open access is about having no price or permission barriers: no embargoes, and immediate post-print archiving.

The Public Knowledge Project’s Open Journal Systems is an easy tool for creating an open journal that includes all the capabilities of online multimedia. First Monday, for example, uses it.

Buckland wants libraries to become publishers of content by making the platforms available to the researchers. Editors and editorial boards can come from volunteers within the institution, and authors just need to do what they do.

Publication models are changing. Many granting agencies are requiring OA components tied to funding. The best part: everyone in the world can see your institution’s output immediately!

Installation of the product is easy — it’s getting the word out that’s hard.

Libraries can make the MARC records freely available, and ensure that the journals are indexed in the Directory of Open Access Journals.

Doing this will build relationships between faculty and the library. Libraries become directly involved in the research output of faculty, which makes libraries more visible to administrators and budget decision-makers. University presses are struggling, and even though they are focused on revenue, OA journal publishing could enhance their visibility and status. Also, if you publish OA, the big G (and other search engines) will find it.