a response to rewarding conference speakers

As I sat in a CIL2009 session that could easily have been a blog post with a bunch of annotated links, I wondered why it had been chosen as a session over something else, and why I had chosen to attend it rather than something else. I concluded that sometimes I need something to whack me upside the head before I “get it,” and a good presentation is often the best tool for that.

Kathryn Greenhill writes, “I suspect it’s not that I *know* it all, but that I know how to find out at point of need and that I am more likely to use my human networks than to look back at conference notes or handouts to find out.” I rely heavily on my human networks, both in person and online, to keep me informed of the things I need to know — much more so than professional literature and formal presentations. However, sometimes even those things can spark an idea or clarify something that was previously muddy in my mind. I’m happy to reap the benefits of shared information, regardless of what format is used to deliver it.

That’s all well and good for me, someone who is more an information consumer than an information creator, but what about those “shovers and makers” out there who are generating new ideas and, well, shoving and making in libraryland? Greenhill notes that she has “found much, much more value hanging about talking to other presenters than in attending the formal sessions,” and she suggests that rather than cheesy speakers’ gifts, presenters could instead be given “something to stimulate the presenters’ brains and challenge them.”

I like the idea of this, but I also worry that it has the potential to widen the gap between creators and consumers. I benefit greatly from being able to listen in on the discussions between the speakers in LobbyCon/CarpetCon settings. And, even when I am in sessions that challenge my skill set, I am motivated to expand that skill set, or at the very least, I know more about what I don’t know. I’d rather have that than continue in ignorance.

Greenhill, along with Cindi Trainor and John Blyberg, spent many hours during Computers in Libraries secluded away while crafting The Darien Statements on the Library and Librarians manifesto. The end result is available to us, but I wonder how much more we consumers would have learned by being able to listen in on the process of its creation? Isn’t that part of what the unconference movement is about?

CIL 2009: ERM… What Do You Do With All That Data, Anyway?

This is the session that I co-presented with Cindi Trainor (Eastern Kentucky University). The slides don’t convey all of the points we were trying to make, so I’ve also included a cleaned-up version of those notes.

  1. Title
  2. In 2004, the Digital Library Federation (DLF) Electronic Resources Management Initiative (ERMI) published their report on the electronic resource management needs of libraries, and provided some guidelines for what data needed to be collected in future systems and how that data might be organized. The report identifies over 340 data elements, ranging from acquisitions to access to assessment.

    Libraries that have implemented commercial electronic resource management systems (ERMS) have spent many staff hours entering data from old storage systems, or recording those data for the first time, and few, if any, have filled out each data element listed in the report. But that is reasonable, since not every resource will have relevant data attached to it that would need to be captured in an ERMS.

    However, since most libraries do not have unlimited staff to devote to this level of data entry, the emphasis should instead be placed on capturing the data necessary for managing the resources, along with information that will enhance the user experience.

  3. On the staff side, ERM data is useful for: upcoming renewal notifications; collection development reports that explain cost-per-use, based on publisher-provided use statistics and library-maintained acquisitions data; managing trials; noting electronic ILL & reserves rights; and tracking the uptime & downtime of resources.
  4. Most libraries already have access management systems (link resolvers, A-Z lists, MARC records).
  5. User issues have shifted from the multiple copy problem to a “which copy?” problem. Users have multiple points of access, including: journal packages (JSTOR, Muse); A&I databases, with and without FT (which constitute e-resources in themselves); Library website (particularly “Electronic Resources” or “Databases” lists); OPAC; A-Z List (typically populated by an OpenURL link resolver); Google/gScholar; article/paper references/footnotes; course reserves; course management systems (Blackboard, Moodle, WebCT, Angel, Sakai); citation management software (RefWorks, EndNote, Zotero); LibGuides / course guides; bookmarks
  6. Users want…
  7. Google
  8. Worlds collide! What elements from the DLF ERM spec could enhance the user experience, and how? Information inside an ERMS can enhance access management systems or discovery: subject categorization within the ERM could group similar resources and present them alongside the resource someone is using; statuses could be used to group & display items, such as a trial status within the ERM automatically populating a page of new resources or an RSS feed, making it easy for the library to group and publicize even a 30-day trial. ERMSs need to do a better job of helping to manage the resource lifecycle; if they are built to track resources through that lifecycle, discovery is updated by extension because resources are managed well, increasing uptime and availability and decreasing the time from identification of a potential new resource to accessibility of that resource to our users.
  9. How about turning ERM data into a discovery tool? Information about which resources work with reference management systems like EndNote, RefWorks, or Zotero, along with key pieces of information about using those individual resources with them, could at least enable more sophisticated use of those resources, if not increased discovery.

    (You’ve got your ERM in my discovery interface! No, you got your discovery interface in my ERM! Er… guess that doesn’t quite translate.)

  10. Flickr Mosaic: Phyllotaxy (cc:by-nc-sa); Librarians-Haunted-Love (cc:by-nc-sa); Square Peg (cc:by-nc-sa); The Burden of Thought (cc:by-nc)
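To make point 8 above concrete, here is a toy sketch of how a trial status stored in an ERMS could automatically populate an RSS feed of new trial resources. The record shape and field names are invented for illustration; they are not the DLF ERMI data elements.

```python
from datetime import date, timedelta
from xml.sax.saxutils import escape

# Hypothetical ERM records; field names are illustrative only.
resources = [
    {"title": "Example A&I Database", "url": "https://example.org/db",
     "status": "trial", "trial_ends": date.today() + timedelta(days=30)},
    {"title": "Example Journal Package", "url": "https://example.org/pkg",
     "status": "active", "trial_ends": None},
]

def trial_feed(resources):
    """Render a minimal RSS feed of resources currently on trial."""
    items = []
    for r in resources:
        if r["status"] != "trial":
            continue  # only trial resources are publicized
        items.append(
            "<item><title>%s</title><link>%s</link>"
            "<description>Trial ends %s</description></item>"
            % (escape(r["title"]), escape(r["url"]), r["trial_ends"].isoformat())
        )
    return ('<rss version="2.0"><channel>'
            "<title>New trial resources</title>%s</channel></rss>" % "".join(items))

print(trial_feed(resources))
```

Because the feed is generated from the same status field staff already maintain, publicizing a trial costs nothing extra once the resource is entered.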

CIL 2009: What’s Hot in RSS

Speaker: Steven M. Cohen

  • Zoho – somewhat more popular than Google Docs
  • YouTube RSS search – only the top 10 of any search; however, if you use youtube/rss/search/<search term>.rss, you will get it all
  • X – nothing
  • WwwhatsNew – Spanish language cool tools
  • Votes Database – Washington Post hosted profiles of congressional members including RSS feed of current voting record
  • JD Supra (it has a U in it) – documents that lawyers are putting up online to be shared by anyone (marketing/social tool)
  • Tic Tocs – table of contents RSS for any journal that has it, aggregated in one location
  • Scribd – YouTube for documents
  • Ravelry – social networking for knitters
  • QuestionPoint
  • Page2RSS – creates a feed of daily changes to web pages
  • Open Congress – feeds of congressional action about people, committees, issues, bills, etc.
  • N – nothing
  • Mashable – top tech trends in social networking
  • LibraryThing – "the future of what catalogs will look like"
  • KillerStartups – description and evaluation of new websites & applications
  • Justia Dockets – Federal District Court Filings & Documents, RSS feeds for search results
  • I Want To – …missed this one…
  • Hunch – new site, hasn’t launched yet, created by Flickr co-founder Caterina Fake
  • Google Reader – taking over Bloglines; you can share items with comments, but the comments are private (I think), unlike notes, which still exist
  • Facebook – yeah
  • E-Hub – not sure what makes this cool, but he listed it
  • Deepest Sender – Firefox extension for blogging links
  • Compfight – CC/Flickr image search (can limit to CC only)
  • Backup URL – creates a backup of any URL in case it goes down — great for presentations
  • Awesome Highlighter – highlight text on web pages and then get a link to it
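The idea behind a service like Page2RSS can be sketched in a few lines: keep a hash of each page’s last-seen content and emit a feed item whenever the hash changes. This is a toy illustration of the concept, not Page2RSS’s actual implementation.

```python
import hashlib

def page_changed(page_text, last_hash):
    """Return (changed?, new_hash) for a fetched page body."""
    h = hashlib.sha256(page_text.encode("utf-8")).hexdigest()
    return (h != last_hash, h)

# First visit: no stored hash, so it counts as a change.
changed1, h1 = page_changed("<html>v1</html>", None)
# Same content on the next check: no feed item.
changed2, h2 = page_changed("<html>v1</html>", h1)
# Content changed: time to emit a new feed item.
changed3, h3 = page_changed("<html>v2</html>", h2)
```

A real service would also fetch the page on a schedule and diff the content to describe what changed, but the change-detection core is this simple.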

CIL 2009: CM Tools: Drupal, Joomla, & Rumba

Speaker: Ryan Deschamps

In the end, you will install and play with several different content management systems until you find the right one for your needs. A good CMS will facilitate the division of labor, support the overall development of the site, and ensure best practices/standards. It’s not about the content, it’s about the cockpit. You need something that will make your staff happy so that it’s easy to build the right site for your users.

Joomla was #1 in market share, with good community support, when Halifax went with it. Ultimately it wasn’t working, so they switched to MODx. Joomla, unfortunately, gets in the way of creative coding.

MODx, unlike Joomla, has fine-grained user access. Templates are plain HTML, so there is no need to learn code specific to the CMS. The community was smaller, but more engaged.

One feature that Deschamps is excited about is the ability to create a snippet with preset options that can be inserted in a page and changed as needed, for example to put specific CC licenses on pages or to display certain images.
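The snippet idea translates to any environment: a function with preset options that each page can override. MODx snippets are actually written in PHP; this Python sketch, with made-up option names and defaults, only illustrates the concept using the CC license example.

```python
# Preset options for the "snippet"; a page overrides only what it needs.
# These option names and defaults are invented for illustration.
CC_DEFAULTS = {"license": "by-nc-sa", "version": "3.0"}

def cc_snippet(**overrides):
    """Render a CC license link from preset options plus per-page overrides."""
    opts = {**CC_DEFAULTS, **overrides}
    return ('<a href="https://creativecommons.org/licenses/%(license)s/%(version)s/">'
            "CC %(license)s %(version)s</a>" % opts)

print(cc_snippet())                 # page uses the preset options
print(cc_snippet(license="by-nc"))  # page overrides a single option
```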

The future: "application framework" rather than "content management system"

Speaker: John Blyberg

Drupal has been named open source CMS of the year for the past two years in part due to the community participation. It scales well, so it can go from being a small website to a large and complex one relatively easily. However, it has a steep learning curve. Joomla is kind of like Photoshop Elements, and Drupal is more like the full Photoshop suite.

Everything you put into Drupal is a node, not a page. It associates bits of information with that node to flesh out a full page. Content types can be classified in different ways, with as much diversity as you want. The taxonomies can be used to create the structure of your website.
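A rough sketch of why that matters: if every piece of content is a node carrying taxonomy terms, site structure falls out of a simple grouping. The node data here is hypothetical, and Drupal’s internals are far richer than this; it only illustrates the node-plus-taxonomy idea.

```python
from collections import defaultdict

# Hypothetical nodes: each bit of content carries taxonomy terms.
nodes = [
    {"nid": 1, "title": "Summer reading list", "terms": ["events", "teens"]},
    {"nid": 2, "title": "New database trial",  "terms": ["resources"]},
    {"nid": 3, "title": "Teen gaming night",   "terms": ["events", "teens"]},
]

def by_term(nodes):
    """Group node titles by taxonomy term; each term becomes a site section."""
    sections = defaultdict(list)
    for node in nodes:
        for term in node["terms"]:
            sections[term].append(node["title"])
    return dict(sections)
```

One node can appear under several terms, which is exactly how a taxonomy gives you cross-cutting structure without duplicating content.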

[Blyberg showed some examples of things that he likes about Drupal, but the detail and significance are beyond me, so I did not record them here. You can probably find out more when/if he posts his presentation.]

CIL 2009: Open Access: Green and Gold

Presenter: Shane Beers

Green open access (OA) is the practice of depositing and making available a document on the web. Most frequently, these are peer-reviewed research and conference articles. This is not self-publishing! OA repositories allow institutions to store and showcase their research output, thus increasing its visibility within the academic community.

Institutional repositories are usually managed by either DSpace, Fedora, or EPrints, and there are third-party external options using these systems. There are also a few subject-specific repositories not affiliated with any particular institution.

The "serials crisis" means most libraries cannot subscribe to every journal their researchers need. OA eliminates this problem by making relevant research available to anyone who needs it, regardless of economic barriers.

A 2008 study showed that less than 20% of all scientific articles published were made available in a green or gold OA repository. Self-archiving sits at a low 15%, and incentives increase it to only about 30%. Researchers and their work habits are the greatest barrier that OA repository managers encounter. The only way to guarantee 100% self-archiving is an institutional mandate.

Copyright complications are also barriers to adoption. Post-print archiving is the most problematic, particularly as publishers continue to resist OA and prohibit it in author contracts.

OA repositories are not self-sustaining. They require top-down dedication and support, not only for the project as a whole, but also for equipment/service and staff costs. A single "repository rat" model is rarely successful.

The future? More mandates, peer-reviewed green OA repositories, expanding repositories to encompass services, and integration of OA repositories into the workflow of researchers.

Presenter: Amy Buckland

Gold open access is about having no price or permission barriers. No embargoes; post-print archiving is immediate.

The Public Knowledge Project’s Open Journal Systems is an easy tool for creating an open journal that includes all the capabilities of online multimedia. First Monday, for example, uses it.

Buckland wants libraries to become publishers of content by making the platforms available to the researchers. Editors and editorial boards can come from volunteers within the institution, and authors just need to do what they do.

Publication models are changing. Many granting agencies are requiring OA components tied to funding. The best part: everyone in the world can see your institution’s output immediately!

Installation of the product is easy — it’s getting the word out that’s hard.

Libraries can make the MARC records freely available, and ensure that the journals are indexed in the Directory of Open Access Journals.

Doing this will build relationships between faculty and the library. Libraries become directly involved in the research output of faculty, which makes libraries more visible to administrators and budget decision-makers. University presses are struggling, but even though they are focused on revenue, OA journal publishing could enhance their visibility and status. Also, if you publish OA, the big G (and other search engines) will find it.

CIL 2009: Open Source Library Implementations

Speakers: Karen Kohn and Eric McCloy

They are preparing to move from a traditional ILS to Koha.

They were frustrated with not being able to get at the data in their system, and it was cost-prohibitive to gain access. The user interface was locked into a particular design that was difficult to modify or make ADA-compliant. Staff clients had to be updated on each computer, which was a time-consuming process.

They have a strong IT-library partnership, which meant they knew they could work with a system that needs that kind of support.

How they did it (fall 2008): dropped the "discovery layer" product from their ILS, used the savings to get their federated search working, started doing nightly dumps of records from the ILS to Koha (using the ILS as the back end and Koha as the user interface), designed a web interface (Drupal), and set up a Z39.50 interface to search Koha. Eventually they will open Koha up for native searching. Currently they are testing and debugging, with plans to roll out the migration in summer 2009.
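The nightly-dump half of that setup boils down to selecting records touched since the last run and handing them to Koha’s bulk-import step. A minimal sketch, with invented record fields and no real ILS or Koha calls:

```python
# Hypothetical one-way nightly sync: the ILS remains the system of record,
# and Koha's catalog is refreshed from whatever changed since the last run.
# Record shape ("bibid", "updated", "marc") is invented for illustration.
def records_to_reload(ils_records, last_sync):
    """Select records updated since the previous nightly run.

    ISO date strings compare correctly as plain strings.
    """
    return [r for r in ils_records if r["updated"] > last_sync]

ils_records = [
    {"bibid": "b1", "updated": "2009-04-01", "marc": "..."},
    {"bibid": "b2", "updated": "2009-04-03", "marc": "..."},
]

# Only b2 changed since the last sync on 2009-04-02.
print([r["bibid"] for r in records_to_reload(ils_records, "2009-04-02")])
```

In practice the selected records would be exported as MARC and loaded with Koha’s bulk import tooling; the point is that the sync logic itself is small.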

Every once in a while, librarians need to be reminded that [insert tech glitch] is only temporary and everything will be corrected eventually, so don’t fret over the details to the point of getting distracted from the goal.

[Commentary: My library looked at Koha last year, but we need something that can handle acquisitions and serials, and Koha wasn’t at that point yet. It did not occur to me, and I don’t think it occurred to anyone else, that we could use our current ILS for staff and administrative things and make Koha our user interface, which is where it currently excels.]

CIL 2009: Unconferences

Presenters: Steve Lawson, Stephen Francoeur, John Blyberg, and Kathryn Greenhill

KG began by asking the audience to share what questions they have about unconferences while SL took notes on a flip chart. Lots of good questions covering a variety of aspects, including all the questions I have.

Keep in mind: whoever comes is the right people; whatever happens is the only thing that could have happened; whenever it starts is the right time. Also keep in mind the law of two feet: if you are in a situation where you are neither contributing nor receiving anything of value, then change that or leave.

Many people say they get a lot out of the spaces in between sessions at traditional conferences, and this is what unconferences try to capture. Libraries can also host general unconferences, such as the one Deschamps organized in Halifax. It doesn’t have to be just library stuff.

You can’t prepare for every aspect of an unconference. You can prepare the space and request that specific people be there, but in the end, its success is based on the engagement of the participants.

Unconferences are casual. You do need to decide the level of casual, such as how much you want to borrow from traditional conference amenities and structure. Organizational sponsorships should be limited to affiliation and financial support — avoid letting them dictate what will happen.

Keynote sessions can influence the conversations that happen afterwards, so be deliberate about whether or not to have one.

The less you have to deal with money the better, and there are pros and cons to having fees. Keep the threshold low to encourage participation. Every day is a bad day for somebody, so just choose a date and time.

Tip: organize an unconference the day before or after a national conference. Folks are coming and going, and it’s easier to schedule that time in and to get institutional funding.

Make use of social software for promoting and organizing the unconference (wikis are good), and also use it for continuing the conversation.

Free as in beer, free as in kittens, and now free as in someone else is paying. Make use of the resources of the participants’ institutions.

Swag keeps the connection, and if you’re creative, it’s also useful. SF showed the notebooks that they handed out at LibCampNYC, which were branded versions of something like Moleskine notebooks. Hand out the swag at the beginning, along with notes about how the unconference will work and an outline of a schedule (if you have one).

You can build communities through unconferences that then are agile enough to continue the interaction and spontaneous gatherings.

"If you feed them they will come. If you give them liquor they will come the next time." — John Blyberg

CIL 2009: Social Network Profile Management

Speaker: Greg Schwartz

Who are you online? Identity is what you say about you and what others say about you. However, it’s more than just that. It includes the things you buy, the tools you use, the places you spend your time, etc.

You do not own your online identity. You can’t control what people find out about you, but you can influence it.

  1. Own your user name. Pick one and stick to it. Even better if you can use your real name. (checkusernames.com)
  2. Join the conversation. Develop your identity by participating in social networks.
  3. Listen. Pay attention to what other people are saying about you.
  4. Be authentic. Ultimately, social networking is about connecting your online identity to your in-person identity.

Speaker: Michael Porter

MP was the project manager for the social tools on WebJunction. It’s designed to be for librarians and library staff.

If you are representing your organization online, be yourself, but also be sensitive to how that could be perceived. Share your library success stories!

Speaker: Sarah Houghton-Jan

Library online identities should be created with a generic email address, should be up-to-date, and should allow comment and interaction with users. Keep the tone personable.

Don’t use multiple identities. Make sure that someone is checking the contact points. You’ll get better results if you disperse the responsibility for library online identities across your institution rather than relying on one person to manage it all.

Speaker: Amanda Clay Powers

People have been telling their stories for a long time, and online social networks are just another tool for doing that. Some people are more comfortable with this than others. It’s our role to educate people about how to manage their online identities; however, our users don’t always know that librarians can help them.

On Facebook, you can manage social data by creating friends lists. This functionality is becoming more important as online social networks grow and expand.

CIL 2009: New Strategies for Digital Natives

Speaker: Helene Blowers

Begins with a video of a one-year-old unlocking an iPhone, starting up a Preschool Adventure game, and then paging through images in the photo gallery. Joey is a digital native and the future of library users.

Digital natives are those born after 1980. When they were one, IBM distributed the first commercial PC. Cellular phones were introduced when they were three. By the time they were fourteen, the internet had been born.

Web 1.0 was built on finding stuff; Web 2.0 was built on connecting with other users and sharing information. Digital natives are used to not only having access to the world through the internet, but also engaging with it.

Business Week categorized users by how they interact with the internet and their generation. This clearly lays out the differences between how the generations use this tool, and it should inform the way we approach library services to them.

Digital native realities:

  • Their identity online is the same as their in-person identity. They grew up developing both at the same time, unlike those who came before them. Facebook, MySpace, Twitter, Flixster, and LinkedIn are the top five online social networks, according to a report in January. How many of them do you have an identity in?
  • The ability to create and leave your imprint somewhere is important to digital natives. According to the Pew Internet & American Life Project, those who participate in social networks are more likely to create unique content than those who do not.
  • We are seeing a shift from controlled information to collaborative information, so digital information quality has become important, and a personal responsibility, to digital natives. A study showing that Wikipedia was as accurate as Britannica resulted in EB adding a wiki layer to its online presence.
  • Digital natives have grown up in a world they believe to be safe, whether it is or not. Less than 0.08% of students say that they have met someone online without their parents’ knowledge, and about 65% say that they ignored or deleted strangers who tried to contact them online. However, that doesn’t stop them from intentionally crossing that line in order to rebel against rules.
  • Digital opportunity is huge. There are no barriers, the playing field has been leveled, access is universal, connection is ubiquitous, and it’s all about me.
  • Digital sharing is okay. It’s just sharing. They aren’t concerned with copyright or ownership. Fanfic, mashups, remixes, parodies… Creative Commons has changed the way we look at ownership and copyright online.
  • Privacy online and in their social networks is not much of a concern. Life streams aggregate content from several social networks, providing the big picture of someone’s online life.
  • What you do online makes a difference — digital advocacy. This was clear during the US presidential election last year.

What does this mean for libraries? How do we use this to support the information needs of our users?

Think about ways to engage with virtual users — what strategies do we need in order to connect library staff and services with users in meaningful ways? Think about ways to enrich the online experience of users that then enhances their experiences in the physical library and their daily lives. Think about ways to empower customers to personalize and add value to their library experience so that they feel good about themselves and their community.

CIL 2009: What Have We Learned Lately About Academic Library Users

Speakers: Daniel Wendling & Neal K. Kaske, University of Maryland

How should we describe information-seeking behavior?

A little over a third of the students interviewed reported that they used Google in their last course-related search, and it’s about the same across all classes and academic areas. A little over half of the same students surveyed used ResearchPort (federated search – MetaLib), with a similar spread between classes and academic areas, although social sciences clearly use it more than the other areas. (Survey tool: PonderMatic – a copy of the survey form is in the conference book.)

Their methodology was a combination of focus-group interviews and individual interviews, conducted away from the library to avoid bias. They used a coding sheet to standardize the responses for input into a database.

This survey gathering & analysis tool is pretty cool – I’m beginning to suspect that the presentation is more about it than about the results, which are also rather interesting.


Speaker: Ken Varnum

Will students use social bookmarking on a library website?

MTagger is a library-based tagging tool, borrowing concepts from resources like Delicious or social networking sites, and intended to be used to organize academic bookmarks. In the long term, the hope is that this will create research guides in addition to those supported by the librarians, and to improve the findability of the library’s resources.

Behind the scenes, they have preserved the concept of collections, which results in users finding similar items more easily. This is different from the commercial tagging tools that are not library-focused. Most tagging systems are tagger-centric (librarians are the exception). As a result, tag clouds are less informative, since most of the tags are individualized and there isn’t enough overlap to make them more visible.
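A quick sketch of why sparse overlap flattens a tag cloud: cloud weights are just scaled tag frequencies, so when most taggers use individualized terms, nearly every count is one and almost everything renders at the smallest size. The tags and scaling below are illustrative only.

```python
from collections import Counter

# A few shared tags ("econ101", "stats") among mostly one-off personal tags.
tags = ["econ101", "my-paper", "stats", "econ101", "to-read", "stats", "econ101"]

def cloud_weights(tags, levels=3):
    """Map each tag's frequency onto 1..levels display sizes."""
    counts = Counter(tags)
    top = max(counts.values())
    return {t: 1 + (levels - 1) * (c - 1) // max(top - 1, 1)
            for t, c in counts.items()}
```

With heavy overlap the weights spread across sizes and the cloud is informative; with tagger-centric one-off tags, `counts` collapses toward all ones and the cloud goes flat.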

From usability interviews, they found that personal motivations are stronger than social motivations, and that users wanted tags displayed alongside traditional search results. They don’t know why, but many users perceived tagging to be a librarian thing and not something they could do themselves.

One other thing that stood out in the usability interviews was the issue of privacy. Access is limited to a network login, which has its benefits (your tags stay tied to you) and its problems (inappropriate terminology, information living in the system beyond your tenure, etc.).

They are redesigning the website to focus on outcomes (personal motivation) rather than on tagging as such.