ER&L 2013: Lightning Talks

“¡Rayos!” (“Lightning!”) by José Eugenio Gómez Rodríguez

Speaker: Emily Guhde, NCLIVE
“We’ve Got Your Number: Making Usage Data Matter” is the project they are working on. What is a good target cost per use for their member libraries? They are organizing this by peer groups. How can the member libraries improve usage? They are hoping that other libraries will be able to replicate this in the future.

Speaker: Francis Kayiwa, UIC
He is a server administrator with library training, and wanted to be here to understand what it is his folks are coming back and asking him to do. Cross-pollinate conferences — try to integrate other kinds of conferences happening nearby.

Speaker: Annette Bailey, Virginia Tech
Co-developed LibX with her husband, now working on a new project to visualize what users are clicking on after they get a search result in Summon. This is a live, real-time visualization, pulled from the Summon API.

Speaker: Angie Rathnel, University of Kansas
Have been using a SaaS tool called Callisto to track and claim eresources. It tracks access to entitlements daily/weekly, and can check to make sure proxy configurations are set up correctly.

Speaker: Cindy Boeke, Southern Methodist University
Why aren’t digital library collections included with other library eresources on lists and such (like the ubiquitous databases A-Z page)?

Speaker: Rick Burke, SCELC
SIPX to manage copyright in a consortial environment. Something something users buying access to stuff we already own. I’m guessing this is more for off-campus access?

Speaker: Margy Avery, MIT Press
Thinking about rich/enhanced digital publications. Want to work with libraries to make this happen, and preservation is a big issue. How do we catalog/classify this kind of resource?

Speaker: Jason Price, Claremont Colleges
Disgruntled with OpenURL and the dependency on our knowledge base (KB) for article-level access. It is challenging to keep our lists (KBs) updated and accurate — there has to be a better way. We need to be working with the disgrunterati who are creating startups to address this problem. Pubget was one of the first, and since then there is Dublin Six, ReadCube, SIPX, and Callisto. If you get excited about these things, contact the startups and tell them.

Speaker: Wilhelmina Ranke, St. Mary’s University
Collecting mostly born-digital collections, or at least collections that are already digitized, in the repository: student newspaper, video projects, and items digitized for classroom use that have no copyright restrictions. It doesn’t save time on indexing, but it does save time on digitizing.

Speaker: Bonnie Tijerina, Harvard
The #ideadrop house was created to be a space for librar* to come together to talk about librar* stuff. They had a little free library box for physical books, and also a collection of wireless boxes with free digital content anyone could download. They streamed conversations from the living room 5-7 times a day.

Speaker: Rachel Frick
Digital Public Library of America focuses on content that is free to all to create a more informed citizenry. They want to go beyond just being a portal for content. They want to be a platform for community involvement and conversations.

CIL 2010: The Power in Your Browser – LibX & Zotero

Speaker: Krista Godfrey

She isn’t going to show how to set up LibX or Zotero, but rather how to use them to create life-long learners. Rather than teaching students proprietary tools like RefWorks, teaching them tools they can still use after graduation will support their continued research needs.

LibX works in IE and Firefox, and a Chrome version is in the works. It fits into the search and discovery modules in the research cycle. The toolbar connects to the library catalog and other tools, and right-click menu search options are available on any webpage. It will also embed icons in places like Amazon that link to catalog searches, and any page with a document identifier (DOI, ISSN) will present that identifier as a link to a catalog search.
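The identifier-linking behavior described above can be sketched in a few lines. This is a hypothetical illustration of the idea, not LibX's actual code: the catalog URL, query parameter, and DOI pattern are all assumptions.

```python
import re

# Sketch of the identifier-linking idea: scan page text for DOIs and
# rewrite each one as a link to a catalog search. The base URL and
# parameter name below are invented for illustration.
CATALOG_SEARCH = "https://catalog.example.edu/search?doi="

# Rough DOI shape: "10.", a registrant prefix, "/", then a suffix.
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/[^\s\"<>]+")

def link_identifiers(text: str) -> str:
    """Replace bare DOIs in `text` with HTML links to a catalog search."""
    def to_link(match: re.Match) -> str:
        doi = match.group(0)
        return f'<a href="{CATALOG_SEARCH}{doi}">{doi}</a>'
    return DOI_PATTERN.sub(to_link, text)

print(link_identifiers("See doi 10.1000/xyz123 for details."))
```

LibX does this kind of rewriting inside the browser DOM rather than on raw text, but the pattern-match-and-link step is the same shape.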

Zotero is only in Firefox, unfortunately. It’s a reference management tool that allows you to collect, manage, cite, and share, filling in the rest of the modules in the research cycle. It will collect anything, archive anything, and store any attached documents. You can add notes and tags, and enhance the metadata. The citation process works in Word, OpenOffice, and Google Docs, with a program similar to Write-N-Cite, or by dragging and dropping the citation where you want it to go.

One of the downsides to Zotero when it first came out was that it lived in only one browser on one machine, but the new version comes with server space that you can sync your data to, which allows you to access your data from other browsers and machines. You can create groups and share documents within them, which would be great for a class project.

Why aren’t we teaching Zotero/LibX more? Well, partially because we’ve spent money on other stuff, and we tend to push those more. Also, we might be worried that if we give our users tools to access our content without going through our doors, they may never come back. But, it’s about creating life-long learners, and they won’t be coming through our doors when they graduate. So, we need to teach them tools like these.

ER&L 2010: Adventures at the Article Level

Speaker: Jamene Brooks-Kieffer

Article level, for those familiar with link resolvers, is the best link type to give users. The article is the object of pursuit, and the library and the user collaborate on identifying it, locating it, and acquiring it.

In 1980, the only good article-level identification was the Medline ID. Users would need to go through a qualified Medline search to track down relevant articles, and the library would need the article level identifier to make a fast request from another library. Today, the user can search Medline on their own; use the OpenURL linking to get to the full text, print, or ILL request; and obtain the article from the source or ILL. Unlike in 1980, the user no longer needs to find the journal first to get to the article. Also, the librarian’s role is more in providing relevant metadata maintenance to give the user the tools to locate the articles themselves.

In thirty years, the library has moved from being a partner with the user in pursuit of the article to being the magician behind the curtain. Our magic is made possible by the technology we know but that our users do not know.

Unique identifiers solve the problem of making sure that you are retrieving the correct article. CrossRef can link to specific instances of items, but not necessarily the one the user has access to. The link resolver will use that DOI to find other instances of the article available to users of the library. Easy user authentication at the point of need is the final key to implementing article-level services.
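As a rough sketch of how a DOI travels through this flow, the following builds a KEV-style OpenURL 1.0 query that hands the identifier to a link resolver, which then locates an instance of the article the library's users can access. The resolver base URL and the example DOI are assumptions; only the `url_ver` and `rft_id` conventions come from the Z39.88-2004 standard.

```python
from urllib.parse import urlencode

# Hypothetical link resolver endpoint for illustration only.
RESOLVER_BASE = "https://resolver.example.edu/openurl"

def openurl_for_doi(doi: str) -> str:
    """Build an OpenURL 1.0 (KEV) request carrying a DOI as the identifier."""
    params = {
        "url_ver": "Z39.88-2004",                      # OpenURL 1.0 version tag
        "rft_id": f"info:doi/{doi}",                   # the article's identifier
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal", # journal article metadata format
    }
    return f"{RESOLVER_BASE}?{urlencode(params)}"

print(openurl_for_doi("10.1000/xyz123"))
```

The resolver receiving this request is where the "magic" happens: it consults the knowledge base to map the DOI to whichever full-text instance the library has licensed.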

One of the library’s biggest roles is facilitating access. It’s not as simple as setting up a link resolver – it must be maintained or the system will break down. Also, document delivery service provides an opportunity to generate goodwill between libraries and users. The next step is supporting the user’s preferred interface, through tools like LibX, Papers, Google Scholar link resolver integration, and mobile devices. The latter is the most difficult, because much of the content comes from outside service providers, and institutional support for developing applications or web interfaces is limited.

We also need to consider how we deliver the articles users need. We need to evolve our acquisitions process. We need to be ready for article-level usage data, so we need to stop thinking about it as a single-institutional data problem. Aggregated data will help spot trends. Perhaps we could look at the ebook pay-as-you-use model for article-level acquisitions as well?
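The aggregation idea above can be sketched simply: merge per-institution, per-article usage counts so that cross-library trends surface. The DOIs and counts below are invented for illustration.

```python
from collections import Counter

def aggregate_usage(*institution_counts: dict) -> Counter:
    """Combine per-institution article usage counts, keyed by DOI."""
    total = Counter()
    for counts in institution_counts:
        total.update(counts)  # adds counts per key rather than replacing
    return total

# Invented sample data: two libraries' download counts for the same articles.
lib_a = {"10.1000/xyz123": 40, "10.1000/abc456": 3}
lib_b = {"10.1000/xyz123": 55}

print(aggregate_usage(lib_a, lib_b).most_common(1))
# → [('10.1000/xyz123', 95)]
```

An article heavily used across many institutions is exactly the trend a single library's data would miss, and a natural candidate for a pay-as-you-use, article-level purchase model.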

PIRUS & PIRUS 2 are projects to develop COUNTER-compliant article usage data for all article-hosting entities (both traditional publishers and institutional repositories). Projects like MESUR will inform these kinds of ventures.

Libraries need to be working on recommendation services. Amazon and Netflix are not flukes. Demand, adopt, and promote recommendation tools like bX or LibraryThing for Libraries.

Users are going beyond locating and acquiring the article to storing, discussing, and synthesizing the information. The library could facilitate that. We need something that lets the user connect with others, store articles, and review recommendations that the system provides. We have the technology (magic) to make it available right now: data storage, cloud applications, targeted recommendations, social networks, and pay-per-download.

How do we get there? Cover the basics of identify>locate>acquire. Demand tools that offer services beyond that, or sponsor the creation of desired tools and services. We also need to stay informed of relevant standards and recommendations.

Publishers will need to be a part of this conversation as well, of course. They need to develop models that allow us to retain access to purchased articles. If we are buying on the article level, what incentive is there to have a journal in the first place?

For tenure and promotion purposes, we need to start looking more at the impact factor of the article, not so much the journal-level impact. PLOS provides individual article metrics.
