three weeks later – take-aways from ER&L 2014

With barely half a day to catch my breath, I jumped from ER&L into the complexities of gender issues at the intersection of libraries and technology via the LTG Summit. As a result, it’s taken me some time to go back over my notes from ER&L and pull out the things that are most salient, or the themes that kept resurfacing.

ebooks
Ebooks in a library setting are still pretty much a pain in the ass. Some sources are doing better about DRM and functionality, but the aggregators are still offering less than optimal solutions. Let’s not even mention ILL rights.

One thing that really struck me was how we all keep thinking the Sciences will be all over this ebook thing, since they took to ejournals like white on rice. However, we’ve managed to forget that the Sciences were never all that into print books compared to their love affair with print journals, so that’s not likely to change much with the shift in format.

On the other hand, the Social Sciences are taking to ebooks pretty well. They’re more willing than other disciplines to use the relatively crappy versions on aggregator platforms, per some research on ebrary and EBL usage over the past few years.

workflows
We’re still trying to figure out how to incorporate the quirks of eresources into workflow models that were developed in the offline age of print. Division by format works only if the formats remain divided, but print and electronic often come bundled together, and sometimes it’s hard to tell whether something is a book or a serial.

Larger institutions are doing a lot of work on reorganizing and retraining, some better than others. I’m still not sure how to handle this in my Acquisitions team of four, with Cataloging in a different division. Communication seems to be key, along with acting as a telephone switchboard, redirecting requests to the proper individual.

ER&L 2014 — Mining and Application of Diverse Cultural Perspectives in User-Generated Content

“Coal Mining in Brazil” by United Nations Photo

Speaker: Brent Hecht, Ph.D., University of Minnesota

His research sits at the junction of human-computer interaction, geography, and big data. What you won’t see here is libraries.

Wikipedia has revolutionized computing in two ways: it gives users access to a large repository of knowledge, and it is hugely popular with many people. In certain cases it has become the brains of modern computing.

However, it’s written by people who exist in a complex cultural context: region, gender, religion, etc. The cultural biases of Wikipedia have an impact on related computer processes. Librarians and educators are keenly aware of the caveats of Wikipedia such as accuracy and depth, but we also need to think about cultural bias.

Wikipedia exists in a large number of languages, but until recently not much has been understood about the relationships between them. Computer scientists tend to assume that larger language editions are supersets of smaller ones and that concepts are treated consistently across them. Social scientists know that each cultural community defines things differently and will cover its own unique set of concepts. Several studies have looked into this.

A vast majority of concepts appear in only one language. Only a fraction of a percent are in all languages.

If you only read the English article about a concept that has an article in at least one other language edition, you are missing about 29% of the information that you could have gotten if you could read that other article.

Some of the differences are due to cultural factors. Each language edition will have a bias towards countries where the language is prominent.

What can we do to take advantage of this? Omnipedia tries to break down the language silos to provide a diverse repository of world knowledge, highlighting the similarities and differences between the versions. The interface can be switched to display and search in any of the 25 languages covered.
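
Omnipedia’s actual pipeline wasn’t covered in the talk, but the core idea of checking which other language editions cover a concept can be sketched against MediaWiki’s public langlinks API. A rough illustration in Python (the helper name and the example title are my own, not from the presentation):

```python
import requests

def language_editions(title, lang="en"):
    """Return the set of other Wikipedia language editions that cover a concept.

    A rough sketch of the cross-language comparison idea behind Omnipedia,
    not its actual implementation, using MediaWiki's public langlinks API.
    """
    url = f"https://{lang}.wikipedia.org/w/api.php"
    params = {
        "action": "query",
        "titles": title,
        "prop": "langlinks",
        "lllimit": "max",
        "format": "json",
    }
    data = requests.get(url, params=params, timeout=10).json()
    pages = data["query"]["pages"]
    links = next(iter(pages.values())).get("langlinks", [])
    return {link["lang"] for link in links}

if __name__ == "__main__":
    editions = language_editions("Coal mining")
    print(f"'Coal mining' also appears in {len(editions)} other language editions")
```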

Search engines are good for closed informational requests and navigational queries, but not so great for exploratory search. Atlasify tries to map concepts to regions. When the user clicks on entities in the map, it displays (in natural language) the relationship between the query and the location. They know this kind of mapping doesn’t work for every concept, but the idea of mapping search query concepts can be applied to other visualizations, like the periodic table or congressional seat assignments.

Bear in mind, though, that all of these tools are sensitive to the biases of their data sources. If they use only the English Wikipedia, they can miss important pieces, or worse, perpetuate the cultural biases.

ER&L 2014 — Building the Eresources Team: the MIT Libraries Experience

“DC Hero Minifigs (most of them)” by Julian Fong

Speaker: Kim Maxwell

Goal is to be more of a dialogue than a monologue.

In 2011, they were a traditional acquisitions and cataloging department. They had 18.1 FTE in technical services, with 8 acquisitions people working on both print and electronic, and 5 in cataloging. It felt very fragmented.

They were getting more eresources but no new staff. Print was declining, but staff weren’t interchangeable. The hybrid positions weren’t working well, and print was still seen as a priority by some of the staff: they could see the print backlogs, which made it seem like those had to be dealt with first.

They hired consultants and decided to create two format-based teams: tangible formats and electronic resources. They defined the new positions, asked staff for their preferences, and then assigned staff to one team or the other. Within each team, the leads are divided by function (cataloging and acquisitions) rather than by format.

To implement this they: oriented and trained staff; created workflow teams for ejournals, ebooks, and databases; talked with staff extensively; tried to be as transparent as possible; and hired another librarian.

They increased the FTE working on eresources, and they could use more, but this is good enough for now.

Some of the challenges include: staff buy-in and morale; communicating who does what to all the points of contact; workflows for orders with dual formats; budget structure (monographs/serials, with some simplification where possible, but still not tangible/electronic); and documentation organization (documenting isn’t the problem, finding it is).

The benefits are: staff focus on a single format; acquisitions and cataloging are brought together (better communication between functions); easier cross-training opportunities; more easily streamlined workflows; and easier planning and prioritizing.

ER&L 2014 — Practitioner Perspectives on the Core Competencies for Electronic Resource Librarians: Preliminary Results of a Qualitative Study

“Cats that Webchick is herding” by Kathleen Murtagh

Presenter: Sheri Ross, St. Catherine University

She teaches a course on ERM, which is why she’s doing this research. She initially planned to do a comprehensive job ad analysis and then look at LIS syllabi to see if we were meeting the needs. Then she found Sarah Sutton’s 2011 dissertation that did most of this already. So, she changed her strategy to evaluate the competencies once they were adopted.

She interviewed ER librarians about their work, their institution’s workflow, and their perspectives on the competencies. She targeted jobs posted to ERIL from 2008-2012 and followed up with the folks who were hired. Of those identified (42), 16 responded to her inquiry.

Most had paraprofessional experience with web design, reference, serials, ILL, and archives. They received their degrees between 1980 and 2012. Most have reference/instruction and collection development/subject liaison responsibilities. Median FTE at their institutions was 13,900.

Their typical day “depends on the time of year,” which affirms the importance of understanding the lifecycle of ERM. Work seems to be most intense at the beginning and end of the semester, which the lifecycle doesn’t cover as explicitly.

Though they had many different roles, there were some commonalities: troubleshooting access and other issues, serving as the primary point person for vendor communication, and working closely with subject specialists and systems/IT personnel.

The interview lumped the Core Competencies into four broad areas without specifying the source: technical, analytical, legal, and interpersonal. The participants were asked to rank these.

Least important were the legal competencies, in part because many institutions had legal departments that could make sure that they were signing reasonable licenses, or they were negotiated at the consortial level. They also felt that identifying library-related clauses becomes rote after a short period of time.

Third most important were the technical competencies, in part because it’s all in flux and you’ll have to learn something new tomorrow anyway. The ability to learn is more important than what you know right now. The most important skills were website/database design and Excel. A certain degree of technical savvy was important for communicating with internal and external technical contacts. Some felt that having a deep knowledge of cataloging as a core competency was off-base, possibly because the metadata department usually handles all formats and ER librarians are typically not involved with that.

Second most important was analytical.

Most important were the interpersonal competencies. You need to be able to understand what is happening with so many key contacts, from students to faculty to colleagues to vendors. Good relationships with vendors can influence product development and help you get good customer service. Collaboration is huge.

Moving forward, she plans to continue the analysis of the open-ended questions and compare with the Core Competencies. She will evaluate the MLIS program curricular pathways and syllabi for ERM courses to get at the unique competencies that are not covered by ALA requirements.

Really admires the comprehensiveness of the NASIG Core Competencies.

Question about the dearth of ERM courses in our LIS programs. Speaker noted she is doing this to promote ERM education because everyone coming out of LIS should know something of this, like we all learned cataloging and reference.

Is 16 a good sample? Yes, for qualitative research. Each interview was 1.5 hours and took about 6 hours to transcribe. Her colleagues thought it was a great sample, too. If she did more, it would be a questionnaire based on this qualitative research to get a broader response.

What were the reasons people were enthusiastic about Excel? It has a lot to do with not having much luck implementing an ERMS, so they use Excel to manage their administrative data instead: title analysis, generating stats, use reports, etc. “I try to develop a new Excel technique every week.”
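
For what it’s worth, the kind of title analysis respondents described in Excel translates readily to a scripted version. A minimal sketch in Python with pandas, assuming a COUNTER JR1-style spreadsheet with a “Journal” column and a “Reporting Period Total” column; the file name and exact column labels are assumptions, not something from the session:

```python
import pandas as pd

# A COUNTER JR1-style spreadsheet: one row per journal title, a
# "Reporting Period Total" column, and one column per month.
# File name and column labels are assumptions for illustration.
jr1 = pd.read_excel("jr1_2013.xlsx", skiprows=7)  # skip the report header rows; count may vary

# Top 20 titles by total full-text requests for the year
top_titles = (
    jr1[["Journal", "Reporting Period Total"]]
    .sort_values("Reporting Period Total", ascending=False)
    .head(20)
)
print(top_titles.to_string(index=False))

# Titles with zero recorded use, candidates for cancellation review
zero_use = jr1.loc[jr1["Reporting Period Total"] == 0, "Journal"]
print(f"{len(zero_use)} titles had no recorded use this period")
```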

Question for us: Is there a strong distinction between digital librarianship and licensed content librarianship? Response: Yes!

Question for us: the relationship between ERM and cataloging. Response: It’s good for us to know some cataloging in order to communicate (more than, say, a reference librarian), but we don’t do the work.

Respondents said, for the most part, that no one else in their library could do their job if they got hit by a bus. The main reason: no one else has the comprehensive vision of what goes into managing ERM.

ER&L 2014 — Beyond COUNTER: The changing definition of “usage” in an open access economy

Speakers: Kathy Perry (VIVA), Melissa Blaney (American Chemical Society), and Nan Butkovitch (Pennsylvania State University)

In 1998, ICOLC (the International Coalition of Library Consortia) created guidelines for delivering usage information, and it has since endorsed COUNTER and SUSHI. COUNTER works because all the players are involved and agree to reasonable timeframes.

COUNTER Code of Practice 4 now covers multimedia and the tracking of use through mobile devices.

PIRUS (Publisher and Institutional Repository Usage Statistics) is the next step, but they are going to drop the term and incorporate it as an optional report in COUNTER (Article Report 1). There is a code of practice and guidelines on the website.

The Usage Factor metric is a tool for assessing journals that aren’t covered by the impact factor. It won’t be comparable across subject groups because they are measuring different things.
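
As I understand the Usage Factor proposal, it is based on the median of per-item usage over a defined window rather than the citation counts behind the impact factor, which is part of why values won’t compare across subject groups. A toy sketch of that idea (invented numbers, and not the official COUNTER/UKSG formula):

```python
from statistics import median

# Toy data: full-text requests in one year for articles a journal published
# over the previous two years. Invented numbers, purely to illustrate the
# idea of a median-based usage metric; the official spec differs in detail.
downloads_per_article = [4, 0, 12, 7, 3, 150, 9, 2, 5, 6]

usage_factor = median(downloads_per_article)
mean_usage = sum(downloads_per_article) / len(downloads_per_article)

print(f"median-based usage factor: {usage_factor}")  # 5.5, robust to the 150 outlier
print(f"mean usage per article:    {mean_usage}")    # 19.8, skewed by one popular article
```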

If your publishers are not COUNTER compliant, ask them to become compliant.

ACS chose to go to COUNTER 4 in part because it covers all formats. They like being able to highlight usage of gold open access titles and denials due to lack of license. They also appreciated the requirement for the ability to provide JR5, which reports usage by year of publication.

Big increases in search can also mean that people aren’t finding what they want.

ACS notes that users are increasingly coming from Google, Mendeley, and other indexing sources, rather than the publisher’s site itself.

They hear a lot that users want platforms that allow sharing and collaborating across disciplines and institutions. Authors want to measure the impact of their work in both traditional and new ways.

A science librarian suggests using citation reports to expand on the assessment of usage reports, if you have time for that sort of thing and only care about journals that are covered by ISI.

Chemistry authors have been resistant to open access publishing, particularly if they think they can make money off of a patent, etc. She thinks it will be useful to have OA article usage information, but it needs to be put in the context of how many OA articles are available.

What you want to measure in usage can determine your sources. Every measurement method has bias. Multiple usage measurements can have duplication. A new metric is just around the corner.