With barely half a day to catch my breath, I jumped from ER&L into the complexities of gender issues in the workplace where libraries and technology intersect via the LTG Summit. As a result, it’s taken me some time to go back over my notes from ER&L and pull out the things that are most poignant, or the themes that kept resurfacing.
Ebooks in a library setting are still pretty much a pain in the ass. Some sources are doing better about DRM and functionality, but the aggregators are still offering less than optimal solutions. Let’s not even mention ILL rights.
One thing that really struck me was how we all keep thinking the Sciences will be all over this ebook thing, since they took to ejournals like white on rice. However, we’ve managed to forget that the Sciences weren’t all that into print books compared to their love affair with print journals, so that isn’t going to change much with the shift in format.
On the other hand, Social Sciences are gravitating towards ebooks pretty well. They’re more willing than the other disciplines to use the relatively crappy versions on aggregator platforms, per some research being done on eBrary and EBL usage over the past few years.
We’re still trying to figure out how to incorporate the quirks of eresources into our workflow models that were developed in the offline age of print. Division by format works only if the formats remain divided, but print and electronic often come bundled, and sometimes it’s hard to tell whether something is a book or a serial.
Larger institutions are doing a lot of work on reorganizing and retraining, some better than others. I’m still not sure how to handle this in my Acquisitions team of four, with Cataloging in a different division. Communication seems to be key, along with acting as a telephone switch, redirecting requests to the proper individual.
His research is at the junction of human/computer interaction, geography, and big data. What you won’t see here is libraries.
Wikipedia has revolutionized computing in two ways: by giving users access to a large repository of knowledge and by being hugely popular with many people. In certain cases it has become the brains of modern computing.
However, it’s written by people who exist in a complex cultural context: region, gender, religion, etc. The cultural biases of Wikipedia have an impact on related computer processes. Librarians and educators are keenly aware of the caveats of Wikipedia such as accuracy and depth, but we also need to think about cultural bias.
Wikipedia exists in a large number of languages, but until recently, not much has been understood about the relationships between them. Computer scientists assume that larger language editions are supersets of smaller ones and conceptually consistent across them. Social scientists know that each cultural community defines things differently and will cover its own unique set of concepts. Several studies have looked into this.
The vast majority of concepts appear in only one language. Only a fraction of a percent appear in all languages.
If you only read the English article about a concept that has an article in at least one other language edition, you are missing about 29% of the information that you could have gotten if you could read that other article.
Some of the differences are due to cultural factors. Each language edition will have a bias towards countries where the language is prominent.
What can we do if we take advantage of this? Omnipedia tries to break down the language silos to provide a diverse repository of world knowledge, and highlights the similarities and differences between the versions. The interface can be switched to display and search in any of the 25 languages covered.
Search engines are good for closed informational requests and navigational queries, but not so great for exploratory search. Atlasify tries to map concepts to regions. When the user clicks on entities on the map, it displays (in natural language) the relationship between the query and the location. They know this kind of mapping doesn’t work for every concept, but the idea of mapping search query concepts can be applied to other visualizations, like the periodic table or congressional seat assignments.
Bear in mind, though, that all of these tools are sensitive to the biases of their data sources. If they use only the English Wikipedia, they can miss important pieces, or worse, perpetuate the cultural biases.
Goal is to be more of a dialogue than a monologue.
In 2011, they were a traditional acquisitions and cataloging department. They had 18.1 FTE in technical services, with 8 acquisitions people working on both print and electronic, and 5 in cataloging. It felt very fragmented.
They were getting more eresources but no new staff. Less print, but staff weren’t interchangeable. The hybrid positions weren’t working well, and print was still seen as a priority by some of the staff: they could see the print backlogs, which made it seem like those had to be dealt with first.
They hired consultants and decided to create two format-based teams: tangible formats and electronic resources. They defined the new positions, asked staff for their preferences, and then assigned staff to one team or the other. The team leads are focused on the cataloging and acquisitions sides, rather than on format.
To implement this they: oriented and trained staff; created workflow teams for ejournals, ebooks, and databases; talked with staff extensively; tried to be as transparent as possible; and hired another librarian.
They increased the FTE working on eresources, and they could use more, but this is good enough for now.
Some of the challenges include: staff buy-in and morale; communicating who does what to all the points of contact; workflows for orders with dual formats; budget structure (monographs/serials, with some simplification where possible, but still not tangible/electronic); and documentation organization (documenting isn’t the problem; finding the documentation is).
The benefits are: staff focusing on a single format; bringing acquisitions and cataloging together (better communication between functions); easier cross-training opportunities; easier streamlining of workflows; and ease in planning and setting priorities.
She teaches a course on ERM, which is why she’s doing this research. She initially planned to do a comprehensive job ad analysis and then look at LIS syllabi to see if we were meeting the needs. Then she found Sarah Sutton’s 2011 dissertation that did most of this already. So, she changed her strategy to evaluate the competencies once they were adopted.
She interviewed ER librarians about their work, their institution’s workflow, and their perspectives on the competencies. She targeted jobs posted to ERIL from 2008-2012 and followed up with the folks who were hired. Of those identified (42), 16 responded to her inquiry.
Most had paraprofessional experience with web design, reference, serials, ILL, and archives. They received their degrees between 1980 and 2012. Most have reference/instruction and collection development/subject liaison responsibilities. Median FTE at their institutions was 13,900.
Their typical day “depends on the time of year,” which affirms the importance of understanding the lifecycle of ERM. Work seems to be most intense at the beginning and end of the semester, which isn’t covered as explicitly in the lifecycle.
Though they had many different roles, there were some commonalities: troubleshooting access and other issues, being the primary point person for vendor communication, and working closely with subject specialists and systems/IT personnel.
For the interviews, she lumped the Core Competencies into four broad areas, without telling participants the source: technical, analytical, legal, and interpersonal. The participants were asked to rank these.
Least important were the legal competencies, in part because many institutions had legal departments that could make sure that they were signing reasonable licenses, or they were negotiated at the consortial level. They also felt that identifying library-related clauses becomes rote after a short period of time.
Third most important were the technical competencies, in part because everything is in flux and you’ll have to learn something new tomorrow anyway; being able to learn matters more than what you know right now. The most important skills were website/database design and Excel. A certain degree of technical savvy was important for communicating with internal and external technical contacts. Some felt that having deep cataloging knowledge as a core competency was off-base, possibly because the metadata department usually handles all formats and ER librarians are typically not involved with that.
Second most important were the analytical competencies.
First most important were interpersonal competencies. You need to be able to understand what is happening with so many key contacts, from students to faculty to colleagues to vendors. Good relationships with vendors can influence product development and getting good customer service. Collaboration is huge.
Moving forward, she plans to continue the analysis of the open-ended questions and compare with the Core Competencies. She will evaluate the MLIS program curricular pathways and syllabi for ERM courses to get at the unique competencies that are not covered by ALA requirements.
Really admires the comprehensiveness of the NASIG Core Competencies.
Question about the dearth of ERM courses in our LIS programs. Speaker noted she is doing this to promote ERM education because everyone coming out of LIS should know something of this, like we all learned cataloging and reference.
Is 16 a good sample? Yes, for qualitative research. Each interview ran 1.5 hours and took about 6 hours to transcribe. Her colleagues thought it was a great sample, too. If she did more, it would be a questionnaire based on this qualitative research to get a broader response.
What were the reasons people were enthusiastic about Excel? A lot to do with not much luck with implementing ERMS and using it to manage their administrative data. Title analysis, generating stats, use reports, etc. “I try to develop a new Excel technique every week.”
Question for us: Is there a strong distinction between digital librarianship and licensed content librarianship? Response: Yes!
Question for us: relationship between ERM and cataloging. Response: It’s good for us to know some to communicate (more than say, a reference librarian), but we don’t do the work.
Respondents said, for the most part, that no one else in their library could do their job if they got hit by a bus, the main reason being that no one else had a comprehensive vision of everything involved in managing eresources.
Speakers: Kathy Perry (VIVA), Melissa Blaney (American Chemical Society), and Nan Butkovitch (Pennsylvania State University)
In 1998, ICOLC created guidelines for delivering usage information, and they have endorsed COUNTER and SUSHI. COUNTER works because all the players are involved and agree to reasonable timeframes.
COUNTER Code of Practice 4 now recognizes multimedia and the tracking of use through mobile devices.
PIRUS (Publisher and Institutional Repository Usage Statistics) is the next step, but they are going to drop the term and incorporate it as an optional report in COUNTER (Article Report 1). There is a code of practice and guidelines on the website.
The Usage Factor metric is a tool for assessing journals that aren’t covered by the impact factor. It won’t be comparable across subject groups because they are measuring different things.
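To make the idea concrete, here’s my own rough sketch of a usage-per-output metric. This is not the official COUNTER/UKSG formula (which uses medians over specific reporting windows); it only shows the general shape.

```python
# Rough sketch of a usage-factor style metric: usage of a journal's
# articles over a period, normalized by the number of articles published
# in that period. The real Usage Factor is defined by COUNTER/UKSG with
# medians and specific windows; this only shows the general idea.

def usage_factor(article_usage, articles_published):
    """article_usage: one usage count per article published in the window."""
    if articles_published == 0:
        return 0.0
    return sum(article_usage) / articles_published

print(usage_factor([120, 30, 5, 0], 4))  # 38.75
```

Even this toy version shows why it isn’t comparable across subject groups: the denominator depends on publication practices that vary by discipline.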
If your publishers are not COUNTER compliant, ask them to do it.
ACS chose to go to COUNTER 4 in part because it covers all formats. They like being able to highlight usage of gold open access titles and denials due to lack of license. They also appreciated the requirement for the ability to provide JR5, which reports usage by year of publication.
Big increases in search can also mean that people aren’t finding what they want.
ACS notes that users are increasingly coming from Google, Mendeley, and other indexing sources, rather than the publisher’s site itself.
They hear a lot that users want platforms that allow sharing and collaborating across disciplines and institutions. Authors are wanting to measure the impact of their work in traditional and new ways.
Science librarian suggests using citation reports to expand upon the assessment of usage reports. If you have time for that sort of thing and only care about journals that are covered by ISI.
Chemistry authors have been resistant to open access publishing, particularly if they think they can make money off of a patent, etc. She thinks it will be useful to have OA article usage information, but needs to be put in the context of how many OA articles there are available.
What you want to measure in usage can determine your sources. Every measurement method has bias. Multiple usage measurements can have duplication. A new metric is just around the corner.
Used TERMS as a framework for developing an eresources team and a course at University of Wisconsin.
How are we going to systematically ensure that our eresources knowledge evolves and continues?
75% of UConn’s budget is for e-content. The human resources were at 3.25 FTE when she arrived; now they are at 5.65 FTE. The only smaller unit is the digital scholarship and data curation team, created a year ago.
Why, in 2014, does “collection development” still exist as a term for non-electronic monograph acquisitions handled by non-ERM staff? How do we establish eresources teams and teach this to staff?
TERMS helps build a framework for discussion among her students and her work team. The Core Competencies were used for class readings and discussions with her team, and became a framework for submitting training requests. TERMS has been a lighthouse for them, and they’ve continued to go back to it and review the cyclical process to identify successes and areas for improvement.
Only 19% of the ALA accredited LIS programs cover ERM topics, yet 73% of recent job ads require ERM competencies.
The financial resources are being allocated, but what about the human resources to do our work? Eresources positions are not entry-level, and yet the spend on that content keeps increasing. How can we expand/grow the ERM skill-set to more of our staff positions? This is not a new problem. We’ve been talking about this as a profession since 2000 or earlier.
The Core Competencies should be for the entire library, not just the ERM staff.
We need to eliminate the delineation between print and electronic management/acquisitions.
Establish partnerships with LIS programs. Establish paid fellowships that are at least two fiscal years in length. Get support from library administrators for adequate staffing and the time to teach courses, etc.
Good strategies for training staff: Listening to them and knowing what they already know how to do. Making analogies from what you know to what they know. Small chunks at a time.
Are the NASIG Core Competencies a laundry list of the ideal rather than true core competencies that can be expected at the beginning of an ERM career? No. The point is that no one person can do everything ERM. But, these are the things that are needed to manage eresources, regardless of how many people it takes to do it.
Audience member says she had to fail badly with only two staff in order to get the change needed to have a sufficient number of people on her team.
Ebooks aren’t terrible. Instead, we’d like to think of them as teenagers. They’re always changing, often hard to find, and difficult to interact with. Lots of terrible teenagers turn into excellent human beings. There is hope for ebooks.
Scholar’s Portal is a repository for purchased ebooks. It used to be mostly DRM-free, but in 2013, they purchased books from two sources that came with DRM and other restrictions. One of those sources was the Canadian university presses, whose books really needed to be viable for course adoption (read: sell many copies to students instead of one to the library). The organization wanted everything, so they agreed to the terms.
In adding this content, with very different usability, they had to determine how they were going to manage it: loan periods, Adobe Digital Editions, and really, how much did they want to have to explain to the users?
One of the challenges is not having control over the ADE layout and wording for alerts and error messages. You can’t use a public computer easily, since your user ID can be authorized on at most six devices.
Faculty can’t use the books in their class. Communicating this to them is… difficult.
Ecclestone did a small usability test. Tried to test both a user’s ability to access a title and their perception of borrowable ebooks. The failure rate was 100%.
Lessons learned: Adobe = PDFs (they don’t get that ADE is not the same); .exe files are new to students, or potentially viruses; returning ebooks manually is never going to happen; and terms like “borrow” and “loan” are equated with paying.
The paradox is that it’s really challenging the first time around, but once they have the right software and have gone through the download process, it’s easier and they have a better opinion.
Suggestions for getting ready to deal with DRM ebooks: Train the trainer. Test your interface.
They put error messages in LibAnswers and provide solutions that way in case the user is searching for help with the error.
Change and destruction can bring an opportunity, but it can also be stressful. How are we in the process of change? What tools can help us thrive in this process?
She’s been in this industry for a long time, but has never heard people stand up at conferences and talk about how we are as humans.
Resilience is a message of hope. It’s about what we all have inside us.
Resilience is recovering from or adjusting to misfortune or change; it is generating supportive behaviors that help us cope when facing adversity; it is bouncing back from stress and adversity and promptly taking on new challenges.
Resilience is not keeping going at all costs; it is not about ignoring the difficult or sugar-coating it; it is not about irritating platitudes.
Why now? We have an incredibly dynamic sector. There is always change, challenge, uncertainty, and opportunity.
Challenge and opportunity can be good and inspiring, but also frustrating and draining, and it can get depleting and stressful. We have evolved to handle short, sharp bursts of stress, not the low background hum that typifies our day-to-day lives now. In the UK, stress is now the biggest single cause of sickness in the working population. In the US, job pressure is the main cause of stress, and it negatively impacts every aspect of your work.
When we’re under stress in the workplace, we can become disengaged and burnt out, which can result in absenteeism (or presenteeism: being unwell but showing up for work anyway).
Resilience can help. Resilience training can improve performance and productivity, staff motivation, and reduce instances of workplace stress.
We are living in a time when risk is probably good to take, but we need to be able to handle the pressures associated with it.
Resilience is not an innate trait. It’s a capacity we can build within ourselves. However, the practical stuff is hard to convey in a plenary. Workshop tomorrow.
Case Study: Illinois Bell Telephone Company
A 12-year study conducted in the 1980s, during the deregulation of the US telephone monopoly. From 1980 to 1981, the company downsized by nearly half, and it didn’t get clear direction from the government on what it could or could not do.
2/3 of the staff had significant breakdowns in wellness, but the rest maintained their health and thrived. The more resilient cohort held powerful attitudes/beliefs about themselves and the world around them, which motivated and encouraged them.
The researchers concluded that challenge, control, and commitment (connection) were the key: being able to view setbacks and difficulties as challenges rather than paralyzing threats; focusing time and energy on situations where they had some control and could have the most impact; and reaching out to the people around them to stay connected, committed, and motivated.
Resilience training helps us tune into our habitual thoughts, beliefs, and behaviors. We practice generating genuine optimistic opportunities, reaching out, etc.
Change and disruption is a normal, expected part of our lives. These things can enhance our lives, but also negatively impact our well-being. Resilience is not innate, but can be learned and cultivated. There is a large and growing research base that can offer tools and ideas for us to use.
Let’s focus on how we’re being so we can get better at the doing.
Carmen Mitchell, “We Are All Connected”
Inspired by a couple of speeches at code4lib. She’s one of the two cross-pollinators. She thinks that librarians are all connected, and we need to start reaching out beyond our walls of expertise. She showed a website that renders recent changes to Wikipedia as string plucks (subtractions) and bell tones (additions), with the size of the circles indicating the size of the edit. It connects emotionally. We’re all interconnected and need to look to those connections to move each other forward.
Also a cross-pollinator. Doing a thesis project on interactive design media, part of which is user research. Read an article by Don Norman that said design research isn’t very good at creating new things (it’s better at fixing existing things). He is questioning what role research has to play in engineering/architecture. Libraries have been doing a lot of incremental changes; if we want to do something new, how do we start?
Mary Nugent, Project Transfer Update
TRANSFER is a code of practice for moving a title from one publisher to another. It covers what will be transferred, including perpetual access, and where it will move. Development of the code began in 2006, and she encourages publishers to sign on if they haven’t already. It’s currently hosted on the UKSG website. The 3.0 release incorporates the changes that have been happening in business practices. Librarians can sign up to be alerted of transfer details for journals that are moving, and there’s also a searchable database of past transfers. A proposal has been submitted to NISO to make it a standard in the future.
?, more of a discussion/question than a presentation
How are other libraries using Google Analytics to gather statistics about how their resources are being used?
-Where users are getting to the resources (which page they are coming from)
-Where content on the website is most effective
-Publisher uses it to track where people are coming from
-Analyze facets used in the discovery system
Todd Carpenter, NISO
He wants feedback on open access infrastructure needs. Any interest or thoughts on what NISO might do about infrastructure elements? Tweet him at @TAC_NISO.
What are we going to do to keep our profession going and being bold librarians? Wants to figure out how to help with diversity in libraries and technology. Thinks that organizations like Black Girls Code could be good models for making a start.
Heather Hilton, student award winner
Had a major life change 5 years ago. Found her passion in cataloging/metadata. Wants to learn everything she can from us.
Roxanne Brazil, student award winner
Very thankful to be here. Would like to mentor with Angie.
Plug for the digital materials panel tomorrow. Tech services librarian at a public library. Wants to know how to provide access to her patrons through her one point of access, the catalog. Hoping to give something back, such as popular materials through a service called Hoopla.
Pushing for the idea of taking library resources out of where we normally find them. Helped build a tool with EDS that lets faculty add library content into their CMS while complying with copyright. Uses an API to pull content in from the discovery system and annotate as needed. This will link to the content, which will then allow each use to be counted.
How do we get our faculty to use the resources we have? If we had the personality of advertisers, we would have gone in that direction. We need to harness the advertising power of our vendors and double-team to reach faculty. “Account development manager” is often the vendor-side person for that; many publishers/vendors have them.
Speakers: Michael Levine-Clark (University of Denver) & Kari Paulson (ProQuest eBrary/EBL)
ProQuest is looking at usage data across the eBrary and EBL platforms as they are working to merge them together. To help interpret the data, they asked Levine-Clark to look at it as well. This is more of a proof-of-concept than a final conclusion.
They looked at 750,000 ebooks initially, narrowing it down for some aspects. He asked several questions, from the importance of quality to disciplinary preferences to best practices for measuring use, and various tangential questions related to these.
They looked at eBrary data from 2010-2013Q3 and EBL data from 2011-2013Q3. They used only the titles with an LC call number, and separate analysis of those titles that come from university presses specifically.
Usage was defined in three ways: sessions, views (count of page views), and downloads (entire book). Due to the variations in the data sets (number of years, number of customers, platforms), they could not easily compare the usage information between eBrary and EBL.
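As a toy sketch of those three definitions (my own invented event model, not ProQuest’s actual data), aggregating raw platform events might look like:

```python
# Toy event log: (session_id, event_type) tuples, where event_type is
# "view" (a single page view) or "download" (the entire book).
# The data and the event model here are invented for illustration.
events = [
    ("s1", "view"), ("s1", "view"), ("s1", "download"),
    ("s2", "view"),
    ("s3", "view"), ("s3", "view"), ("s3", "view"),
]

sessions = len({sid for sid, _ in events})            # distinct sessions
views = sum(1 for _, e in events if e == "view")      # total page views
downloads = sum(1 for _, e in events if e == "download")

print(sessions, views, downloads)  # 3 6 1
```

The three numbers measure different intensities of use, which is part of why the eBrary and EBL figures can’t be compared head-to-head.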
Do higher quality ebooks get used more? He used university press books as a proxy for quality, though he recognizes this is not the best measure. For titles with at least one session, he found that the rate of use was fairly comparable, but slightly higher for university press books. The session counts and page views on eBrary were significantly higher for UP books, though not as much on EBL. In fact, use was consistently higher for UP books across the categories, but this may be because libraries select more UP books, thus increasing their availability.
What does usage look like across broad disciplines? Humanities, Social Sciences, and STEM were broken out and grouped by their call number ranges. He excluded A and Z (general works) as well as G (too interdisciplinary). The social sciences were the highest in sessions and views on eBrary, but the humanities won on downloads. For EBL, the social sciences won all categories. Looking at actions per session, STEM had higher views, but all disciplines downloaded at about the same rate on both platforms.
How do you measure predicted use? He used the percentage of books in an LC class relative to the total books available. If a class’s share of a use metric is lower than its share of holdings, then it is not meeting expected use, and vice versa. H, L, G, N, and D were all better than expected; Q, F, P, K, and U were worse.
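A minimal sketch of that expected-use arithmetic, with invented numbers (the ratio framing is mine; he reported it as shares of use versus shares of holdings):

```python
# For each LC class, compare its share of total use against its share of
# total holdings. A ratio above 1.0 means the class beats its predicted
# use; below 1.0 means it falls short. All counts here are invented.

holdings = {"H": 300, "Q": 500, "N": 200}   # titles available per class
use = {"H": 450, "Q": 350, "N": 200}        # e.g. session counts per class

total_h = sum(holdings.values())
total_u = sum(use.values())

ratios = {
    lc: round((use[lc] / total_u) / (holdings[lc] / total_h), 2)
    for lc in holdings
}
print(ratios)  # {'H': 1.5, 'Q': 0.7, 'N': 1.0}
```

With these made-up numbers, H outperforms its share of the collection and Q underperforms, which mirrors the pattern he reported.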
How about breadth versus depth? This gets complicated. Better to find the slides and look at the graphs. The results map well to the predicted use outcomes.
Can we determine the level of immersion in a book? If more pages are viewed per session in a subject area, does that mean the users spend more time reading or just look at more pages? Medicine (R), History of the Americas (F), and Technology (T) appear to be used at a much higher rate within a session than other areas, despite performing poorly in breadth versus depth assessment. In other words, they may not be used much per title, but each session is longer and involves more actions than others.
How do we use these observations to build better collections and better serve our users?
Books with call numbers tend to be used more than those without. Is it because a call number is indicative of better metadata? Is it because better-quality publishers provide better metadata? It’s hard to tell at this point, but it’s something he wants to look into.
A white paper is coming soon and will include a combined data set. It will also include the EBL data about how long someone was in a book in a session. Going forward, he will also look into LC subclasses.