NASIG 2013: Losing Staff — the Seven Stages of Loss and Recovery

CC BY-ND 2.0 2013-06-10
“Autumn dawn” by James Jordan

Speaker: Elena Romaniuk

This is about losing staff to retirement, not to death, which is a similar but different kind of loss.

The department started with one librarian and six staff; two of the staff have since retired and not been replaced. The same is true across most of technical services, where vacated positions were either not refilled or were shifted to other departments.

The staff she lost were key to helping run the department, often filling in when she was out on extended leave. They were also her only experienced support staff catalogers.

The stages:

  1. Shock and denial
  2. Pain and guilt
  3. Anger and bargaining
  4. Depression, reflection, loneliness
  5. Upward turn
  6. Reconstruction and working through
  7. Acceptance and hope

The pain went beyond friends leaving: they also lost a lot of institutional memory, and the workload was spread across the remaining staff. They couldn’t be angry at the staff who left, and they couldn’t bargain, except to let administrators know that with fewer people, not all of the work could continue and there might be backlogs.

However, this allowed them to focus on the reflection stage: assessing what had changed about the work in recent years, and how that could be reflected in the new unit responsibilities. The serials universe is larger and more complex, with diverse issues that require higher-level understanding. There are fewer physical items to manage, and they don’t catalog as many titles anymore, with most of what remains coming from special collections donations.

They are still expected to get the work done despite having fewer staff, and even if they were given new positions, it would take more than one person to cover all of the work. Given the options, she decided to take the remaining staff in the unit, who already have a lot of serials-related experience, and train them to handle the cataloging as well, as long as they were willing to do it.

In the end, they rewrote the positions to be identical: each is about half focused on cataloging, with the remaining duties rotated through the unit on a monthly basis.

They have acceptance and hope, with differing levels of anxiety among the staff. The backlogs will grow, but as they get more comfortable with the cataloging they will catch up.

What worked in their favor: they had plenty of notice, giving them time to plan and prepare, and do some training before the catalogers left.

One of the recommended coping strategies was for the unit head to be as available as possible for problem solving. They needed clear priorities and documented procedures that are revised as needed. The staff needed to be willing to consult with each other, and to accept that not everything will get finished every day and that backlogs will happen.

They underestimated the time needed for problem solving, and they need to provide more training in basic cataloging as well as in serials cataloging specifically. There is always too much work, with multiple simultaneous demands.

She is considering asking for another librarian, even if only on a term basis, to help catch up on the work. There is also the possibility of another reorganization or having someone from cataloging come over to help.

[lovely quote at the end that I will add when the slides are uploaded]

ER&L 2012: Knockdown/Dragout Webscale Discovery Service vs. Niche Databases — Data-Driven Evaluation Methods

tug-of-war
photo by TheGiantVermin

Speaker: Anne Prestamo

You will not hear the magic rationale that will allow you to cancel all of your A&I databases. The last three years of analysis at her institution have resulted in only two cancellations.

Background: she was a science librarian before becoming an administrator, and has a great appreciation for A&I searching.

Scenario: a subject-specific database with low use had been accessed on a per-search basis, but going forward it would be sole-sourced and subscription-based. Given that, their cost per search was going to increase significantly. They wanted to know whether Summon would provide enough overlap to replace the database.

Arguments for keeping it: it’s key to the discipline, it has specialized search functionality, unique indexing, etc. But there’s no data to show how these unique features are being used. Subject searches in the catalog were only 5% of what was being done, and most of them came from staff computers. So, are our users actually using the controlled vocabularies of these specialized databases? Finally, librarians think they just need to promote these databases more, but sadly, that ship has already sailed.

Beyond usage data, you can look at overlap with your discovery service and identify the unique titles. For those, you’ll need to consider local holdings, ILL data, impact factors, language, format, and publication history.

Once they did all of that, they found that 92% of the titles were indexed in their discovery service. The depth of the backfile may be an issue, depending on the subject area, and you may also need to look at the level of indexing (cover-to-cover vs. selective). Of the 8% of titles that were not indexed, they owned most in print, and those tended to be older. 15% of that 8% had impact factors, which may or may not be relevant, but it is something to consider, and most of those titles were non-English. They also found that there were no ILL requests for the unique titles they didn’t own, and that less than half were scholarly and currently being published.
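
To make the arithmetic concrete, here is a minimal sketch of this kind of overlap analysis in Python. Everything in it (the ISSNs, the variable names, the data) is invented for illustration; a real analysis would pull title lists from the vendor, the discovery service’s index, local holdings, and ILL records.

```python
# Minimal sketch of an A&I-vs-discovery overlap analysis.
# All ISSNs and data are invented for illustration.

ai_titles = {"1234-5678", "2345-6789", "3456-7890", "4567-8901"}  # indexed by the A&I database
discovery = {"1234-5678", "2345-6789", "3456-7890"}               # indexed by the discovery service
print_holdings = {"4567-8901"}                                    # unique titles owned in print
ill_requests = {}                                                 # ILL request counts for unique titles

indexed = ai_titles & discovery  # titles the discovery service covers
unique = ai_titles - discovery   # titles findable only through the A&I database

print(f"Indexed in discovery: {len(indexed) / len(ai_titles):.0%}")
for issn in sorted(unique):
    print(f"{issn}: owned in print: {issn in print_holdings}, "
          f"ILL requests: {ill_requests.get(issn, 0)}")
```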

reason #237 why JSTOR rocks

For almost two decades, JSTOR has been digitizing and hosting core scholarly journals across many disciplines. Currently, their servers store more than 1,400 journals, from the first issue up to a moving wall of three to five years ago for most titles. Some of these journals date back several centuries.

They have backups, both digital and virtual, and they’re preserving metadata in the most convertible/portable formats possible. I can’t even imagine how many servers it takes to store all of this data. Much less how much it costs to do so.

And yet, in the spirit of “information wants to be free,” they are making the pre-copyright content open and available to anyone who wants it. That’s anything published in the United States before 1923, and before 1870 for everything else. Sure, it’s not going to be very useful for researchers who need more current scholarship, but JSTOR hasn’t been about new stuff so much as preserving and making accessible the old stuff.

So, yeah, that’s yet another reason why I think JSTOR rocks. They’re doing what they can with a responsible economic model, and making information available to those who can’t afford it or are not affiliated with institutions that can purchase it. Scholarship doesn’t happen in a vacuum, and innovators and great minds aren’t always found solely in wealthy institutions. This is one step towards bridging the economic divide.

ER&L: You’ve Flipped – the implications of ejournals as your primary format

Speaker: Kate Seago

In 2005, her institution’s serials were primarily print-based, but now they are mostly electronic. As a graduate of the University of Kentucky’s MLIS program, this explains so much. I stopped paying attention when I realized this presentation was all about what changed in the weird world of the UK Serials Dept, which has little relevance to my library’s workflows/decisions. I wish she had made this more relatable for others, as this is a timely and important topic.

ER&L 2010: Beyond Log-ons and Downloads – meaningful measures of e-resource use

Speaker: Rachel A. Flemming-May

What is “use”? Is it an event? Something that can be measured (with numbers)? Why does it matter?

We spend a lot of money on these resources, and use is frequently treated as an objective measure for evaluating the value of the resource. But we don’t really understand what use is.

A primitive concept is something that can’t be boiled down to anything smaller – we just know what it is. Use is frequently treated like a primitive concept – we know it when we see it. To measure use we focus on inputs and outputs, but what do those really say about the nature/value of the library?

This gets more complicated with electronic resources that can be accessed remotely. Patrons often don’t understand that they are using library resources when they use them. “I don’t use the library anymore, I get most of what I need from JSTOR.” D’oh.

Funding is based on assessments and outcomes, so how do we show those? The money we spend on electronic resources is not going to get any smaller. ROI studies tend to focus on funded research, not on electronic resources as a whole.

Use is not a primitive concept. When we talk about use, it can be an abstract concept that covers all use of library resources (physical and virtual). Our research often doesn’t specify what we are measuring as use.

Use as a process is the total experience of using the library, from asking reference questions to finding a quiet place to work to accessing resources from home. It is the application of library resources/materials to complete a complex/multi-stage process. We can do observational studies of the physical space, but it’s hard to do them for virtual resources.

Most of our research tends to focus on use as a transaction: things that can be recorded and quantified, but that are removed from the user. When we look only at transaction data, we don’t know anything about why the user viewed/downloaded/searched the resource. Because they are easy to quantify, we over-rely on vendor-supplied usage statistics. We think that COUNTER assures some consistency in the measures, but there are still many grey areas (e.g., database time-outs inflating session counts).
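
To make that grey area concrete, here is a toy illustration (my own sketch, not anything from COUNTER or a vendor) of how a server-side inactivity timeout can split a single research visit into multiple recorded sessions:

```python
# Toy model: start a new session whenever the gap between two
# actions exceeds the platform's inactivity timeout.
TIMEOUT = 30 * 60  # assume a 30-minute timeout, in seconds

def count_sessions(event_times, timeout=TIMEOUT):
    sessions, last = 0, None
    for t in sorted(event_times):
        if last is None or t - last > timeout:
            sessions += 1  # long gap: the platform records a "new" session
        last = t
    return sessions

# One person searching on and off over two hours, with a long pause
# in the middle, is counted as two sessions for one research task.
print(count_sessions([0, 600, 1200, 7200, 7800]))  # -> 2
```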

We need to shift from focusing on isolated instances of downloads and reference desk questions to focusing on the aggregate of the process from the user’s perspective. Stats are only one component of this. This is where public services and technical services need to work together to gain a better understanding of the whole, and that will require administrative support.

John Law’s study of undergraduate use of resources is a good example of how we need to approach this. Flemming-May thinks that the findings from that study have generated more progress than previous studies that were focused on more specific aspects of use.

How do we do all of this without intruding on the privacy of the user? Make sure that your studies are well thought out and pass approval from your institution’s review board.

Transactional data needs to be combined with other information to make it valuable. We can see that a resource is being used or not used, but we need to look deeper to see why and what that means.

As a profession, are we prepared to do the kind of analysis we need to do? Some places are using anthropologists for this. A few LIS programs are requiring a research methods course, but it’s only one class, and many students never get one. This is a great continuing education opportunity for LIS programs.

NASIG 2009: Ambient Findability

Libraries, Serials, and the Internet of Things

Presenter: Peter Morville

He’s a librarian who fell in love with the web and moved into working with information architecture. When he first wrote the book Information Architecture, he and his co-author didn’t include a definition of information architecture. With the second edition, they had four definitions: the structural design of shared information environments; the combination of organization, labeling, search, and navigation systems in web sites and intranets; the art and science of shaping information products and experiences to support usability and findability; and an emerging discipline and community of practice focused on bringing principles of design and architecture to the digital landscape.

[at this point, my computer crashed, losing all the lovely notes I had taken so far]

Information systems need to use a combination of categories (paying attention to audience and taxonomy), in-text linking, and alphabetical indexes in order to make information findable. We need to start thinking about the information systems of the future. If we examine the trends through findability, we might have a different perspective. What are all the different ways someone might find ____? How do we describe it to make it more findable?

We are drowning in information. We are suffering from information anxiety. Nobel laureate economist Herbert Simon said, “A wealth of information creates a poverty of attention.”

Ambient devices are alternate interfaces that bring information to our attention, and Morville thinks this is a direction our information systems are moving in. What can we do now that our devices know where we are? Now that we can do it, how do we want to use it, and in what contexts?

What are our high-value objects, and is it important to make them more findable? RFID can be used to track important but easily hidden physical items, such as wheelchairs in a hospital. What else can we do with it besides inventory books?

In a world where for every object there are thousands of similar objects, how do we describe the uniqueness of each one? Who’s going to do it? Not Microsoft, and not Donald Rumsfeld with his unknown unknowns. It’s librarians, of course. Nowadays, metadata is everywhere, turning everyone who creates it into librarians and information architects of sorts.

One of the challenges we have is determining which aspects of our information systems can evolve quickly and which need more time.

Five to ten years from now, we’ll still be starting by entering a keyword or two into a box and hitting “go.” This model is ubiquitous, and it works because it acknowledges the human psychology of just wanting to get started. Search is not just about the software. It’s a complex, adaptive system that requires us to understand our users so that they not only get started but also know how to take the next step once they get there.

Some examples of best and worst practices for search are on his Flickr. Some user-suggested improvements to search are auto-complete search terms, suggested links or best bets, and, for libraries, federated search that helps users know where to begin. Faceted navigation goes hand in hand with federated search, allowing users to formulate what in the past would have been very sophisticated Boolean queries. It also helps them understand the information space they are in by presenting a visual representation of the current subset of information.
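
As a toy illustration of that last point (my own sketch, with invented field names and values), a few clicks in a faceted interface amount to a Boolean query that few users would ever compose by hand:

```python
# Translate a set of facet selections into the equivalent Boolean query.
# Field names and values are invented for illustration.
facets = {
    "subject": ["economics", "labor"],
    "format": ["ebook"],
    "language": ["english"],
}

def to_boolean(facets):
    clauses = ("(" + " OR ".join(f'{field}:"{value}"' for value in values) + ")"
               for field, values in facets.items())
    return " AND ".join(clauses)

print(to_boolean(facets))
# (subject:"economics" OR subject:"labor") AND (format:"ebook") AND (language:"english")
```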

Morville referenced last year’s presentation by Mike Kuniavsky on ubiquitous computing, and hoped that his own presentation complemented what Kuniavsky had to say.

Libraries are more than just warehouses of materials — they are cathedrals of information that inspire us.

PDF of his slides

CIL 2009: What Have We Learned Lately About Academic Library Users

Speakers: Daniel Wendling & Neal K. Kaske, University of Maryland

How should we describe information-seeking behavior?

A little over a third of the students interviewed reported that they used Google for their last course-related search, and that held about the same across all class years and academic areas. A little over half of the same students used ResearchPort (federated search via MetaLib), with a similar spread between class years and academic areas, although the social sciences clearly use it more than the other areas. (Survey tool: PonderMatic; a copy of the survey form is in the conference book.)

Their methodology was a combination of focus-group interviews and individual interviews, conducted away from the library to avoid bias. They used a coding sheet to standardize the responses for input into a database.

This survey gathering & analysis tool is pretty cool – I’m beginning to suspect that the presentation is more about it than about the results, which are also rather interesting.

 

Speaker: Ken Varnum

Will students use social bookmarking on a library website?

MTagger is a library-based tagging tool, borrowing concepts from resources like Delicious or social networking sites, and intended to be used to organize academic bookmarks. In the long term, the hope is that this will create research guides in addition to those supported by the librarians, and to improve the findability of the library’s resources.

Behind the scenes, they have preserved the concept of collections, which helps users find similar items more easily. This is different from commercial tagging tools, which are not library-focused. Most tagging systems are tagger-centric (librarians are the exception), so tag clouds are less informative: most tags are individualized, and there isn’t enough overlap to make shared tags visible.
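
A quick sketch of why that matters for tag clouds (my own illustration, with made-up tags and a hypothetical display threshold): a tag only becomes visible when enough taggers converge on it, so idiosyncratic personal tags drop out.

```python
from collections import Counter

# Made-up tags applied to one item by several users.
tags = ["thesis", "econ101", "to-read", "economics", "economics",
        "my-paper", "economics", "to-read"]

MIN_TAGGERS = 2  # hypothetical display threshold for the cloud
cloud = {tag: n for tag, n in Counter(tags).items() if n >= MIN_TAGGERS}
print(cloud)  # -> {'to-read': 2, 'economics': 3}; the one-off tags vanish
```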

From usability interviews, they found that personal motivations are stronger than social motivations, and that users wanted tags displayed alongside traditional search results. They don’t know why, but many users perceived tagging to be a librarian thing and not something they could do themselves.

One other thing that stood out in the usability interviews was the issue of privacy. Access is limited to network login, which has its benefits (your tags and you) and its problems (inappropriate terminology, information living in the system beyond your tenure, etc.).

They are redesigning the website to focus on outcomes (personal motivation) rather than on tagging as such.

CiL 2008: Speed Searching

Speaker: Greg Notess

His talk summarizes points from his Computers in Libraries articles on the same topic, so go find them if you want more details than what I provide.

It takes time to find the right query/database, and to determine the best terminology to use in order to find what you are seeking. Keystroke economy makes searching faster, like the old OCLC FirstSearch 3-2-2-1 searching. Web searching relevancy is optimized by using only a few unique words rather than long queries. Do spell checking through a web search and then take that back into a reference database. Search suggestions on major search engines help with the spelling problem, and the suggestions are ranked based on the frequency with which they are searched, but they require you to type slowly to use them effectively and increase your search speed. Copy and paste can be enhanced through browser plugins or bookmarklets that allow for searching based on selected text.
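
For anyone who hasn’t seen the 3-2-2-1 pattern mentioned above, here is a rough sketch of a derived title key in that style. This is my own simplification: the real FirstSearch derived-key rules also deal with stopwords, punctuation, and short titles.

```python
# Build a 3-2-2-1 derived title key: the first 3 letters of the first
# word, 2 of the second, 2 of the third, and 1 of the fourth.
def derived_key(title, pattern=(3, 2, 2, 1)):
    words = title.lower().split()
    return ",".join(w[:n] for w, n in zip(words, pattern))

print(derived_key("Journal of academic librarianship"))  # -> jou,of,ac,l
```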

The search terms matter. Depending on the source, average-length queries built from unique terms perform better than common search terms or long queries. Use multiple databases because it’s fun, you’re a librarian, and there is a lack of overlap between data sources.

Search switching is not good for quick look-ups, but it can be helpful for hard-to-find answers that require in-depth queries. We have a sense that federated searching should be able to do this, but some resources are better searched in their native interfaces in order to find relevant sources. There are several sites that make it easy to switch between web search engines using the same query, including a nifty one that lets you easily switch between the various satellite mapping sources for any location you choose.

I must install the Customize Google Firefox plugin. (It’s also available for IE7, but why would you want to use IE7, anyway?)

CiL 2008: Woepac to Wowpac

Moderator: Karen G. Schneider – “You’re going to go un-suck your OPACs, right?”


Speaker: Roy Tennant

Tennant spent the last ten years trying to kill off the term OPAC.

The ILS is your back-end system, which is different from the discovery system (which doesn’t replace the ILS). Both of these systems can be locally configured or hosted elsewhere. WorldCat Local is a particular kind of discovery system that Tennant will talk about if he has time.

Traditionally, users would search the ILS to locate items, but now the discovery system searches the ILS and other sources and presents the results to the user in a less “card catalog” way. Things to consider: Do you want to replace your ILS or just your public interface? Can you consider open source options (Koha, Evergreen, VuFind, LibraryFind, etc.)? Do you have the technical expertise to set it up and maintain it? Are you willing to regularly harvest data from your catalog to power a separate user interface?


Speaker: Kate Sheehan

Speaking from her experience of being at the first library to implement LibraryThing for Libraries.

The OPAC sucks, so we look for something else, like LibraryThing. The users of LibraryThing want to be catalogers, which Sheehan finds amusing (and so did the audience) because so few librarians want to be catalogers. “It’s a bunch of really excited curators.”

LibraryThing for Libraries takes the information available in LibraryThing (images, tags, etc.) and drops it into the OPAC (platform independent). The display includes other editions of books owned by the library, recommendations based on what people actually read, and a tag cloud. The tag cloud links to a tag browser that opens on top of the catalog and allows users to explore other resources in the catalog through natural-language tags rather than just subject headings. Using a Greasemonkey script in your browser, you can also incorporate user reviews pulled from LibraryThing. Statistics show that the library is averaging around 30 tag clicks and 18 recommendations per day, which is pretty good for a library of that size.

“Arson is fantastic. It keeps your libraries fresh.” — Sheehan joking about an unusual form of collection weeding (Danbury was burnt to the ground a few years ago)

Data doesn’t grow on trees. Getting a bunch of useful information dropped into the catalog saves staff time and energy. LibraryThing for Libraries didn’t ask for a lot from patrons, and it gave them a lot in return.


Speaker: Cindi Trainor

Are we there yet? No. We can buy products or use open source programs, but they still are not the solution.

Today’s websites consist of content, community (interaction with other users), interactivity (single-user customization), and interoperability (mashups). RSS feeds are the intersection of interactivity and content. A few websites sit in the sweet spot in the middle of all of these: Amazon (26/32)*, Flickr (26/32), Pandora (20/32), and Wikipedia (21/32).

Where are the next-generation catalog enhancements? Each product has a varying degree of each element. Using a scoring system with 8 points for each of the four elements, these products were ranked: Encore (10/32), LibraryFind (12/32), Scriblio (14/32), and WorldCat Local (16/32). Trainor looked at whether the content lived in the system or elsewhere, and at the degree to which each product pulled information from sources outside the catalog. Library products still have a long way to go – Voyager scored a 2/32.

*Trainor’s scoring system as described in paragraph three.
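
For concreteness, here is a hypothetical reconstruction of how such a rubric tallies up. The per-element numbers below are invented; only the four elements, the 8-point maximum per element, and the reported totals come from the talk.

```python
# Score a product on Trainor's four elements, up to 8 points each (32 total).
ELEMENTS = ("content", "community", "interactivity", "interoperability")

def total(scores):
    assert set(scores) == set(ELEMENTS)
    assert all(0 <= points <= 8 for points in scores.values())
    return sum(scores.values())

# Invented breakdown that happens to match WorldCat Local's reported 16/32.
worldcat_local = {"content": 6, "community": 2,
                  "interactivity": 4, "interoperability": 4}
print(f"{total(worldcat_local)}/32")  # -> 16/32
```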


Speaker: John Blyberg

When we talk about OPACs, we tend to fetishize them. In theory, it’s not hard to create a Wowpac. The difficulty is in creating the system that lives behind it. We have lost touch with the ability to empower ourselves to fix the problems we have with integrated library systems and our online public access catalogs.

The OPAC is a reflection of the health of the system. The OPAC should be spilling out onto our website and beyond, mashing it up with other sites. The only way that can happen is with a rich API, which we don’t have.

The title of systems librarian is becoming redundant because we all have a responsibility and role in maintaining the health of library systems. In today’s information ecology, there is no destination — we’re online experiencing information everywhere.

There is no way to predict how the information ecology will change, so we need systems that are flexible and can grow and change over time. (SOPAC 2.0 will be released later this year for libraries that want to do something different with their OPACs.) Containers will fail. Containers are temporary. We cannot hang our hat on one specific format; we need systems that permit portability of data.

Nobody in libraries talks about “the enterprise” like they do in the corporate world. Design and development of the enterprise cannot be done by a committee, unless they are simply advisors.

The 21st century library remains un-designed – so let’s get going on it.
