ER&L 2012: Taking the Guesswork Out of Demand-Driven Acquisition — Two Approaches

Tome Reader
photo by QQ Li

Speakers: Carol J. Cramer & Derrik Hiatt

They analyzed their circulating print collection to see which subject areas or books would have had enough uses to trigger a purchase if they had been electronic. Only 2% of the entire circulating collection met the trigger point at which purchasing would be more cost-effective than short-term loans.

They announced the DDA trial, but deliberately did not tell users that it would incur costs, just that it was there. They paid for short-term loans up to the sixth use, at which point they purchased the title. The year of usage data gave them an idea of what adjustments needed to be made to the trigger point: the cost curve flattens out around the sixth use, and the difference between continuing to pay STLs and buying the book becomes small.

They were able to identify whether a triggered-purchase book was used by a single person (repeatedly), by a class (several people), or a mix of both, and the results split into almost even thirds.

They determined that six was a good trigger. The STL cost ended up averaging 10.5% of the list price. DDA doesn't have to break the bank, and spending was lower than expected. The number of titles in the catalog had less to do with the amount spent than FTE did. It also led to questioning the value of firm ordering ebooks rather than letting DDA cover them.
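
To make the trigger-point arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. Only the 10.5% average STL-to-list ratio comes from the talk; the other ratios in the loop are hypothetical, added for comparison.

```python
# Quick sketch of the short-term-loan vs. purchase break-even point.
# Only the ~10.5% average STL-to-list ratio comes from the talk; the
# other ratios are hypothetical comparisons.

def break_even_uses(stl_ratio: float) -> float:
    """Number of short-term loans whose combined cost equals one outright purchase."""
    return 1.0 / stl_ratio

for ratio in (0.05, 0.105, 0.15, 0.25):
    print(f"STL at {ratio:6.1%} of list price -> break-even after "
          f"{break_even_uses(ratio):.1f} uses")
```

At the 10.5% average, roughly nine or ten loans would cost as much as buying outright; presumably the lower trigger of six reflects the value placed on outright ownership and expected repeat use.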

However, this is only 11 months of data, and more longitudinal studies are needed.

Speaker: Lea Currie

They loaded records for slip books, and users have the option to request them at various delivery speeds. Users are notified when the print book arrives, and the full MARC record is not loaded until the book is returned.

They saved quite a bit of money per month using this method, and 88% of the titles purchased circulated. To put that into perspective, only about 75% of their ILL titles circulate.

Of course, librarians still had some concerns. First, the library catalog is not an adequate tool for discovering titles. Faculty were concerned about individuals making massive requests for personal research topics. Also, faculty do not want to be selectors for the libraries. [ORLY? They want the books they want when they want them — how is that different?]

The next DDA project was for ebooks, using the typical trigger points. They convinced the Social Science and Sci/Tech librarians to set price caps for DDA titles. Up to a certain price, a book would be included in the approval plan; within a middle range it would go into DDA; and above that range it would require the librarian's approval. These bands were written into their YBP profile.
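
As a rough illustration of that three-band rule, here is a hypothetical sketch. The actual price cutoffs were set by the librarians and written into the YBP profile; the $50 and $150 thresholds below are placeholders, not figures from the talk.

```python
# Illustrative sketch of the price-band routing described above.
# The dollar thresholds are hypothetical placeholders.

APPROVAL_MAX = 50.00   # hypothetical: at or below this, buy via the approval plan
DDA_MAX = 150.00       # hypothetical: between the caps, expose the title via DDA

def route_title(list_price: float) -> str:
    """Decide where a newly profiled title goes, based on its list price."""
    if list_price <= APPROVAL_MAX:
        return "approval plan (auto-purchase)"
    if list_price <= DDA_MAX:
        return "DDA pool (purchase only if triggered)"
    return "subject librarian review"

for price in (35.00, 95.00, 240.00):
    print(f"${price:>7.2f} -> {route_title(price)}")
```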

For the pDDA, they discovered that as the books aged, it was harder to do rush orders since they were going out of print. They also modified their language to indicate that the books may not be available if they are out of print.

They have not done DDA for humanities or area studies. They based their YBP profile decisions on retrospective reports, which gave them an idea of the average cost.

For FY12, they expect that the breakdown will be 23% eDDA, 50% pDDA, 20% approval, and 7% selected by subject bibliographers. They've also given the subject librarians the option to review the automatic approval ebooks — they have a week to reject each title or shift it to DDA if they want. They can also shift expensive titles to DDA if they want to see whether anyone would use them before choosing to purchase.

Questions:
Are you putting the records in your discovery service if you have one, and can you tell whether the uses are coming from that or your catalog? Not yet; they are still implementing a discovery service. Some users find resources through Google Scholar.

ER&L 2012: New ARL Best Practices in Fair Use

laptop lid comic relief + Dawn and Drew
photo by YayAdrian

Speaker: Brandon Butler (Peter Jaszi was absent)

The purpose of copyright is to promote the creation of culture. It is not to ensure that authors get a steady stream of income no matter what, or to pay them back for the hard work they do, or to show our respect for the value they add to society. It’s about getting the stuff into the culture, and giving the creators enough incentive to do it.

One way it does it is to give creators exclusive rights for a limited period of time. The limit encourages new makers to use and remix existing culture.

Fair use is the biggest balancing feature of copyright. It ensures that the rights provided to the creators don’t become oppressive to the users. Fair use is the legal, unauthorized use of copyrighted material… under some circumstances. And we’ve spent generations trying to figure out which circumstances apply.

Fair use is a space for creativity. It gives you the leeway to take the culture around you and incorporate it into your work. It allows you to quote other scholarship in your research. It allows you to incorporate art into new works.

There are four factors of fair use. Every judge should consider the reason for the use, the kind of work used, the amount used, and the effect on the market. But the statute doesn't tell judges how much weight to give each factor or which is more important. The good news is that judges love balancing features, and the Supreme Court has determined that fair use protects free speech. And since copyright is automatically conferred as soon as the creation is fixed, judicial interpretations of fair use have shifted greatly since 1990 to favor users in certain circumstances.

Without fair use, copyright would be in conflict with the 1st Amendment.

Judges want to know if the use is transformative (i.e. for a new purpose, context, audience, or insight) and if you used the right amount in that transformative process. For example, parody makes fun of the original work rather than just reusing it. An appropriate amount can refer both to the quantity of the original used in the transformative work and to the audience that receives your transformative work. Think of the many photographic memes that take pictures and alter them to fit a theme, like One Tiny Hand.

Judges care about you and what you think is fair. There is a pattern of judges deferring to the well-articulated norms of a practice community.

Best practices codes are a logical outgrowth of the things the communities have articulated as their values and the things they would consider to be legitimate transformative works. Documentary filmmakers, scholars, media literacy teachers, online video, dance collections, open course ware, poets… many groups are creating best practices for fair use.

The documentary filmmakers have had a code of best practice for a long time. They realized that without it, they were limiting themselves too much in what they could create. Once they codified their values, more broadcast sources were willing to take their films and new kinds of films were being made. Errors-and-omissions insurers were able to accept fair use claims, and lawyers use the Statement to build their own best practices in the relevant areas.

Keep in mind, though, that these are best practices and not guidelines. Principles, not rules. Limitations, not bans. Reasoning, not rote. The numerical limits we once followed are not the law, and we need to keep them fresh to be relevant.

Licensing is a different thing altogether. Under a license you may have fewer rights in some instances and more rights in others, regardless of fair use.

For libraries, fair use enables our mission to serve knowledge past, present, and future. We have a duty to make copyrighted works real and accessible in the way people use things now. What will libraries be in the future? How will we stay relevant? We need to have some flexibility with the stuff we have in our collections.

Many librarians are discouraged. Insecurity and hesitation translate into staff costs: someone has to be hired to clear copyright questions. Fair use would help, but it's underused. Risk aversion subsumes fair use analysis.

The ARL document took people from many diverse institutions and many hours of discussion to create, and it was reviewed by several legal experts. It's not risk-free, since it would need to stand up in court first (and there are always lawsuit-happy people), but it looks sound based on past judgments.

They hope it will put legal risks into perspective, and will give librarians a tool to go to general counsels and administrations and let them know things are changing. It considered the views of librarians and their values, and they also hope that people will speak out publicly that they support the Code.

Fair use applies in these common situations:

  • course reserves — digital access to teaching materials for students and faculty, although it should be limited to access by only the appropriate audience
  • both physical and virtual exhibits — if it highlights a theme or commonality, you’re doing something new to help people understand what’s in your library
  • digitizing to preserve at-risk items — you’re not a publisher or scam artist, you’re a librarian making sure the things are accessible over time (like VHS tapes)
  • digitizing special collections and archives — you’re keeping it alive
  • access to research and teaching materials for disabled users — e.g. DAISY
  • institutional repositories
  • creation of search indexes
  • making topically-based collections of web-based materials

Practice makes practice. It won’t work if you don’t use it.

ER&L 2012: Next Steps in Transforming Academic Libraries — Radical Redesign to Mainstream E-Resource Management

The smallest details add up
photo by Garrett Coakley

Speaker: Steven Sowell

His position is new for his library (July 2011), and when Barbara Fister saw the job posting, she lamented that user-centered collection development would relegate librarians to signing licenses and paying invoices, but Sowell doesn’t agree.

Values and assumptions: As an academic library, we derive our reason for existing from our students and faculty. Our collections are a means to an end, rather than an end in themselves. They can take this view in part because they don't have ARL-like expectations of themselves. A number of studies have shown that users do a better job of selecting materials than we do, and they've been moving toward a just-in-time model rather than just-in-case.

They have had to deal with less money and many needs, so they’ve gotten creative. The university recently realigned departments and positions, and part of that included the creation of the Collections & Resource Sharing Department (CRSD). It’s nicknamed the “get it” department. Their mission is to connect the community to the content.

PDA, POV, PPV, approval plans, shelf-ready, and shared preservation are just a few of the things that have changed how we collect and do budget planning.

CRSD includes collection development, electronic resources, collections management, resource sharing & delivery, and circulation (refocusing on customer service and self-service, as well as some IT services). However, this is a new department, and Sowell speaks more about what these units will be doing than about what they are doing or how effective the change has been.

One of the things they've done is to rewrite position descriptions to refocus on the department goals. They've also been focusing on group facilitation and change management through brainstorming, parking lot, and multi-voting techniques. Staff have a lot of anxiety about going from being an expert in something to being a novice who has to learn something new. They had to say goodbye to the old routines, mix them with the new, and then eventually make the full shift.

They are using process mapping to keep up with the workflow changes. They’re also using service design tools like journey mapping (visualization of the user’s experience with a service), five whys, personas, experience analogy, and storyboards (visualization of how you would like things to occur).

For the reference staff, they are working on strategic planning about the roles and relationships of the librarians with faculty and collections.

Change takes time. When he proposed this topic, he expected to be further along by now. Good communication, systems thinking, and staff involvement are very important. There is a delicate balance between the uncertain/abstract and the desire for the concrete.

Some unresolved issues include ereaders; purchasing rather than borrowing via ILL and the impact on their partner libraries; the role of the catalog as an inventory in a world of PDA/PPV; and the re-envisioning of the collection budget as a just-in-time resource. Stakeholder involvement and assessment wrap up the next-steps portion of his talk.

Questions:
In moving print to the collection maintenance area, how are you handling bundled purchases (print + online)? How are you handling the impression of importance or lack thereof for staff who still work with traditional print collection management? Delicately.

Question about budgeting. Not planning to tie PDA/PPV to specific subjects. They plan to do an annual review of what was purchased and what might have been had they followed their old model.

What are their assessment criteria? None yet, but they will take suggestions. They need to tie activities to student academic success and teaching/research on campus. They are planning for a budget cut if they don't get an increase to cover inflation, and plan to do some assessment of resource use.

What will you do if people can’t do their new jobs? Hopefully they will after the retraining. Will find a seat for them if they can’t do what we hope they can do.

What are you doing to organize the training so they don’t get mired in the transitional period? Met with staff to reassure them that the details will be worked out in the process. They prepared the ground a bit, and the staff are ready for change.

Question about the digital divide and how that will be addressed. Content is available on university equipment, so not really an issue/barrier.

What outreach/training to academic departments? Not much yet. Will honor print requests. Subject librarians will still have a consultative role, but not necessarily item by item selection.

ER&L 2012 – Between Physical and Digital: Understanding Cross-Channel User Experiences

UX Brighton 2011 - Andrea Resmini
photo by Katariina Järvinen

Speaker: Andrea Resmini

He starts with a brief description of the movie The Name of the Rose, which is a bit of a medieval murder mystery involving a monastery library. The “library” is actually a labyrinth, but only in the movie. (The book is a little different.)

The letters on the arches represent the names of the places in the world, and are placed in the library where they would be in the world as it relates to Europe. They didn’t exactly replicate the world, but they ordered it like good librarians.

If you don’t understand the organizational system, it’s just a labyrinth. The movie had to change this because it wouldn’t work to have room after room of books covering the walls. We have to see the labyrinth to be able to participate in the experience, which can be different depending on the medium (book or movie).

Before computers, we relied on experts (people), books, and mentors to learn. With computers, we have access to all of them, at any time. We are constantly connected (if we choose) to streams of data, and the access points are more and more portable.

“Cyberspace is not a place you go to but rather a layer tightly integrated into the world around us.” –Institute for the Future

This is not the future. It’s here now. Facebook, Twitter, Foursquare… our phones and mobile devices connect us.

Think about how you might send a message: email, text, handwritten note, smoke signals, ouija… it's the same task, but with many different mediums.

What if someone is looking for a book? They could go to the circ desk, but that’s becoming less common. They could go to a virtual bookshelf for the library. Or they could go to a competitor like Amazon. They could do this on a mobile phone. Or they could just start looking on the shelves themselves, whether they understand the classification/organization or not. The only thing that matters is the book. They don’t want to fight with mobile interfaces, search results in the millions, or creepy library stacks. They just want the book, when they want it, and how they want it.

The library is a channel, as is the labeling, circ desk, website, mobile interface, etc. Unfortunately, they don’t work together. We have silos of channels, not just silos of information.

Think about a bank. You can talk to a call center employee, but they can't help you if your problem isn't part of their scripted routines. You can't start an online process and finish it in a physical space (e.g. begin with online banking and finish at the local branch).

Entertainment now uses many channels to reach consumers. If you really want to understand the second and third Matrix movies, you have to be familiar with the accessory channels of information (comic books, video games, etc.). In cross-channel experiences, users constantly move between channels, and will not stay in any single one of them from start to finish.

More companies, like clothing stores, are breaking down the barriers to flow between their physical and virtual stores. You can shop online and return items to the physical store, for example.

Manifesto:

  1. Information architectures are becoming open ecologies: no artifacts stand alone — they are all a part of the user experience
  2. users are becoming intermediaries: participants in these ecosystems actively produce and re-mediate content and meaning
  3. static becomes dynamic: ecologies are perpetually unfinished, always changing, always open to further refinement and manipulation
  4. dynamic becomes hybrid: the boundaries separating media, channels, and genres get thinner
  5. horizontal prevails over vertical: intermediaries push for spontaneity, ephemeral structures of meaning and constant change
  6. products are becoming experiences: focus shifts from how to design single items to how to design experiences spanning multiple steps
  7. experiences become cross-channel experiences: experiences bridge multiple connected media, devices and environments into ubiquitous ecologies

ALA Virtual 2011: Currents of Change and Innovation

Moderator: Ann Coder, Library Services Manager, Brookhaven College

Speaker: Linda McCann, Director of Library Services, Bucks County Community College

Probably had something interesting to say, but her phone connection was so awful I tuned it out. Plus, I hate the “let me tell you useless stats about my institution” portion that for some reason people think is important to include in every presentation about something they did at their library.

In summary, they got rid of formats and collections that are no longer needed and converted the space into a popular (and apparently award winning) learning commons.

 

Speaker: Denise Repman, Dean of Library Services, Delgado Community College

Oy. Sound not much better on this one. Maybe it’s ALA’s connection? In summary: something something something new library buildings.

 

Speaker:  Theresa C. Stanley, Library Director, Pima Community College

Still crappy sound. In summary: they had to reduce their collection by 30%, so they removed duplicates and content no longer relevant to their current programs. Kept notes in a wiki and used a shared calendar to schedule the project, which is probably a good idea.

NASIG 2011: Leaving the Big Deal – Consequences and Next Steps

Speaker: Jonathan Nabe

His library has left the GWLA Springer/Kluwer and Wiley-Blackwell consortial deals, and a smaller consortial deal for Elsevier. The end result is a loss of access to a little fewer than 2,000 titles, but most of those titles had fewer than one download per month in the year prior to departure. So they feel that ILL is a better value than subscription for them.
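
The cost-per-use arithmetic behind that judgment can be sketched with entirely hypothetical numbers; no subscription prices or ILL costs were given in the talk, so both figures below are placeholder assumptions.

```python
# Hypothetical cost-per-use comparison behind "ILL beats subscription for
# low-use titles." Both dollar figures are placeholder assumptions.

subscription_price = 1200.00   # hypothetical annual subscription for one title
ill_cost_per_request = 17.00   # hypothetical average cost to fill one ILL request

for uses_per_year in (1, 6, 12, 50, 100):
    per_use = subscription_price / uses_per_year
    cheaper = "ILL" if ill_cost_per_request * uses_per_year < subscription_price else "subscription"
    print(f"{uses_per_year:>3} uses/yr: subscription = ${per_use:8.2f}/use, "
          f"ILL total = ${ill_cost_per_request * uses_per_year:8.2f} -> {cheaper} is cheaper")
```

Under assumptions like these, a title downloaded less than once a month costs far less to fill via ILL than to keep on subscription, which is the point the speaker was making.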

Because of the hoops users must jump through for ILL, he thinks ILL requests indicate more of a real need than downloads of content available directly to the user. Because they retain archival access, withdrawing from the deals only impacts current volumes, and the time period has been too short to truly determine the impact, as they left the deals in 2009 and 2010. However, his conclusion from the low number of ILL requests is that the download stats overstate real need due to incidental use, repeat use, convenience, and linking methods.

The other area of impact is reaction and response, and so far they have had only three complaints. It could be because faculty are sympathetic, or it could be because they haven’t needed the current content, yet. They have used this as an opportunity to educate faculty about the costs. They also opened up cancellations from the big publishers, spreading the pain more than they could in the past.

In the end, they saved the equivalent of half their monograph budget by canceling the big deals and additional serials. Will the collection be shaped by the contracts they have or by the needs of the community?

Moving forward, they have hit some issues. One is that a certain publisher will impose a 25% content fee to go title by title. Another is that title-by-title purchasing puts them back at list price, which is much higher than the capped prices they had under the deal. They were able to alleviate some of these issues through negotiation and by agreeing to multi-year deals that begin with refreshed lists of titles.

The original GWLA deal with Springer allowed for LOCKSS as a means of archival access. However, Springer took the stance that they would not work with LOCKSS, so the lawyers got involved over the apparent breach of contract. In the end, Springer agreed to abide by the terms of the contract and make their content available for LOCKSS harvesting.

Make sure you address license issues before the end of the terms.

Speaker: David Fowler

They left the Elsevier and Wiley deals for their consortia. They had taken cost-saving measures in the past by eliminating format duplication and high-cost, low-use titles, but in the end they had to consider their big deals.

The first thing they eliminated was the pay per use access to Elsevier due to escalating costs and hacking abuse. The second thing they did was talk to OSU and PSU about collaborative collection development, including a shared collection deal with Elsevier. Essentially, they left the Orbis Cascade deal to make their own.

Elsevier tried to negotiate with the individual schools, but they stood together and were able to reduce the cancellations to 14% thanks to a reduced content fee. So far, the two-year deal has been good; they are working on a four-year deal, and they won't exceed their 2009 spend until 2014.

They think the ILL increase has more to do with their WorldCat Local implementation, and few Elsevier titles were requested. Some faculty are concerned about the loss of low-use, high-cost titles, so they are considering a library-mediated pay-per-view option.

The Wiley deal was through GWLA, and when it came to the end, they determined that they needed to cancel titles that were not needed anymore, which meant leaving the deal. They considered going the same route they did with Elsevier, but were too burnt out to move forward. Instead, they have a single-site enhanced license.

We cannot continue to do business as usual. They expect to have to do a round of cancellations in the future.

multitasking & efficient use of resources

Lukas Mathis wrote recently on his blog Ignore the Code about multitasking and what that means for humans versus computers. He made one point that resonated with me:

“The fact that the iPad only lets me see one app at a time often does not help me focus. Instead, it forces me to switch between apps constantly, thus preventing me from focusing on my task. Every time I have to deal with the iPad’s task switching, I’m interrupted.”

I noticed this when I was using the iPad at the last two conferences I attended. It was great for focusing my attention on the speaker and content, because I had to leave the note-taking app and open the Twitter app if I wanted to check on the back-channel chatter. However, it was frustrating for the same reason: if I wanted to toss out a pithy quote from the presentation, it meant taking a chance on missing something important while I switched programs.

When I’ve had a laptop or netbook with me for note-taking, switching between programs was a simple keystroke that took a fraction of a second and barely any of my mental focus, and more often than not I could have Twitter and my note-taking program open side-by-side. While I was using only one resource at a time, by being able to switch between them quickly, I could “multi-task” efficiently.

Thankfully, I don’t often have need to do this on a mobile device like the iPad or my Android phone, so right now this isn’t a problem for me. However, if these types of interfaces become the new standard for computing, someone will need to find a way to allow for multiple screens running multiple programs that can be moved between with the flick of a finger. Otherwise, we will have even more problems focusing on the task at hand.

CIL 2011: EBook Publishing – Practices & Challenges

Speaker: Ken Breen (EBSCO)

In 1997, ebooks were on CD-ROM and came with large paper books to explain how to use them, along with the same concerns about platforms we have today.

Current sales models involve purchase by individual libraries or consortia, patron-driven acquisition models, and subscriptions. Most of this presentation is a sales pitch for EBSCO and nothing you don’t already know.

Speaker: Leslie Lees (ebrary)

Ebrary was founded a year after NetLibrary and was acquired by ProQuest last year. They have similar models, with one slight difference: short term loans, which will be available later this spring.

Now that there is no longer a need to acquire books simply because they may be hard to get later, do we need to be building collections, or can we move to an on-demand model?

He thinks that platforms will move towards focusing more on access needs than on reselling content.

Speaker: Bob Nardini (Coutts)

They are working with a variety of incoming files and outputting them in any format needed by the distributors they work with, both ebook and print on demand.

A recent study found that academic libraries have significant overlap between their ebook and print collections.

They are working on approval plans for print and ebooks. The timing of the releases of each format can complicate things, and he thinks their model mediates that better. They are also working on interlibrary loan of ebooks and local POD.

Because they work primarily with academic libraries, they are interested in models for archiving ebooks. They are also looking into download models.

Speaker: Mike (OverDrive)

He sees the company as an advocate for libraries. Promises that there will be more DRM-free books and options for self-published authors. He recommends their resource for sharing best practices among librarians.

Questions:

What is going on with DRM and ebooks? What mechanisms do your products use?

Adobe Digital Editions is the main mechanism for OverDrive. Policies are set by the publishers, so all they can do is advocate for libraries. Ebrary and NetLibrary have proprietary software to manage DRM. Publishers are willing to give DRM-free access, but not consistently, and not for their “best” content.

It is hard to get content onto devices. Can you agree on a single standard content format?

No response, except to ask if they can set prices, too.

Adobe became the de facto solution, but it doesn't work with all devices. Should we be looking for a better solution?

That’s why some of them are working on their own platforms and formats. ePub has helped the growth of ebook publishing, and may be the direction.

Public libraries need full support for these platforms – can you do that?

They try the best they can. OverDrive offers secondary support. They are working on front-line tech support and hope to offer it soon.

Do publishers work with all platforms or are there exclusive arrangements?

It varies.

Do you offer more than 10 pages at a time for downloads of purchased titles?

Ebrary tries to do it at the chapter level, and the same is probably true of the rest. EBSCO is asking for the right to print up to 60 pages at a time.

When will we be able to loan ebooks?

Coutts is working on ILL.

CIL 2011: In Pursuit of Library Elegance

Speaker: Erica Reynolds

Elegant solutions/designs are often invisible to the user. Observe what is happening, and look at what could be removed (distractions/barriers), rather than what needs to be added.

Simple rules create effective order. The more complexity in an equation, the more doubtful that it is true.

Another aspect of elegance is seduction. Limiting information creates intrigue. Libraries could play more on curiosity to draw users to information. Play hard to get, in a way. Don’t be so eager to dump information in response to user questions.

Restraint and removal can increase impact and value. Encourage people to use their brains. Why do we act like they are so stupid that they need signs everywhere in the library?

Limited resources spark creativity and innovation. The creative tension at the center of elegance: achieving the maximum effect with the minimum of effort.

The path to elegance begins with resisting the urge to act; observing; ensuring a diversity of opinions and expertise are heard; carving out time to think and not think; getting away from your devices; getting some sleep; and getting outside.

Speaker: John Blyberg

The primary intent of our website may not be about getting you from point A to point B. It could be about building community and connection.

They found that when they removed the fortress that was the old reference desk, it was much more popular and approachable. Like Apple not including a manual with the iPhone, your library should be intuitive enough to use with minimal signage or instruction. Digital signage can evolve and be interactive, which will spark curiosity and inquiry.
