NASIG 2011: Books in Chains

Speaker: Paul Duguid

Unlike the automotive brand wars, tech brand wars require a level of coordination and connectivity among the combatants. Intel, Windows, and Dell can all be in one machine, and the competition becomes about which component motivated the purchase.

The computer/tech supply chain is odd. The most important and difficult component to replace is the hard drive, and yet most of us don’t know who makes the drives in our computers. It makes a huge difference in profit when your name is out front.

Until the mid-1800s, wine was sold under the retailer's name, not the vineyard's. Eventually that shifted, and then shifted again to selling by the name of the varietal.

In the book supply chain there are many links, and the reader who buys the book may not see any of the names involved; at different times in history, different links in the chain have been the brand that sold the book. Mark Twain and Rudyard Kipling tried to trademark their names so that publishers could not abuse them.

In academia, degrees are an indication of competency, and the institution behind the degree is a part of the brand. Certification marks began with unions in the US, and business schools were among the first to go out and register their names. However, it gets tricky when the institution conferring the degrees is also taking in fees from students. Is it certification or simply selling the credentials?

Who brands in publishing? We think the author, but outside of fiction, that starts to break down. Reference works are generally branded by the publisher. Reprint series are branded by the series. Romances are similar. Do we pay attention to who wrote the movie, TV series, or even newspaper article?

What happens when we go digital? The idealist’s view is that information wants to be free. The pragmatic view is that information needs to be constrained. Many things that are constraints are also resources. The structure and organization of a newspaper have much to do with the paper it is printed on. And because only so much fits on the paper, making it into print conveys an indication of importance. Free information suffers from a lack of filters to make the important bits rise to the top.

We think of technologies replacing each other, but in fact they tend to create new niches by taking away some but not all of the roles of the old tech. What goes and what stays depends on what you see as integral.

NASIG 2011: Reporting on Collections

Speakers: Sandy Hurd, Tina Feick, & John Smith

Development begins with internal discussion, a business case, and a plan for how the data will be harvested. And discussion may need to include the vendors who house or supply the data, like your ILS or ERM.

Product development on the vendor side can be prompted by several things, including specific needs, competition, and items in an RFP. When customers ask for reports, the vendor needs to determine if it is a one-time thing, something that can be created by enhancing what they already have, or something they aren’t doing yet. There may be standards, but collaborative data is still custom development between two entities, every time.

Have you peeked under the rug? The report is only as good as the data you have. How much cleanup are you willing to do? How can your vendor help? Before creating reports, think about what you have to solve and what you wish you could solve, statistics you need, the time available to generate them, and whether or not you can do it yourself.

There are traditional reporting tools like spreadsheets, and increasingly there are specialized data storage and analysis tools. We are looking at trends, transactional data, and projections, and we need this information on demand and more frequently than in the past. And the data needs to be interoperable. (Dani Roach is quietly shouting, “CORE! CORE!”) Ideally, we would be able to load relevant data from our ERMS, acquisitions modules, and other systems.
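As a concrete example of why that interoperability matters: even a basic cost-per-use report needs cost data from the acquisitions module and usage data from COUNTER reports, which rarely live in the same system. A toy Python sketch with made-up numbers (real data would come from system exports):

```python
# Hypothetical costs from the acquisitions module and annual downloads
# from COUNTER reports; titles and numbers are invented for illustration.
costs = {"Journal A": 3200.00, "Journal B": 540.00}
downloads = {"Journal A": 1280, "Journal B": 12}

for title, cost in costs.items():
    uses = downloads.get(title, 0)
    cpu = cost / uses if uses else float("inf")  # no uses = infinite cost per use
    print(f"{title}: ${cpu:,.2f} per use")
```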

One use of the data can be to see who is using what, so properly coded patron records are important. The data can also be essential for justifying the redistribution of resources. People may not like what they hear, but at least you have the data to back it up.

The spreadsheets are not the reports. They are the data.

NASIG 2011: Using Assessment to Make Collection Development Decisions

Speakers: Mary Ann Trail & Kerry Chang FitzGibbon

It is not in the interest of faculty to cut journal titles, because doing so may be perceived as an admission that a title is not needed. And when relying on faculty input for collection decisions, the collection can become skewed when certain faculty are more vocal than others.

When a new director arrived in 2000, they began to use more data to make decisions. And, the increase in aggregator databases and ejournals changed what was being collected. In addition to electronic publishing, electronic communication has changed the platform and audience for faculty communicating with each other and administrators, which can be both good and bad for library budgets.

In 2005, after some revision of collection methods, cancellations, and reallocation, they went to a periodicals allocation formula. This didn’t work out as well as expected, and was abandoned in 2008.

As a part of their assessment projects in 2008, they looked at the overlap between print and electronic titles to see if they could justify canceling the print in order to address the budget deficit. Most importantly, they wanted to proactively calm the faculty, who were already upset about past cancellations, with assurances that they would not lose access to the titles.

They used their ERMS to generate an overlap analysis report, and after some unnecessarily complicated exporting and sorting, she was able to identify overlaps with their print collection. She then identified the current subscriptions before going to the databases to verify that access was correct and to note any embargo information. This was then combined with budget lines, costs, and three years of usage data (both print and electronic for non-aggregator access).
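The matching step she describes is easy to script once the exports are in hand. A minimal Python sketch, assuming hypothetical CSV exports with issn, title, and embargo_months columns:

```python
import csv

# Load the electronic holdings export (hypothetical file and column names).
with open("e_holdings.csv", newline="") as f:
    e_titles = {row["issn"]: row for row in csv.DictReader(f) if row["issn"]}

# Walk the print subscription list and flag overlaps, noting embargoes.
with open("print_subscriptions.csv", newline="") as f:
    for row in csv.DictReader(f):
        match = e_titles.get(row["issn"])
        if match:
            embargo = match.get("embargo_months") or "none"
            print(f"{row['title']}: online overlap, embargo = {embargo}")
```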

They met their budget target by canceling the print journals, and they used the term “format change” instead of cancel when they communicated with faculty. Faculty showed more support for this approach, and were more willing to advocate for library funds.

Did they consider publications that have color illustrations or other materials that are better in print? Yes, and most of them were retained in print.

Did they look at acquiring other databases to replace additional print cancellations? No, not with their funding situation.

What was the contingency plan for titles removed from the aggregator? They would resubscribe if the faculty asked for it, but the funds would likely come from the monograph budget.

NASIG 2011: Leaving the Big Deal – Consequences and Next Steps

Speaker: Jonathan Nabe

His library has left the GWLA consortial deals for Springer/Kluwer and Wiley-Blackwell, as well as a smaller consortial deal for Elsevier. The end result is a loss of access to a little less than 2,000 titles, but most of those titles had fewer than one download per month in the year prior to departure. So, they feel that for them, ILL is a better price than subscription.

Because of the hoops users must jump through for ILL, he thinks those requests indicate more of a real need than downloads of content that is directly available to the user. Because they retain archival access, withdrawing from the deals only impacts current volumes, and since they left the deals in 2009 and 2010, the time period has been too short to truly determine the impact. However, his conclusion from the low number of ILL requests is that the download stats are inflated by incidental use, repeat use, convenience, and linking methods.

The other area of impact is reaction and response, and so far they have had only three complaints. It could be because faculty are sympathetic, or it could be because they haven’t needed the current content, yet. They have used this as an opportunity to educate faculty about the costs. They also opened up cancellations from the big publishers, spreading the pain more than they could in the past.

In the end, they saved the equivalent of half their monograph budget by canceling the big deals and additional serials. Will the collection be shaped by the contracts they have or by the needs of the community?

Moving forward, they have hit some issues. One is that a certain publisher will impose a 25% content fee to go title by title. Another is that title-by-title purchasing put them back at list price, which is much higher than the capped prices they had under the deal. They were able to alleviate some issues by negotiating multi-year deals that begin with refreshed lists of titles.

The original GWLA deal with Springer allowed for LOCKSS as a means of archival access. However, Springer took the stance that they would not work with LOCKSS, so the lawyers got involved over the apparent breach of contract. In the end, Springer agreed to abide by the terms of the contract and make their content available for LOCKSS harvesting.

Make sure you address license issues before the end of the terms.

Speaker: David Fowler

They left the Elsevier and Wiley deals negotiated by their consortia. They had tried cost-saving measures in the past, eliminating duplication of formats and high-cost, low-use titles, but in the end, they had to consider their big deals.

The first thing they eliminated was pay-per-use access to Elsevier, due to escalating costs and abuse by hackers. The second thing they did was talk to OSU and PSU about collaborative collection development, including a shared collection deal with Elsevier. Essentially, they left the Orbis Cascade deal to make their own.

Elsevier tried to negotiate with the individual schools, but they stood together and were able to reduce the cancellations to 14%, thanks to a reduced content fee. So far, the two-year deal has been good; they are working on a four-year deal, and they won’t exceed their 2009 spend until 2014.

They think the increase in ILL has more to do with their WorldCat Local implementation, as few Elsevier titles were requested. Some faculty are concerned about the loss of low-use, high-cost titles, so they are considering a library-mediated pay-per-view option.

The Wiley deal was through GWLA, and when it came to an end, they determined that they needed to cancel titles that were no longer needed, which meant leaving the deal. They considered going the same route they did with Elsevier, but were too burnt out to move forward. Instead, they have a single-site enhanced license.

We cannot continue to do business as usual. They expect to have to do a round of cancellations in the future.

NASIG 2011: Science Re-Imagined

Speaker: Adam Bly

Bly comes from the perspective that science has the potential to improve the state of the world. We are in a moment of discovering things we have never seen before. We also have the capability to manipulate nature, and that is prompting a need for an ethical framework. And science is now being done in cultures that have rich scientific histories but are not part of the Western tradition.

The amount of data we are now creating is producing a moment of incredible opportunity. Information scientists have the opportunity to influence the literacy of society so that we can take ownership of and understand the data created, particularly that data about ourselves.

Science has the potential to change the world, but only if two conditions exist. The first is that we think of science literacy as something for every single person on this planet. The second is open science — science can’t be proprietary to a company or a country.

We need a new philosophy of scientific literacy. Science is not just about its output. It is a way of thinking. Science is a lens through which we can solve the world’s problems. Science must engage through culture and ideas.

Art has a role in exploring the possibilities of science. Take the book Flatland as an example of exploring something we have trouble understanding.

We didn’t grow up hating science, because we didn’t know it was science. We were all once scientists through our actions and experimentation as children. It is when we associate science with exams and challenging tasks that we begin to hate it.

The future of science is open, not because it should be, but because it needs to be in order for science to progress. Nature does not recognize our disciplines — everything is complex and connected. Science is not a closed system — 65% of scientists cite literature as an influence on their work.

The web that was created by science is not ideal for science. It is disorganized, fragmented, and inefficient. Information is organized by disciplines, decisions are made on lagging indicators, and innovations are driven to preserve a business model rather than to advance research. Scientists deserve better, and society needs better.

Scientists aren’t waiting. They hack things. They fix things. They don’t wait on the product cycle.

Workable open science is based on a digital core that does not inherit the limitations of paper: a mandated free flow of information, subsidized peer review, open standards and interoperability, knowledge derived from information, and modern metrics.

The information of today is complex and vast, and we need a new way to visualize it. We are bringing design and computer science together. Visualizing.org hopes to connect open data with open design.

There is no such thing as “open access,” “open data,” or “open science” — we solve society’s problems with science as a tool and a lens. A 21st century renaissance is science integrated with and not standing in opposition to other forces.

data-crunching librarian

Officially, my title is Electronic Resources Librarian, but lately I’ve been spending more of my time and energy on gathering and crunching data about our eresources than on anything else. It’s starting to bleed over into the print world, as well. Since we don’t have someone dedicated to managing our print journals, I’ve taken on the responsibility of directing discussions about their future, as well as gathering and providing e-only options to the selectors.

I like this work, but I’ve also been feeling a bit like my role is evolving and changing in ways I’m not entirely cognizant of, and that worries me. I came into this job without clear direction and made it my own, and even though I have a department head now, I still often feel like I’m the driver. This has both positives and negatives, and lately I’ve been wishing I could have more outside direction, in part so I don’t feel so much like I’m doing things that may not have much value to the people for whom I am doing them.

However, on Monday, something clicked. A simple comment about using SAS to analyze print book collection use over time set all sorts of things firing away in my head. About all I know about SAS is that it’s some sort of data analysis tool, but I realized that in that moment I had come up with several of my professional goals for the next year.

For one, I want to explore whether or not I can learn and use SAS (or SPSS) effectively to analyze our collections (not just print books, as in the example above). For another, I want to explore whether or not I can learn R to more effectively visualize the data I gather.

Maybe some day down the road my title won’t be Electronic Resources Librarian anymore. Maybe some day it will be Data-Crunching Librarian.

Sounds good to me.

VLACRL Spring 2011: Patron-Driven Acquisitions panel

Speakers from James Madison University, Duke University, and the College of William & Mary

James Madison University has done two trials of patron-driven acquisitions. The first one was mainly for print books that had been requested through interlibrary loan. If the book is from a university press or a new (past two years) imprint, they rush order it through an arrangement with the campus bookstore. The book arrives and is cataloged (actually, the book is cataloged when it’s ordered, saving additional processing time) in about the same time it would take to arrive through the ILL system, and most of these books ended up circulating frequently, with renewals.

Their second trial was for ebooks through their book jobber, Coutts, and its MyiLibrary platform. They used the same parameters as their approval plan and set it up like most PDA ebook programs: drop the records into the catalog, and after X number of “substantial uses” (i.e., not just the table of contents, cover, etc.) the book is purchased from a deposit account fund. They excluded some publishers from the PDA process because they prefer to purchase those books on the publisher’s platform or have other arrangements (e.g., Gale or Wiley). If your library needs certain fields in the MARC records added, removed, or modified, they recommend having the vendor do that rather than touching every record locally, particularly given the volume of records involved.
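The trigger mechanics live on the vendor's side, but the rule is simple enough to sketch. A toy Python model of the threshold logic, where the threshold value and section names are my assumptions, not MyiLibrary's actual terms:

```python
# Assumed threshold; actual values are negotiated per contract.
SUBSTANTIAL_USE_THRESHOLD = 3
# Views of these sections don't count as "substantial use."
NON_SUBSTANTIAL_SECTIONS = {"cover", "table_of_contents", "copyright_page"}

use_counts: dict[str, int] = {}

def record_use(isbn: str, section: str) -> bool:
    """Tally a use; return True the moment a title crosses the purchase threshold."""
    if section in NON_SUBSTANTIAL_SECTIONS:
        return False
    use_counts[isbn] = use_counts.get(isbn, 0) + 1
    return use_counts[isbn] == SUBSTANTIAL_USE_THRESHOLD

# Example: the third substantial use triggers the purchase.
for s in ["cover", "chapter_1", "chapter_2", "chapter_3"]:
    if record_use("9780000000000", s):
        print("Purchase triggered against the deposit account")
```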

The ebook PDA trial was initiated last calendar year, and they found that 75% of the ebooks purchased were used 5-19 times, with an average of 14.77 uses per title. Surprisingly enough, they did not spend out their modest deposit account and were able to roll it over to this year. Already in 2011, they are seeing a 30% increase in purchases.

Duke University was one of the ARL libraries in the eBrary PDA pilot program. Out of the 90,000 titles offered, they culled the list down to 21,000 books published after 2006, with a $275 price limit per title. Even with that, they blew through the deposit account quickly. But they found that the titles purchased were within the scope of what they would have collected anyway, so they added more funds to the deposit account. In the end, they purchased about 348 ebooks for $49,000 – roughly $140 per title – mainly English-language titles from publishers like Wiley, Cambridge, and Oxford, and in areas like business and economics.

Other aspects of the Duke trial: They did not match the 21,000 books up against their approval plan, but used other criteria to select them. They negotiated 10 “clicks” to initiate a purchase (whatever the clicks mean). They were sent approval slips for many of the titles that were purchased, but for whatever reason the selector did not choose them.

About 183 (over 50%) of the ebooks purchased were already owned in print by the library. One of their regrets is not capturing data about the time of day or day of the week the ebooks were accessed. It’s possible that the duplicates were accessed because the user was unable to get to the print book for whatever reason (location, time of day, etc.). Also, two of the books purchased were already owned electronically as part of larger collections, but had not been cataloged individually.

Duke has also done a PDA program with interlibrary loan. The parameters are similar to JMU’s, and they are pushing OCLC to include preferred format in the ILLiad forms, as they would like to purchase ebooks if the user prefers that format.

They are also looking to do some topic-specific PDAs for new programs.

The College of William & Mary is a YBP customer for their print books, but they decided to go with Coutts’ MyiLibrary for their ebook PDA trial. This was initially the source of a great deal of frustration with de-duping records and preventing duplicate purchases. After several months and a duplication rate of as much as 23%, they eventually determined that the cause was a time gap between when Coutts identified new titles for the PDA pool and when W&M sent updates listing what they had purchased in print or electronic form from other sources.

In the end, they spent the $30,000 private Dean’s fund on 415 titles spread fairly evenly across the disciplines. About 45 titles had more than 100 uses, and one title was used 1,647 times (they think that was for a class). Despite that, they have not had to purchase a multi-user license for any title (neither has JMU), so either MyiLibrary is letting in multiple simultaneous users without charging for them, or single-user access simply hasn’t been a bottleneck.

One thing to consider if you are looking to do patron-driven acquisitions with ebooks is the pricing. Ebooks are priced at the same rate as hardcover books, and multi-user licenses usually cost 50% more – a title with a $50 hardcover price would run about $75 with a multi-user license. Plan to get less for the same money if you have been purchasing paperbacks.

There are pros and cons to publicizing the PDA trial during the process. In most cases, you want it to be seamless for the user, so there really isn’t much reason to tell them that they are initiating library purchases when they access the ebooks or request an interlibrary loan book. However, afterwards, it may be a good marketing tool to show how the library is working to remain relevant and spend funds on the specific needs of students/faculty.

COUNTER book reports are helpful for collection assessment, but they don’t quite match up with print browse/circulation counts, so be careful when comparing them. Book Report 2 gives the number of successful section requests for each book, where a section is a chapter or other subdivision of a reference work; this can give you an idea of how much of each book was used.
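If you have BR2 data as a CSV export, ranking titles by section requests takes only a few lines of Python. A sketch that assumes the export has "Title" and "Reporting Period Total" columns with no extra header rows; check your vendor's file before trusting the column names:

```python
import csv

# Rank titles in a COUNTER Book Report 2 export by total section requests.
with open("br2.csv", newline="") as f:
    rows = list(csv.DictReader(f))

rows.sort(key=lambda r: int(r["Reporting Period Total"]), reverse=True)
for row in rows[:20]:  # top 20 most-used ebooks
    print(f'{row["Title"]}: {row["Reporting Period Total"]} section requests')
```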

Final thoughts: as we shift towards purchasing ebooks over print, we should be looking at revising and refining our workflow processes from selection to acquisition to assessment.

“Selectors are more fussy about the [ebook] platform than the students.” – Nancy Gibbs

VLACRL Spring 2011: Building an eReaders Collection at Duke University Libraries

Speaker: Nancy Gibbs

They started lending ereaders because they wanted to provide a way for users to interact with new and emerging technologies. The collection focuses on high-circulation popular reading titles, and they do add patron requests. Recently, they added all of the Duke University Press titles at the press’s request. (Incidentally, not all of the Duke UP titles are available in Kindle format, because Amazon won’t let them sell a book in Kindle format until it has sold 50 print copies.)

They marketed their ereader program through word of mouth, the library website, the student paper, and the communications office. The communications press release was picked up by the local newspaper. They also created a YouTube video explaining how to reserve and check out the ereaders, and gave presentations to the teaching & learning technologists and faculty.

For the sake of consistency and availability of titles, they purchase one copy of a title for every pod of six Kindle ereaders. Amazon allows you to load and view a Kindle book on up to six devices, which is how they arrived at that number. For the Nooks, a book can apparently be loaded on an unlimited number of devices, so they purchase only one copy of a title from Barnes & Noble. They try to have the same titles on both the Kindles and the Nooks, but not every title available for the Kindle is also available for the Nook. Each book purchased is cataloged individually, with its location set to the device it is on, and it appears to be checked out when the device is checked out.

When they first purchased the devices and were figuring out the local workflow for purchasing and loading content, the tech services departments (acquisitions, cataloging, etc.) were given the devices to experiment with. In part, this was to sort out any kinks in the workflow they might discover, but it was also because these folks don’t often get the chance to play with new technology in the library the way their public services counterparts do. Gibbs recommends that libraries purchase the insurance options for the devices, because things can happen.

One of the frustrations with commercial ereader options like the Kindle and Nook is that they are geared towards individual users rather than library use. So, unlike other ebook providers and platforms, they do not give the library any usage data about the books read, which can make collection development in this area somewhat challenging. However, given that the scope is popular reading material and that they take patron requests, this is not as much of an issue as it could be.

Side note: Gibbs pointed out that ebook readers are still not yet greener than print books, mostly because of the toxicity of the materials and the amount of resources that go into producing them. EcoLibris has a great resource page with more information about this.

VLACRL Spring 2011: Clay Shirky, Fantasy Football, and the Future of Library Collections

Speaker: Greg Raschke

Raschke started off with several assumptions about the future of library collections. These should not be a surprise to anyone who’s been paying attention: The economics of our collections is not sustainable – costs and spending have gone up over the years, but there is a ceiling on funding, so we need to lower the costs of the entire system. We’re at a tipping point where just in case no longer delivers at the point of need. We must change the way we collect, and it will be hard, but not impossible.

The old system of supply-side collection development assumes that we’re working with limited resources (i.e. print materials), so we have to buy everything just in case someone needs it 10 years down the road when the book/journal/whatever is out of print. As a result, we judge the quality of a collection by its size, rather than by its relevance to the users. All of this contributes to an inelastic demand for journals and speculative buying.

The new system of demand-driven collections views them as drivers of research and teaching. It’s not really a new concept so much as a new workflow. There’s less tolerance for investing in a low-use collection, so use data is growing in importance, along with modifying what we collect based on that data. The risks of failing to evolve and innovate can be seen in the fate of the newspapers, many of which held onto the old systems for too long and are dying or becoming irrelevant as a result.

Demand-driven collection development can create a tension between the philosophy of librarians as custodians of scholarship and librarians as enablers of a digital environment for scholars. Some think that this type of collection development may result in lower unit costs, but the reality is that unless the traditions of tenure and promotion change, the costs of publishing scholarly works will not go down. One of the difficult aspects of demand-driven collection development is that we won’t be getting new funds to do it – we must free funds from other areas in order to invest in these new methods (i.e., local digital production and patron-driven acquisitions).

The rewards of adapting are well worth it. The more our constituencies use the library and its resources, the more vital we become. Look at your data, and then bet on the numbers. Put resources into enabling a digital environment for your scholars.

Demand-driven collection development is not just patron-driven acquisitions! It’s about becoming an advanced analyst and increasing the precision of collection development. For NCSU’s journal review, they look at downloads, impact factors, publications by NCSU authors, publications that cite NCSU authors, and feedback from the community. These bibliometrics are processed through a variety of formulas to standardize them for comparison and to identify outliers.
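The notes don't say which formulas NCSU uses; one common way to standardize mixed metrics for comparison is the z-score. A minimal Python sketch with made-up download numbers:

```python
from statistics import mean, stdev

# Hypothetical per-journal downloads; the same standardization would apply to
# impact factors, local authorship counts, and citation counts.
downloads = {"Journal A": 5200, "Journal B": 4800, "Journal C": 4100, "Journal D": 150}

mu, sigma = mean(downloads.values()), stdev(downloads.values())
z_scores = {title: (count - mu) / sigma for title, count in downloads.items()}

# Flag outliers more than one standard deviation below the mean.
for title, z in sorted(z_scores.items(), key=lambda kv: kv[1]):
    flag = "  <- review candidate" if z < -1 else ""
    print(f"{title}: z = {z:+.2f}{flag}")
```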

For print resources, they pulled circulation and bibliographic information out of their ILS and dropped it into SAS to assess the use of these materials over time. It was eye-opening to see which subject areas saw more than one circulation in the 10 years after titles were added to the collection and which saw none. As a result, they were able to identify funds that could go towards supporting other areas of the collection, and they modified the scopes of their approval profiles. [A stacked graph showing the use of their collection – print circulation, ejournal/ebook downloads, reserves, and ILL – has been one of their most popular promotional tools.]
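You don't need SAS for this kind of analysis. A sketch of the same idea in Python with pandas, assuming a hypothetical ILS export with call_number_class, year_added, and total_circs columns:

```python
import pandas as pd

# Hypothetical export of circulation and bib data from the ILS;
# the file name and column names are assumptions.
df = pd.read_csv("circ_export.csv")

# Keep titles that have been in the collection at least 10 years.
df = df[df["year_added"] <= 2001]

# Share of titles in each subject class that circulated more than once,
# and the share that never circulated at all.
df["circ_gt_one"] = df["total_circs"] > 1
df["never_circ"] = df["total_circs"] == 0
summary = df.groupby("call_number_class")[["circ_gt_one", "never_circ"]].mean()
summary["titles"] = df.groupby("call_number_class").size()

print(summary.sort_values("never_circ", ascending=False))
```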

As we shift to a demand-driven collection development approach, we will better be able to provide content at the point of need. This includes incorporating more than just our local collections (i.e. adding HathiTrust and other free resources to our catalog). Look to fund patron-driven acquisitions that occur both in the ebook purchasing models and through ILL requests. Integrate electronic profiling with your approval plans so that you are not just looking at purchasing print. Consider ebook packages to lower the unit costs, and use short-term loans for ebooks as an alternative to ILL. Get content to users in the mode they want to consume it. Do less speculative buying, and move money into new areas. It is imperative that libraries/librarians collaborate with each other in digital curation, digital collections, and collective bargaining for purchases.

There are challenges, of course. You will encounter the CAVE people. Data-driven and user-driven approaches can punish niche areas, disciplinary variation, and resources without data. The applications and devices we use to interact with digital content are highly personalized, which is a challenge for standardizing access.

I asked Raschke to explain how he evaluates resources that don’t have use data, and he says he’s more likely to stop buying them. For some resources, he can look at proxy logs and whether they are being cited by authors at his institution, but otherwise there isn’t enough data beyond user feedback.

wanna stay in the loop?

The godfather of libraryland news and information sharing, Blake Carver, has a new endeavor: LISEvents. Sure, there are library conferences, workshops, and other activities listed all over the ‘net, but few of them look this good. As a bonus, in addition to event organizers listing their information, speakers can make themselves known by adding expertise and contact information.

I foresee LISEvents becoming the go-to place for event organizers, speakers, and audiences. So, jump on the train now before it leaves the station!
