Speakers: Doralyn Rossmann & Nathan Hosburgh, Montana State University
Proactive deselection means getting rid of things before you have to. It makes it possible to meet requests for new subscriptions, adjust for curricular and research changes, redirect funds elsewhere, and reduce the budget if needed.
Step one: Identify core journals. This sets a positive tone. They created lists organized by LC class and provided them to the liaisons for departments. (They filtered out packages and JSTOR collections.) The communication with faculty varied by librarian, as did the type of feedback provided. This resulted in some requests for new subscriptions, and it enhanced the credibility of the library as a good steward.
They kept track of who said what in the feedback, so that if down the road that person left, they could revisit the titles.
Step two: Journal coverage in a unified discovery tool. They identified the compartmentalized/marginalized titles that were not included in the unified discovery tool index (report from vendor).
Step three: Database coverage in a unified discovery tool. It can be challenging to make sure the comparison is even. Also, what is a database versus a journal package with a searchable interface? It's not clear how they compared A&I information, since there is no good tool for that kind of overlap.
Step four: Usage statistics. Typical challenges (which COUNTER format, not COUNTER, no stats, changing platforms, etc.) along with timeliness and file format. This also identified resources that were not listed on the DB page.
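As a hedged sketch of the cost-per-use math this step enables (all titles and figures below are invented; real inputs would be invoices and COUNTER reports):

```python
# Hypothetical subscription costs and annual full-text download counts;
# real numbers would come from invoices and COUNTER reports.
subscriptions = {
    "Journal of Example Studies": {"cost": 1200.00, "downloads": 480},
    "Annals of Hypothetical Data": {"cost": 950.00, "downloads": 19},
    "Placeholder Review Letters": {"cost": 2400.00, "downloads": 0},
}

def cost_per_use(cost, downloads):
    """Return cost per download, or None when there is no recorded use."""
    return round(cost / downloads, 2) if downloads else None

for title, info in subscriptions.items():
    cpu = cost_per_use(info["cost"], info["downloads"])
    # The $30 threshold is an arbitrary placeholder, not a standard.
    flag = "review" if cpu is None or cpu > 30 else "keep"
    print(f"{title}: {cpu} ({flag})")
```

The titles with no recorded use are exactly the ones this step surfaced as "not listed on the DB page."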
Step five: Coverage in A&I databases. This may help identify A&I sources you should add, but it’s time consuming and may not have big payoffs if you are emphasizing a discovery service as a primary search interface.
Step six: Coverage in aggregators or freely available. Can be risky, though.
Step seven: Other considerations. Impact factor — does it matter? Cost metrics, alternative access options like PPV or ILL, swappability in big deal packages.
Step eight: Feedback from liaisons. Get input on titles considered for cancellation. Share externally to make sure that everyone is on board and has time to comment.
Step nine: Do we have the right stuff? Review ILL statistics and compare with download stats (should be trending down as subscriptions go up). Citation studies, LibQual+, and liaison communication. Publicize what was added each year with freed funds, and which department requested it.
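The ILL-versus-downloads check in step nine might look something like this (yearly counts invented):

```python
# Hypothetical yearly counts; real data would come from the ILL system
# and COUNTER reports.
ill_requests = {2009: 140, 2010: 110, 2011: 85}
downloads = {2009: 3200, 2010: 4100, 2011: 5300}

def trending_down(series):
    """True when each year's count is at or below the previous year's."""
    values = [series[year] for year in sorted(series)]
    return all(b <= a for a, b in zip(values, values[1:]))

print("ILL trending down:", trending_down(ill_requests))   # expect True
print("Downloads rising:", not trending_down(downloads))   # expect True
```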
They plan to review this every year, and keep it updated with additions/deletions and coverage information. They are also considering the sustainability of high cost packages plus inflation.
Updates from Serials Solutions – mostly Resource Manager (Ashley Bass):
Keep up to date with ongoing enhancements for management tools (quarterly releases) by following answer #422 in the Support Center, and via training/overview webinars.
Populating and maintaining the ERM can be challenging, so they focused a lot of work this year on that process: license template library, license upload tool, data population service, SUSHI, offline date and status editor enhancements (new data elements for sort & filter, new logic, new selection elements, notes), and expanded and additional fields.
Workflow, communication, and decision support enhancements: in-context help linking, contact tool filters, navigation, new COUNTER reports, more information about vendors, COUNTER summary page, etc. Her favorite new feature is “deep linking” functionality (aka persistent links to records in SerSol). [I didn’t realize that wasn’t there before — been doing this for my own purposes for a while.]
Next up (in two weeks, 4th quarter release): new alerts, resource renewals feature (reports! and checklist!, will inherit from Admin data), Client Center navigation improvements (i.e. keyword searching for databases, system performance optimization), new license fields (images, public performance rights, training materials rights) & a few more, COUNTER updates, SUSHI updates (making customizations to deal with vendors who aren’t strictly following the standard), gathering stats for Springer (YTD won’t be available after Nov 30 — up to Sept avail now), and online DRS form enhancements.
In the future: license API (could allow libraries to create a different user interface), contact tools improvements, interoperability documentation, new BI tools and reporting functionality, and improving the Client Center.
Also, building a new KB (2014 release) and a web-scale management solution (Intota, also coming 2014). They are looking to have more internal efficiencies by rebuilding the KB, and it will include information from Ulrich’s, new content types metadata (e.g. A/V), metadata standardization, industry data, etc.
Summon Updates (Andrew Nagy):
I know very little about Summon functionality, so just listened to this one and didn’t take notes. Take-away: if you haven’t looked at Summon in a while, it would be worth giving it another go.
360 Link Customization via JavaScript and CSS (Liz Jacobson & Terry Brady, Georgetown University):
Goal #1: Allow users to easily link to full-text resources. Solution: Go beyond the out-of-the-box 360 Link display.
Goal #2: Allow users to report problems or contact library staff at the point of failure. Solution: eresources problem report form
They created the eresources problem report form using Drupal. The fields include contact information, description of the resource, description of the problem, and the ability to attach a screenshot.
When they evaluated the slightly customized out-of-the-box 360 Link page, they determined that it was confusing to users, with too many options and confusing links. So, they took some inspiration from other libraries (Matthew Reidsma’s GVSU jQuery code available on GitHub) and developed a prototype that uses custom JavaScript and CSS to walk the user through the process.
Some enhancements included: turning the full-text links (article & journal) into buttons, hiding additional help information and giving some hover-over information, parsing the citation into the problem report page, and moving the citation below the links to full text. For journal citations with no full text, they made the links to the catalog search large buttons with more text detail in them.
One challenge of implementing these changes was the lack of a test environment, because of the limited preview capabilities in 360 Link. Any changes actually made required an overnight refresh before going live, opening the risk of 24-hour windows of broken resource links. So, they created their own test environment by converting test scenarios into static HTML files and wrapping them in their own custom PHP to mimic the live pages without having to work with the live pages.
[At this point, it got really techy and lost me. Contact the presenters for details if you’re interested. They’re looking to go live with this as soon as they figure out a low-use time that will have minimal impact on their users.]
Customizing 360 Link menu with jQuery (Laura Wrubel, George Washington University)
They wanted to give users better visual cues, emphasize the full text, have more local control over links, and integrate visually with other library tools so it’s more seamless for users.
They started with Reidsma’s code, then forked off from it. They added a problem link to a Google form, fixed ebook chapter links and citation formatting, created conditional links to the catalog, and linked to their other library’s link resolver.
They hope to continue to tweak the language on the page, particularly for ILL suggestion. The coverage date is currently hidden behind the details link, which is fine most of the time, but sometimes that needs to be displayed. They also plan to load the print holdings coverage dates to eliminate confusion about what the library actually has.
In the future, they would rather use the API and blend the link resolver functionality with catalog tools.
Custom document delivery services using 360 Link API (Kathy Kilduff, WRLC)
They facilitate inter-consortial loans (Consortium Loan Service), and originally requests were only done through the catalog. When they started using SFX, they added a link there, too. Now that they have 360 Link, they still have a link there, but now the request form is prepopulated with all of the citation information. In the background, they are using the API to gather the citation information, as well as checking to see if there are terms of use, and then checking to see if there are ILL permissions listed. They provide a link to the full-text in the staff client developed for the CLS if the terms of use allow for ILL of the electronic copy. If there isn’t a copy available in WRLC, they forward the citation information to the user’s library’s ILL form.
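This is not the actual 360 Link API, but the citation-prepopulation idea can be sketched by parsing an OpenURL-style query string into request-form fields (the form field names here are illustrative):

```python
from urllib.parse import parse_qs

def citation_from_openurl(query):
    """Pull citation fields out of an OpenURL-style query string to
    prepopulate a request form. Form field names are illustrative,
    not those of the WRLC staff client."""
    params = parse_qs(query)
    mapping = {"rft.atitle": "article_title", "rft.jtitle": "journal",
               "rft.issn": "issn", "rft.volume": "volume",
               "rft.spage": "start_page", "rft.date": "year"}
    return {form_field: params[key][0]
            for key, form_field in mapping.items() if key in params}

query = ("rft.atitle=Example+Article&rft.jtitle=Journal+of+Examples"
         "&rft.issn=1234-5678&rft.volume=12&rft.spage=34&rft.date=2011")
print(citation_from_openurl(query))
```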
License information for course reserves for faculty (Shanyun Zhang, Catholic University)
They included course reserves in the license information, but then it became an issue to convey that information to faculty who were used to negotiating it with publishers directly. Most faculty prefer to use Blackboard for course readings and handle it themselves. But, they need to figure out how to incorporate the library in the workflow. Looking for suggestions from the group.
Advanced Usage Tracking in Summon with Google Analytics (Kun Lin, Catholic University)
In order to tweak the user experience, you need to know who, what, when, how, and most important, what were they thinking. Google Analytics can help figure those things out in Summon. Parameters are easy ways to track facets, and you can use the data from Google Analytics to figure out the story based on that. To track things the “hard way,” you can use the conversion/goal function of Google Analytics. But, you’ll need to know a little about coding to make it work, because you have to add some JavaScript to your Summon pages.
Use of ERM/KB for collection analysis (Mitzi Cole, NASA Goddard Library)
Used the overlap analysis to compare print holdings with electronic and downloaded the report. The partial overlap can actually be a full overlap if the coverage dates aren’t formatted the same, but otherwise it’s a decent report. She incorporated license data from Resource Manager and print collection usage pulled from her ILS. This allowed her to create a decision tool (spreadsheet), and denoted the print usage in 5 year increments, eliminating previous 5 years use with each increment (this showed a drop in use over time for titles of concern).
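The coverage-date mismatch she describes can be worked around by normalizing dates before comparing. A minimal sketch (the formats handled here are illustrative, not necessarily what the overlap report contains):

```python
import re

def normalize_coverage(date_str):
    """Reduce a coverage date string to a (year, month) tuple so that
    '01/1995', '1995-01', and 'Jan 1995' compare as equal. The input
    formats are assumptions for illustration."""
    months = {m: i + 1 for i, m in enumerate(
        ["jan", "feb", "mar", "apr", "may", "jun",
         "jul", "aug", "sep", "oct", "nov", "dec"])}
    s = date_str.strip().lower()
    m = re.match(r"(\d{1,2})/(\d{4})$", s)        # e.g. 01/1995
    if m:
        return int(m.group(2)), int(m.group(1))
    m = re.match(r"(\d{4})-(\d{1,2})$", s)        # e.g. 1995-01
    if m:
        return int(m.group(1)), int(m.group(2))
    m = re.match(r"([a-z]{3})\w*\s+(\d{4})$", s)  # e.g. Jan 1995
    if m and m.group(1) in months:
        return int(m.group(2)), months[m.group(1)]
    raise ValueError(f"unrecognized coverage date: {date_str}")

print(normalize_coverage("01/1995") == normalize_coverage("Jan 1995"))  # True
```

With dates normalized this way, a "partial" overlap caused only by formatting collapses back into a full overlap.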
Discussion of KnowledgeWorks Management/Metadata (Ben Johnson, Lead Metadata Librarian, Serials Solutions)
After they get the data from the provider or it is made available to them, they have a system to automatically process the data so it fits their specifications, and then it is integrated into the KB.
They deal with a lot of bad data. 90% of databases change every month. Publishers have their own editorial policies that display the data in certain ways (e.g., title lists) and deliver inconsistent, and often erroneous, metadata. The KB team tries to catch everything, but some things still slip through. Throughout the data ingestion process, they apply rules based on past experience with the data source. After that, the data is normalized so that various title/ISSN/ISBN combinations can be associated with the authority record. Finally, the data is incorporated into the KB.
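One small, concrete piece of this kind of normalization is validating ISSNs against their mod-11 check digit (the standard ISSN algorithm); records that fail can be flagged for the authority process. A minimal sketch:

```python
def valid_issn(issn):
    """Validate an ISSN using its mod-11 check digit: weights 8..2
    over the first seven digits; a check value of 10 is written 'X'."""
    digits = issn.replace("-", "").upper()
    if len(digits) != 8:
        return False
    try:
        total = sum(int(d) * w for d, w in zip(digits[:7], range(8, 1, -1)))
    except ValueError:
        return False  # non-digit in the body of the ISSN
    check = (11 - total % 11) % 11
    expected = "X" if check == 10 else str(check)
    return digits[7] == expected

print(valid_issn("0028-0836"))  # Nature's ISSN: True
print(valid_issn("1234-5678"))  # bad check digit: False
```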
Authority rules are used to correct errors and inconsistencies. Rules automatically and consistently correct holdings, and they are often used to correct vendor reporting problems. Rules are codified by provider and database, with 76,000+ applied to thousands of databases, and 200+ new rules added each month.
Why does it take two months for KB data to be corrected when I report it? Usually it’s because they are working with the data providers, and some respond more quickly than others. They are hoping that being involved with various initiatives like KBART will help fix data from the provider so they don’t have to worry about correcting it for us, but also making it easier to make those corrections by using standards.
Client Center ISSN/ISBN doesn’t always work in 360 Link, which may have something to do with the authority record, but it’s unclear. It’s possible that there are some data in the Client Center that haven’t been normalized, which could cause this disconnect. And sometimes the provider doesn’t send both print and electronic ISSN/ISBN.
What is the source for authority records for ISSN/ISBN? LC, Bowker, ISSN.org, but he’s not clear. Clarification: Which field in the MARC record is the source for the ISBN? It could be the source of the normalization problem, according to the questioner. Johnson isn’t clear on where it comes from.
His position is new for his library (July 2011), and when Barbara Fister saw the job posting, she lamented that user-centered collection development would relegate librarians to signing licenses and paying invoices, but Sowell doesn’t agree.
Values and assumptions: As an academic library, we derive our reason for existing from our students and faculty. Our collections are a means to an end, rather than an end in themselves. They can do this in part because they don’t have ARL-like expectations of themselves. A number of studies have shown that users do a better job of selecting materials than we do, and they’ve been moving to more of a just-in-time model than a just-in-case one.
They have had to deal with less money and many needs, so they’ve gotten creative. The university recently realigned departments and positions, and part of that included the creation of the Collections & Resource Sharing Department (CRSD). It’s nicknamed the “get it” department. Their mission is to connect the community to the content.
PDA, POV, PPV, approval plans, shelf-ready, and shared preservation are just a few of the things that have changed how we collect and do budget planning.
CRSD includes collection development, electronic resources, collections management, resource sharing & delivery, and circulation (refocusing on customer service and self-service, as well as some IT services). However, this is a new department, and Sowell speaks more about what these units will be doing than about what they are doing or how effective the change has been.
One of the things they’ve done is to rewrite position descriptions to refocus on the department goals. They’ve also been focusing on group facilitation and change management through brainstorming, parking lot, and multi-voting systems. Staff have a lot of anxiety over feeling like an expert in something and moving to where they are a novice and having to learn something new. They had to say goodbye to the old routines, mix them with new, and then eventually make the full shift.
They are using process mapping to keep up with the workflow changes. They’re also using service design tools like journey mapping (visualization of the user’s experience with a service), five whys, personas, experience analogy, and storyboards (visualization of how you would like things to occur).
For the reference staff, they are working on strategic planning about the roles and relationships of the librarians with faculty and collections.
Change takes time. When he proposed this topic, he expected to be further along than he is. Good communication, systems thinking, and staff involvement are very important. There is a delicate balance between the uncertain/abstract and the desire for the concrete.
Some unresolved issues include ereaders, purchasing rather than borrowing via ILL and the impact on their partner libraries, the role of the catalog as an inventory in the world of PDA/PPV, and the re-envisioning of the collection budget as a just-in-time resource. Stakeholder involvement and assessment wrap up the next steps portion of his talk.
Questions:
In moving print to the collection maintenance area, how are you handling bundled purchases (print + online)? How are you handling the impression of importance or lack thereof for staff who still work with traditional print collection management? Delicately.
Question about budgeting. Not planning to tie PDA/PPV to specific subjects. They plan to do an annual review of what was purchased and what might have been had they followed their old model.
How are they doing assessment criteria? Not yet, but will take suggestions. Need to tie activities to student academic success and teaching/researching on campus. Planning for a budget cut if they don’t get an increase to cover inflation. Planning to do some assessment of resource use.
What will you do if people can’t do their new jobs? Hopefully they will after the retraining. Will find a seat for them if they can’t do what we hope they can do.
What are you doing to organize the training so they don’t get mired in the transitional period? Met with staff to reassure them that the details will be worked out in the process. They prepared the ground a bit, and the staff are ready for change.
Question about the digital divide and how that will be addressed. Content is available on university equipment, so not really an issue/barrier.
What outreach/training to academic departments? Not much yet. Will honor print requests. Subject librarians will still have a consultative role, but not necessarily item by item selection.
His library has left the GWLA Springer/Kluwer and Wiley-Blackwell consortial deals, and a smaller consortial deal for Elsevier. The end result is a loss of access to a little less than 2000 titles, but most of the titles had fewer than one download per month in the year prior to departure. So, they feel that ILL is a better price than subscription for them.
Because of the hoops users must jump through for ILL, he thinks those requests indicate more of a real need than downloads of content available directly to the user. Because they retain archival access, withdrawing from the deals only impacts current volumes, and the time period has been too short to truly determine the impact, as they left the deals in 2009 and 2010. However, his conclusion based on the low ILL requests is that the download stats are not accurate due to incidental use, repeat use, convenience, and linking methods.
The other area of impact is reaction and response, and so far they have had only three complaints. It could be because faculty are sympathetic, or it could be because they haven’t needed the current content, yet. They have used this as an opportunity to educate faculty about the costs. They also opened up cancellations from the big publishers, spreading the pain more than they could in the past.
In the end, they saved the equivalent of half their monograph budget by canceling the big deals and additional serials. Will the collection be based on the contracts they have or by the needs of the community?
Moving forward, they have hit some issues. One is that a certain publisher will impose a 25% content fee to go title by title. Another issue is that title by title purchasing put them back at the list price which is much higher than the capped prices they had under the deal. They were able to alleviate some issues with negotiation and agreeing to multi-year deals that begin with the refreshed lists of titles.
The original GWLA deal with Springer allowed for LOCKSS as a means of archival access. However, Springer took the stance that they would not work with LOCKSS, so the lawyers got involved over the apparent breach of contract. In the end, Springer agreed to abide by the terms of the contract and make their content available for LOCKSS harvesting.
Make sure you address license issues before the end of the terms.
Speaker: David Fowler
They left the Elsevier and Wiley deals for their consortia. They had done cost-saving measures in the past, eliminating duplication of format and high-cost & low-use titles, but in the end, they had to consider their big deals.
The first thing they eliminated was the pay per use access to Elsevier due to escalating costs and hacking abuse. The second thing they did was talk to OSU and PSU about collaborative collection development, including a shared collection deal with Elsevier. Essentially, they left the Orbis Cascade deal to make their own.
Elsevier tried to negotiate with the individual schools, but they stood together and were able to reduce the cancellations to 14% due to a reduced content fee. So far, the 2 year deal has been good, and they are working on a 4 year deal, and they won’t exceed their 2009 spend until 2014.
They think that ILL increase has more to do with WorldCat Local implementation, and few Elsevier titles were requested. Some faculty are concerned about the loss of low use high cost titles, so they are considering a library mediated pay-per-view option.
The Wiley deal was through GWLA, and when it came to the end, they determined that they needed to cancel titles that were not needed anymore, which meant leaving the deal. They considered going the same route they did with Elsevier, but were too burnt out to move forward. Instead, they have a single-site enhanced license.
We cannot continue to do business as usual. They expect to have to do a round of cancellations in the future.
Speaker: Greg Raschke
Raschke started off with several assumptions about the future of library collections. These should not be a surprise to anyone who’s been paying attention: The economics of our collections is not sustainable – the cost and spend has gone up over the years, but there is a ceiling to funding, so we need to lower the costs of the entire system. We’re at a tipping point where just in case no longer delivers at the point of need. We must change the way we collect, and it will be hard, but not impossible.
The old system of supply-side collection development assumes that we’re working with limited resources (i.e. print materials), so we have to buy everything just in case someone needs it 10 years down the road when the book/journal/whatever is out of print. As a result, we judge the quality of a collection by its size, rather than by its relevance to the users. All of this contributes to an inelastic demand for journals and speculative buying.
The new system of demand-driven collections views them as drivers of research and teaching. It’s not really a new concept so much as a new workflow. There’s less tolerance for investing in a low-use collection, so use data becomes more important, and we modify what we collect based on it. The risks of not evolving and failing to innovate can be seen in the fate of the newspapers, many of which held onto the old systems for too long and are dying or becoming irrelevant as a result.
Demand-driven collection development can create a tension between the philosophy of librarians as custodians of scholarship and librarians as enablers of a digital environment for scholars. Some think that this type of collection development may result in lower unit costs, but the reality is that unless the traditions of tenure and promotion change, the costs of publishing scholarly works will not go down. One of the challenging/difficult aspects of demand-driven collection development is that we won’t be getting new funds to do it – we must free funds from other areas in order to invest in these new methods (i.e. local digital production and patron-driven acquisitions).
The rewards of adapting are well worth it. The more our constituencies use the library and its resources, the more vital we become. Look at your data, and then bet on the numbers. Put resources into enabling a digital environment for your scholars.
Demand-driven collection development is not just patron-driven acquisitions! It’s about becoming an advanced analyst and increasing the precision in collection development. For NCSU‘s journal review, they look at downloads, impact factors, publications by NCSU authors, publications that cite NCSU authors, and gather feedback from the community. These bibliometrics are processed through a variety of formulas to standardize them for comparison and to identify outliers.
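The talk doesn't specify NCSU's formulas; one common way to standardize metrics with different scales for comparison and outlier detection is a z-score, sketched here with made-up download counts:

```python
import statistics

def z_scores(values):
    """Standardize a metric so titles can be compared across metrics
    with different scales (downloads vs. impact factor, etc.)."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [round((v - mean) / stdev, 2) for v in values]

# Hypothetical downloads for five titles; a strongly negative score
# marks a potential cancellation candidate, a high one a core title.
downloads = [5200, 4800, 5100, 300, 4900]
print(z_scores(downloads))
```

Once every metric is on the same scale, the outliers stand out regardless of which metric produced them.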
For print resources, they pulled circulation and bibliographic information out of their ILS and dropped it into SAS to assess the use of these materials over time. It was eye-opening to see what subject areas saw circulation greater than one over 10 years from the year they were added to the collection and those that saw no circulations. As a result, they were able to identify funds that could go towards supporting other areas of the collection, and they modified the scopes of their approval profiles. [A stacked graph showing the use of their collection, such as print circulation, ejournals/books downloads, reserves, and ILL has been one of their most popular promotional tools.]
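A hedged sketch of the kind of circulation analysis described, with hypothetical records standing in for the ILS export (the speakers used SAS; plain Python shown here):

```python
# Hypothetical records: (LC class, year added, circulations in the
# 10 years after adding). Real data would come from the ILS.
records = [
    ("QA", 2000, 14), ("QA", 2001, 0), ("PR", 2000, 3),
    ("PR", 2001, 0),  ("Z", 2000, 0),  ("Z", 2001, 0),
]

def never_circulated(records):
    """Return LC classes in which no title circulated at all --
    candidates for narrowing the approval profile."""
    circulated = {cls for cls, _, count in records if count > 0}
    return sorted({cls for cls, _, _ in records} - circulated)

print(never_circulated(records))
```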
As we shift to a demand-driven collection development approach, we will better be able to provide content at the point of need. This includes incorporating more than just our local collections (i.e. adding HathiTrust and other free resources to our catalog). Look to fund patron-driven acquisitions that occur both in the ebook purchasing models and through ILL requests. Integrate electronic profiling with your approval plans so that you are not just looking at purchasing print. Consider ebook packages to lower the unit costs, and use short-term loans for ebooks as an alternative to ILL. Get content to users in the mode they want to consume it. Do less speculative buying, and move money into new areas. It is imperative that libraries/librarians collaborate with each other in digital curation, digital collections, and collective bargaining for purchases.
There are challenges, of course. You will encounter the CAVE people. Data-driven and user-driven approaches can punish niche areas, disciplinary variation, and resources without data. The applications and devices we use to interact with digital content are highly personalized, which is a challenge for standardizing access.
I asked Raschke to explain how he evaluates resources that don’t have use data, and he says he’s more likely to stop buying them. For some resources, he can look at proxy logs and whether they are being cited by authors at his institution, but otherwise there isn’t enough data beyond user feedback.
In 1997, ebooks were on CD-ROM and came with large paper books to explain how to use them, along with the same concerns about platforms we have today.
Current sales models involve purchase by individual libraries or consortia, patron-driven acquisition models, and subscriptions. Most of this presentation is a sales pitch for EBSCO and nothing you don’t already know.
Speaker: Leslie Lees (ebrary)
Ebrary was founded a year after NetLibrary and was acquired by ProQuest last year. They have similar models, with one slight difference: short term loans, which will be available later this spring.
Now that there is no longer a need to acquire books simply because they may be hard to get later, do we need to be building collections, or can we move to an on-demand model?
He thinks that platforms will move towards focusing more on access needs than on reselling content.
Speaker: Bob Nardini (Coutts)
They are working with a variety of incoming files and outputting them in any format needed by the distributors they work with, both ebook and print on demand.
A recent study found that academic libraries have a significant amount of overlap between their ebook and print collections.
They are working on approval plans for print and ebooks. The timing of the releases of each format can complicate things, and he thinks their model mediates that better. They are also working on interlibrary loan of ebooks and local POD.
Because they work primarily with academic libraries, they are interested in models for archiving ebooks. They are also looking into download models.
Speaker: Mike (OverDrive)
He sees the company as an advocate for libraries. Promises that there will be more DRM-free books and options for self-published authors. He recommends their resource for sharing best practices among librarians.
Questions:
What is going on with DRM and ebooks? What mechanisms do your products use?
Adobe Digital Editions is the main mechanism for OverDrive. Policies are set by the publishers, so all they can do is advocate for libraries. Ebrary and NetLibrary have proprietary software to manage DRM. Publishers are willing to give DRM-free access, but not consistently, and not for their “best” content.
It is hard to get content onto devices. Can you agree on a single standard content format?
No response, except to ask if they can set prices, too.
Adobe became the de facto solution, but it doesn’t work with all devices. Should we be looking for a better solution?
That’s why some of them are working on their own platforms and formats. ePub has helped the growth of ebook publishing, and may be the direction.
Public libraries need full support for these platforms – can you do that?
They try the best they can. OverDrive offers secondary support. They are working on front-line tech support and hope to offer it soon.
Do publishers work with all platforms or are there exclusive arrangements?
It varies.
Do you offer more than 10 pages at a time for downloads of purchased titles?
Ebrary tries to do it at the chapter level, and the same is probably true of the rest. EBSCO is asking for the right to print up to 60 pages at a time.
What are the kinds of problems with collecting COUNTER and other reports? What do you do with them when you have them?
What is a good cost per use? Compare it to the alternative like ILL. For databases, trends are important.
Non-COUNTER stats can be useful to see trends, so don’t discount them.
Do you incorporate data about the university in making decisions? Rankings in value from faculty or students (using star ratings in LibGuides or something else)?
When usage is low and cost is high, that may be the best thing to cancel in budget cuts, even if everyone thinks it’s important to have the resource just in case.
How about using stats for low use titles to get out of a big deal package? Comparing the cost per use of core titles versus the rest, then use that to reconfigure the package as needed.
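As a hedged illustration of that core-versus-the-rest comparison (all prices and usage figures invented):

```python
# Hypothetical package data: title -> (list price, annual downloads).
package = {
    "Core Title A":     (2000, 4000),
    "Core Title B":     (1800, 2500),
    "Marginal Title C": (1500, 12),
    "Marginal Title D": (1700, 3),
}
package_cost = 5500  # hypothetical bundled price for all four titles

# "Core" here is an arbitrary usage threshold for illustration.
core = {t for t, (price, use) in package.items() if use >= 100}
core_list_price = sum(price for t, (price, use) in package.items()
                      if t in core)

# If the core titles alone at list price cost less than the bundle,
# the long tail is not paying its way.
print("core list price:", core_list_price, "vs package:", package_cost)
```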
How about calculating the cost per use from month to month?
For those familiar with link resolvers, article-level linking means the best link type to give to users. The article is the object of pursuit, and the library and the user collaborate on identifying it, locating it, and acquiring it.
In 1980, the only good article-level identification was the Medline ID. Users would need to go through a qualified Medline search to track down relevant articles, and the library would need the article level identifier to make a fast request from another library. Today, the user can search Medline on their own; use the OpenURL linking to get to the full text, print, or ILL request; and obtain the article from the source or ILL. Unlike in 1980, the user no longer needs to find the journal first to get to the article. Also, the librarian’s role is more in providing relevant metadata maintenance to give the user the tools to locate the articles themselves.
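The OpenURL step can be sketched by building a Z39.88-2004 key/encoded-value link from citation metadata (the resolver base URL and citation are hypothetical):

```python
from urllib.parse import urlencode

def build_openurl(resolver_base, citation):
    """Build an OpenURL 1.0 (Z39.88-2004) key/encoded-value link
    from journal article metadata."""
    params = {
        "url_ver": "Z39.88-2004",
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
        "rft.genre": "article",
    }
    params.update({f"rft.{k}": v for k, v in citation.items()})
    return resolver_base + "?" + urlencode(params)

# Hypothetical resolver and citation for illustration.
link = build_openurl("https://resolver.example.edu/openurl", {
    "atitle": "An Example Article", "jtitle": "Journal of Examples",
    "issn": "1234-5678", "volume": "12", "spage": "34", "date": "2011",
})
print(link)
```

The resolver receiving this link does the work of finding an instance of the article the user is entitled to.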
In thirty years, the library has moved from being a partner with the user in pursuit of the article to being the magician behind the curtain. Our magic is made possible by the technology we know but that our users do not know.
Unique identifiers solve the problem of making sure that you are retrieving the correct article. CrossRef can link to specific instances of items, but not necessarily the one the user has access to. The link resolver will use that DOI to find other instances of the article available to users of the library. Easy user authentication at the point of need is the final key to implementing article-level services.
One of the library’s biggest roles is facilitating access. It’s not as simple as setting up a link resolver – it must be maintained or the system will break down. Also, document delivery service provides an opportunity to generate goodwill between libraries and users. The next step is supporting the user’s preferred interface, through tools like LibX, Papers, Google Scholar link resolver integration, and mobile devices. The latter is the most difficult, because much of the content comes from outside service providers and requires institutional support for developing applications or web interfaces.
We also need to consider how we deliver the articles users need. We need to evolve our acquisitions process. We need to be ready for article-level usage data, so we need to stop thinking about it as a single-institutional data problem. Aggregated data will help spot trends. Perhaps we could look at the ebook pay-as-you-use model for article-level acquisitions as well?
PIRUS & PIRUS 2 are projects to develop COUNTER-compliant article usage data for all article-hosting entities (both traditional publishers and institutional repositories). Projects like MESUR will inform these kinds of ventures.
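The move from single-institution to aggregated, PIRUS-style article usage data amounts to pooling per-article counts across reporting entities. A minimal sketch, with invented DOIs and counts standing in for COUNTER-style article reports:

```python
from collections import Counter

# Hypothetical monthly article-level usage reports from three
# institutions: {DOI: download count}. Real PIRUS/COUNTER data
# would carry more fields; this keeps only the counts.
reports = [
    {"10.1000/a1": 12, "10.1000/a2": 3},
    {"10.1000/a1": 7,  "10.1000/a3": 5},
    {"10.1000/a2": 9,  "10.1000/a3": 1},
]

pooled = Counter()
for report in reports:
    pooled.update(report)  # sums counts per DOI across institutions

# Rank articles by aggregate use to spot cross-institution trends.
trending = pooled.most_common()
print(trending)  # [('10.1000/a1', 19), ('10.1000/a2', 12), ('10.1000/a3', 6)]
```

The point of the aggregation is exactly what the notes above argue: trends that are invisible in any one institution's numbers become visible in the pooled ranking.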
Libraries need to be working on recommendation services. Amazon and Netflix are not flukes. Demand, adopt, and promote recommendation tools like bX or LibraryThing for Libraries.
Users are going beyond locating and acquiring the article to storing, discussing, and synthesizing the information. The library could facilitate that. We need something that lets the user connect with others, store articles, and review recommendations that the system provides. We have the technology (magic) to make it available right now: data storage, cloud applications, targeted recommendations, social networks, and pay-per-download.
How do we get there? Cover the basics of identify>locate>acquire. Demand tools that offer services beyond that, or sponsor the creation of desired tools and services. We also need to stay informed of relevant standards and recommendations.
Publishers will need to be a part of this conversation as well, of course. They need to develop models that allow us to retain access to purchased articles. If we are buying on the article level, what incentive is there to have a journal in the first place?
For tenure and promotion purposes, we need to start looking more at article-level impact, not so much journal-level impact factor. PLOS provides individual article metrics.
They have the standard patron-driven acquisitions (PDA) model through Coutts’ MyiLibrary service. What’s slightly different is that they are also working on a pilot program with a three-college consortium sharing a collection of PDA titles. After the second use of a book, they are charged 1.2–1.6 times the list price of the book for a four-simultaneous-user (4-SU), perpetual access license.
Issues with ebooks: fair use is replaced by license terms and software restrictions; ownership has been replaced by licenses, so if Coutts/MyiLibrary were to go away, they would have to renegotiate with the publishers; there is a need for an archiving solution for ebooks much like Portico for ejournals; ILL is not feasible or permissible; there is potential for exclusive distribution deals; and device limitations remain (computer screens vs. ebook readers).
Speaker: Ellen Safley
Her library has been using EBL on Demand. They are only buying 2008-current content within specific subjects/LC classes (history and technology). They purchase on the second view. Because they only purchase a small subset of what they could, the number of records they load fluctuates but isn’t overwhelming.
After a book has been browsed for more than 10 minutes, a pay-per-view purchase is initiated. After eight months, they found that more books were used only at the pay-per-view level than were used often enough (i.e., more than once) to trigger a purchase.
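The trigger logic described above (a free browse window, then pay-per-view, then purchase on the second use) can be sketched as a small function. The threshold and event names are illustrative, not EBL's actual billing rules:

```python
BROWSE_LIMIT_MIN = 10  # free browsing window before a loan is charged

def billing_event(minutes_browsed, prior_loans):
    """Classify one reading session under the simplified model above:
    short browsing is free, browsing past the limit triggers a
    pay-per-view loan, and the second loan converts to a purchase."""
    if minutes_browsed <= BROWSE_LIMIT_MIN:
        return "free-browse"
    if prior_loans == 0:
        return "pay-per-view"
    return "purchase"

print(billing_event(5, 0))   # free-browse
print(billing_event(25, 0))  # pay-per-view
print(billing_event(25, 1))  # purchase
```

The finding in the notes falls out of this model: many titles generate one pay-per-view loan and never reach the second use that would convert them to a purchase.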
They’re also a pilot site for an Ebrary program. They had to deposit $25,000 for the six-month pilot, then select from over 100,000 titles. They found that the sciences used the books heavily, but there were indications that humanities titles were popular as well.
The difficulty with this program is the overlap between selectors’ print order requests and PDA purchases, which has caused a slight modification of their acquisitions workflow.
Speaker: Nancy Gibbs
Her library had a pilot with Ebrary. They were cautious about jumping into this, but because it was coming from their approval plan vendor, it was easier to match it up. They culled the title list of 50,000 titles down to 21,408, loaded the records, and enabled them in SFX. However, they did not advertise it at all, and users saw no indication when their use triggered the purchase of a book.
Within 14 days of starting the project, they had spent all $25,000 of the pilot money. Of the 347 titles purchased, 179 were also owned in print, but those print copies had only 420 circulations in total. The most heavily used purchased title is also owned in print and has had only two circulations. The purchases leaned more towards STM, political science, and business/economics, with some humanities.
Library technical services staff were a bit overwhelmed by the number of records in the load. The MARC records lacked OCLC numbers, which they would need in the future. They did not remove the records after the trial ended because of other more pressing needs, but that caused frustration for users, and they do not recommend it.
They were surprised by how quickly they went through the money. If they had advertised, she thinks they may have spent the money even faster. The biggest challenge they had was culling through the list, so in the future running the list through the approval plan might save some time. They need better match routines for the title loads, because they ended up buying five books they already had in electronic format from other vendors.
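A basic match routine of the kind they needed compares incoming vendor titles against existing holdings on a normalized identifier. A minimal sketch, assuming ISBN-based matching (the books and ISBNs below are just standard examples, not titles from their load):

```python
def isbn13(isbn):
    """Normalize an ISBN-10 or ISBN-13 string to a bare ISBN-13,
    so the same title matches across vendor lists and holdings.
    Converts ISBN-10 by prefixing 978 and recomputing the check digit."""
    digits = isbn.replace("-", "").replace(" ", "").upper()
    if len(digits) == 10:
        core = "978" + digits[:9]
        total = sum((1 if i % 2 == 0 else 3) * int(d)
                    for i, d in enumerate(core))
        return core + str((10 - total % 10) % 10)
    return digits

def new_titles(candidate_isbns, held_isbns):
    """Return only the candidates the library does not already hold."""
    held = {isbn13(i) for i in held_isbns}
    return [i for i in candidate_isbns if isbn13(i) not in held]

# The vendor's ISBN-13 and the held ISBN-10 refer to the same
# title, so only the genuinely new book survives the match.
print(new_titles(["978-0-306-40615-7", "0-19-852663-6"],
                 ["0-306-40615-2"]))
```

Real match routines would also compare OCLC numbers and author/title strings, since (as noted above) the MARC records in the load lacked OCLC numbers.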
Ebrary needs to refine circulation models to narrow down subject areas. YBP needs to refine some BISAC subjects, as well. Publishers need to communicate better about when books will be made available in electronic format as well as print. The library needs to revise their funding models to handle this sort of purchasing process.
They added the records to their holdings on OCLC so that they would appear in Google Scholar search results. So, even though they couldn’t loan the books through ILL, there is value in adding the holdings.
They attempted to make sure that the books in the list were not textbooks, but there could have been some, and professors might have used some of the books as supplementary course readings.
One area of concern is the potential of compromised accounts that may result in ebook pirates blowing through funds very quickly. One of the vendors in the room assured us they have safety valves for that in order to protect the publisher content. This has happened, and the vendor reset the download number to remove the fraudulent downloads from the library’s account.
Speakers: Amy Buckland, Kendra K. Levine, & Laura Harris (icanhaz.com/cloudylibs)
Cloud computing is a slightly complicated concept. Everyone approaches defining it from different perspectives. It’s about data and storage. For the purposes of this session, they mean any service characterized by on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service.
Cloud computing frees people to collaborate in many ways. Infrastructure is messy, so let someone else take care of that so you can focus on what you really need to do. USB sticks can do a lot of that, but they’re easy to lose, and data in the cloud will hopefully be migrated to new formats.
The downside of cloud computing is that it is heavily dependent on constant connection and uptime. If your cloud computing provider or network goes down, you’re out of luck until it gets fixed. Privacy can also be a legitimate concern, and the data could be vulnerable to hacking or leaks. Nothing lasts forever: GeoCities, for example, is closing today.
Libraries are already in the cloud. We often store our ILS data, ILL, citation management, resource guides, institutional repositories, and electronic resource management tools on servers and services that do not live in the library. Should we be concerned about our vendors making money from us on a "recurring, perpetual basis" (Cory Doctorow)? Should we be concerned about losing the "face" of the library in all of these cloud services? Should we be concerned about the reliability of the services we are paying for?
Libraries can use the cloud for data storage (e.g., DuraSpace, Dropbox). They could also replace OS services and programs, allowing patron-access computers to be run using cloud applications.
His library is using four applications to serve video from the library, and one of them is TerraPod, which is for students to create, upload, and distribute videos. They outsourced the player to Blip.tv. This way, they don’t have to encode files or develop a player.
You can build mashups of cloud applications and locally developed applications through APIs, which define the rules for talking to the remote server. The cloud becomes the infrastructure that enables web-scaling of projects. Request the data, receive it in some sort of structured format, and then parse it into whatever form you need.
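That request-receive-parse cycle looks like this in practice. A minimal sketch: the JSON below is a canned stand-in for what a video-hosting API (Blip.tv, for example) might return, with illustrative field names rather than the real API schema; in real use you would fetch the body from the API endpoint with `urllib.request.urlopen` instead of hard-coding it.

```python
import json

# Canned response body standing in for an API call's result.
response_body = """
{
  "items": [
    {"title": "TerraPod student film", "url": "http://example.org/v/1"},
    {"title": "Library orientation",   "url": "http://example.org/v/2"}
  ]
}
"""

data = json.loads(response_body)                     # structured format in...
titles = [item["title"] for item in data["items"]]   # ...local use out
print(titles)
```

Once the data is parsed, the local application is free to remix it: embed the hosted player, build a browse page, or feed the metadata into the catalog, with the cloud service doing the heavy lifting of encoding and delivery.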
Best practices for cloud computing: let the cloud architecture do the heavy lifting (file conversion, storage, distribution, etc.), archive locally if you must, and outsource conversion. Don’t be afraid. This is the future.
Presentation slides will be available later on his website.