ER&L 2012: Does the Use of P-books Impact the Use of E-Books?

ER&L sweets & swag

Speakers: Michael Levine-Clark & Christopher C. Brown

In Dec 2008, they added all the print and ebooks for university press publisher A, and the duplication is primarily in the frontlist. They did the same for an STM publisher B in Jan 2009.

They have gathered circulation data that is compiled annually. Comparing ebooks and print books is like comparing apples and oranges: print checkouts are for an extended period of time, and we have no way of knowing how many times the borrower views a chapter or copies a page, which are the measures of ebook use.

What a cataloger thinks a title is and what a vendor thinks a title is are two different things, so how do you merge usage and circulation data when the comparison points vary? Their solution was to create an "ISBN9" by stripping the 978 prefix and check digit from the ISBN-13 and the check digit from the ISBN-10, leaving a common nine-digit core, which worked pretty well.
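The ISBN9 trick is simple enough to sketch. This is my reconstruction of the idea, not the presenters' actual code:

```python
def isbn9(isbn: str) -> str:
    """Collapse an ISBN-10 or ISBN-13 to the shared nine-digit core."""
    s = isbn.replace("-", "").replace(" ", "")
    if len(s) == 13:
        return s[3:12]  # drop the 978/979 prefix and the check digit
    if len(s) == 10:
        return s[:9]    # drop the check digit
    raise ValueError(f"not an ISBN-10/13: {isbn!r}")
```

Both forms of the same book then match on the same key, e.g. `isbn9("9780306406157")` and `isbn9("0306406152")` both yield `"030640615"`.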

Publisher A had about a third of the ebook titles used, but publisher B had only about 2% used. For print, two thirds of Publisher A titles were used and a little over a third of Publisher B were used.

The two most heavily used ebooks from the university press were probably used for a course: the print copies were only checked out a few times, but there were thousands of uses of the ebooks. Publisher B's top ebooks were used less, with no print circulation at all. For the top two print titles, the university press ebook got some use, though fewer uses than checkouts, and for the STM print titles the ebook wasn't touched. Overall, there was a high rate of use for both formats of a single title, I think (need to study the slides a little more).

They saw increased checkouts of print books over the time period, but it is inconclusive and could be related to the volume of titles purchased. There isn’t a clear impact of e on p or p on e use, but there does seem to be a connection, since when both formats are used, the rate is higher.

Might there be differences by subject or date? What sort of measure of time in a book can we look at? How does discovery play in?

Questions/Comments:
Date & time of usage rather than one year might tell more of a story.

Agree about the discoverability challenge, and have encouraged them to include chapter-level data in their catalog records to create their own discovery with the MARC record. Full-text searching in eBrary is great, but get it in the catalog along with the print books.

Did you consider the format of the ebook? Some publishers give PDF chapter downloads, which may account for lower use of Publisher B.

What about ILLs? They get included in the circ stats and aren’t separated.

How much of Chris’s time was spent on this? Hard to tell. Was ongoing over time as other things took priority.

How will this affect your collection development practices moving forward? Trying to give users a choice of format.

ER&L 2012: Consortia On Trial — In Defense of the Shared Ebook

Hi, how are you?
an Austin classic

Speaker: Nancy Gibbs, Duke & TRLN

The consortium TRLN began in the 1930s as a shared collection development strategy for print materials. They share a catalog, a print repository, and an approval vendor database, and they collaborate on large and individual purchases. This was really easy in the print world. As of 2006, only 8% of print books were duplicated across all three schools (Duke, NCSU, & UNC-CH).

Then ebooks arrived. And duplication began to grow exponentially. Many of the collections can’t be lent to the consortia libraries, and as a result, everyone is having to buy copies rather than relying on the shared collections of the past.

Speaker: Michael Zeoli, YBP

YBP has seen a small increase in ebooks purchased by academic libraries and a much larger decrease in purchases of print books, despite having acquired Blackwell last year. This is true of the TRLN consortium as well.

About 20% of the top 24 publishers are not working with PDA or consortia, and about half that do are not doing both. Zeoli tries to meet with publishers and show them data demonstrating that it's in their best interest to make ebooks available at the same time as print, and that they also need to be included in PDA and consortia arrangements.

Consortia want PDA, but not all the content is available. Ebook aggregators have some solutions but are missing the workflow components, and the publisher role is focused on content, not workflow. PDA alone for consortia is a disincentive for publishers: it ignores practical integration of appropriate strategies and tools, and it's a headache for technical staff.

A hybrid model might look like Oxford University Press. There are digital collections, but not everything is available that way, so you need options for single-title purchases through several models. This requires the consortium, the book seller, and the publisher to work together.

Speaker: Rebecca Seger, OUP

The publishers see many challenges, not the least of which is the continued reliance on print books in the humanities and social sciences, although there is a demand for both formats. Platforms are not set up to enable sharing of ebooks, and would require a significant investment in time and resources to implement.

They have done a pilot program with MARLI to provide access to both the OUP platform and the books they do not host but make available through eBrary. [Sorry — not sure how this turned out — got distracted by a work email query. They’ll be presenting results at Charleston.]

Questions:
How do MARLI institutions represent access for the one copy housed at NYU? Can download through Oxford site. YBP can provide them. The challenge is for the books that appear on eBrary a month later, so they are using a match number to connect the new URL with the old record.

And more questions. I keep zoning out during this part of the presentations. Sorry.

ER&L 2012: ERM Workflow and Communications Panel

218/365 - communication problems?
photo by Josh Fassbind

Speakers: Annie Wu & Jeannie Castro

They used a HelpDesk Ticket for new subscriptions to manage the flow of information and tasks through several departments. Sadly, it’s not designed for ejournals management, and not enough information could be included in the ticket, or was inconsistently added. So, they needed to make some changes.

A self-initiated team devised a new workflow, using a spreadsheet to keep the information and setting up status alerts in Serials Solutions. The alerts and spreadsheets moved the workflow through all departments.

A lengthy description of the process, spreadsheets, action logs, email alerts, and I’ve concluded that my paper checklist is still the best solution for my small library.

Challenges with their system included the use of color to indicate status (one staff member is color-blind, which is why they also use an action log), some overlap of work, and difficulty tracking unsolved problems. Despite that, they feel it is better than the old system. It's a shared and transparent process, with decent tracking of subscriptions, and it's easy to integrate additional changes in the process.

Speaker: Kate Montgomery

They initially had Meridian, and while it was great that it followed the ERMI standard, they didn't need everything, so it was a sea of bits of data with lots of blank fields. Meridian is dead, so they had to look for alternatives. They considered Verde, but sensed that it was going to be replaced by Alma. So they had to decide whether to build their own tool, adopt an open source product, or purchase something, and they were limited by time, staffing, and money.

Ultimately, they decided to go with CORAL. They didn’t have to learn a lot of new skills (MySQL & PHP) to set it up and get it to work. Rather than looking at this as a whole lot of work, they took the opportunity to make a product that works for them. They reviewed and documented their workflows and set some standards.

CORAL can create workflows that trigger actions for each individual or group, depending on the item or situation. Hopes to use this to create buy-in from library departments and other small libraries around campus.

ER&L 2012: Knockdown/Dragout Webscale Discovery Service vs. Niche Databases — Data-Driven Evaluation Methods

tug-of-war
photo by TheGiantVermin

Speaker: Anne Prestamo

You will not hear the magic rationale that will allow you to cancel all your A&I databases. The last three years of analysis at her institution have resulted in only two cancellations.

Background: she was a science librarian before becoming an administrator, and has a great appreciation for A&I searching.

Scenario: a subject-specific database with low use had been accessed on a per-search basis, but going forward it would be sole-sourced and subscription based. Given that, their cost per search was going to increase significantly. They wanted to know if Summon would provide a significant enough overlap to replace the database.

Arguments: it's key to the discipline, specialized search functionality, unique indexing, etc., but there's no data to support how these unique features are being used. Subject searches in the catalog were only 5% of what was being done, and most of them came from staff computers. So, are our users actually using the controlled vocabularies of these specialized databases? Finally, librarians think they just need to promote these more, but sadly, that ship's already sailed.

Beyond usage data, you can also look at overlap with your discovery service, and also identify unique titles. For those, you’ll need to consider local holdings, ILL data, impact factors, language, format, and publication history.

Once they did all of that, they found that 92% of the titles were indexed in their discovery service. The depth of the backfile may be an issue, depending on the subject area, and you may also need to look at the level of indexing (cover-to-cover vs. selective). Of the 8% of titles not indexed, they owned most in print, and those were rather old. 15% of that 8% had impact factors, which may or may not be relevant, but it is something to consider, and most of those titles were non-English. They also found that there were no ILL requests for the non-owned unique titles, and less than half were scholarly and currently being published.
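The overlap step is essentially set arithmetic once you have comparable title keys (ISSNs, say). A minimal sketch, with made-up keys rather than their actual data:

```python
def overlap_report(db_titles: set, discovery_titles: set):
    """Percent of a database's titles indexed in discovery, plus the unique leftovers."""
    indexed = db_titles & discovery_titles   # titles also in the discovery index
    unique = db_titles - discovery_titles    # titles needing the closer look
    pct = 100.0 * len(indexed) / len(db_titles)
    return pct, sorted(unique)
```

The unique leftovers are the list you then check against local holdings, ILL data, impact factors, language, and publication history.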

ER&L 2012: Taking the Guesswork Out of Demand-Driven Acquisition — Two Approaches

Tome Reader
photo by QQ Li

Speakers: Carol J. Cramer & Derrik Hiatt

They did an analysis of their circulating print collection to see what areas or books would have the equivalent uses to trigger a purchase if it were electronic. Only 2% of their entire circulating collection met the trigger point to where it would be more cost effective to purchase than to go with a short term loan option.

They announced the DDA trial, but deliberately did not tell the users that it would incur cost, just that it was there. They would pay short term loans up to the sixth use, and then they would purchase the title. The year of usage gave them an idea of what adjustments needed to be made to the trigger point. Eventually, the cost flattens out at the sixth use, and the difference between continuing to pay STLs and buying the book is small.

They were able to identify if the triggered purchase book was used by a single person (repeatedly), by a class (several people), or a mix of both, and it was split in almost even thirds.

They determined that six was a good trigger. The STL cost ended up averaging 10.5% of the list price. DDA doesn't have to break the bank, and spending was lower than expected. The number of titles in the catalog didn't have as much to do with the amount spent as the FTE did. It also led to questioning the value of firm ordering ebooks rather than letting DDA cover them.
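A rough way to see the trigger math, assuming the talk's 10.5% average STL fee and a purchase at list price on the triggering use (the exact pricing model is my assumption, not the speakers' stated one):

```python
def cumulative_cost(list_price: float, n_uses: int,
                    trigger: int = 6, stl_rate: float = 0.105) -> float:
    """Pay STL fees for uses before the trigger; buy at list price on the trigger use."""
    stl_uses = min(n_uses, trigger - 1)
    cost = stl_uses * stl_rate * list_price
    if n_uses >= trigger:
        cost += list_price  # triggered purchase; later uses cost nothing
    return cost
```

On a $100 title, five STLs cost $52.50 and the triggered purchase brings the total to $152.50, after which the cost curve is flat no matter how many more uses occur; paying STLs indefinitely would overtake that total around the tenth loan.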

However, this is only 11 months of data, and more longitudinal studies are needed.

Speaker: Lea Currie

They loaded records for slip books, and then the users have the option to request them at various levels of speed. The users are notified when the print book arrives, and the full MARC record is not loaded until the book is returned.

They saved quite a bit of money per month using this method, and 88% of the titles purchased circulated. To put that into perspective, only about 75% of their ILL titles circulate.

Of course, librarians still had some concerns. First, the library catalog is not an adequate tool for discovering titles. Faculty were concerned about individuals doing massive requests for personal research topics. Also, faculty do not want to be selectors for the libraries. [ORLY? They want the books they want when they want them — how is that different?]

The next DDA project was for ebooks, using the typical trigger points. They convinced the Social Science and Sci/Tech librarians to set price caps for DDA titles: up to a certain price the book would be included in the approval plan, within a middle range it would go into DDA, and above that range it would require the librarian's approval. These were written into their YBP profile.
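The price-band routing written into the profile might look like this; the dollar thresholds are hypothetical, not the library's actual YBP profile values:

```python
def route_title(price: float, approval_cap: float = 150.0,
                dda_cap: float = 300.0) -> str:
    """Route a title by price band, per the tiered profile described above."""
    if price <= approval_cap:
        return "approval plan"     # auto-shipped, no mediation
    if price <= dda_cap:
        return "DDA pool"          # discovery record loaded; purchase on trigger
    return "librarian review"      # too expensive for unmediated purchase
```

For example, `route_title(100.0)` lands on the approval plan, while `route_title(500.0)` requires the librarian's sign-off.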

For the pDDA, they discovered that as the books aged, it was harder to do rush orders since they were going out of print. They also modified their language to indicate that the books may not be available if they are out of print.

They have not done DDA for humanities or area studies. They based their decisions on the YBP profile on retrospective reports, which allowed them to get an idea of the average cost.

For FY12, they expect that the breakdown will be 23% eDDA, 50% pDDA, 20% approval, and 7% selected by subject bibliographers. They’ve also given the subject librarians the options to review the automatic approval ebooks — they have a week to reject or shift to DDA each title if they want. They can also shift the expensive titles to DDA if they want to see if anyone would use it before choosing to purchase it.

Questions:
Are you putting the records in your discovery service if you have one, and can you tell if the uses are coming from that or your catalog? Not yet. Implementing a discovery service. Some find resources through Google Scholar.

ER&L 2012: Trials by Juries — Suggested Practices for Database Trials

IBM System/370 Model 145
photo by John Keogh

Speakers: Annis Lee Adams, Jon Ritterbush, & Christine E. Ryan

The discussion topic began as an innocent question on ERIL-L listserv about tools or techniques used in gathering feedback on database trials, whether from librarians or library users.

Trials can come from many request sources — subject librarians, faculty, students, and the electronic resources or acquisitions librarian. Adams evaluates the source and their access points. Also, they try to trial things they are really interested in early enough in the year to be able to request funding for the next year if they choose to purchase it. She says they don’t include faculty in the evaluation unless they think they can afford the product.

Criteria for evaluation: content, ease of use/functionality, cost, and whether or not a faculty member requested it. One challenge is how to track and keep an institutional memory of the outcome. They use an internal blog on WordPress to house the information (access, cost, description, and evaluation comments) with password protection on each entry. After the trial ends, the blog entry is returned to draft status so it's no longer visible, and a note is added with the final decision.

The final thing Adams does is create a spreadsheet that tracks every trial over a year, and it includes some renewals of existing subscriptions.

Ritterbush… lots of no-brainer stuff. Is it relevant to your collection development policy? Can you afford it? Who is requesting it? And so on.

Avoid scheduling more than three trials simultaneously to avoid “trial fatigue.” Ritterbush says they only publicize extended trials (>3 months) — the rest are kept internal or only shared with targeted faculty.

For feedback, they found that email is a mediocre solution, in part because the responses weren't very helpful. They found that short web forms worked better, incorporating a mix of Likert-scale and free-text questions. The tool they use is Qualtrics, but most survey products would be fine.

Ritterbush tries to compose the trial information as a press release, making it easy for librarians and faculty to share with colleagues. A webinar or live demonstration of the product can increase interest and participation in the evaluation.

Ryan says you need to know why you are doing the trial, because that tells you who it will impact and then what approach you’ll need to take. Understand your audience in order to reach them.

Regardless of who is setting up the trials, it would be good to have a set of guidelines for trials that spells out responsibilities.

Kind of tuning out, since it seems like Ryan doesn’t really do anything directly with trials — just gives all that over to the subject liaisons. This would be disastrous at my library. Also, really not happy about her negative attitude towards public trials. If it’s IP-based, then who cares if you post it on your website? I’ve received invaluable feedback from users that would never see the trials if I followed Ryan’s method.

Questions:
What about trials to avoid expensive subscriptions? Some libraries will do it, but some have policies that prohibit it. [We have had sales agents recommend it to us, which I’ve never understood.]

How do you have trials for things when you don’t know if you have funding for them? Manage expectations and keep a healthy wishlist. [We will also use trials to justify funding increases or for replacing existing subscriptions with something new.]

ER&L 2012: New ARL Best Practices in Fair Use

laptop lid comic relief + Dawn and Drew
photo by YayAdrian

Speaker: Brandon Butler (Peter Jaszi was absent)

The purpose of copyright is to promote the creation of culture. It is not to ensure that authors get a steady stream of income no matter what, or to pay them back for the hard work they do, or to show our respect for the value they add to society. It’s about getting the stuff into the culture, and giving the creators enough incentive to do it.

One way it does it is to give creators exclusive rights for a limited period of time. The limit encourages new makers to use and remix existing culture.

Fair use is the biggest balancing feature of copyright. It ensures that the rights provided to the creators don’t become oppressive to the users. Fair use is the legal, unauthorized use of copyrighted material… under some circumstances. And we’ve spent generations trying to figure out which circumstances apply.

Fair use is a space for creativity. It gives you the leeway to take the culture around you and incorporate it into your work. It allows you to quote other scholarship in your research. It allows you to incorporate art into new works.

There are four factors of fair use. Every judge should consider the reason for the use, the kind of work used, the amount used, and the effect on the market, but the law doesn't tell judges how much weight to give each factor or which is most important. The good news is that judges love balancing features, and the Supreme Court has determined that fair use protects free speech. However, since copyright is automatically conferred as soon as a creation is fixed, judicial interpretations of fair use have shifted greatly since 1990, tilting the balance more toward users in certain circumstances.

Without fair use, copyright would be in conflict with the 1st Amendment.

Judges want to know if the use is transformative (i.e. for a new purpose, context, audience, insight) and if you used the right amount in that transformative process. For example, parody is making fun of the original work, not just reusing it. An appropriate amount can refer to both the quantity of the original in the transformative work, and also the audience who received your transformative work. For example, the many photographic memes that take pictures and alter them to fit a theme, like One Tiny Hand.

Judges care about you and what you think is fair. There is a pattern of judges deferring to the well-articulated norms of a practice community.

Best practices codes are a logical outgrowth of the things the communities have articulated as their values and the things they would consider to be legitimate transformative works. Documentary filmmakers, scholars, media literacy teachers, online video, dance collections, open course ware, poets… many groups are creating best practices for fair use.

The documentary filmmakers have had a code of best practice for a long time. They realized that without it, they were limiting themselves too much in what they could create. Once they codified their values, more broadcast sources were willing to take films and new kinds of films were being made. Providers of errors-and-omissions insurance were able to accept fair use claims, and lawyers use the Statement to build their own best practices in the relevant areas.

Keep in mind, though, that these are best practices and not guidelines. Principles, not rules. Limitations, not bans. Reasoning, not rote. The numerical limits we once followed are not the law, and we need to keep them fresh to be relevant.

Licensing is a different thing altogether. It means you may have fewer rights in some instances, and more rights in others, regardless of fair use.

For libraries, fair use enables our mission to serve knowledge past, present, and future. We have a duty to make copyrighted works real and accessible in the way people use things now. What will libraries be in the future? How will we stay relevant? We need to have some flexibility with the stuff we have in our collections.

Many librarians are discouraged. Insecurity and hesitation translate into staff costs, such as hiring someone to clear copyright questions. Fair use would help, but it's underused; risk aversion subsumes fair use analysis.

The ARL document took a lot of people from diverse institutions and many hours of discussion to create, and it was reviewed by several legal experts. It's not risk-free, since it would need to stand up in court first (and there are always lawsuit-happy people), but it seems okay based on past judgments.

They hope it will put legal risks into perspective, and will give librarians a tool to go to general counsels and administrations and let them know things are changing. It considered the views of librarians and their values, and they also hope that people will speak out publicly that they support the Code.

Fair use applies in these common situations:

  • course reserves — digital access to teaching materials for students and faculty, although it should be limited to access by only the appropriate audience
  • both physical and virtual exhibits — if it highlights a theme or commonality, you’re doing something new to help people understand what’s in your library
  • digitizing to preserve at-risk items — you’re not a publisher or scam artist, you’re a librarian making sure the things are accessible over time (like VHS tapes)
  • digitizing special collections and archives — you’re keeping it alive
  • access to research and teaching materials for disabled users — i.e. Daisy
  • institutional repositories
  • creation of search indexes
  • making topically-based collections of web-based materials

Practice makes practice. It won’t work if you don’t use it.

ER&L 2012: Electronic Resources Workflow Analysis & Process Improvement

workflow at the most basic level
illustration by wlef70

Speakers: Ros Raeford & Beverly Dowdy

Users were unhappy with eresource management, due in part to the library's ad hoc approach and its reliance on users to notify them when there were access issues. A heavy reliance on email and memory meant things slipped through the cracks. They were not a train wreck waiting to happen; they were a train wreck that had already occurred.

They needed to develop a deeper understanding of their workflows and processes to identify areas for improvement. Earlier attempts had failed because they didn't have all the right people at the table; each stage of the lifecycle needs to be represented.

Oliver Pesch’s 2009 presentation on “ERMS and the E-Resources Lifecycle” provided the framework they used. They created a staff responsibility matrix to determine exactly what they did, and then did interviews to get at how they did it. The narrative was translated to a workflow diagram for each kind of resource (ebooks, ejournals, etc.).

Even though some of the subject librarians were good about checking for dups before requesting things, acquisitions still had to repeat the process because they don’t know if it was done. This is just one example of a duplication of effort that they discovered in their workflow review.

For the ebook package process, they found the workflow was so unclear they couldn't even diagram it. The current process is very linear, though a number of its steps could happen in parallel.

Lots of words on screen with great ideas of things to do for quality control and user interface improvements. Presenter does not highlight any. Will have to look at it later.

One thing they mentioned is identifying essential tasks that are done by only one staff. They then did cross-training to make sure that if the one is out for the day, someone else can do it.

Surprisingly, they were not using EDI for firm orders, nor had they implemented tools like PromptCat.

Applications that make things work for them:

JTacq — using this for the acquisition/collections workflow. I’ve never heard of it, but will investigate.

ImageNow — not an ERM — a document management tool. Enterprise content management, and being used by many university departments but not many libraries.

They used SharePoint as a meeting space for the teams.

ER&L 2012: Next Steps in Transforming Academic Libraries — Radical Redesign to Mainstream E-Resource Management

The smallest details add up
photo by Garrett Coakley

Speaker: Steven Sowell

His position is new for his library (July 2011), and when Barbara Fister saw the job posting, she lamented that user-centered collection development would relegate librarians to signing licenses and paying invoices, but Sowell doesn’t agree.

Values and assumptions: as an academic library, we derive our reason for existing from our students and faculty. Our collections are a means to an end, rather than an end in themselves. They can do this in part because they don't have ARL-like expectations of themselves. A number of studies have shown that users do a better job of selecting materials than we do, and they've been moving to more of a just-in-time model than a just-in-case one.

They have had to deal with less money and many needs, so they’ve gotten creative. The university recently realigned departments and positions, and part of that included the creation of the Collections & Resource Sharing Department (CRSD). It’s nicknamed the “get it” department. Their mission is to connect the community to the content.

PDA, POV, PPV, approval plans, shelf-ready, and shared preservation are just a few of the things that have changed how we collect and do budget planning.

CRSD includes collection development, electronic resources, collections management, resource sharing & delivery, and circulation (refocusing on customer service and self-servicing, as well as some IT services). However, this is a new department, and Sowell speaks more about what these things will be doing than about what they are doing or how the change has been effective or not.

One of the things they’ve done is to rewrite position descriptions to refocus on the department goals. They’ve also been focusing on group facilitation and change management through brainstorming, parking lot, and multi-voting systems. Staff have a lot of anxiety over feeling like an expert in something and moving to where they are a novice and having to learn something new. They had to say goodbye to the old routines, mix them with new, and then eventually make the full shift.

They are using process mapping to keep up with the workflow changes. They’re also using service design tools like journey mapping (visualization of the user’s experience with a service), five whys, personas, experience analogy, and storyboards (visualization of how you would like things to occur).

For the reference staff, they are working on strategic planning about the roles and relationships of the librarians with faculty and collections.

Change takes time. When he proposed this topic, he expected to be further along than he is. Good communication, system thinking, and staff involvement are very important. There is a delicate balance between uncertainty/abstract with a desire for concrete.

Some unresolved issues include ereaders, purchasing rather than borrowing via ILL and the impact on their partner libraries, the role of the catalog as an inventory in a world of PDA/PPV, and the re-envisioning of the collection budget as a just-in-time resource. Stakeholder involvement and assessment wrap up the next-steps portion of his talk.

Questions:
In moving print to the collection maintenance area, how are you handling bundled purchases (print + online)? How are you handling the impression of importance or lack thereof for staff who still work with traditional print collection management? Delicately.

Question about budgeting. Not planning to tie PDA/PPV to specific subjects. They plan to do an annual review of what was purchased and what might have been had they followed their old model.

How are they doing assessment criteria? Not yet, but will take suggestions. Need to tie activities to student academic success and teaching/researching on campus. Planning for a budget cut if they don’t get an increase to cover inflation. Planning to do some assessment of resource use.

What will you do if people can’t do their new jobs? Hopefully they will after the retraining. Will find a seat for them if they can’t do what we hope they can do.

What are you doing to organize the training so they don’t get mired in the transitional period? Met with staff to reassure them that the details will be worked out in the process. They prepared the ground a bit, and the staff are ready for change.

Question about the digital divide and how that will be addressed. Content is available on university equipment, so not really an issue/barrier.

What outreach/training to academic departments? Not much yet. Will honor print requests. Subject librarians will still have a consultative role, but not necessarily item by item selection.

ER&L 2012: Lightning Talks

Shellharbour; Lightening
photo by Steven

Due to a phone meeting, I spent the first 10 min snarfing down my lunch, so I missed the first presenters.

Jason Price: Libraries spend a lot of time trying to get accurate lists of the things we’re supposed to have access to. Publisher lists are marketing lists, and they don’t always include former titles. Do we even need these lists anymore? Should we be pushing harder to get them? Can we capture the loss from inaccurate access information and use that to make our case? Question: Isn’t it up to the link resolver vendors? No, they rely on the publishers/sources like we do. Question: Don’t you think something is wrong with the market when the publisher is so sure of sales that they don’t have to provide the information we want? Question: Haven’t we already done most of this work in OCLC, shouldn’t we use that?

Todd Carpenter: NISO recently launched the Open Discovery Initiative, which is trying to address the problems with indexed discovery services. How do you know what is being indexed in a discovery service? What do things like relevance ranking mean? What about the relationships between organizations that may impact ranking? The project is ongoing and expect to hear more in the fall (LITA, ALA Midwinter, and beyond).

Title change problem: uses the xISSN service from OCLC to identify title changes through a Python script. If the data in OCLC isn't good enough, and librarians are creating it, then how can we expect publishers to do better?

Dani Roach: Anyone seeing an unusual spike in use for 2011? Have you worked with the vendor about it? Do you expect a resolution? They believe our users are doing group searches across the databases, even though we are sending them to specific databases, so users would need to actively choose to search more than one. She cautions everyone to check their stats. And how is the vendor's explanation still COUNTER compliant?

Angel Black: Was given a mission at ER&L to find out what everyone is doing with OA journals, particularly those that come with traditional paid packages. They are manually adding links to MARC records and using series fields (830) to keep track of them. But she's not sure how to handle the OA stuff, particularly when you're using a single record. Audience suggestion: use the 856 subfield x. "Artisanal, handcrafted serials cataloging."

Todd Carpenter part 2: How many of you think your patrons are having trouble finding the OA in a mixed access journal that is not exposed/labeled? KBs are at the journal or volume/issue level. About 1/3 of the room thinks it is a problem.

Has anyone developed their own local mobile app? Yes, there is a great way to do that, but it's more important to create a mobile-friendly website. PhoneGap will wrap your web app in a native app for each mobile OS and can include some location services. Maybe look to include the library in a university-wide app?

Adam Traub: Really into PPV/demand-driven. Some do an advance purchase model with tokens, and some of them will expire. Really wants to make it an unmediated process, but it opens up the library to increasing and spiraling costs. They went unmediated for a quarter, and the use skyrocketed. What’s a good way to do this without spending a ton of money? CCC’s Get It Now drives PPV usage through the link resolver. Another uses a note to indicate that the journal is being purchased by the library.

Kristin Martin: Temporarily had two discovery services, and they don’t know how to display this to users. Prime for some usability testing. Have results from both display side by side and let users “grade” them.

Michael Edwards: Part of a NE consortium, and thinks they should be able to bring consortial pressure on vendors, who are basically telling them to take a leap. Are any of the smaller groups succeeding in pressuring vendors to make concessions for consortial acquisitions? Orbis-Cascade and Connect NY have both been doing good things for ebook pricing and reducing the multiplier for simultaneous users. Do some collection analysis on the joint borrowing/purchasing policies? The selectors will buy what they buy.
