Next week is a two-day work week, and my schedule for those two days is almost completely wide open. This means, if all goes well, I might actually recover from being away for Charleston last week and being away two days this week for meetings. There are about 50 action items on my list, ranging from a few minutes attention to a few hours attention. And that’s just the “must deal with now” stuff. Forget doing any of my ongoing projects.
The blessing and curse of travel — you get to do cool things, see cool places, and meet cool people, but then you spend several days of work hell trying to atone for the sin of not being there.
Updates from Serials Solutions – mostly Resource Manager (Ashley Bass):
Keep up to date with ongoing enhancements for management tools (quarterly releases) by following answer #422 in the Support Center, and via training/overview webinars.
Populating and maintaining the ERM can be challenging, so they focused a lot of work this year on that process: license template library, license upload tool, data population service, SUSHI, offline date and status editor enhancements (new data elements for sort & filter, new logic, new selection elements, notes), and expanded and additional fields.
Workflow, communication, and decision support enhancements: in-context help linking, contact tool filters, navigation, new COUNTER reports, more information about vendors, COUNTER summary page, etc. Her favorite new feature is “deep linking” functionality (aka persistent links to records in SerSol). [I didn’t realize that wasn’t there before — been doing this for my own purposes for a while.]
Next up (in two weeks, 4th quarter release): new alerts, resource renewals feature (reports! and checklist!, will inherit from Admin data), Client Center navigation improvements (i.e. keyword searching for databases, system performance optimization), new license fields (images, public performance rights, training materials rights) & a few more, Counter updates, SUSHI updates (making customizations to deal with vendors who aren’t strictly following the standard), gathering stats for Springer (YTD won’t be available after Nov 30 — up to Sept avail now), and online DRS form enhancements.
In the future: license API (could allow libraries to create a different user interface), contact tools improvements, interoperability documentation, new BI tools and reporting functionality, and improving the Client Center.
Also, building a new KB (2014 release) and a web-scale management solution (Intota, also coming 2014). They are looking to have more internal efficiencies by rebuilding the KB, and it will include information from Ulrich’s, new content types metadata (e.g. A/V), metadata standardization, industry data, etc.
Summon Updates (Andrew Nagy):
I know very little about Summon functionality, so just listened to this one and didn’t take notes. Take-away: if you haven’t looked at Summon in a while, it would be worth giving it another go.
360 Link Customization via JavaScript and CSS (Liz Jacobson & Terry Brady, Georgetown University):
Goal #1: Allow users to easily link to full-text resources. Solution: Go beyond the out-of-the-box 360 Link display.
Goal #2: Allow users to report problems or contact library staff at the point of failure. Solution: eresources problem report form
They created the eresources problem report form using Drupal. The fields include contact information, description of the resource, description of the problem, and the ability to attach a screenshot.
When they evaluated the slightly customized out-of-the-box 360 Link page, they determined that it was confusing to users, with too many options and unclear links. So, they took some inspiration from other libraries (Matthew Reidsma’s GVSU jQuery code available on GitHub) and developed a prototype that uses custom JavaScript and CSS to walk the user through the process.
Some enhancements included: making the full-text links (article & journal) buttons, hiding additional help information behind hover-over text, parsing the citation into the problem report page, and moving the citation below the full-text links. For journal citations with no full text, they made the links to the catalog search large buttons with more text detail in them.
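The citation-parsing piece lends itself to a small, self-contained sketch. This assumes OpenURL-style `rft.` parameters in the 360 Link page URL — the actual parameter names and form fields Georgetown used may differ:

```javascript
// Hypothetical sketch: pull citation fields out of an OpenURL-style
// query string so they can prepopulate the problem report form.
// Field names follow the OpenURL 1.0 "rft." convention.
function parseCitation(queryString) {
  const params = new URLSearchParams(queryString);
  return {
    title: params.get('rft.atitle') || params.get('rft.title') || '',
    journal: params.get('rft.jtitle') || '',
    issn: params.get('rft.issn') || '',
    volume: params.get('rft.volume') || '',
    issue: params.get('rft.issue') || '',
    date: params.get('rft.date') || '',
  };
}

// Build the problem-report URL with the citation carried along,
// so the user never has to retype what failed.
function problemReportUrl(baseUrl, citation) {
  const qs = new URLSearchParams(citation);
  return `${baseUrl}?${qs.toString()}`;
}
```

The same parsed object could feed a Drupal form via query parameters, which is roughly what the screenshot-attachment form described above would receive.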
One challenge of implementing these changes was the lack of a test environment, because of the limited preview capabilities in 360 Link. Any change made required an overnight refresh before going live, opening the risk of 24-hour windows of broken resource links. So, they created their own test environment by converting test scenarios into static HTML files and wrapping them in custom PHP to mimic the live pages without having to touch the live pages.
[At this point, it got really techy and lost me. Contact the presenters for details if you’re interested. They’re looking to go live with this as soon as they figure out a low-use time that will have minimal impact on their users.]
Customizing 360 Link menu with jQuery (Laura Wrubel, George Washington University)
They wanted to give better visual cues for users, emphasize the full text, have more local control over links, and achieve visual integration with other library tools so the experience is more seamless for users.
They started with Reidsma’s code, then forked off from it. They added a problem link to a Google form, fixed ebook chapter links and citation formatting, created conditional links to the catalog, and linked to their other library’s link resolver.
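The conditional-link idea can be sketched as a small decision function. All names here are invented for illustration — this is the shape of the logic, not GW’s actual code:

```javascript
// Given what the link resolver knows about an item, decide which
// link to surface as the primary button on the menu page.
function primaryLink(item) {
  if (item.hasFullText) {
    return { label: 'Full Text Online', href: item.fullTextUrl };
  }
  if (item.inCatalog) {
    // No full text, but the library holds it in print: send the
    // user to a catalog search prepopulated with the title.
    return {
      label: 'Check Print Holdings',
      href: 'https://catalog.example.edu/search?q=' +
        encodeURIComponent(item.title),
    };
  }
  // Neither: suggest interlibrary loan.
  return { label: 'Request via ILL', href: 'https://ill.example.edu/request' };
}
```

In a jQuery customization, the returned object would be rendered as the big button, with everything else demoted to smaller links.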
They hope to continue to tweak the language on the page, particularly for ILL suggestion. The coverage date is currently hidden behind the details link, which is fine most of the time, but sometimes that needs to be displayed. They also plan to load the print holdings coverage dates to eliminate confusion about what the library actually has.
In the future, they would rather use the API and blend the link resolver functionality with catalog tools.
Custom document delivery services using 360 Link API (Kathy Kilduff, WRLC)
They facilitate inter-consortial loans (Consortium Loan Service), and originally requests were only done through the catalog. When they started using SFX, they added a link there, too. Now that they have 360 Link, they still have a link there, but now the request form is prepopulated with all of the citation information. In the background, they are using the API to gather the citation information, as well as checking to see if there are terms of use, and then checking to see if there are ILL permissions listed. They provide a link to the full-text in the staff client developed for the CLS if the terms of use allow for ILL of the electronic copy. If there isn’t a copy available in WRLC, they forward the citation information to the user’s library’s ILL form.
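The routing logic described above can be sketched as a single decision function. The field names are assumptions, not WRLC’s actual API schema:

```javascript
// Given citation data plus terms-of-use flags gathered via the
// 360 Link API, decide whether the CLS staff client can fill the
// request from the electronic copy, route it as a consortium loan,
// or forward the citation to the user's library's ILL form.
function routeRequest(record) {
  if (record.fullTextUrl && record.termsOfUse &&
      record.termsOfUse.illElectronic) {
    // Terms of use permit ILL of the electronic copy.
    return { action: 'fill-from-electronic', url: record.fullTextUrl };
  }
  if (record.heldInConsortium) {
    return { action: 'consortium-loan' };
  }
  // Not available in WRLC: pass the citation along.
  return { action: 'forward-to-ill', citation: record.citation };
}
```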
License information for course reserves for faculty (Shanyun Zhang, Catholic University)
Included course reserve in the license information, but then it became an issue to convey that information to the faculty who were used to negotiating it with publishers directly. Most faculty prefer to use Blackboard for course readings, and handle it themselves. But, they need to figure out how to incorporate the library in the workflow. Looking for suggestions from the group.
Advanced Usage Tracking in Summon with Google Analytics (Kun Lin, Catholic University)
In order to tweak user experience, you need to know who, what, when, how, and most important, what were they thinking. Google Analytics can help figure those things out in Summon. URL parameters are an easy way to track facets, and you can use the data from Google Analytics to piece together the story from them. Tracking things the “hard way,” you can use the conversion/goal function of Google Analytics. But you’ll need to know a little about coding to make it work, because you have to add some JavaScript to your Summon pages.
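The “easy way” can be sketched as a helper that turns facet parameters from a Summon URL into Google Analytics event payloads. The `fvf` parameter name and value format are assumptions for illustration, as is the event wiring:

```javascript
// Extract facet selections from a Summon-style query string and
// map each one to a GA event (category / action / label).
function facetEvents(queryString) {
  const params = new URLSearchParams(queryString);
  const events = [];
  for (const [key, value] of params) {
    if (key === 'fvf') {
      // Assumed facet format: "Field,Value,false"
      const [field, facetValue] = value.split(',');
      events.push({ category: 'Facet', action: field, label: facetValue });
    }
  }
  return events;
}

// On the live page, each event would then be sent with something
// like: ga('send', 'event', e.category, e.action, e.label);
```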
Use of ERM/KB for collection analysis (Mitzi Cole, NASA Goddard Library)
Used the overlap analysis to compare print holdings with electronic and downloaded the report. A partial overlap can actually be a full overlap if the coverage dates aren’t formatted the same, but otherwise it’s a decent report. She incorporated license data from Resource Manager and print collection usage pulled from her ILS. This allowed her to create a decision tool (spreadsheet), denoting print usage in five-year increments and dropping the earliest five years of use with each increment (this showed a drop in use over time for titles of concern).
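The five-year-increment idea is simple enough to sketch. The data shape here is invented — her actual tool was a spreadsheet, not code:

```javascript
// Bucket a title's yearly print circulation into 5-year increments,
// most recent first, so a drop across buckets flags titles of concern.
function usageByIncrement(usageByYear, endYear, increments = 3) {
  const buckets = [];
  for (let i = 0; i < increments; i++) {
    const to = endYear - i * 5;
    const from = to - 4;
    let total = 0;
    for (let y = from; y <= to; y++) total += usageByYear[y] || 0;
    buckets.push({ from, to, total });
  }
  return buckets;
}

// True if each older bucket saw strictly more use than the next,
// i.e. use has been falling steadily toward the present.
function usageDeclining(buckets) {
  return buckets.every((b, i) => i === 0 || b.total > buckets[i - 1].total);
}
```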
Discussion of KnowledgeWorks Management/Metadata (Ben Johnson, Lead Metadata Librarian, Serials Solutions)
After they get the data from the provider or it is made available to them, they have a system to automatically process the data so it fits their specifications, and then it is integrated into the KB.
They deal with a lot of bad data. 90% of databases change every month. Publishers have their own editorial policies that display the data in certain ways (e.g., title lists) and deliver inconsistent, and often erroneous, metadata. The KB team tries to catch everything, but some things still slip through. Throughout the data ingestion process, they apply rules based on past experience with the data source. After that, the data is normalized so that various title/ISSN/ISBN combinations can be associated with the authority record. Finally, the data is incorporated into the KB.
Authority rules are used to correct errors and inconsistencies. Rules automatically and consistently correct holdings, and they are often used to correct vendor reporting problems. Rules are codified per provider and database, with 76,000+ applied to thousands of databases, and 200+ new rules added each month.
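The provider/database-scoped rule idea can be sketched like this. This is purely illustrative — it is not Serials Solutions’ actual rule engine, and the example rule is invented:

```javascript
// A rule is scoped to a provider/database pair and rewrites a field
// on matching holdings records, applied automatically on each ingest.
const rules = [
  {
    // Hypothetical rule: this provider reports the print ISSN in
    // the electronic ISSN field for one of its databases.
    provider: 'ExampleProvider',
    database: 'ExampleDB',
    match: (rec) => rec.eissn === '1234-5678',
    fix: (rec) => ({ ...rec, eissn: '8765-4321' }),
  },
];

// Run every applicable rule over a record, in order.
function applyRules(record, ruleSet) {
  return ruleSet.reduce((rec, rule) => {
    if (rule.provider === rec.provider &&
        rule.database === rec.database &&
        rule.match(rec)) {
      return rule.fix(rec);
    }
    return rec;
  }, record);
}
```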
Why does it take two months for KB data to be corrected when I report it? Usually it’s because they are working with the data providers, and some respond more quickly than others. They are hoping that involvement with initiatives like KBART will help fix data at the provider end so they don’t have to correct it for us, and will also make those corrections easier by using standards.
Client Center ISSN/ISBN doesn’t always work in 360 Links, which may have something to do with the authority record, but it’s unclear. It’s possible that there are some data in the Client Center that haven’t been normalized, and could cause this disconnect. And sometimes the provider doesn’t send both print and electronic ISSN/ISBN.
What is the source for authority records for ISSN/ISBN? LC, Bowker, ISSN.org, but he’s not clear. Clarification: Which field in the MARC record is the source for the ISBN? It could be the source of the normalization problem, according to the questioner. Johnson isn’t clear on where it comes from.
I told a friend yesterday that I felt like I didn’t carpe enough diem at Charleston Conference. It was my first time attending, and I didn’t have a good sense of the flow. I wasn’t prepared for folks to be leaving so early on Saturday, I didn’t know about the vendor showcase on Wednesday until after I made my travel arrangements, and I felt like I didn’t make the most of the limited time I had.
Next time will be better. And yes, there will be a next time, but maybe after a year or two. I understand from some regulars that the plenary sessions were below average this year, which matched my disappointed expectations. Now knowing that there is little vetting of the concurrent sessions, I will be more particular in my choices the next time, and hopefully select sessions where the content matches my expectations based on the abstracts.
The food in Charleston definitely met my expectations. I had tasty shrimp & grits a couple times, variations on fried chicken nearly every day, and a yummy cup of she crab soup. Tried a few local brews, and a dark & stormy from a cool bar that brews their own ginger beer. I’d go back for the food for sure.
Speakers: Ladd Brown, Andi Ogier, and Annette Bailey, Virginia Tech
Libraries are not about the collections anymore, they’re about space. The library is a place to connect to the university community. We are aggressively de-selecting, buying digital backfiles in the humanities to clear out the print collections.
Guess what? We still have our legacy workflows. They were built for processing physical items. Then eresources came along, and there were two parallel processes. Ebooks have the potential of becoming a third process.
Along with the legacy workflows, they have a new Dean, who is forward thinking. The Dean says it’s time to rip off the bandaid. (Titanic = old workflow; iceberg = eresources; people in life boats = technical resources team) Strategic plans are living documents kept on top of the desk and not in the drawer.
With all of this in mind, acquisitions leaders began meeting daily in a group called Eresources Workflow Weekly Work, planning the changes they needed to make. They did process mapping with sharpies, post-its, and incorporated everyone in the library that had anything to do with eresources. After lots of meetings, position descriptions began to emerge.
Electronic Resource Supervisor is the title of the former book and serials acquisitions heads. The rest — wasn’t clear from the description.
They had a MARC record service for ejournals, but after this reorganization process, they realized they needed the same for ebooks, and could be handled by the same folks.
Two person teams were formed based on who did what in the former parallel processes, and they reconfigured their workspace to make this more functional. The team cubes are together, and they have open collaboration spaces for other groupings.
They shifted focus from maintaining MARC records in their ILS to maintaining accurate title lists and data in their ERMS. They’re letting the data from the ERMS populate the ILS with appropriate MARC records.
They use some Python scripts to help move data from system to system, and more staff are being trained to support it. They’re also using the Google Apps portal for collaborative projects.
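The let-the-ERMS-drive-the-ILS flow boils down to transforming title-list rows into brief records. A toy sketch of that shape (their actual scripts are in Python, and the field mapping here is invented):

```javascript
// Turn an ERMS title-list row into a minimal, brief MARC-like
// record for loading into the ILS. Real scripts would emit proper
// MARC with indicators and subfields.
function titleToBriefRecord(row) {
  const fields = [];
  if (row.issn) fields.push({ tag: '022', value: row.issn });  // ISSN
  fields.push({ tag: '245', value: row.title });               // title
  if (row.url) fields.push({ tag: '856', value: row.url });    // access URL
  return fields;
}
```

The point of the approach is that the title list in the ERMS is the source of truth, and the ILS records are regenerated from it rather than maintained by hand.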
They wanted to take risks, make mistakes, fail quickly, but also see successes come quickly. They needed someplace to start, and to avoid reinventing the wheel, so they borrowed heavily from the work done by colleagues at James Madison University. They also hired Carl Grant as a consultant to ask questions and facilitate cross-departmental work.
Big thing to keep in mind: Administration needs to be prepared to allow staff to spend time learning new processes and not keeping up with everything they used to do at the same time. And, as they let go of the work they used to do, please tell them it was important or they won’t adopt the new work.
There are two components — the recommender and hot articles.
This began in 2009 with the article recommender, and as of this year, it’s used by over 1100 institutions. This year they added the hot articles service, with “popularity reports”. And, there is a mobile app for the hot articles service. Behind the scenes, there is the bX Data Lab, where they run experiments and quality control. They’re also interested in data mining researchers who might want to take the data and use it for their own work.
The data for bX comes from SFX users who actively contribute the data from user clicks at their institutions. It’s content-neutral, coming from many institutions.
bX is attempting to add some serendipity to searches that by definition require some knowledge of what you are looking for. When you find something from your searching, the bX recommender will find other relevant articles for you, based on what other people have used in the past. The hot articles component will list the most used articles from the last month that are on the same topic as your search result.
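The “based on what other people have used” idea is co-occurrence recommendation. bX’s actual algorithm is not public; this minimal sketch only illustrates the principle:

```javascript
// Count how often pairs of articles appear together in the same
// user click sessions.
function buildCooccurrence(sessions) {
  const counts = new Map(); // "a|b" -> co-occurrence count
  for (const session of sessions) {
    const unique = [...new Set(session)];
    for (const a of unique) {
      for (const b of unique) {
        if (a === b) continue;
        const key = `${a}|${b}`;
        counts.set(key, (counts.get(key) || 0) + 1);
      }
    }
  }
  return counts;
}

// Recommend the articles most often used alongside this one.
function recommend(article, counts, topN = 3) {
  const scored = [];
  for (const [key, count] of counts) {
    const [a, b] = key.split('|');
    if (a === article) scored.push({ article: b, count });
  }
  return scored
    .sort((x, y) => y.count - x.count)
    .slice(0, topN)
    .map((s) => s.article);
}
```

Because the signal is which items co-occur in usage rather than what they are about, the recommendations can cross subject lines — which is exactly the serendipity being described.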
It currently works only with articles, but they are collecting data on ebooks that may eventually lead to the ability to recommend them as well.
The hot articles component is based on HILCC subjects that have been assigned to journal titles, so it’s not as precise as the recommender.
You can choose to limit the recommendations to only your holdings, but that limits the discovery. You can have indicators that show whether the item is available locally or not.
It’s available in SFX, Primo, Scopus, and the Science Direct platform. Hot articles can be embedded in LibGuides.
Altmetrics — probably will be incorporated to enhance the recommender service.
They are looking at article metrics calculated as a percentile rank per topic, which is more relevant today than the citations that may come five years down the road. It’s based on usage through SFX and bX, but not direct links or DOI links.
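A percentile rank per topic is straightforward to compute. The exact formula Ex Libris uses has not been published, so this is just one common definition:

```javascript
// Rank an article's usage count against all articles in its topic:
// the percentage of articles in the topic with strictly lower usage.
function percentileRank(usage, topicUsages) {
  const below = topicUsages.filter((u) => u < usage).length;
  return Math.round((below / topicUsages.length) * 100);
}
```

The appeal is that a usage-based percentile is available now, per topic, rather than waiting years for citations to accumulate.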
Speakers: Matt Torrence, Audrey Powers, & Megan Sheffield, University of South Florida
Are collection development policies viable today? In order to answer this, they sent out a survey to ARL libraries to see if they are using them or if they’re experimenting with something else. They were also interested to know when and how data is being used in the process.
The survey results will be published in the proceedings. I will note anything here that seems particularly interesting, but it looks like all they are doing now is reading that to us.
Are collection development policies being used? Yes, sort of. Although most libraries in the survey do have them, they tend to be used for accreditation and communication, and often they are not consistently available either publicly or internally.
What are the motivations for using collection development policies? Tends to be more for external/marketing than for internal workflows.
They think that a collection development “philosophy” may be a more holistic response to the changing nature of collection development.
Speakers: two people from the University of Arkansas at Little Rock, but they had four names on the PPT, and I didn’t catch who was who
They recently decided to revise their collection development policy/guidelines based on a recommendation from a strategic planning ARL Collection Analysis Project. They also had quite a few new librarians who needed to work with faculty selectors.
They did a literature review and gathered information on practices from peer institutions. They actually talked to the Office of Institutional Research about data on academic degree programs. And, like students, they looked online to see if they could borrow from existing documents.
One thing they took away from the review of what other libraries have out there was that they needed to have the document live on the web, and not just on paper in a binder in someone’s office.
Policies/guidelines should be continuously updating, flexible, acknowledge consortia memberships, acknowledge new formats, and strike a balance between being overly detailed and too general.
They see that the project has had some benefits, not only to themselves but also to provide a guide for current and future users of the policies. It is also a valuable tool for transmitting institutional memory.
“One size fits all. Welcome to the 80’s” by Stephan van Es
Speaker: Anne McKee, GWLA
SERU was heavily involved in putting this session together. SERU hopes to do away with the madness of licensing and come up with mutually agreeable terms.
Most libraries purchase ebooks in order to make them available 24/7 to their users. While ebooks haven’t yet grown to a larger proportion of library collections than print, they are heading there.
Researchers like ebooks because they don’t have to return them, and they are more accessible than print books in the developing world. Students appreciate the ease of accessibility, particularly distance learners, but given the choice they would take print over e every time. Libraries like them because there are easier/better ways of assessing usage and value to their users, but there are licensing and DRM headaches.
Speaker: Adam Chesler, Business Expert Press / Momentum Press
He has worked for large publishers, but now works for a small, new publisher.
What’s hard for a new publisher to break into the library market? Creating awareness, providing value — acquisition librarians are already overwhelmed with sales pitches via email. Authors may be wary of working with an unknown outlet when there are so many other options. They have to figure out ways to do this creatively.
Gaining a share of library materials budgets is challenging when established publishers have long-standing claims on them. Setting up trials for libraries and following up on them is challenging when one person is responsible for every business/science library in North America. “If you set up a trial, it would be much better to tell me to go to hell than ignore me.”
What’s easy? Nothing.
Well, being an e-only publisher means they don’t have responsibility for a print legacy that needs to be converted to online. That’s easy. They also have more freedom to experiment, particularly with pricing models. And SERU. That’s easy. They also don’t have their own platform, so they make the books available on established providers libraries are already comfortable using.
Speaker: Kimberly Steinle, Duke University Press
When they created the ebook side of the press, they modeled it after the ejournal side, with similar tiered pricing. They also work with the other ebook platforms and their pricing and licensing models.
While the ejournal collection sales are significant, they were surprised to find that ebook collections were not as popular as individual title sales.
They thought selling ebooks would be easy, since they already had existing relationships. MARC records, pricing, technology — not as easy as they thought. Squeezing the ebook model into the ejournal model doesn’t quite fit.
It’s easy to set up multiple sales models, but harder to get information about who the customers are and using that to make business decisions.
They’re a little worried that if they give up DRM it will impact print sales, but it’s obviously pretty unpopular and they do want the books to be used. They’re thinking about future formats — EPUB3, HTML5 — they need to keep up. They’re thinking about new ways to sell the content, and increasing the number of platforms and partners they work with.
Speaker: Bob Boissey, Springer
Serials come first at Springer (because they’re 80% of your materials budget). But, he’ll talk about ebooks today.
The STM publisher’s preference is to sell ebooks in packages directly to libraries, but there are other models based on library or patron selection that have some appeal. Eventually, market forces will probably mean they’ll have to do something with PDA.
In the post-PDA world, maybe we stop selecting and make sure that our systems are solid for allowing our users to find the best, most relevant content in an un-scoped collection. Might also mean giving up some of our concepts about what librarianship is.
The easy stuff: Libraries are the traditional purchasers of scholarly books, and publishers know how many print books we’ve purchased from them in the past. Many eresource issues were resolved with ejournals. SERU. The volume discount approach to selling ebook packages can work if the per unit cost is low, the percentage of portfolio used is high, and the spend is commensurate with print spend, but with more titles. Include textbooks and reference books in the package. Remove DRM, pair with liberal use and ILL permissions.
The not so easy, but not so hard stuff: Editors and authors have not had an easy time coming to terms with ebooks, much like print on demand. Discovery layer for ebooks is still the catalog, and it’s not down to the full text quite yet. Tablets are great for ebooks, and as they get more popular on campuses, ebooks get used more. Might have to give up the concept of book as a full thing and be okay with chapter-level reading. Most scholarly books outside of the humanities and social sciences are not read as a whole.
Speaker: Doug Armato, the ghost of university presses past, University of Minnesota Press
The first book published at a university was in 1836 at Harvard. The AAUP began in 1928 when UP directors met in NYC to talk about marketing and sales for their books. Arguably, UP have been in some form of crisis since the 1970s, between the serials crisis and the current ebook crisis.
Libraries now account for only 20-25% of UP sales, with more than half of the sales coming from retail sources. UP worry about the library budget ecology and university funding as a whole.
“Books possessed of such little popular appeal but at the same time such real importance” from a 1937 publication called Some Presses You will Be Glad to Know About. Armato says, “A monograph is a scholarly book that fails to sell.”
Libraries complain that their students don’t read monographs. University Presses complain that libraries don’t buy monographs. And some may wonder why authors write them in the first place. UP rely on libraries to buy the books they publish for mission, not to recover the cost of production by being popular enough to be sold in the retail market.
Armato sees the lack of library concern over the University of Missouri Press potential closure and the UP role in the Georgia State case as bellwethers of the devolving relationship between the two, and we should be concerned.
But, there is hope. The evolving relationships with Project Muse and JSTOR to incorporate UP monographs is a sign of new life. UP have evolved, but they need to evolve much faster. UP publications need better technology that turns the manual hyperlinks of footnotes and references into a highly linked database. A policy for copyright that favors authors over publishers is necessary.
Speaker: Alison Mudditt, ghost of university presses present, University of California Press
[Zoned out when it became clear this would be another dense essay lecture with very little interesting/innovative content, rather than what I’d consider to be a keynote. Maybe it’s an age thing? I just don’t have the attention span for a lecture anymore, and I certainly don’t expect one at a library conference. As William Gunn from Mendeley tweeted, “To hear people read speeches and not ask questions, that’s why we’re all in the same room.”]
And so does the WordPress app for iPad, or at least the current version. I had drafts of the three sessions I attended this afternoon, ready to publish as soon as I returned to my room, which is the only place I can connect to the wifi. As soon as the WordPress connected to update, the contents of all three posts reverted to the blank drafts I had created as placeholders.
Yeah. Pissed. That’d be me right now.
In short:
Eresources librarians need to demonstrate their value to the library/university, and they either need more staff to do the increasing work, or other departments need to suck it up and process e-stuff like they should. And yes, someone needs to handle licensing, but that someone shouldn’t also be responsible for every little tiny detail of eresources management (i.e. cataloging, trouble-shooting, invoices, etc.) when there are staff already handling similar processes for other materials.
Librarians need to learn how to market eresources effectively, and assess their marketing strategies effectively. Marie Kennedy has a book coming out next year that can help you with that.
Eresources librarians (or licensing librarians) need to make sure language supporting text mining is included in their license agreements with publishers. Your researchers will thank you for it later, and your future self will be happy to not have to go back and renegotiate it into existing contracts.
Hypothesis: Rapid publishing output and a wide disparity of publishing sources and formats has made finding the right content at the right time harder for librarians.
Old model of publishing was based on scarcity, with publishers as mediators for everything. Publishers aren’t in the business of publishing books, they are in the business of selling books, so they really focus more on what books they think readers want to read. Ebook self publishing overcomes many of the limitations of traditional publishing.
Users want flexibility. Authors want readers. Libraries want books accessible to anyone, and they deliver readership.
The tools for self publishing are now free and available to anyone around the world. The printing press is now in the cloud. Smashwords will release about 100,000 new books in 2012, and they are hitting best seller lists at major retailers and the New York Times.
How do you curate this flood? Get involved at the beginning. Libraries need to also promote a culture of authorship. Connect local writers with local readers. Give users the option to publish to the library. Emulate the best practices of the major retailers. Readers are the new curators, not publishers.
Smashwords Library Direct is a new service they are offering.
[Missed the first part as I sought a more comfortable seat.]
They look for zero-margin distribution solutions by connecting publishers and libraries. They do it by running a crowd-funded pledge drive for every book offered, much like Kickstarter. They’ve been around since May 2012.
For example, Oral Literature in Africa was published by Oxford UP in 1970, and it’s now out of print with the rights reverted to the author. The rights holder set a target amount needed to make the ebook available free to anyone. The successful book is published with a Creative Commons license and made available to anyone via archive.org.
Unglue.it verifies that the rights holder really has the rights and that they can create an ebook. The rights holder retains copyright, and the ebook format is neutral. Books are distributed globally, and distribution rights are not restricted to anyone. No DRM is allowed, so the library ebook vendors are having trouble adopting these books.
This is going to take a lot of work to make happen; if we just sit and watch, it won’t. Get involved.
Why would a library want to become a publisher? It incentivizes the open access model. It provides services that scholars need and value. It builds collaborations with partners around the world. It improves efficiencies and encourages innovation in scholarly communications.
Began by collaborating with the university press, which focuses more on books and monographs than journals. The library manages several self-archiving repositories, and they got into journal publishing because the OJS platform looked like something they could handle.
They targeted diminishing circulation journals that the university was already invested in (authors, researchers, etc.) and helped them get online to increase their circulation. They did not charge the editors/publishers of the journals to do it, and encouraged them to move to open access.