ER&L: Innovative eResource Workflow

Speakers: Kelly Smith and Laura Edwards

Their workflow redesign was prompted by a campus-wide move to Drupal, which they now use to drive the public display of eresources. Resources are grouped by status as well as by platform. On the back end, they add information about contacts, admin logins, etc., and they can trigger real-time alerts and notes for the front end. They also track fund codes and cost information. In addition, triggers prompt the next steps in the workflow as statuses change, from trials through renewals.

Speaker: Xan Arch

They needed a way to standardize the eresource lifecycle, and a place to keep all the relevant information about new resources as they move through the departments. They also wanted to have more transparency about where the resource is in the process.

They decided to use a bug/issue tracker called Jira because that’s what another department had already purchased. They changed the default steps to map to their workflow and notify the appropriate people. The eresource order form is the start, and they ask for as much information as they can from the selector. They then use a Confluence wiki to display the status of each resource, along with additional information about it.

Speaker: Ben Heet

The University of Notre Dame has been developing their own ERMS called CORAL. The lifecycle of an eresource is complex, so they took the approach of creating small workflows to give the staff direction for what to do next, depending on the complexity of the resource (i.e., free versus paid).

You can create reminder tasks assigned to specific individuals or groups depending on the needs of the workflow. The tasks don’t go into every little thing to be done, but mainly trigger reminders of the next group of activities. There is an admin view that shows the pending activities for each staff member, and when a task is done, staff can mark it complete to trigger the next step.

Not every resource is going to need every step. One size does not fit all.

Speaker: Lori Duggan

Kuali OLE is a partnership among academic libraries creating the next generation of library management software and systems. It has a very complex financial platform for manual entry of information about purchases. It looks less like a traditional ERMS and more like a PeopleSoft/Banner/ILS acquisitions module, mostly because that is what it is; they are still developing the ERM components.

ER&L: Library Renewal

Speaker: Michael Porter

Libraries are content combined with community. Electronic content access is making it more challenging for libraries to accomplish their missions.

It’s easy to complain but hard to act, and sadly, we tend to complain more than we do. If we get a reputation for being negative, that will be detrimental. That doesn’t mean we should be Sally Sunshine, but we need to approach things with a more positive attitude to make change happen.

Libraries have an identity problem. We are tied to a particular form of content (i.e., books). 95% of people in poverty have cable television. They can’t afford it, but they want it, so they get it. Likewise, mobile access to content is becoming ubiquitous.

Our identity needs to move back to content. We need to circulate electronic content better than Netflix, Amazon, iTunes, etc.

Electronic content distribution is a complicated issue. Vendors in our market don’t have the same kind of revenue as companies like Apple. We aren’t getting the best people or solutions — we’re getting the good enough, if we’re lucky.

Could libraries become the distribution hub for media and other electronic content?

ER&L: Buzz Session – Usage Data and Assessment

What are the kinds of problems with collecting COUNTER and other reports? What do you do with them when you have them?

What is a good cost per use? Compare it to alternatives like ILL. For databases, trends are important.

Non-COUNTER stats can be useful to see trends, so don’t discount them.

Do you incorporate data about the university in making decisions? Rankings of value from faculty or students (using star ratings in LibGuides or something else)?

When usage is low and cost is high, that may be the best thing to cancel in budget cuts, even if everyone thinks it’s important to have the resource just in case.

How about using stats for low-use titles to get out of a big deal package? Compare the cost per use of core titles versus the rest, then use that to reconfigure the package as needed.

How about calculating the cost per use from month to month?
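
As a concrete illustration of the cost-per-use comparison that came up in the discussion, here is a minimal Python sketch. The ILL cost and subscription figures are invented for illustration, not from the session.

```python
# Hypothetical figures; the point is the comparison, not the numbers.
ill_cost_per_request = 17.50  # assumed local cost to fill one ILL request

subscriptions = {
    # title: (annual cost, COUNTER JR1 full-text downloads for the year)
    "Journal A": (3200.00, 410),
    "Journal B": (1850.00, 22),
}

for title, (cost, downloads) in subscriptions.items():
    cpu = cost / downloads if downloads else float("inf")
    verdict = "keep" if cpu <= ill_cost_per_request else "review for cancellation"
    print(f"{title}: ${cpu:,.2f} per use -> {verdict}")
```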

my presentation for Internet Librarian 2010

I’ve uploaded my presentation to SlideShare and will be sending it to the ITI folks shortly. Check the speaker notes for the actual content, as the slides are more for visualization.

IL 2010: Dashboards, Data, and Decisions

[I took notes on paper because my netbook power cord was in my checked bag that SFO briefly lost on the way here. This is an edited transfer to electronic.]

presenter: Joseph Baisano

Dashboards pull information together and make it visible in one place. They need to be simple, built on existing data, but expandable.

Baisano is at SUNY Stony Brook, and they opted to go with Microsoft SharePoint 2010 to create their dashboards. The content can be made visible and editable through user permissions. Right now, their data connections include their catalog, proxy server, JCR, ERMS, and web statistics, and they are looking into using the API to pull license information from their ERMS.

In the future, they hope to use APIs from sources that provide them (Google Analytics, their ERMS, etc.) to create mashups and more on-the-fly graphs. They’re also looking at an open source alternative to SharePoint called Pentaho, which already has many of the plugins they want and comes in free and paid support flavors.

presenter: Cindi Trainor

[Trainor had significant technical difficulties with her Mac and the projector, which resulted in only 10 minutes of a slightly muddled presentation, but she had some great ideas for visualizations to share, so here’s as much as I captured of them.]

Graphs often tell us what we already know, so look at the data from a different angle to learn something new. Gapminder plots data in three dimensions, comparing two variables from each data set over time using bubble graphs. Excel can do bubble graphs as well, but with some limitations.

In her example, Trainor showed reference transactions along the x-axis, the gate count along the y-axis, and the size of the circle represented the number of circulation transactions. Each bubble represented a campus library and each graph was for the year’s totals. By doing this, she was able to suss out some interesting trends and quirks to investigate that were hidden in the traditional line graphs.
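
Here is a rough sketch of the kind of bubble graph Trainor described, using matplotlib; the library names and counts are hypothetical placeholders, not her data.

```python
import matplotlib.pyplot as plt

libraries = ["Main", "Science", "Music", "Law"]       # hypothetical campus libraries
reference_transactions = [5200, 1800, 600, 1400]      # x-axis
gate_count = [410000, 95000, 30000, 72000]            # y-axis
circulation = [88000, 21000, 9000, 15000]             # bubble size

plt.scatter(reference_transactions, gate_count,
            s=[c / 200 for c in circulation],         # scale counts down to point sizes
            alpha=0.5)
for x, y, name in zip(reference_transactions, gate_count, libraries):
    plt.annotate(name, (x, y))
plt.xlabel("Reference transactions")
plt.ylabel("Gate count")
plt.title("One year of data, one bubble per campus library")
plt.show()
```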

WordCamp Richmond: Exploiting Your Niche – Making Money with Affiliate Marketing

presenter: Robert Sterling

Affiliate marketing is a practice of rewarding an affiliate for directing customers to the brand/seller that then results in a sale.

“If you’re good at something, never do it for free.” If you have a blog that’s interesting and people are coming to you, you’re doing something wrong if you’re not making money off of it.

Shawn Casey came up with a list of hot niches for affiliate marketing, but that’s not how you find what will work for you. Successful niches tend to be what you already have a passion for and where it intersects with affiliate markets. Enthusiasm provokes a positive response. Enthusiasm sells. People who are phoning it in don’t come across the same and won’t develop a loyal following.

Direct traffic, don’t distract from it. Minimize the number of IAB format ads – people don’t see them anymore. Maximize your message in the hot spots – remember the Google heat map. Use forceful anchor text like “click here” to direct users to the affiliate merchant’s site. Clicks on images should move the user towards a sale.

Every third or fourth blog post should be revenue-generating. If you do it with every post, people will assume it’s a splog. Instapundit is a good example of how to do a link post that directs users to relevant content from affiliate merchants. Affiliate datafeeds can be pulled in using several WP plugins. If your IAB format ads aren’t performing from day one, they never will.

Plugins (premium): PopShops works with a number of vendors. phpBay and phpZon work with eBay and Amazon, respectively. They’re not big revenue sources, but okay for side money.

Use magazine themes that let you prioritize revenue-generating content. Always have a left-sidebar and search box, because people are more comfortable with that navigation.

Plugins (free): W3 Total Cache (complicated, buggy, but results in fast sites, which Google loves), Regenerate Thumbnails, Ad-minister, WordPress Mobile, and others mentioned in previous sessions. Note: if you change themes, make sure you go back and check old posts. You want them to look good for the people who find them via search engines.

Forum marketing can be effective. Be a genuine participant, make yourself useful, and link back to your site only occasionally. Make sure you optimize your profile and use the FeedBurner headline animator.

Mashups are where you can find underserved niches (e.g., garden tools used as interior decoration). Use Google’s keyword tools to see if there is demand and who your competition may be. Check for potential affiliates on several networks (ClickBank, ShareASale, Pepperjam, Commission Junction, and other niche-appropriate networks). Look for low conversion rates, and if the commission rate is less than 20%, don’t bother.

Pay for performance (PPP) advertising is likely to replace traditional retail sales. Don’t get comfortable – it’s easy for people to copy what works well for you, and likewise you can steal from your competition.

Questions:

What’s a good percentage to shoot for? 50% is great, but not many do that. Above 25% is a good payout. Unless the payout is higher, avoid the high conversion rate affiliate programs. Look for steady affiliate marketing campaigns from companies that look like they’re going to be sticking around.

What about Google or Technorati ads? The payouts have gone down. People don’t see them, and they (Google) aren’t transparent enough.

How do you do this openly and maintain integrity in the eyes of your readers? One way to do it is a comparison post: look at two comparable products and list their features against each other.

NASIG 2010 reflections

When I was booking my flights and sending in my registration during the snow storms earlier this year, Palm Springs sounded like a dream. Sunny, warm, dry — all the things that Richmond was not. This would also be my first visit to Southern California, so I may be excused for my ignorance of the reality, and more specifically, the reality in early June. Palm Springs was indeed sunny, but not as dry and far hotter than I expected.

Despite the weather, or perhaps because of the weather, NASIGers came together for one of the best conferences we’ve had in recent years. All of the sessions were held in rooms that emptied out into the same common area, which also held the coffee and snacks during breaks. The place was constantly buzzing with conversations between sessions, and many folks hung back in the rooms, chatting with their neighbors about the session topics. Not many were eager to skip the sessions and the conversations in favor of drinks/books by the pools, particularly when temperatures peaked over 100°F by noon and stayed up there until well after dark.

As always, it was wonderful to spend time with colleagues from all over the country (and elsewhere) that I see once a year, at best. I’ve been attending NASIG since I was a wee serials librarian in 2002, and this conference/organization has been hugely instrumental in my growth as a librarian. Being there again this year felt like a combination of family reunion and summer camp. At one point, I choked up a little over how much I love being with all of them, and how much I was going to miss them until we come together again next year.

I’ve already blogged about the sessions I attended, so I won’t go into those details so much here. However, there were a few things that stood out to me and came up several times in conversations over the weekend.

One of the big things is a general trend towards publishers handling subscriptions directly, and in some cases, refusing to work with subscription agents. This is more prevalent in the electronic journal subscription world than in print, but that distinction is less significant now that so many libraries are moving to online-only subscriptions. I heard several librarians express concern over the potential increase in their workload if we go back to the era of ordering directly from hundreds of publishers rather than from one (or a handful) of subscription agents.

And then there’s the issue of invoicing. Electronic invoices that dump directly into a library acquisition system have been the industry standard with subscription agents for a long time, but few (and I can’t think of any) publishers are set up to deliver invoices to libraries using this method. In fact, my assistant who processes invoices must manually enter each line item of a large invoice for one of our collections of electronic subscriptions every year, since this publisher refuses to invoice through our agent (or will do so in a way that increases our fees to the point that my assistant would rather just do it himself). I’m not talking about a mom & pop society publisher — this is one of the major players. If they aren’t doing EDI, then it’s understandable that librarians are concerned about other publishers following suit.

Related to this, JSTOR and UC Press, along with several other society and small press publishers, have announced a new partnership that will allow those publishers to distribute their electronic journals on the JSTOR platform, from the first issue to the current one. JSTOR will handle all of the hosting, payments, and library technical support, leaving the publishers to focus on generating the content. Here’s the kicker: JSTOR will also be handling billing for print subscriptions of these titles.

That’s right – JSTOR is taking on the role of subscription agent for a certain subset of publishers. They say, of course, that they will continue to accept orders through existing agents, but if libraries and consortia are offered discounts for going directly to JSTOR, with whom they are already used to working directly for the archive collections, then eventually there will be little incentive to use a traditional subscription agent for titles from these publishers. On the one hand, I’m pleased to see some competition emerging in this aspect of the serials industry, particularly as the number of players has been shrinking in recent years, but on the other hand I worry about the future of traditional agents.

In addition to the big picture topics addressed above, I picked up a few ideas to add to my future projects list:

  • Evaluate the “one-click” rankings for our link resolver and bump publisher sites up on the list. These sources “count” more when I’m doing statistical reports, and right now I’m seeing that our aggregator databases garner more article downloads than the sources we pay for specifically. If this doesn’t improve the stats, then maybe we need to consider whether or not access via the aggregator is sufficient. Sometimes the publisher site interface is a deterrent for users.
  • Assess the information I currently provide to liaisons regarding our subscriptions and discuss with them what additional data I could incorporate to make the reports more helpful in making collection development decisions. Related to this is my ongoing project of simplifying the export/import process of getting acquisitions data from our ILS and into our ERMS for cost per use reports. Once I’m not having to do that manually, I can use that time/energy to add more value to the reports.
  • Do an inventory of our holdings in our ERMS to make sure that we have turned on everything that should be turned on and nothing that shouldn’t. I plan to start with the publishers that are KBART participants and move on from there (and yes, Jason Price, I will be sure to push for KBART compliance from those who are not already in the program).
  • Begin documenting and sharing workflow, SQL, and anything else that might help other electronic resource librarians who use our ILS or our ERMS, and make myself available as a resource. This stood out to me during the user group meeting for our ERMS, where I and a handful of others were the experts of the group, and by no means do I feel like an expert, but clearly there are quite a few people who could learn from my experience the way I learned from others before me.

I’m probably forgetting something, but I think those are big enough to keep me busy for quite a while.

If you managed to make it this far, thanks for letting me yammer on. To everyone who attended this year and everyone who couldn’t, I hope to see you next year in St. Louis!

NASIG 2010: Serials Management in the Next-Generation Library Environment

Panelists: Jonathan Blackburn, OCLC; Bob Bloom (?), Innovative Interfaces, Inc.; Robert McDonald, Kuali OLE Project/Indiana University

Moderator: Clint Chamberlain, University of Texas, Arlington

What do we really mean when we are talking about a “next-generation ILS”?

It is a system that will need to be flexible enough to accommodate increasingly changing and complex workflows. Things are changing so fast that systems can’t wait several years to release updates.

It also means different things to different stakeholders. The common thread is being flexible enough to manage both print and electronic, along with better reporting tools.

How is the “next-generation ILS” related to cloud computing?

Most of them have components in the cloud, and traditional ILS systems are partially there, too. Networking brings benefits (shared workloads).

What challenges are facing libraries today that could be helped by the emerging products you are working on?

Serials is one of the more mature items in the ILS. Automation as a result of standardization of data from all information sources is going to keep improving.

One of the key challenges is to deal with things holistically. We get bogged down in the details sometimes. We need to be looking at things on the collection/consortia level.

We are all trying to do more with less funding. Improving flexibility and automation will offer better services for the users and allow libraries to shift their staff assets to more important (less repetitive) work.

We need better tools to demonstrate the value of the library to our stakeholders. We need ways of assessing resources beyond comparing costs.

Any examples of how next-gen ILS will improve workflow?

Libraries are increasing spending on electronic resources, and many are nearly eliminating their print serials spending. Next gen systems need reporting tools that not only provide data about electronic use/cost, but also print formats, all in one place.

A lot of workflow comes from a print-centric perspective. Many libraries still haven’t figured out how to adjust that to include electronic without saddling all of that on one person (or a handful). [One of the issues is that the staff may not be ready/willing/able to handle the complexities of electronic.]

Every purchase should be looked at independently of format, with more focus on the cost and process of acquiring it and making it available to the stakeholders.

[Not taking as many notes from this point on. Listening for something that isn’t fluffy pie in the sky. I want some solid direction that isn’t just pretty words to make librarians happy.]

NASIG 2010: Integrating Usage Statistics into Collection Development Decisions

Presenters: Dani Roach and Linda Hulbert, University of St. Thomas

As with most libraries, they are faced with needing to downsize their purchases in order to fit within reduced budgets, so good tools must be employed to determine which stuff to remove or acquire.

Impact factor statistics mean little to librarians, since the “best” journals may not be appropriate for the programs the library supports. Quantitative data like cost per use, historical trends, and ILL data are more useful for libraries. Combine these with reviews, availability, features, user feedback, and the dust layer on the materials, and then you have some useful information for making decisions.

Usage statistics are just one component that we can use to analyze the value of resources. There are other variables than cost and other methods than cost per use, but these are what we most often apply.

Other variables can include funds/subjects, format, and identifiers like ISSN. Cost needs to be defined locally, as libraries manage them differently for annual subscriptions, multiple payments/funds, one-time archive fees, hosting fees, and single title databases or ebooks. Use is also tricky. A PDF download in a JR1 report is different from a session count in a DB1 report is different from a reshelve count for a bound journal. Local consistency with documentation is best practice for sorting this out.
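
As a small illustration of that documentation point, here is one possible (hypothetical) way to record the local definition of “use” for each resource type so that cost-per-use figures stay internally consistent. The report names follow COUNTER, but the structure and numbers are just an example, not the presenters’ practice.

```python
# Locally documented definitions of "use" per resource type (hypothetical).
USE_DEFINITIONS = {
    "ejournal": {"report": "COUNTER JR1", "metric": "full-text downloads"},
    "database": {"report": "COUNTER DB1", "metric": "sessions"},
    "print journal": {"report": "local reshelving count", "metric": "reshelves"},
}

def cost_per_use(resource_type, annual_cost, use_count):
    """Return cost per use plus a note on which locally defined metric was used."""
    definition = USE_DEFINITIONS[resource_type]
    cpu = annual_cost / use_count if use_count else None
    return cpu, f"{definition['metric']} ({definition['report']})"

print(cost_per_use("database", 5000, 1250))  # (4.0, 'sessions (COUNTER DB1)')
```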

Library-wide SharePoint service allows them to drop documents with subscription and analysis information into one location for liaisons to use. [We have a shared network folder that I do some of this with — I wonder if SharePoint would be better at managing all of the files?]

For print statistics, they track bound volume use separately from new issue use, scanning barcodes into their ILS to keep a count. [I’m impressed that they have enough print journal use to do that rather than hash marks on a sheet of paper. We had 350 reshelved last year, including ILL use, if I remember correctly.]

Once they have the data, they use what they call a “fairness factor” formula to normalize the various subject areas to determine if materials budgets are fairly allocated across all disciplines and programs. Applying this sort of thing now would likely shock budgets, so they decided to apply new money using the fairness factor, and gradually underfunded areas are being brought into balance without penalizing overfunded areas.
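
The presenters did not share the actual formula, so the sketch below is only a guess at the general idea: compare each subject’s share of some demand measure (invented usage figures here) with its share of the budget, and steer new money toward the subjects that come out underfunded.

```python
# Hypothetical budgets and usage; the "fairness factor" here is my own stand-in,
# not the presenters' formula.
budget = {"Biology": 120000, "History": 40000, "Nursing": 60000}
usage = {"Biology": 30000, "History": 18000, "Nursing": 32000}
new_money = 20000

total_budget = sum(budget.values())
total_usage = sum(usage.values())

# Ratio of a subject's usage share to its budget share; values above 1 suggest
# the subject is underfunded relative to demand.
fairness = {s: (usage[s] / total_usage) / (budget[s] / total_budget) for s in budget}

underfunded = {s: f for s, f in fairness.items() if f > 1}
weight_total = sum(f - 1 for f in underfunded.values())
for subject, f in underfunded.items():
    share = new_money * (f - 1) / weight_total
    print(f"{subject}: fairness {f:.2f}, allocate ${share:,.0f} of new money")
```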

They have stopped trying to achieve a balance between books and periodicals. They’ve left that up to the liaisons to determine what is best for their disciplines and programs.

They don’t hide their cancellation list, and if any of the user community wants to keep something, they’ve been willing to retain it. However, they get few requests to retain content, and they think it is in part because the user community can see the cost, use, and other factors that indicate the value of the resource for the local community.

They have determined that it costs them around $52 a title to manage a print subscription, and over $200 a title to manage an online subscription, mainly because of the level of expertise involved. So, there really are no “free” subscriptions, and if you want to get into the cost of binding/reshelving, you need to factor in the managerial costs of electronic titles, as well.

Future trends and issues: more granularity, more integration of print and online usage, interoperability and migration options for data and systems, continued standards development, and continued development of tools and systems.

Anything worth doing is worth overdoing. You can gather Ulrich’s reports, Eigen factors, relative price indexes, and so much more, but at some point, you have to decide if the return is worth the investment of time and resources.

ER&L 2010: Developing a methodology for evaluating the cost-effectiveness of journal packages

Speaker: Nisa Bakkalbasi

Journal packages offer capped price increases, access to non-subscribed content, and easier management than title-by-title subscriptions. But the economic downturn has meant that even the price caps are not enough to keep the packages sustainable.

Her library only seriously considers COUNTER reports, which is handy, since most package publishers provide them. They add to that the publisher’s title-by-title list price, as well as some subject categories and fund codes. Their analysis includes quantitative and qualitative variables using pivot tables.

In addition, they look at the pricing/sales model for the package: base value, subscribed/non-subscribed titles, cancellation allowance, price cap/increase, deep discount for print rate, perpetual/post-cancellation access rights, duration of the contract, transfer titles, and third-party titles.

So, the essential question is, are we paying more for the package than for specific titles (perhaps fewer than we currently have) if we dissolved the journal package?

She takes the usage reports for at least the past three years in order to look at trends, excludes titles that are priced under separate models, and also excludes backfile usage if that was a separate purchase (COUNTER JR1a subtracted from JR1; you will need to know which years the publisher counts as the backfile). Then she adds list prices for all titles (subscribed and non-subscribed), calculates the cost per use of the titles, and uses the ILL cost (per the ILL department) as a threshold for possible renewals or cancellations.
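
A simplified Python sketch of that calculation follows; the ILL cost and title data are invented, and this is only my reading of the method described above, not the presenter’s actual spreadsheet.

```python
ill_cost = 25.00  # assumed per-request ILL cost supplied by the ILL department

titles = [
    # (title, list price, JR1 downloads, JR1a backfile downloads)
    ("Journal A", 1200.00, 350, 40),
    ("Journal B", 2400.00, 60, 55),
    ("Journal C", 800.00, 0, 0),
]

for name, price, jr1, jr1a in titles:
    current_use = jr1 - jr1a  # exclude use of the separately purchased backfile
    cpu = price / current_use if current_use else None
    label = "no current use" if cpu is None else f"${cpu:,.2f} per use"
    verdict = "keep" if cpu is not None and cpu <= ill_cost else "cheaper via ILL; review"
    print(f"{name}: {label} -> {verdict}")
```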

The final decision depends on the base value paid by the library, the collection budget increase/decrease, price cap, and the quality/consistency of ILL service (money is not everything). This method is only about the costs, and it does not address the value of the resources to the users beyond what they may have looked at. There may be other factors that contributed to non-use.
