ER&L: Library Renewal

Speaker: Michael Porter

Libraries are content combined with community. Electronic content access is making it more challenging for libraries to accomplish their missions.

It’s easy to complain but hard to act, and sadly, we tend to complain more than we do. If we get a reputation for being negative, that will be detrimental. That doesn’t mean we should be Sally Sunshine, but we need to approach things with a more positive attitude to make change happen.

Libraries have an identity problem. We are tied to one form of content (i.e. books). 95% of people in poverty have cable television. They can’t afford it, but they want it, so they get it. Likewise, mobile access to content is becoming ubiquitous.

Our identity needs to be moved back to content. We need to circulate electronic content better than Netflix, Amazon, iTunes, etc.

Electronic content distribution is a complicated issue. Vendors in our market don’t have the same kind of revenue as companies like Apple. We aren’t getting the best people or solutions; we’re getting the merely good enough, if we’re lucky.

Could libraries become the distribution hub for media and other electronic content?

WordCamp Richmond: Exploiting Your Niche – Making Money with Affiliate Marketing

Presenter: Robert Sterling

Affiliate marketing is the practice of rewarding an affiliate for directing customers to a brand or seller when that referral results in a sale.

“If you’re good at something, never do it for free.” If you have a blog that’s interesting and people are coming to you, you’re doing something wrong if you’re not making money off of it.

Shawn Casey came up with a list of hot niches for affiliate marketing, but that’s not how you find what will work for you. Successful niches tend to be found where your existing passions intersect with affiliate markets. Enthusiasm provokes a positive response. Enthusiasm sells. People who are phoning it in don’t come across the same way and won’t develop a loyal following.

Direct traffic, don’t distract from it. Minimize the number of IAB format ads – people don’t see them anymore. Maximize your message in the hot spots – remember the Google heat map. Use forceful anchor text like “click here” to direct users to the affiliate merchant’s site. Clicks on images should move the user towards a sale.

Every third or fourth blog post should be revenue-generating. If you do it with every post, people will assume it’s a splog. Instapundit is a good example of how to do a link post that directs users to relevant content from affiliate merchants. Affiliate datafeeds can be pulled in using several WP plugins. If your IAB format ads aren’t performing from day one, they never will.

Plugins (premium): PopShops works with a number of vendors. phpBay and phpZon work with eBay and Amazon, respectively. They’re not big revenue sources, but okay for side money.

Use magazine themes that let you prioritize revenue-generating content. Always have a left-sidebar and search box, because people are more comfortable with that navigation.

Plugins (free): W3 Total Cache (complicated, buggy, but results in fast sites, which Google loves), Regenerate Thumbnails, Ad-minister, WordPress Mobile, and others mentioned in previous sessions. Note: if you change themes, make sure you go back and check old posts. You want them to look good for the people who find them via search engines.

Forum marketing can be effective. Be a genuine participant, make yourself useful, and link back to your site only occasionally. Make sure you optimize your profile and use the FeedBurner headline animator.

Mashups are where you can find underserved niches (e.g. garden tools used as interior decoration). Use Google’s keyword tools to see if there is demand and who your competition may be. Check for potential affiliates on several networks (ClickBank, ShareASale, Pepperjam, Commission Junction, and other niche-appropriate networks). Look for low conversion rates, and if the commission rate is less than 20%, don’t bother.

Pay for performance (PPP) advertising is likely to replace traditional retail sales. Don’t get comfortable – it’s easy for people to copy what works well for you, and likewise you can steal from your competition.

Questions:

What’s a good percentage to shoot for? 50% is great, but not many do that. Above 25% is a good payout. Unless the payout is higher, avoid the high conversion rate affiliate programs. Look for steady affiliate marketing campaigns from companies that look like they’re going to be sticking around.

What about Google or Technorati ads? The payouts have gone down. People don’t see them, and they (Google) aren’t transparent enough.

How do you do this under your own name and maintain integrity in the eyes of your readers? One way to do it is a comparison post. Look at two comparable products and list their features against each other.

NASIG 2010: Publishing 2.0: How the Internet Changes Publications in Society

Presenter: Kent Anderson, JBJS, Inc.

Medicine 0.1: in dealing with the influenza outbreak of 1837, a physician administered leeches to the chest, James’s powder, and mucilaginous drinks, and it worked (much like “take two aspirin and call me in the morning”). All of this was written up in a medical journal as a way to share information with peers. Journals have been the primary means of communicating scholarship, but what a journal is has become more abstract with the addition of non-text content and metadata. Add in indexes and other portals for accessing the information, and readers have changed the way they access and share information in journals. “Non-linear” access to information is increasing exponentially.

Even as technology made publishing easier and more widespread, it was still producers delivering content to consumers. But, with the advent of Web 2.0 tools, consumers now have tools that in many cases are more nimble and accessible than the communication tools that producers are using.

Web 1.0 was a destination. Documents simply moved to a new home, and “going online” was a process separate from anything else you did. However, as broadband access increases, the web becomes more pervasive and less a destination. The web becomes a platform that brings people, not documents, online to share information, consume information, and use it like any other tool.

Heterarchy: a system of organization replete with overlap, multiplicity, mixed ascendancy and/or divergent but coexistent patterns of relation

Apomediation: mediation by agents not interposed between users and resources, who stand by to guide a consumer to high-quality information without a role in the acquisition of the resources (e.g. Amazon product reviewers)

NEJM uses users’ search terms to add related searches to article search results. They also bump popular articles up in the search results as more people click on them. These tools improved their search results and reputation, all by harnessing the people power of experts. In addition, they created a series of “results in” publications that highlight the popular articles.
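
As a rough sketch of that click-based boosting (the speaker did not give NEJM’s actual formula, so the weighting and dampening below are purely hypothetical), popularity can be folded into ranking by adding a dampened click count to each result’s relevance score:

# Hypothetical sketch of click-based result boosting; the 0.1 weight and the
# log dampening are illustrative choices, not NEJM's actual implementation.
import math

def rerank(results, click_counts, weight=0.1):
    """results: list of (article_id, relevance); click_counts: article_id -> clicks."""
    def boosted_score(item):
        article_id, relevance = item
        return relevance + weight * math.log1p(click_counts.get(article_id, 0))
    return sorted(results, key=boosted_score, reverse=True)

# A heavily clicked article overtakes a slightly "more relevant" one.
print(rerank([("a1", 2.0), ("a2", 1.9)], {"a2": 500}))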

It took a little over a year to reach a million Twitter authors, and about 600 years to reach the same number of book authors. And, these are literate, savvy users. Twitter & Facebook account for 1.45 million views of the New York Times (and this is a number from several years ago); imagine what they could do for your scholarly publication. Oh, and the NYT has a social media editor now.

Blogs are growing four times as fast as traditional media. The top ten media sites include blogs, and traditional media sources now use blogs as well. Blogs can be diverse or narrow, their coverage varies (and does not have to be immediate), they are verifiably accurate, and they are interactive. Blogs level the media playing field, in part by watching the watchdogs. Blogs tend to investigate more than the mainstream media does.

It took AOL five times as long as the iPhone to reach twenty million users. Consumers are increasingly adding “toys” to their collection of ways to get to digital/online content. When the NEJM went onto the Kindle, more than just physicians subscribed. Getting content into easy-to-access places and onto the “toys” that consumers use will increase your reach.

Print digests are struggling because they teeter on the brink of the daily divide. Why wait for the news to get stale, collected, and delivered a week/month/quarter/year later? People are transforming. Our audiences don’t think of information as analogue, delayed, isolated, tethered, etc. It has to evolve into something digital, immediate, integrated, and mobile.

From the Q&A session:

The article container will be here for a long time. Academics use the HTML version of the article, but the PDF (static) version is their security blanket and archival copy.

Where does the library fit in as a source of funds when the focus is more on end users? Publishers are looking for other sources of income as library budgets decrease (e.g. Kindle editions, product differentiation, etc.). They are looking to other purchasing centers at institutions.

How do publishers establish the cost of these 2.0 products? It’s essentially what the market will bear, with some adjustments. Sustainability is a grim perspective. Flourishing is much more positive, and not necessarily any less realistic. Equity is not a concept that comes into pricing.

The people who bring the tremendous flow of information under control (i.e. offer filters) will be successful. One of our tasks is to make filters to help our users manage the flow of information.

CIL 2010: The Power in Your Browser – LibX & Zotero

Speaker: Krista Godfrey

She isn’t going to show how to set up LibX or Zotero, but rather how to use them to create life-long learners. Rather than teaching students proprietary tools like RefWorks, teaching them tools they can still use after graduation will help support their continued research needs.

LibX works in IE and Firefox. They are working on a Chrome version as well. It fits into the search and discovery modules in the research cycle. The toolbar connects to the library catalog and other tools, and right-click menu search options are available on any webpage.  It will also embed icons in places like Amazon that will link to catalog searches, and any page with a document identifier (DOI, ISSN) will now present that identifier as a link to the catalog search.
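
To make that identifier-linking idea concrete, here is a minimal sketch (in Python rather than the JavaScript a browser extension like LibX actually uses, and with a placeholder resolver URL) that scans page text for DOIs and ISSNs and turns them into OpenURL queries against a library link resolver:

# Hypothetical sketch; resolver.example.edu is a placeholder for your link resolver.
import re
from urllib.parse import quote

RESOLVER_BASE = "https://resolver.example.edu/openurl?"
DOI_RE = re.compile(r"\b10\.\d{4,9}/\S+\w")
ISSN_RE = re.compile(r"\b\d{4}-\d{3}[\dXx]\b")

def identifier_links(page_text):
    """Return one link-resolver URL for each DOI or ISSN found in the text."""
    links = [RESOLVER_BASE + "rft_id=" + quote("info:doi/" + doi, safe="")
             for doi in DOI_RE.findall(page_text)]
    links += [RESOLVER_BASE + "rft.issn=" + issn
              for issn in ISSN_RE.findall(page_text)]
    return links

print(identifier_links("See doi:10.1234/example.5678 (ISSN 1234-5678)."))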

Zotero is only in Firefox, unfortunately. It’s a reference management tool that allows you to collect, manage, cite, and share, which fills in the rest of the modules in the research cycle. It will collect anything, archive anything, and store any attached documents. You can add notes and tags, and enhance the metadata. The citation process works in Word, OpenOffice, and Google Docs, with a Write-N-Cite-like feature that lets you drag and drop the citation where you want it to go.

One of the downsides to Zotero when it first came out was that it lived in only one browser on one machine, but the new version comes with server space you can sync your data to, which lets you access it from other browsers and machines. You can create groups and share documents within them, which would be great for a class project.

Why aren’t we teaching Zotero/LibX more? Well, partially because we’ve spent money on other stuff, and we tend to push those more. Also, we might be worried that if we give our users tools to access our content without going through our doors, they may never come back. But, it’s about creating life-long learners, and they won’t be coming through our doors when they graduate. So, we need to teach them tools like these.

ER&L 2010: Adventures at the Article Level

Speaker: Jamene Brooks-Kieffer

Article level, for those familiar with link resolvers, means the best link type to give to users. The article is the object of pursuit, and the library and the user collaborate on identifying it, locating it, and acquiring it.

In 1980, the only good article-level identification was the Medline ID. Users would need to go through a qualified Medline search to track down relevant articles, and the library would need the article-level identifier to make a fast request from another library. Today, the user can search Medline on their own; use OpenURL linking to get to the full text, print, or an ILL request; and obtain the article from the source or via ILL. Unlike in 1980, the user no longer needs to find the journal first to get to the article. Also, the librarian’s role is now more about maintaining relevant metadata so that users have the tools to locate articles themselves.
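
As a minimal illustration of that OpenURL step (the resolver address and citation values below are placeholders, but the keys follow the standard OpenURL 1.0 KEV format), an article-level request can be built directly from Medline-style metadata, so the user never has to start from the journal:

# Sketch: build an article-level OpenURL from Medline-style citation metadata.
# resolver.example.edu and all citation values below are placeholders.
from urllib.parse import urlencode

def build_openurl(resolver_base, pmid, jtitle, volume, issue, spage, atitle):
    params = {
        "url_ver": "Z39.88-2004",                      # OpenURL 1.0
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
        "rft.genre": "article",
        "rft_id": "info:pmid/" + pmid,                 # article-level identifier
        "rft.jtitle": jtitle,
        "rft.volume": volume,
        "rft.issue": issue,
        "rft.spage": spage,
        "rft.atitle": atitle,
    }
    return resolver_base + "?" + urlencode(params)

print(build_openurl("https://resolver.example.edu/openurl",
                    "12345678", "Example Journal of Medicine",
                    "10", "2", "101", "An example article"))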

In thirty years, the library has moved from being a partner with the user in pursuit of the article to being the magician behind the curtain. Our magic is made possible by the technology we know but that our users do not know.

Unique identifiers solve the problem of making sure that you are retrieving the correct article. CrossRef can link to specific instances of items, but not necessarily the one the user has access to. The link resolver will use that DOI to find other instances of the article available to users of the library. Easy user authentication at the point of need is the final key to implementing article-level services.
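
For instance, a script can look up a DOI’s metadata through CrossRef’s public REST API and then hand the same DOI to the local link resolver, which decides which licensed copy the user actually gets. This is only a sketch of the flow described above, and the resolver URL is a placeholder:

# Sketch: fetch metadata for a DOI from the public CrossRef REST API, then pass
# the DOI to the library's link resolver (placeholder URL) for an accessible copy.
import json
from urllib.request import urlopen
from urllib.parse import quote

def crossref_metadata(doi):
    with urlopen("https://api.crossref.org/works/" + quote(doi, safe="/")) as resp:
        return json.load(resp)["message"]

def resolver_link(doi, resolver_base="https://resolver.example.edu/openurl"):
    return resolver_base + "?rft_id=" + quote("info:doi/" + doi, safe="")

# Usage (requires network access; substitute any real DOI):
# meta = crossref_metadata("10.xxxx/your-doi")
# print(meta.get("title"), resolver_link("10.xxxx/your-doi"))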

One of the library’s biggest roles is facilitating access. It’s not as simple as setting up a link resolver – it must be maintained or the system will break down. Also, document delivery service provides an opportunity to generate goodwill between libraries and users. The next step is supporting the user’s preferred interface, through tools like LibX, Papers, Google Scholar link resolver integration, and mobile devices. The latter is the most difficult because much of the content comes from outside service providers and because developing applications or mobile web interfaces requires institutional support.

We also need to consider how we deliver the articles users need. We need to evolve our acquisitions process. We need to be ready for article-level usage data, so we need to stop thinking about it as a single-institutional data problem. Aggregated data will help spot trends. Perhaps we could look at the ebook pay-as-you-use model for article-level acquisitions as well?
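
A minimal sketch of the aggregation mentioned above (with made-up DOIs and counts) is simply merging per-institution article download counts so that cross-institutional trends become visible:

# Sketch: merge article-level download counts reported by several institutions.
# The DOIs and counts are hypothetical example data.
from collections import Counter

def aggregate_usage(institution_reports):
    """institution_reports: iterable of dicts mapping DOI -> downloads at one institution."""
    totals = Counter()
    for report in institution_reports:
        totals.update(report)
    return totals.most_common()

inst_a = {"10.1234/alpha": 120, "10.1234/beta": 15}
inst_b = {"10.1234/alpha": 80, "10.1234/gamma": 42}
print(aggregate_usage([inst_a, inst_b]))
# [('10.1234/alpha', 200), ('10.1234/gamma', 42), ('10.1234/beta', 15)]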

PIRUS & PIRUS 2 are projects to develop COUNTER-compliant article usage data for all article-hosting entities (both traditional publishers and institutional repositories). Projects like MESUR will inform these kinds of ventures.

Libraries need to be working on recommendation services. Amazon and Netflix are not flukes. Demand, adopt, and promote recommendation tools like bX or LibraryThing for Libraries.
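
Recommendation services of this kind generally work on co-occurrence in usage data (“people who used this article also used…”). Here is a toy sketch with hypothetical session data, not bX’s actual algorithm:

# Toy sketch of a usage-based recommender ("people who used this article also
# used..."); the sessions below are hypothetical, and this is not bX's algorithm.
from collections import Counter

def recommend(article_id, sessions, top_n=3):
    """sessions: list of sets of article IDs used together in one research session."""
    co_counts = Counter()
    for session in sessions:
        if article_id in session:
            co_counts.update(session - {article_id})
    return [other for other, _ in co_counts.most_common(top_n)]

sessions = [{"a", "b", "c"}, {"a", "c"}, {"b", "d"}]
print(recommend("a", sessions))  # ['c', 'b']: 'c' co-occurs with 'a' twice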

Users are going beyond locating and acquiring the article to storing, discussing, and synthesizing the information. The library could facilitate that. We need something that lets the user connect with others, store articles, and review recommendations that the system provides. We have the technology (magic) to make it available right now: data storage, cloud applications, targeted recommendations, social networks, and pay-per-download.

How do we get there? Cover the basics of identify>locate>acquire. Demand tools that offer services beyond that, or sponsor the creation of desired tools and services. We also need to stay informed of relevant standards and recommendations.

Publishers will need to be a part of this conversation as well, of course. They need to develop models that allow us to retain access to purchased articles. If we are buying on the article level, what incentive is there to have a journal in the first place?

For tenure and promotion purposes, we need to start looking more at the impact factor of the article, not so much the journal-level impact. PLOS provides individual article metrics.