WordCamp Richmond: Strategery!

presenter: Bradley Robb

“A couple of tips for improving your blog’s readership and like 26 pictures of kittens”

A comprehensive digital strategy should underpin anything you build online. When you start a blog, you are committing yourself to putting out content forever.

The Field of Dreams fallacy: just because you blog it doesn't mean anyone will read it. Knowing your visitors means knowing your visitor types. Referral traffic is your goal. Blog readership is not a zero-sum game; your fellow bloggers are your peers.

Quantitative analysis like page ranks compares apples to apples. But if you want to compare apples to oranges, you need to look at different things. Post frequency will increase popularity, particularly for those who do not read via RSS. Comment frequency is an indicator of post frequency. You also want to pay attention to whether the commenters are responding to the post or responding to each other (i.e. creating a community).

Amass, prioritize, track, repeat: Find all of the people who are talking about your niche in a full-time manner. Evaluate your own blog, then develop a rubric to compare your site to peers. Create a list of blogs where you’d like to guest post. Track your successes and failures – Robb suggests using a spreadsheet (blogs tracked, comments, linkbacks, etc.). Keep adding to your amassed list, keep evaluating your standing, and keep tracking.

You need to be reading the blogs in your community, but that can take a lot of time. Following their Twitter feeds might be faster. And if you’re not using RSS, you should be.

“Commenting on blogs is like working a room at a party with one major exception: nobody knows if you’re wearing pants.”

Make your comment relevant, short, and interesting, but don't steal the show. Make sure you put your blog anchor page in the URL field of the comment form. You want people to track back to your blog, right? If there is an option to track the comments, do it. It's okay to disagree, but be intelligent about it. Be yourself, but better (and sign with your name, not your blog/book/etc.). Count to ten before you hit send, not just for keeping a cool head, but also for correcting grammatical errors.

Guest posting: write the post before you pitch it. It indicates that you understand the blog and its content, and that you can write. Plus, they won't be waiting on you for a deadline.

Measure twice, cut once: If your commenting strategy isn’t working, then figure out how to change it up. Are you getting traffic? Are your comments being responded to?

Give them something to talk about. If you’re doing all this strategy, make sure you have something worth reading.

Questions:

Recommended features & widgets? Robb doesn't use many widgets. Trackbacks are a big back-end feature. Disqus can aggregate reactions, which you can publish with the post.

What are easy ways to get people to comment on your blog? There are several methods. One is to be wrong, because the internet will tell you that you’re wrong, and that can drive comment traffic. Another is to publish a list.

How do you know what to write about? By following the niche/industry, you can get a feel for hot topics and trends.

Do you have any specific strategies for using Facebook to publicize your blog? Robb hates Facebook and its personal-data-stealing soul. He recommends the same strategy as Twitter: for every ten posts about something else, post one promoting your blog.

What about communities like Digg or Reddit? Unless you hit the front page, you don’t really get enough traffic to warrant the time.

How many ads are too many? Depends on how big of a boat you want. If you build your theme to incorporate ads smartly, you don't need as many of them to be successful. In print journalism, the page is designed around the ads, with the news filling the rest.

it could be worse

Have you noticed the changes Google has been making to the way they display search results? Google Instant has been the latest, but before that, there was the introduction of the “Everything” sidebar. And that one in particular seems to have upset numerous Google search fans. If you do a search in Google for “everything sidebar,” the first few results are about removing or hiding it.

Not only that, but the latest offering from the Funny Music Project is a song all about hating the Google “Everything” sidebar. The creator, Jesse Smith, expresses a frustration that many of us can identify with, “It’s hard to find a product that does what it does really well. In a world of mediocrity, it’s the exception that excels. Then some jerk has to justify his job by tinkering and jiggering and messing up the whole thing.”

Tech folks like to tinker. We like making things work better, faster, or more intuitively. I'll bet that there are a lot of Google users who didn't know about the different kinds of content-specific searches that Google offered, or had never used the advanced search tools. And they're probably happy with the introduction of the "Everything" sidebar.

But there’s another group of folks who are evidently very unhappy with it. Some say it takes up too much room on the screen, that it adds complexity, and that they just don’t like the way it looks.

Cue ironic chuckling from me.

Let’s compare the Google search results screen with search results from a few of the major players in libraryland:

[Screenshots compared the Google results page with ProQuest, EBSCOhost, CSA Illumina, and ISI Web of Knowledge.]

So, who’s going to write a song about how much they hate <insert library database platform of choice>?

NASIG 2010: Publishing 2.0: How the Internet Changes Publications in Society

Presenter: Kent Anderson, JBJS, Inc

Medicine 0.1: in dealing with the influenza outbreak of 1837, a physician administered leeches to the chest, James's powder, and mucilaginous drinks, and it worked (much like "take two aspirin and call me in the morning"). All of this was written up in a medical journal as a way to share information with peers. Journals have been the primary means of communicating scholarship, but what the journal is has become more abstract with the addition of non-text content and metadata. Add in indexes and other portals to access the information, and readers have changed the way they access and share information in journals. "Non-linear" access of information is increasing exponentially.

Even as technology made publishing easier and more widespread, it was still producers delivering content to consumers. But, with the advent of Web 2.0 tools, consumers now have tools that in many cases are more nimble and accessible than the communication tools that producers are using.

Web 1.0 was a destination. Documents simply moved to a new home, and “going online” was a process separate from anything else you did. However, as broadband access increases, the web becomes more pervasive and less a destination. The web becomes a platform that brings people, not documents, online to share information, consume information, and use it like any other tool.

Heterarchy: a system of organization replete with overlap, multiplicity, mixed ascendancy, and/or divergent but coexistent patterns of relation

Apomediation: mediation by agents not interposed between users and resources, who stand by to guide a consumer to high-quality information without playing a role in the acquisition of the resources (i.e. Amazon product reviewers)

NEJM uses terms by users to add related searches to article search results. They also bump popular articles from searches up in the results as more people click on them. These tools improved their search results and reputation, all by using the people power of experts. In addition, they created a series of “results in” publications that highlight the popular articles.
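[note: a minimal Python sketch of the click-boosting idea as I understood it; the names and weighting below are mine, not NEJM's actual system.]

    from collections import defaultdict

    clicks = defaultdict(int)  # article id -> accumulated click count

    def record_click(article_id):
        clicks[article_id] += 1

    def rerank(results, weight=0.1):
        """Order results by base relevance plus a click-popularity bonus."""
        return sorted(results,
                      key=lambda r: r["score"] + weight * clicks[r["id"]],
                      reverse=True)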

It took a little over a year to reach a million Twitter authors, and about 600 years to reach the same number of book authors. And these are literate, savvy users. Twitter & Facebook account for 1.45 million views of the New York Times (and this is a number from several years ago) — imagine what they can do for your scholarly publication. Oh, and the NYT has a social media editor now.

Blogs are growing four times as fast as traditional media. The top ten media sites include blogs, and traditional media sources use blogs now as well. Blogs can be diverse or narrow, their coverage varies (and does not have to be immediate), they are verifiably accurate, and they are interactive. Blogs level the media playing field, in part by watching the watchdogs. Blogs tend to investigate more than the mainstream media.

It took AOL five times as long to reach twenty million users as it took the iPhone. Consumers are increasingly adding "toys" to their collection of ways to get to digital/online content. When the NEJM went on the Kindle, more than just physicians subscribed. Getting content into easy-to-access places and onto the "toys" that consumers use will increase your reach.

Print digests are struggling because they teeter on the brink of the daily divide. Why wait for the news to get stale, collected, and delivered a week/month/quarter/year later? People are transforming. Our audiences don't think of information as analogue, delayed, isolated, tethered, etc. It has to evolve into something digital, immediate, integrated, and mobile.

From the Q&A session:

The article container will be here for a long time. Academics use the HTML version of the article, but the PDF (static) version is their security blanket and archival copy.

Where does the library fit as a source of funds when the focus is more on the end users? Publishers are looking for other sources of income as library budgets decrease (i.e. Kindle, product differentiation, etc.). They are looking to other purchasing centers at institutions.

How do publishers establish the cost of these 2.0 products? It’s essentially what the market will bear, with some adjustments. Sustainability is a grim perspective. Flourishing is much more positive, and not necessarily any less realistic. Equity is not a concept that comes into pricing.

The people who bring the tremendous flow of information under control (i.e. offer filters) will be successful. One of our tasks is to make filters to help our users manage the flow of information.

NASIG 2010: Linked Data and Libraries

Presenter: Eric Miller, Zepheira, LLC

Nowadays, we understand what the web is and the impact it has had on information sharing, but before it was developed, it was in a “vague but exciting” stage and few understood it. When we got started with the web, we really didn’t know what we were doing, but more importantly, the web was being developed so that it was flexible enough for smarter and more creative people to do amazing things.

“What did your website look like when you were in the fourth grade?” Kids are growing up with the web and it’s hard for them to comprehend life without it. [Dang, I’m old.]

This talk will be about linked data, its legacy, and how libraries can lead linked data. We have a huge opportunity to weave libraries into the fabric of the web, and vice versa.

About five years ago, the BBC started making their content available in a service that allowed others to use and remix the delivery of the content in new ways. Rather than developing alternative platforms and creating new spaces, they focused on generating good content and let someone else frame it. Other sources like NPR, the World Bank, and Data.gov are doing the same sorts of things. Within the library community, these things are happening as well. OCLC's APIs are getting easier to use, and several national libraries are putting their OPACs on the web with APIs.

Obama’s open government initiative is another one of those “vague but exciting” things, and it charged agencies to come up with their own methods of making their content available via the web. Agencies are now struggling with the same issues and desires that libraries have been tackling for years. We need to recognize our potential role in moving this forward.

Linked data is a set of best practices for sharing and connecting data on the semantic web. Rather than leaving the data in their current formats, let's put them together in ways they can be used on the wider web. It's not the databases that make the web possible, it's the web that makes the databases usable.
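[note: a minimal sketch with Python's rdflib of what "putting data together for the web" looks like; the URIs below are illustrative, not anyone's real identifiers.]

    from rdflib import Graph, Literal, Namespace, URIRef

    DC = Namespace("http://purl.org/dc/terms/")
    record = URIRef("http://example.org/catalog/record/12345")

    g = Graph()
    g.add((record, DC.title, Literal("Weaving the Web")))
    g.add((record, DC.creator, URIRef("http://example.org/authors/berners-lee")))

    # the same record, now expressed as web-shareable triples
    print(g.serialize(format="turtle"))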

Human computation can be put to use in ways that assist computers to make information more usable. Captcha systems are great for blocking automated programs when needed, and by using human computation to decipher scanned text that is undecipherable by computers, ReCaptcha has been able to turn unusable data into a fantastic digital repository of old documents.
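[note: a minimal Python sketch of the ReCaptcha trick as described: pair a word the computer already knows with one it couldn't read, and only trust readings from users who get the known word right. The names and threshold are mine.]

    from collections import Counter

    votes = {}  # unknown-word image id -> Counter of submitted readings

    def submit(control_answer, control_truth, unknown_id, unknown_reading,
               threshold=3):
        """Record a user's reading; return a transcription once enough agree."""
        if control_answer.strip().lower() != control_truth.strip().lower():
            return None  # failed the control word, so discard this reading
        counts = votes.setdefault(unknown_id, Counter())
        counts[unknown_reading.strip().lower()] += 1
        reading, n = counts.most_common(1)[0]
        return reading if n >= threshold else None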

LEGOs have been around for decades, and their simple design ensures that new blocks work with old blocks. Most kids end up dumping all of their sets into one bucket, so no matter where the individual building blocks come from, they can be put together and rebuilt in any way you can imagine. We could do this with our blocks of data, if they are designed well enough to fit together universally.

Our current applications, for the most part, are not designed to allow for the portability of data. We need to rethink application design so that the data becomes more portable. Web applications have, by necessity, had to have some amount of portability. Users are becoming more empowered to use the data provided to them in their own way, and if they don't get that from your service/product, then they go elsewhere.

Digital preservation repositories are discussing ways to open up their data so that users can remix and mashup data to meet their needs. This requires new ways of archiving, cataloging, and supplying the content. Allow users to select the facets of the data that they are interested in. Provide options for visualizing the raw data in a systematic way.

Linked data platforms create identifiers for every aspect of the data they contain, and these are the primary keys that join data together. Other content that is created can be combined to enhance the data generated by agencies and libraries, but we don’t share the identifiers well enough to allow others to properly link their content.

Web architecture starts with web identifiers. We can use URLs to identify things other than just documents, but we need to be consistent and we can’t change the URL structures if we want it to be persistent. A lack of trust in identifiers is slowing down linked data. Libraries have the opportunity to leverage our trust and data to provide control points and best practices for identifier curation.
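[note: a small Python/rdflib sketch of why shared, persistent identifiers matter: two independently published descriptions that use the same URI merge with no crosswalk at all. The vocabularies and URIs are illustrative.]

    from rdflib import Graph

    library_ttl = """
    @prefix dc: <http://purl.org/dc/terms/> .
    <http://example.org/work/42> dc:title "Moby-Dick" .
    """

    community_ttl = """
    @prefix rev: <http://purl.org/stuff/rev#> .
    <http://example.org/work/42> rev:rating "4.5" .
    """

    g = Graph()
    g.parse(data=library_ttl, format="turtle")
    g.parse(data=community_ttl, format="turtle")  # same URI, so the facts combine

    for s, p, o in g:
        print(s, p, o)  # one merged description of work/42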

A lot of work is happening in W3C. Libraries should be more involved in the conversation.

Enable human computation by providing the necessary identifiers back to data. Empower your users to use your data, and build a community around it. Don’t worry about creating the best system — wrap and expose your data using the web as a platform.

CIL 2010: Google Wave

Presenters: Rebecca Jones & Bob Keith

Jones was excited to have something that combined chat with cloud applications like Google Docs. Wave is a beginning, but still needs work. Google is not risk-averse, so they put it out and let us bang on it to shape it into something useful.

More people in the room had joined Google Wave and then abandoned it than had stuck with it (fewer than 10% were still using it). We needed something that would push us over to incorporating it into our workflows, and we didn't see that happen.

The presenters created a public wave, which you can find by searching “with:public tag:cil2010”. Ironically, they had to close Wave in order to have enough virtual memory to play the video about Wave.

Imagine that! Google Wave works better in Google Chrome than in other browsers (including Firefox with the Gears extension).

Gadgets add functionality to waves. [note: I’ve also seen waves that get bogged down with too many gadgets, so use them sparingly.] There are also robots that can do tasks, but it seems to be more like text-based games, which have some retro-chic, but no real workflow application.

Wave is good for managing a group to-do list or worklog, planning events, taking and sharing meeting notes, and managing projects. However, all participants need to be Wave users. And, it’s next to impossible to print or otherwise archive a Wave.

The thing to keep in mind with Wave is that it’s not a finished product and probably shouldn’t be out for public consumption yet.

The presentation (available at the CIL website and on the wave) also includes links to a pile of resources for Wave.

ER&L 2010: Where are we headed? Tools & Technologies for the future

Speakers: Ross Singer & Andrew Nagy

Software as a service saves the institution time and money because the infrastructure is hosted and maintained by someone else. Computing has gone from centralized, mainframe processing to an even mix of personal computers on a networked enterprise, and now back to a very centralized environment with cloud applications and thin clients.

Library resource discovery is, to a certain extent, already in the cloud. We use online databases and open web search, WorldCat, and next-gen catalog interfaces. The next-gen catalog places the focus on the institution's resources, but it's not the complete solution. (People see a search box and they want to run queries on it – doesn't matter where it is or what it is.) The next-gen catalog only provides access to local resources, and while it looks like a modern interface, the back end is still old-school library indexing that doesn't work well with keyword searching.

Web-scale discovery is a one-stop shop that provides increased access, enhances research, and provides an increased ROI for the library. Our users don't use Google because it's Google; they use it because it's simple, easy, and fast.

How do we make our data relevant when administration doesn’t think what we do is as important anymore? Linked data might be one solution. Unfortunately, we don’t do that very well. We are really good at identifying things but bad at linking them.

If every component of a record is given identifiers, it’s possible to generate all sorts of combinations and displays and search results via linking the identifiers together. RDF provides a framework for this.

Also, once we start using common identifiers, then we can pull in data from other sources to increase the richness of our metadata. Mashups FTW!
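[note: a hedged Python sketch of that kind of enrichment, assuming the public Open Library API and using a sample ISBN as the shared identifier.]

    import requests

    def enrich(isbn):
        """Pull extra metadata for a record we already hold, keyed on its ISBN."""
        resp = requests.get(f"https://openlibrary.org/isbn/{isbn}.json", timeout=10)
        resp.raise_for_status()
        data = resp.json()
        return {"title": data.get("title"), "pages": data.get("number_of_pages")}

    print(enrich("9780143105428"))  # sample ISBN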

IL2009: Mashups for Library Data

Speaker: Nicole Engard

Mashups are easy ways to provide better services for our patrons. They add value to our websites and catalogs. They promote our services in the places our patrons frequent. And, it’s a learning experience.

We need to ask our vendors for APIs. We’re putting data into our systems, so we should be able to get it out. Take that data and mash it up with popular web services using RSS feeds.
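[note: a minimal sketch with Python's feedparser library; the feed URL is a placeholder for whatever your vendor exposes.]

    import feedparser

    feed = feedparser.parse("https://example.org/new-titles.rss")  # placeholder URL
    for entry in feed.entries[:5]:
        print(entry.title, "-", entry.link)  # remix these however you like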

Yahoo Pipes allows you to pull in many sources of data and mix them up to create something new, with a clean, flowchart-like interface. Don't give up after your first try. Jody Fagan wrote an article in Computers in Libraries that inspired Engard to go back and try again.

Reading Radar takes the NYT bestseller lists and merges them with data from Amazon to display more than just sales information (ratings, summaries, etc.). You could do that, but instead of having users go buy the book, link it to your library catalog. The New York Times has opened up a tremendous amount of content via APIs.
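[note: a hedged Python sketch against the NYT Books API (it requires an API key; the list name and the catalog-linking scheme here are my assumptions, not Reading Radar's code).]

    import requests

    API_KEY = "YOUR_NYT_API_KEY"  # placeholder
    url = "https://api.nytimes.com/svc/books/v3/lists/current/hardcover-fiction.json"
    resp = requests.get(url, params={"api-key": API_KEY}, timeout=10)
    books = resp.json()["results"]["books"]

    for b in books:
        # link each bestseller into a hypothetical local catalog search
        print(b["title"], f"https://catalog.example.edu/search?isbn={b['primary_isbn13']}")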

Bike Tours in CA is a mashup of Google Maps and ride data. Trulia, Zillow, and HousingMaps use a variety of sources to map real estate information. This We Know pulls in all sorts of government data about a location. Find more mashups at ProgrammableWeb.

What mashups should libraries be doing? First off, if you have multiple branches, create a Google Maps mashup of library locations. Share images of your collection on Flickr and pull that into your website (see Access Ceramics), letting Flickr do the heavy lifting of resizing the images and pulling content out via machine tags. Delicious provides many options for creating dynamically updating lists with code snippets to embed them in your website.
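[note: a hedged Python sketch of the Flickr piece; flickr.photos.search does take a machine_tags parameter, but the API key and tag scheme below are placeholders.]

    import requests

    params = {
        "method": "flickr.photos.search",
        "api_key": "YOUR_FLICKR_KEY",                 # placeholder
        "machine_tags": "mylib:collection=ceramics",  # assumed tag scheme
        "format": "json",
        "nojsoncallback": 1,
    }
    resp = requests.get("https://api.flickr.com/services/rest/", params=params,
                        timeout=10)
    for p in resp.json()["photos"]["photo"][:5]:
        # Flickr's documented image URL pattern
        print(f"https://live.staticflickr.com/{p['server']}/{p['id']}_{p['secret']}_m.jpg")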

OPAC mashups require APIs, preferably ones that can generate JavaScript; if you can't get the information out in a form you can easily use, you'll need a programmer. LexisNexis Academic, WorldCat, and LibraryThing all have APIs you can use.

Ideas from librarians: mash up circulation data with various travel sources to provide better patron services. Grab MARC location data to plot information on a map. Pull data about the media collection and combine it with IMDb and other resources. Create subject RSS feeds from all resources for current articles (you could do that already with a collection of journals with RSS feeds and Yahoo Pipes).

Links and more at her book website.

IL2009: Creating Connections & Social Reference in Libraries

Presenter: Margaret Smith

Traditional reference has been one-on-one, but now there are online options for many-to-one reference, such as Yahoo! Answers, Askville, AskMetafilter, etc. The problem is that not all of the hives are equal in the quality of the answers they provide. For an example, look up "where do deer sleep?" sometime.

One of the benefits of social reference sites is that they generate a reference bank of questions and answers that can be linked to when/if someone asks the same question again. These can be public forums like AskMetafilter or private forums you develop internally for your library or organization. Similarly, you can use wiki software to create an interactive social reference tool, but unlike a forum, it isn't designed to make new content the most prominent.

One of the biggest challenges of implementing social reference sites is getting answers to the questions. A frustrating aspect of some social reference sources is an overwhelming number of unanswered questions. Your library can use any of the "free" services that are out there, or go with one of the vendor services like LibAnswers; just make sure you actively engage with it.

Internet Librarian 2009 begins

Yesterday was my first time touching California soil (I had previously spent some time in LAX, but I don’t think that counts), and I have to say, Monterey is as beautiful as everyone says it is. Also, the Crown & Anchor is a fantastic place to gather with friends who arrived and left through the evening last night. Good times.

I arrived too late this morning to get a seat at the opening keynote session with Vint Cerf, Chief Internet Evangelist for Google, so I stood in the back and listened for most of it. Look around and you'll probably find some good write-ups; it was also streamed live, and the recording is available on Ustream. Pay attention to the Ustream channel to catch more of IL 2009!

This afternoon, I will be co-presenting on some of (IMHO) the best tools for collaboration using cloud computing resources. We have our presentation posted on SlideShare already, if you're interested (and that way, you don't have to be there and see how nervous I can be when speaking in front of a group of people who are probably smarter than me).

o hai!

No posts since July? No, this blog isn’t dead yet, but it’s certainly on life support. I am finally admitting that with all of the other venues for getting my thoughts and opinions out to the world, this blog is falling by the wayside.

I like having a space to share things that need more than 140 characters, but as you can see, I don’t always have time to make use of it. I encourage you to keep this in your feed reader, as I do intend to continue to do summary notes of conference sessions I attend, and occasionally throw in a book review or a rant about some current news item.

If you want to keep up with my other activities, check the sidebar for links and such. I’m also on Twitter, although I’ve been relatively quiet there, too.
