snowballing debt

My parents have been talking to me off and on over the past five years or so about the budget plan that is allowing them to pay off debt (they’ll be completely debt-free in August, for the first time since the early 70s) and still live pretty well. They’re following the advice of financial guru Dave Ramsey, and have encouraged me to attend one of his Financial Peace University seminars. Considering that these things aren’t cheap, I’ve opted to make use of free resources, their advice/experience, and the advice/experience of friends.

Recently, a friend commented on how, by budgeting only $1000 a month and paying down debt using a snowball plan, she could be completely debt-free in just a few years. My initial thought was, “sure, you must not have nearly as much debt as me, or at as high an interest rate.” And while I was partially correct, I was very surprised to discover that with the same amount of money, I could be completely credit card debt-free in two years and have my student loans paid off in six.
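
Out of curiosity, I worked through the arithmetic myself. The sketch below is just a rough model of how a snowball payoff schedule plays out with a fixed monthly budget; the balances, rates, and minimum payments are invented placeholders, not my actual numbers.

```python
# A rough model of the debt "snowball": pay the minimum on everything, put the
# rest of a fixed monthly budget toward the smallest balance, and roll freed-up
# payments into the next debt as each one is paid off.
# All balances, rates, and minimums below are made-up examples.

def months_to_payoff(debts, monthly_budget):
    """debts: list of dicts with 'balance', 'apr', and 'minimum' keys."""
    debts = sorted((dict(d) for d in debts), key=lambda d: d["balance"])
    month = 0
    while any(d["balance"] > 0 for d in debts):
        month += 1
        # Accrue one month of interest on every remaining balance.
        for d in debts:
            if d["balance"] > 0:
                d["balance"] *= 1 + d["apr"] / 12
        # Minimum payments first...
        available = monthly_budget
        for d in debts:
            if d["balance"] > 0:
                payment = min(d["minimum"], d["balance"], available)
                d["balance"] -= payment
                available -= payment
        # ...then everything left over goes at the smallest remaining balance.
        for d in debts:
            if d["balance"] > 0 and available > 0:
                payment = min(available, d["balance"])
                d["balance"] -= payment
                available -= payment
    return month

example_debts = [
    {"balance": 2500, "apr": 0.19, "minimum": 50},    # credit card
    {"balance": 6000, "apr": 0.15, "minimum": 120},   # credit card
    {"balance": 30000, "apr": 0.06, "minimum": 200},  # student loans
]
print(months_to_payoff(example_debts, 1000))  # months until everything is gone
```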

Of course, this will require a level of discipline I have yet to master, and I’ll need to be more creative about planning for big purchases that occur infrequently. However, seeing the plan laid out before me and realizing that it’s not some unattainable dream has made me much more motivated to just do it already.

The plan starts July 1. I’m going to reassess where I am at that point, and then start tracking my progress, which is also a good motivator.

NASIG 2010 reflections

When I was booking my flights and sending in my registration during the snow storms earlier this year, Palm Springs sounded like a dream. Sunny, warm, dry — all the things that Richmond was not. This would also be my first visit to Southern California, so I may be excused for my ignorance of the reality, and more specifically, the reality in early June. Palm Springs was indeed sunny, but not as dry as I expected, and far hotter.

Despite the weather, or perhaps because of the weather, NASIGers came together for one of the best conferences we’ve had in recent years. All of the sessions were held in rooms that emptied out into the same common area, which also held the coffee and snacks during breaks. The place was constantly buzzing with conversations between sessions, and many folks hung back in the rooms, chatting with their neighbors about the session topics. Not many were eager to skip the sessions and the conversations in favor of drinks/books by the pools, particularly when temperatures peaked over 100°F by noon and stayed up there until well after dark.

As always, it was wonderful to spend time with colleagues from all over the country (and elsewhere) that I see once a year, at best. I’ve been attending NASIG since I was a wee serials librarian in 2002, and this conference/organization has been hugely instrumental in my growth as a librarian. Being there again this year felt like a combination of family reunion and summer camp. At one point, I choked up a little over how much I love being with all of them, and how much I was going to miss them until we come together again next year.

I’ve already blogged about the sessions I attended, so I won’t go into those details so much here. However, there were a few things that stood out to me and came up several times in conversations over the weekend.

One of the big things is a general trend towards publishers handling subscriptions directly, and in some cases, refusing to work with subscription agents. This is more prevalent in the electronic journal subscription world than in print, but that distinction is less significant now that so many libraries are moving to online-only subscriptions. I heard several librarians express concern over the potential increase in their workload if we go back to the era of ordering directly from hundreds of publishers rather than from one (or a handful) of subscription agents.

And then there’s the issue of invoicing. Electronic invoices that dump directly into a library acquisition system have been the industry standard with subscription agents for a long time, but few (and I can’t think of any) publishers are set up to deliver invoices to libraries using this method. In fact, my assistant who processes invoices must manually enter each line item of a large invoice for one of our electronic subscription collections every year, since this publisher refuses to invoice through our agent (or will do so in a way that increases our fees to the point that my assistant would rather just do it himself). I’m not talking about a mom & pop society publisher — this is one of the major players. If they aren’t doing EDI, then it’s understandable that librarians are concerned about other publishers following suit.
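
For anyone wondering why the manual route hurts so much, the sketch below shows the shape of what batch loading replaces: reading an invoice file and turning each row into a line item. The file layout and field names are entirely hypothetical; real EDI invoicing (X12 or EDIFACT) and ILS load formats are more involved and vary by vendor and system.

```python
# A toy illustration of batch-loading invoice line items instead of keying them
# in by hand. The CSV layout and field names are hypothetical; real EDI invoices
# and ILS load formats are more involved and vary by vendor.
import csv

def read_invoice_lines(path):
    """Yield one dict per invoice line: title, ISSN, fund code, and price."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            yield {
                "title": row["title"],
                "issn": row["issn"],
                "fund": row["fund_code"],
                "price": float(row["price"]),
            }

# Example: spot-check the line count and invoice total before loading anything.
lines = list(read_invoice_lines("publisher_invoice_2010.csv"))
print(len(lines), "line items, totaling", sum(l["price"] for l in lines))
```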

Related to this, JSTOR and UC Press, along with several other society and small press publishers, have announced a new partnership that will allow those publishers to distribute their electronic journals on the JSTOR platform, from issue one to the current issue. JSTOR will handle all the hosting, payments, and library technical support, leaving the publishers to focus on generating the content. Here’s the kicker: JSTOR will also be handling billing for print subscriptions of these titles.

That’s right – JSTOR is taking on the role of subscription agent for a certain subset of publishers. They say, of course, that they will continue to accept orders through existing agents, but if libraries and consortia are offered discounts for going directly to JSTOR, with whom they are already used to working directly for the archive collections, then eventually there will be little incentive to use a traditional subscription agent for titles from these publishers. On the one hand, I’m pleased to see some competition emerging in this aspect of the serials industry, particularly as the number of players has been shrinking in recent years, but on the other hand I worry about the future of traditional agents.

In addition to the big picture topics addressed above, I picked up a few ideas to add to my future projects list:

  • Evaluate the “one-click” rankings for our link resolver and bump publisher sites up on the list. These sources “count” more when I’m doing statistical reports, and right now I’m seeing that our aggregator databases garner more article downloads than the sources we pay for specifically. If this doesn’t improve the stats, then maybe we need to consider whether or not access via the aggregator is sufficient. Sometimes the publisher site interface is a deterrent for users.
  • Assess the information I currently provide to liaisons regarding our subscriptions and discuss with them what additional data I could incorporate to make the reports more helpful in making collection development decisions. Related to this is my ongoing project of simplifying the export/import process of getting acquisitions data from our ILS into our ERMS for cost per use reports (a rough sketch of that calculation follows this list). Once I’m not having to do that manually, I can use that time/energy to add more value to the reports.
  • Do an inventory of our holdings in our ERMS to make sure that we have turned on everything that should be turned on and nothing that shouldn’t. I plan to start with the publishers that are KBART participants and move on from there (and yes, Jason Price, I will be sure to push for KBART compliance from those who are not already in the program).
  • Begin documenting and sharing workflows, SQL, and anything else that might help other electronic resource librarians who use our ILS or our ERMS, and make myself available as a resource. This stood out to me during the user group meeting for our ERMS, where I and a handful of others were the experts in the room. By no means do I feel like an expert, but clearly there are quite a few people who could learn from my experience the way I learned from others before me.
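
Since a couple of the items above come back to cost per use, here is a bare-bones sketch of the calculation I have in mind once the acquisitions export is no longer manual: join the cost data exported from the ILS with a usage report and divide. The file names and column headings below are hypothetical stand-ins, not the actual layout of our ILS export or a COUNTER JR1 report.

```python
# A bare-bones cost-per-use calculation: join an acquisitions cost export
# (from the ILS) with a usage export (e.g., a COUNTER JR1 report) on ISSN,
# then divide cost by use. File names and column headings are hypothetical.
import csv

def load_column(path, key_field, value_field):
    """Read a CSV into a dict mapping key_field -> numeric value_field."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row[key_field]: float(row[value_field]) for row in csv.DictReader(f)}

costs = load_column("ils_cost_export.csv", "issn", "amount_paid")    # ISSN -> annual cost
uses = load_column("counter_jr1.csv", "issn", "fulltext_requests")   # ISSN -> downloads

for issn, cost in sorted(costs.items()):
    use = uses.get(issn, 0)
    if use:
        print(f"{issn}\t${cost:.2f}\t{use:.0f} uses\t${cost / use:.2f} per use")
    else:
        print(f"{issn}\t${cost:.2f}\tno recorded use")
```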

I’m probably forgetting something, but I think those are big enough to keep me busy for quite a while.

If you managed to make it this far, thanks for letting me yammer on. To everyone who attended this year and everyone who couldn’t, I hope to see you next year in St. Louis!

NASIG 2010: Serials Management in the Next-Generation Library Environment

Panelists: Jonathan Blackburn, OCLC; Bob Bloom (?), Innovative Interfaces, Inc.; Robert McDonald, Kuali OLE Project/Indiana University

Moderator: Clint Chamberlain, University of Texas, Arlington

What do we really mean when we are talking about a “next-generation ILS”?

It is a system that will need to be flexible enough to accommodate increasingly changing and complex workflows. Things are changing so fast that systems can’t wait several years to release updates.

It also means different things to different stakeholders. The underlying need is a system flexible enough to manage both print and electronic resources, along with better reporting tools.

How are “next-generation” ILSes related to cloud computing?

Most of them have components in the cloud, and traditional ILS systems are partially there, too. Networking brings benefits (shared workloads).

What challenges are facing libraries today that could be helped by the emerging products you are working on?

Serials is one of the more mature items in the ILS. Automation as a result of standardization of data from all information sources is going to keep improving.

One of the key challenges is to deal with things holistically. We get bogged down in the details sometimes. We need to be looking at things on the collection/consortia level.

We are all trying to do more with less funding. Improving flexibility and automation will offer better services for the users and allow libraries to shift their staff assets to more important (less repetitive) work.

We need better tools to demonstrate the value of the library to our stakeholders. We need ways of assessing resources beyond comparing costs.

Any examples of how next-gen ILS will improve workflow?

Libraries are increasing spending on electronic resources, and many are nearly eliminating their print serials spending. Next gen systems need reporting tools that not only provide data about electronic use/cost, but also print formats, all in one place.

A lot of workflow comes from a print-centric perspective. Many libraries still haven’t figured out how to adjust that to include electronic without saddling all of that on one person (or a handful). [One of the issues is that the staff may not be ready/willing/able to handle the complexities of electronic.]

Every purchase should be looked at independently of format, with the focus instead on the cost and process of acquiring it and making it available to the stakeholders.

[Not taking as many notes from this point on. Listening for something that isn’t fluffy pie in the sky. Want some solid direction that isn’t just pretty words to make librarians happy.]

NASIG 2010: What Counts? Assessing the Value of Non-Text Resources

Presenters: Stephanie Krueger, ARTstor and Tammy S. Sugarman, Georgia State University

Anyone who does anything with use statistics or assessment knows why use statistics are important and the value of standards like COUNTER. But, how do we count the use of non-text content that doesn’t fit in the categories of download, search, session, etc.? What does it mean to “use” these resources?

Of the libraries surveyed that collect use stats for non-text resources, most use them to report to administrators and to determine renewals. A few use them to evaluate the success of training or to promote the resource to the user community. More than a third of the respondents indicated that the stats they have do not adequately meet their needs for the data.

ARTstor approached COUNTER and asked that the technical advisory group include representatives from vendors that provide non-text content such as images, video, etc. Currently, the COUNTER reports are either about Journals or Databases, and do not consider primary source materials. One might think that “search” and “sessions” would be easy to track, but there are complexities that are not apparent.

Consider the Database 1 report. With a primary source aggregator like ARTstor, who is the “publisher” of the content? For ARTstor, search is only 27% of the use of the resource. 47% comes from image requests (includes thumbnail, full-size, printing, download, etc.) and the rest is from software utilities within the resource (creation of course folders, passwords creation, organizing folders, annotations of images, emailing content/URLs, sending information to bibliographic management tools, etc.).

The missing metric is the non-text full content unit request (i.e. view, download, print, email, stream, etc.). There needs to be some way of measuring this that is equivalent to the full-text download of a journal article. Otherwise, cost per use analysis is skewed.

What is the equivalent of the ISSN? Non-text resources don’t even have DOIs assigned to them.

On top of all of that, how do you measure the use of these resources beyond the measurable environment? For example, once an image is downloaded, it can be included in slides and webpages for classroom use more than once, but those uses are not counted. ARTstor doesn’t use DRM, so they can’t track that way.

No one is really talking about how to assess this kind of usage, at least not in the professional library literature. However, the IT community is thinking about this as well, so we may be able to find some ideas/solutions there. They are being asked to justify software usage, and they have the same lack of data and limitations. So, instead of going with the traditional journal/database counting methods, they are attempting to measure the value of the services provided by the software. The IT folk identify services, determine the cost of those services, and identify benchmarks for those costs.

A potential report could have the following columns: collection (e.g. an art collection within ARTstor, or a university collection developed locally), content provider, platform, and then the use numbers. This is basic, and can increase in granularity over time.
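
As a thought experiment, here is what rolling raw platform events up into that kind of report might look like. Nothing below is an actual COUNTER specification or ARTstor's real event log; the event types, names, and the choice of which events count as a "full content unit request" are all assumptions made for illustration.

```python
# A sketch of rolling raw platform events up into the proposed report columns:
# collection, content provider, platform, and a single count of "non-text full
# content unit requests." Event names and the counting policy are invented.
from collections import Counter

# Which raw events count as a "full content unit request" is a policy decision;
# this set is only an assumption for the sake of the example.
CONTENT_UNIT_EVENTS = {"image_full_view", "download", "print", "stream"}

events = [
    {"collection": "Example Art Collection", "provider": "Example Provider",
     "platform": "ExamplePlatform", "event": "image_full_view"},
    {"collection": "Example Art Collection", "provider": "Example Provider",
     "platform": "ExamplePlatform", "event": "folder_created"},  # utility use, not counted
    {"collection": "Local Slide Library", "provider": "Our University",
     "platform": "ExamplePlatform", "event": "download"},
]

counts = Counter(
    (e["collection"], e["provider"], e["platform"])
    for e in events
    if e["event"] in CONTENT_UNIT_EVENTS
)

for (collection, provider, platform), n in counts.items():
    print(f"{collection}\t{provider}\t{platform}\t{n} content unit requests")
```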

There are still challenges, even with this report. Time-based objects need to have a defined value of use. Resources like data sets and software-like things (e.g. SciFinder Scholar) are hard to define as well. And, it will be difficult to define a one-size-fits-all report.

NASIG 2010: Publishing 2.0: How the Internet Changes Publications in Society

Presenter: Kent Anderson, JBJS, Inc

Medicine 0.1: in dealing with the influenza outbreak of 1837, a physician administered leeches to the chest, James’s powder, and mucilaginous drinks, and it worked (much like “take two aspirin and call me in the morning”). All of this was written up in a medical journal as a way to share information with peers. Journals have been the primary means of communicating scholarship, but what the journal is has become more abstract with the addition of non-text content and metadata. Add in indexes and other portals to access the information, and readers have changed the way they access and share information in journals. “Non-linear” access of information is increasing exponentially.

Even as technology made publishing easier and more widespread, it was still producers delivering content to consumers. But, with the advent of Web 2.0 tools, consumers now have tools that in many cases are more nimble and accessible than the communication tools that producers are using.

Web 1.0 was a destination. Documents simply moved to a new home, and “going online” was a process separate from anything else you did. However, as broadband access increases, the web becomes more pervasive and less a destination. The web becomes a platform that brings people, not documents, online to share information, consume information, and use it like any other tool.

Heterarchy: a system of organization replete with overlap, multiplicity, mixed ascendancy, and/or divergent but coexistent patterns of relation.

Apomediation: mediation by agents who are not interposed between users and resources, but who stand by to guide a consumer to high-quality information without having a role in the acquisition of the resources (e.g. Amazon product reviewers).

NEJM uses users’ search terms to add related searches to article search results. They also bump popular articles up in search results as more people click on them. These tools improved their search results and reputation, all by using the people power of experts. In addition, they created a series of “results in” publications that highlight the popular articles.
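
The click-boosting piece is simple enough to sketch. This is only a generic illustration of blending a base relevance score with accumulated click counts, not NEJM's actual ranking algorithm; the weighting and damping are arbitrary assumptions.

```python
# A generic illustration of bumping popular articles up in search results by
# blending a base relevance score with accumulated click counts. Not NEJM's
# actual algorithm; the weight and log damping are arbitrary choices.
import math

def rerank(results, clicks, weight=0.3):
    """results: list of (article_id, relevance); clicks: dict of click counts."""
    def boosted(item):
        article_id, relevance = item
        # Log damping keeps a handful of very popular articles from swamping relevance.
        return relevance + weight * math.log1p(clicks.get(article_id, 0))
    return sorted(results, key=boosted, reverse=True)

results = [("art-101", 2.4), ("art-202", 2.1), ("art-303", 1.9)]
clicks = {"art-202": 850, "art-303": 40}
print(rerank(results, clicks))  # art-202 rises to the top on popularity
```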

It took a little over a year to get to a million Twitter authors, and about 600 years to get to the same number of book authors. And, these are literate, savvy users. Twitter & Facebook account for 1.45 million views of the New York Times (and this is a number from several years ago) — imagine what it can do for your scholarly publication. Oh, and NYT has a social media editor now.

Blogs are growing four times as fast as traditional media. The top ten media sites include blogs, and the traditional media sources use blogs now as well. Blogs can be diverse or narrow, their coverage varies (and does not have to be immediate), they are verifiably accurate, and they are interactive. Blogs level the media playing field, in part by watching the watchdogs. Blogs tend to investigate more than the mainstream media.

It took AOL five times as long to reach twenty million users as it took the iPhone. Consumers are increasingly adding “toys” to their collection of ways to get to digital/online content. When the NEJM went on the Kindle, more than just physicians subscribed. Getting content into easy-to-access places and onto the “toys” that consumers use will increase your reach.

Print digests are struggling because they teeter on the brink of the daily divide. Why wait for the news to get stale, collected, and delivered a week/month/quarter/year later? People are transforming. Our audiences don’t think of information as analogue, delayed, isolated, tethered, etc. It has to evolve to something digital, immediate, integrated, and mobile.

From the Q&A session:

The article container will be here for a long time. Academics use the HTML version of the article, but the PDF (static) version is their security blanket and archival copy.

Where does the library fit as a source of funds when the focus is more on the end users? Publishers are looking for other sources of income as library budgets decrease (e.g. Kindle, product differentiation, etc.). They are looking to other purchasing centers at institutions.

How do publishers establish the cost of these 2.0 products? It’s essentially what the market will bear, with some adjustments. Sustainability is a grim perspective. Flourishing is much more positive, and not necessarily any less realistic. Equity is not a concept that comes into pricing.

The people who bring the tremendous flow of information under control (i.e. offer filters) will be successful. One of our tasks is to make filters to help our users manage the flow of information.

NASIG 2010: Let the Patron Drive: Purchase on Demand of E-books

Presenters: Jonathan Nabe, Southern Illinois University-Carbondale and Andrea Imre, Southern Illinois University-Carbondale

As resources have dwindled over the years, libraries want to make sure every dollar spent is going to things patrons will use. Patron-driven acquisition (PDA) means you’re only buying things that your users want.

With the Coutts MyiLibrary platform, they have access to over 230,000 titles from more than 100 publishers, but they’ve set up some limitations and parameters (LC class, publication year, price, readership level) to determine which titles will be made available to users for the PDA program. You can select additional titles after the initial setup, so the list is constantly being revised and enhanced. And, they were able to upload their holdings to eliminate duplications.

[There are, of course, license issues that you should consider for your local use, as with any electronic resource. Ebooks come with different sorts of use concerns than journals do, but by now most of us are familiar with them. However, those of us in the session were treated to a brief overview of these concerns. I recommend doing a literature review if this interests you.]

They opted for a deposit account to cover the purchases, and when a title is purchased, they add a purchase order to the bibliographic record already in the catalog. (Records for available titles in the program are added to the catalog to begin with, and titles are purchased after they have been accessed three times.)
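
The trigger rule they describe (records in the catalog, purchase on the third access) is easy to picture in code. Here is a minimal sketch of that logic; the function names and the way the purchase order gets created are hypothetical, not MyiLibrary's or any ILS's actual API.

```python
# A minimal sketch of the patron-driven purchase trigger described above:
# count accesses per title and fire the purchase on the third one.
# Function names and the purchase-order hook are hypothetical.
ACCESS_THRESHOLD = 3
access_counts = {}   # title_id -> accesses so far
purchased = set()

def record_access(title_id, create_purchase_order):
    """Call once per patron access of a PDA-eligible title."""
    if title_id in purchased:
        return
    access_counts[title_id] = access_counts.get(title_id, 0) + 1
    if access_counts[title_id] >= ACCESS_THRESHOLD:
        purchased.add(title_id)
        create_purchase_order(title_id)  # e.g., attach a PO to the existing bib record

# Example usage with a stand-in for the real ILS call:
for _ in range(3):
    record_access("ebook-0001", lambda t: print("purchase triggered for", t))
```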

[At this point, my attention waned even further. More interested in hearing about how it’s working for them than about the processes they use to set up and manage it, as I’m familiar with how that’s supposed to work.]

They’ve spent over $54,000 since November 2008 and purchased 470 titles (approx $115/title on average). On average, 95 pages are viewed per purchased title, which is a stat you can’t get from print. Half of the titles have been used after the initial purchase, and over 1,000 titles were accessed once or twice (prior to purchase and not enough to initiate purchase).

Social sciences and engineering/technology are the high users, with music and geography at the low end. Statistically, other librarians have pushed back against PDA more than users, and in their case, the humanities librarian decided this wasn’t a good process and withdrew those titles from the program.

During the same time period, they purchased almost 17,000 print titles, and due to outside factors that delayed purchases, 77% of those titles have never circulated. Only 1% circulated more than four times. [It’s hard to compare the two, since an ebook may be viewed several times by one person as they refer back to it, while a print book only has the checkout stat and no way to count the number of times it is “viewed” in the same way.]

Some issues to consider:

  • DRM (digital rights management) can cause problems with using the books for classroom/course reserves. DRM also often prevents users from downloading the books to preferred portable, desktop, or other ebook readers. There are also problems with incompatible browsers or operating systems.
  • Discovery options also provide challenges. Some publishers are better than others at making their content discoverable through search tools.
  • ILL is non-existent for ebooks. We’ve solved this for ejournals, but ebooks are still a stumbling block for traditional borrowing and lending.
  • There are other ebook purchasing options, and the “big deal” may actually be more cost-effective. Big deals provide wide access options at a lower per-book cost.
  • Archival copies may not be provided, and when they are, there are issues with preservation and access that shift long-term storage from free to an undetermined cost.

NASIG 2010: Integrating Usage Statistics into Collection Development Decisions

Presenters: Dani Roach, University of St. Thomas and Linda Hulbert, University of St. Thomas

As with most libraries, they are faced with needing to downsize their purchases in order to fit within reduced budgets, so good tools must be employed to determine which stuff to remove or acquire.

Impact factor statistics mean little to librarians, since the “best” journals may not be appropriate for the programs the library supports. Quantitative data like cost per use, historical trends, and ILL data are more useful for libraries. Combine these with reviews, availability, features, user feedback, and the dust layer on the materials, and then you have some useful information for making decisions.

Usage statistics are just one component that we can use to analyze the value of resources. There are other variables than cost and other methods than cost per use, but these are what we most often apply.

Other variables can include funds/subjects, format, and identifiers like ISSN. Cost needs to be defined locally, as libraries manage them differently for annual subscriptions, multiple payments/funds, one-time archive fees, hosting fees, and single title databases or ebooks. Use is also tricky. A PDF download in a JR1 report is different from a session count in a DB1 report is different from a reshelve count for a bound journal. Local consistency with documentation is best practice for sorting this out.

A library-wide SharePoint service allows them to drop documents with subscription and analysis information into one location for liaisons to use. [We have a shared network folder that I use for some of this — I wonder if SharePoint would be better at managing all of the files?]

For print statistics, they track bound volume use separately from new issue use, scanning barcodes into their ILS to keep a count. [I’m impressed that they have enough print journal use to do that rather than making hash marks on a sheet of paper. We had 350 items reshelved last year, including ILL use, if I remember correctly.]

Once they have the data, they use what they call a “fairness factor” formula to normalize the various subject areas to determine if materials budgets are fairly allocated across all disciplines and programs. Applying this sort of thing now would likely shock budgets, so they decided to apply new money using the fairness factor, and gradually underfunded areas are being brought into balance without penalizing overfunded areas.
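
They didn't spell out the formula itself (or I didn't capture it in my notes), so the sketch below is only my generic reading of the idea: compare each discipline's share of the materials budget with its share of some measure of need or use, and steer new money toward the ratios that come up short. The numbers and the choice of "need" measure are invented.

```python
# NOT the presenters' actual formula -- just a generic expression of the idea:
# compare each discipline's share of the materials budget with its share of a
# need/use measure, and treat ratios below 1 as underfunded. All numbers and
# the choice of "need" measure are invented for illustration.
allocations = {"sciences": 400_000, "social sciences": 250_000, "humanities": 150_000}
need = {"sciences": 45, "social sciences": 35, "humanities": 20}  # e.g., % of use or enrollment

total_alloc = sum(allocations.values())
total_need = sum(need.values())

for subject in allocations:
    budget_share = allocations[subject] / total_alloc
    need_share = need[subject] / total_need
    fairness = budget_share / need_share  # < 1 suggests underfunding relative to need
    print(f"{subject}: fairness factor {fairness:.2f}")
```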

They have stopped trying to achieve a balance between books and periodicals. They’ve left that up to the liaisons to determine what is best for their disciplines and programs.

They don’t hide their cancellation list, and if any of the user community wants to keep something, they’ve been willing to retain it. However, they get few requests to retain content, and they think it is in part because the user community can see the cost, use, and other factors that indicate the value of the resource for the local community.

They have determined that it costs them around $52 a title to manage a print subscription, and over $200 a title to manage an online subscription, mainly because of the level of expertise involved. So, there really are no “free” subscriptions, and if you want to get into the cost of binding/reshelving, you need to factor in the managerial costs of electronic titles, as well.

Future trends and issues: more granularity, more integration of print and online usage, interoperability and migration options for data and systems, continued standards development, and continued development of tools and systems.

Anything worth doing is worth overdoing. You can gather Ulrich’s reports, Eigen factors, relative price indexes, and so much more, but at some point, you have to decide if the return is worth the investment of time and resources.

NASIG 2010: It’s Time to Join Forces: New Approaches and Models that Support Sustainable Scholarship

Presenters: David Fritsch, JSTOR and Rachel Lee, University of California Press

JSTOR has started working with several university presses and other small scholarly publishers to develop sustainable options.

UC Press is one of the largest university presses in the US (36 journals in the humanities and biological & social sciences), publishing both UC titles and society titles. Their prices range from $97 to $422 for annual subscriptions, and they are SHERPA Green. One of the challenges they face on their own platform is keeping up with libraries’ expectations.

ITHAKA is a merger of JSTOR, ITHAKA, Portico, and Aluka, so JSTOR is now a service rather than a separate company. Most everyone here knows what the JSTOR product/service is, and that hasn’t changed much with the merger.

Scholars’ use of information is moving online, and if it’s not online, they’ll use a different resource, even if it’s not as good. And, if things aren’t discoverable by Google, they are often overlooked. More complex content is emerging, including multimedia and user-generated content. Mergers and acquisitions in publishing are consolidating content under a few umbrellas, and this threatens smaller publishers and university presses that can’t keep up with the costs on a smaller scale.

The serials crisis has impacted smaller presses more than larger ones. Despite good relationships with societies, it is difficult to retain popular society publications when larger publishers can offer them more. It’s also harder to offer the deep discounts expected by libraries in consortial arrangements. University presses and small publishers are in danger of becoming the publisher of last resort.

UC Press and JSTOR have had a long relationship, with JSTOR providing long-term archiving that UC Press could not have afforded to maintain on their own. Not all of the titles are included (only 22), but they are the most popular. They’ve also participated in Portico. JSTOR is also partnering with 18 other publishers that are mission-driven rather than profit-driven, with experience at balancing the needs of academia and publishing.

By partnering with JSTOR for their new content, UC Press will be able to take advantage of the expanded digital platform, sales teams, customer service, and seamless access to both archive and current content. There are some risks, including the potential loss of identity, autonomy, and direct communication with libraries. And then there is the bureaucracy of working within a larger company.

The Current Scholarship Program seeks to provide a solution to the problems outlined above that university presses and small scholarly publishers are facing. The shared technology platform, Portico preservation, sustainable business model, and administrative services potentially free up these small publishers to focus on generating high-quality content and furthering their scholarly communication missions.

Libraries will be able to purchase current subscriptions either through their agents or JSTOR (who will not be charging a service fee). However, archive content will be purchased directly from JSTOR. JSTOR will handle all of the licensing, and current JSTOR subscribers will simply have a rider adding titles to their existing licenses. For libraries that purchase JSTOR collections through consortial arrangements, it will be possible to add title-by-title subscriptions without going through the consortium if a consortial agreement doesn’t make sense for purchase decisions. They will be offering both single-title purchases and collections, with the latter being more useful for large libraries, consortia, and those who want current content for titles in their JSTOR collections.

They still don’t know what they will do about post-cancellation access. Big red flag here for potential early adopters, but hopefully this will be sorted out before the program really kicks in.

Benefits for libraries: reasonable pricing, more efficient discovery, single license, and meaningful COUNTER-compliant statistics for the full run of a title. Renewal subscriptions will maintain access to what they have already, and new subscriptions will come with access to the first online year provided by the publisher, which may not be volume one, but certainly as comprehensive as what most publishers offer now.

UC Press plans to start transitioning in January 2011. New orders, claims, etc. will be handled by JSTOR (including print subscriptions), but UC Press will be setting their own prices. Their platform, Caliber, will remain open until June 30, 2011, but after that the content will be available only on the JSTOR platform. UC Press expects to move to online-only in the next few years, particularly as the number of print subscriptions is dwindling to the point where it is cost-prohibitive to produce the print issues.

There is some interest from the publishers to add monographic content as well, but JSTOR isn’t ready to do that yet. They will need to develop some significant infrastructure in order to handle the order processing of monographs.

Some in the audience were concerned about the cost of developing platform enhancements and other tools, and particularly that these costs will be passed on in subscription prices. They will be, to a certain extent, but only in that the publishers will be contributing to the developments and they set the prices; because it is a shared system, the costs will be spread out and will likely impact libraries no more than they do already.

One big challenge all will face is unlearning the mindset that JSTOR is only archive content and not current content.

NASIG 2010: Linked Data and Libraries

Presenter: Eric Miller, Zepheira, LLC

Nowadays, we understand what the web is and the impact it has had on information sharing, but before it was developed, it was in a “vague but exciting” stage and few understood it. When we got started with the web, we really didn’t know what we were doing, but more importantly, the web was being developed so that it was flexible enough for smarter and more creative people to do amazing things.

“What did your website look like when you were in the fourth grade?” Kids are growing up with the web and it’s hard for them to comprehend life without it. [Dang, I’m old.]

This talk will be about linked data, its legacy, and how libraries can take the lead on linked data. We have a huge opportunity to weave libraries into the fabric of the web, and vice versa.

About five years ago, the BBC started making their content available in a service that allowed others to use and remix the delivery of the content in new ways. Rather than developing alternative platforms and creating new spaces, they focus on generating good content and letting someone else frame it. Other sources like NPR, the World Bank, and Data.gov are doing the same sorts of things. Within the library community, these things are happening, as well. OCLC’s APIs are getting easier to use, and several national libraries are putting their OPACs on the web with APIs.

Obama’s open government initiative is another one of those “vague but exciting” things, and it charged agencies to come up with their own methods of making their content available via the web. Agencies are now struggling with the same issues and desires that libraries have been tackling for years. We need to recognize our potential role in moving this forward.

Linked data is a set of best practices for sharing and connecting data on the web, building on the semantic web. Rather than leaving the data in their current formats, let’s put them together in ways they can be used on the wider web. It’s not the databases that make the web possible, it’s the web that makes the databases usable.

Human computation can be put to use in ways that assist computers to make information more usable. Captcha systems are great for blocking automated programs when needed, and by using human computation to decipher scanned text that is undecipherable by computers, ReCaptcha has been able to turn unusable data into a fantastic digital repository of old documents.

LEGOs have been around for decades, and their simple design ensures that new blocks work with old blocks. Most kids end up dumping all of their sets into one bucket, so no matter where the individual building blocks come from, they can be put together and rebuilt in any way you can imagine. We could do this with our blocks of data, if they are designed well enough to fit together universally.

Our current applications, for the most part, are not designed to allow for the portability of data. We need to rethink application design so that the data becomes more portable. Web applications have, by necessity, had to have some amount of portability. Users are becoming more empowered to use the data provided to them in their own way, and if they don’t get that from your service/product, then they go elsewhere.

Digital preservation repositories are discussing ways to open up their data so that users can remix and mashup data to meet their needs. This requires new ways of archiving, cataloging, and supplying the content. Allow users to select the facets of the data that they are interested in. Provide options for visualizing the raw data in a systematic way.

Linked data platforms create identifiers for every aspect of the data they contain, and these are the primary keys that join data together. Other content that is created can be combined to enhance the data generated by agencies and libraries, but we don’t share the identifiers well enough to allow others to properly link their content.
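
To make the "identifiers as join keys" point concrete, here is a tiny sketch using rdflib, a common Python RDF library. The URIs and vocabulary are placeholders; the point is that statements from different sources about the same thing merge automatically when they use the same identifier.

```python
# A tiny sketch of shared identifiers acting as the join keys of linked data,
# using rdflib. The URIs and the example vocabulary are placeholders.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDFS

EX = Namespace("http://example.org/vocab/")
book = URIRef("http://example.org/id/book/moby-dick")  # one identifier for the thing itself

g = Graph()
# "Our" statements about the resource...
g.add((book, RDFS.label, Literal("Moby-Dick")))
g.add((book, EX.heldBy, URIRef("http://example.org/id/org/our-library")))
# ...and "someone else's" statement about the same resource. Because both use
# the same URI, the data joins with no mapping step.
g.add((book, EX.reviewedAt, URIRef("http://example.org/id/review/1234")))

print(g.serialize(format="turtle"))
```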

Web architecture starts with web identifiers. We can use URLs to identify things other than just documents, but we need to be consistent and we can’t change the URL structures if we want it to be persistent. A lack of trust in identifiers is slowing down linked data. Libraries have the opportunity to leverage our trust and data to provide control points and best practices for identifier curation.

A lot of work is happening in W3C. Libraries should be more involved in the conversation.

Enable human computation by providing the necessary identifiers back to data. Empower your users to use your data, and build a community around it. Don’t worry about creating the best system — wrap and expose your data using the web as a platform.
