social & scholarly communications, mixing it up

Scientific publisher Springer has been doing several things lately that make me sit up and pay attention. Providing DRM-free PDF files of their ebooks is one, and now I see they are serving up useful bits of scholarly information in a social media format.

Springer Realtime displays currently trending topics and downloads for the content they serve to subscribers around the world. The only thing that’s missing is a way to embed these nifty widgets elsewhere, like on subject guide pages.

NASIG 2010: Publishing 2.0: How the Internet Changes Publications in Society

Presenter: Kent Anderson, JBJS, Inc.

Medicine 0.1: in dealing with the influenza outbreak of 1837, a physician administered leeches to the chest, James’s powder, and mucilaginous drinks, and it worked (much like “take two aspirin and call me in the morning”). All of this was written up in a medical journal as a way to share information with peers. Journals have been the primary means of communicating scholarship, but what a journal is has become more abstract with the addition of non-text content and metadata. Add in indexes and other portals for accessing the information, and readers have changed the way they access and share information in journals. “Non-linear” access to information is increasing exponentially.

Even as technology made publishing easier and more widespread, it was still producers delivering content to consumers. But, with the advent of Web 2.0 tools, consumers now have tools that in many cases are more nimble and accessible than the communication tools that producers are using.

Web 1.0 was a destination. Documents simply moved to a new home, and “going online” was a process separate from anything else you did. However, as broadband access increases, the web becomes more pervasive and less a destination. The web becomes a platform that brings people, not documents, online to share information, consume information, and use it like any other tool.

Heterarchy: a system of organization replete with overlap, multiplicity, mixed ascendancy, and/or divergent but coexistent patterns of relation

Apomediation: mediation by agents not interposed between users and resources, who stand by to guide a consumer to high-quality information without playing a role in the acquisition of the resources (e.g., Amazon product reviewers)

NEJM uses terms entered by users to add related searches to article search results. They also bump popular articles up in the search results as more people click on them. These tools improved their search results and their reputation, all by harnessing the people power of experts. In addition, they created a series of “results in” publications that highlight the popular articles.

It took a little over a year to reach a million Twitter authors, and about 600 years to reach the same number of book authors. And these are literate, savvy users. Twitter & Facebook account for 1.45 million views of the New York Times (and that number is several years old); imagine what they could do for your scholarly publication. Oh, and the NYT has a social media editor now.

Blogs are growing four times as fast as traditional media. The top ten media sites include blogs, and traditional media sources use blogs now as well. Blogs can be diverse or narrow, their coverage varies (and does not have to be immediate), their accuracy can be verified, and they are interactive. Blogs level the media playing field, in part by watching the watchdogs; blogs tend to investigate more than the mainstream media does.

It took AOL five times as long to reach twenty million users as it took the iPhone. Consumers are increasingly adding “toys” to their collection of ways to get to digital/online content. When the NEJM went on the Kindle, more than just physicians subscribed. Getting content into easy-to-access places and onto the “toys” that consumers use will increase your reach.

Print digests are struggling because they sit on the wrong side of the daily divide: why wait for the news to get stale, collected, and delivered a week/month/quarter/year later? Audiences are transforming, and they no longer think of information as analog, delayed, isolated, tethered, etc. Information has to evolve into something digital, immediate, integrated, and mobile.

From the Q&A session:

The article container will be here for a long time. Academics use the HTML version of the article, but the PDF (static) version is their security blanket and archival copy.

Where does the library fit as a source of funds when the focus is more on end users? Publishers are looking for other sources of income as library budgets decrease (e.g., Kindle editions, product differentiation, etc.), and they are looking to other purchasing centers at institutions.

How do publishers establish the cost of these 2.0 products? It’s essentially what the market will bear, with some adjustments. Sustainability is a grim perspective. Flourishing is much more positive, and not necessarily any less realistic. Equity is not a concept that comes into pricing.

The people who bring the tremendous flow of information under control (i.e. offer filters) will be successful. One of our tasks is to make filters to help our users manage the flow of information.

NASIG 2010: Integrating Usage Statistics into Collection Development Decisions

Presenters: Dani Roach and Linda Hulbert, both of the University of St. Thomas

As with most libraries, they need to downsize their purchases to fit within reduced budgets, so good tools must be employed to determine which resources to cancel or acquire.

Impact factor statistics mean little to librarians, since the “best” journals may not be appropriate for the programs the library supports. Quantitative data like cost per use, historical trends, and ILL data are more useful for libraries. Combine these with reviews, availability, features, user feedback, and the dust layer on the materials, and then you have some useful information for making decisions.

Usage statistics are just one component we can use to analyze the value of resources. There are variables besides cost and methods besides cost per use, but these are the ones we most often apply.

Other variables can include funds/subjects, format, and identifiers like ISSN. Cost needs to be defined locally, as libraries manage costs differently for annual subscriptions, multiple payments/funds, one-time archive fees, hosting fees, and single-title databases or ebooks. Use is also tricky: a PDF download in a JR1 report is different from a session count in a DB1 report, which is different from a reshelve count for a bound journal. Local consistency, with documentation, is the best practice for sorting this out.
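
[If I were scripting this, folding the different counts into one locally defined “use” might look something like the sketch below. JR1 and DB1 are real COUNTER report types, but the weighting policy is my own hypothetical, not anything the presenters described.]

```python
# Illustrative sketch only: folding different COUNTER-era counts into one
# local "use" figure so cost-per-use comparisons stay consistent. The
# weighting policy is a made-up local choice.

RESOURCES = [
    # (title, annual cost, report type, raw count)
    ("Journal of Examples", 1200.00, "JR1", 340),    # full-text downloads
    ("Example Index",       5000.00, "DB1", 1100),   # search sessions
    ("Bound Serial X",       450.00, "reshelve", 12),
]

def local_use(report_type, raw_count):
    """Apply a documented local policy for what counts as a 'use'."""
    # Hypothetical policy: downloads and reshelves count 1:1; database
    # sessions are discounted because a session may not mean real use.
    weights = {"JR1": 1.0, "DB1": 0.5, "reshelve": 1.0}
    return raw_count * weights[report_type]

for title, cost, rtype, count in RESOURCES:
    uses = local_use(rtype, count)
    print(f"{title}: {uses:.0f} uses, ${cost / uses:.2f} per use")
```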

Library-wide SharePoint service allows them to drop documents with subscription and analysis information into one location for liaisons to use. [We have a shared network folder that I do some of this with — I wonder if SharePoint would be better at managing all of the files?]

For print statistics, they track bound volume use and new issue use separately, scanning barcodes into their ILS to keep a count. [I’m impressed that they have enough print journal use to do that rather than hash marks on a sheet of paper. We had 350 reshelves last year, including ILL use, if I remember correctly.]

Once they have the data, they use what they call a “fairness factor” formula to normalize the various subject areas and determine whether materials budgets are fairly allocated across all disciplines and programs. Applying this wholesale now would likely shock budgets, so they decided to apply new money using the fairness factor; underfunded areas are gradually being brought into balance without penalizing overfunded areas.
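
[They didn’t share the formula itself, so this sketch is my own guess at the general shape: compare each subject’s share of use to its share of the budget, and steer new money toward subjects whose ratio runs above one. All figures are invented.]

```python
# Hypothetical reconstruction of a "fairness factor": subjects whose
# share of total use exceeds their share of the budget get a cut of
# *new* money, so existing allocations are never clawed back.

subjects = {
    # subject: (current allocation, annual use)
    "Biology": (40000, 9000),
    "History": (25000, 3000),
    "Nursing": (15000, 6000),
}

total_alloc = sum(a for a, _ in subjects.values())
total_use = sum(u for _, u in subjects.values())
new_money = 10000

# Fairness factor > 1 means underfunded relative to use.
factors = {
    s: (u / total_use) / (a / total_alloc)
    for s, (a, u) in subjects.items()
}

underfunded = {s: f for s, f in factors.items() if f > 1}
weight_sum = sum(underfunded.values())
for s, f in sorted(factors.items(), key=lambda x: -x[1]):
    share = new_money * f / weight_sum if f > 1 else 0
    print(f"{s}: fairness factor {f:.2f}, new money ${share:,.2f}")
```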

They have stopped trying to achieve a balance between books and periodicals. They’ve left that up to the liaisons to determine what is best for their disciplines and programs.

They don’t hide their cancellation list, and if any of the user community wants to keep something, they’ve been willing to retain it. However, they get few requests to retain content, and they think it is in part because the user community can see the cost, use, and other factors that indicate the value of the resource for the local community.

They have determined that it costs them around $52 a title to manage a print subscription, and over $200 a title to manage an online subscription, mainly because of the level of expertise involved. So, there really are no “free” subscriptions, and if you want to get into the cost of binding/reshelving, you need to factor in the managerial costs of electronic titles, as well.

Future trends and issues: more granularity, more integration of print and online usage, interoperability and migration options for data and systems, continued standards development, and continued development of tools and systems.

Anything worth doing is worth overdoing. You can gather Ulrich’s reports, Eigenfactor scores, relative price indexes, and so much more, but at some point you have to decide if the return is worth the investment of time and resources.

ER&L 2010: Usage Statistics for E-resources – is all that data meaningful?

Speaker: Sally R. Krash, vendor

Three options: do it yourself, gather and format the data to upload to a vendor’s collection database, or have the vendor gather the data and send a report (Harrassowitz e-Stats). Surprisingly, the second option was actually more time-consuming than the first, because the library’s data didn’t always match the vendor’s data. The third is the easiest because it comes from their subscription agent.

Evaluation: review cost data; set a cut-off point ($50, $75, $100, ILL/DocDel costs, whatever); generate a list of all resources that fall beyond that point; use that list to determine cancellations. For citation databases, they want to see upward trends in use, not just cyclical spikes that average out year to year.
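
[In practice the cut-off step could be a few lines of code; a sketch with placeholder data and threshold:]

```python
# Minimal sketch of the cut-off step: flag anything whose cost per use
# exceeds a locally chosen threshold, e.g. the typical cost of filling
# the same request via ILL/document delivery. Data are placeholders.

CUTOFF = 50.00  # dollars per use; set locally

resources = [
    # (title, annual cost, annual uses)
    ("Journal A", 2400.00, 12),
    ("Journal B",  900.00, 150),
    ("Journal C", 5000.00, 40),
]

review_list = [
    (title, cost / uses)
    for title, cost, uses in resources
    if cost / uses > CUTOFF
]

for title, cpu in sorted(review_list, key=lambda r: -r[1]):
    print(f"Review for cancellation: {title} (${cpu:.2f}/use)")
```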

Future: we need more turnaway reports from publishers, specifically journal publishers. COUNTER JR5 will give more detail about article requests by year of publication. A combined COUNTER JR1 & BR1 report would help, too: we don’t care about format, we just want download data. We need download counts for full-text subscriptions, not just searches/sessions.

Speaker: Benjamin Heet, librarian

He is speaking about the University of Notre Dame’s statistics philosophy. They collect JR1 full-text downloads; they’re not into database statistics, mostly because federated search muddies those numbers. Impact factor and Eigenfactor scores are hard to evaluate. He asks, “can you make questionable numbers meaningful by adding even more questionable numbers?”

At first, he was downloading the spreadsheets monthly and making them available on the library website. He started looking for a better way, whether that was to pay someone else to build a tool or do it himself. He went with the DIY route because he wanted to make the numbers more meaningful.

Avoid junk in, junk out: HTML vs. PDF download counts depend on the platform setup. Pay attention to outliers, watching for spikes that might indicate unusual use by an individual. The reports often contain bad data or duplicate data.
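
[A sketch of the kind of sanity checks he means: drop duplicate rows and flag months that spike well above a title’s median. The three-times-median rule is my arbitrary stand-in, not his actual method.]

```python
# Illustrative data-cleaning pass over monthly download counts: drop
# exact duplicate rows, then flag months that spike well above a
# title's median (possible crawler, hoarder, or class assignment).

from statistics import median

rows = [
    ("Journal A", "2009-01", 40), ("Journal A", "2009-02", 38),
    ("Journal A", "2009-03", 610),  # suspicious spike
    ("Journal A", "2009-04", 45),
    ("Journal A", "2009-04", 45),   # duplicate row from the report
]

deduped = sorted(set(rows))

by_title = {}
for title, month, count in deduped:
    by_title.setdefault(title, []).append((month, count))

for title, months in by_title.items():
    med = median(count for _, count in months)
    for month, count in months:
        if med and count > 3 * med:  # arbitrary rule of thumb
            print(f"Check {title} {month}: {count} vs. median {med:.0f}")
```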

CORAL Usage Statistics, a locally developed program, gives them a central location to store user names & passwords. He downloads reports quarterly now, and the public interface allows other librarians to view the stats as readable reports.

Speaker: Justin Clarke, vendor

Harvesting reports takes a lot of time and carries some administrative costs. SUSHI is a vehicle for automating the transfer of statistics from one source to another; however, you still need to look at the data. Your subscription agent has a lot more data about the resources than just use, and can combine the two to create a broader picture of resource use.
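
[For reference: SUSHI (NISO Z39.93) is a SOAP protocol, so an automated harvest boils down to a scripted POST of a ReportRequest envelope. The sketch below shows the general shape only; the endpoint, IDs, and report release are placeholders to check against a provider’s documentation.]

```python
# Rough sketch of an automated SUSHI (NISO Z39.93) harvest: POST a SOAP
# ReportRequest envelope and save the COUNTER XML that comes back. The
# endpoint URL, requestor/customer IDs, and report release are all
# placeholders; real services differ in the details they require.

import requests  # third-party HTTP library

ENDPOINT = "https://sushi.example-publisher.com/SushiService"  # hypothetical

ENVELOPE = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <ReportRequest xmlns="http://www.niso.org/schemas/sushi"
                   xmlns:c="http://www.niso.org/schemas/sushi/counter">
      <Requestor><ID>our-requestor-id</ID></Requestor>
      <CustomerReference><ID>our-customer-id</ID></CustomerReference>
      <c:ReportDefinition Name="JR1" Release="3">
        <c:Filters>
          <c:UsageDateRange>
            <c:Begin>2009-01-01</c:Begin>
            <c:End>2009-12-31</c:End>
          </c:UsageDateRange>
        </c:Filters>
      </c:ReportDefinition>
    </ReportRequest>
  </soap:Body>
</soap:Envelope>"""

response = requests.post(
    ENDPOINT,
    data=ENVELOPE.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8"},
    timeout=60,
)
response.raise_for_status()

with open("jr1_2009.xml", "wb") as f:
    f.write(response.content)  # COUNTER report inside a SOAP response
```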

Harrassowitz starts with acquisitions data and matches the use statistics to it. They also capture things like publisher changes and title changes. Cost per use is not as simple as division: packages confuse the matter.
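
[One hedged way to handle packages is to apportion the package price by each title’s share of list price before computing cost per use; splitting evenly or by use share are alternatives. All numbers below are made up.]

```python
# One illustrative way to apportion a package price before computing
# cost per use: split by each title's share of list price.

package_cost = 30000.00
titles = {
    # title: (list price, annual uses)
    "Journal A": (4000.00, 2400),
    "Journal B": (2500.00, 500),
    "Journal C": (1500.00, 100),
}

total_list = sum(price for price, _ in titles.values())
for title, (price, uses) in titles.items():
    allocated = package_cost * price / total_list
    print(f"{title}: ${allocated:,.2f} allocated, "
          f"${allocated / uses:.2f} per use")
```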

High use could be the result of class assignments or of hackers/hoarders. Low use might reflect a politically motivated purchase or support for a new department. You need a reference point for cost: pricing from publishers seems to have no rhyme or reason, and your price is not necessarily the list price. Multi-year analysis and subject-based analysis reveal local trends.

Rather than usage statistics, we need useful statistics.

library day in the life – round 4

Hello. I’m the electronic resources librarian at the University of Richmond, a small private liberal arts university nestled on the edge of suburbia in a medium-sized mid-Atlantic city. Today I am participating in the Library Day in the Life Project for its fourth round. Enjoy!

8:30am Arrive, turn on computer, and go get a cup of coffee from the coffee shop attached to the library. By the time I return, the login screen is displayed, and thus begins the five-minute process of logging in and then opening Outlook, Firefox, and TweetDeck. Pidgin starts on its own, thankfully. Update location on FourSquare. (Gotta keep my mayorship!)

8:40am Check schedule on Outlook, note the day’s meeting times, and then check the tasks for the day. At this point, I see that it’s time for a DILO, so I start this entry.

8:50am Weed through the new emails that arrived over the weekend. Note that there is more spam than normal. In the middle of this, my boss cancels one of two meetings today. (w00t!)

9:15am Email processed and sorted into folders and labels. Time to dig into the day’s tasks and action items. Chatty coworkers in the cube farm prompt me to load Songbird and don headphones.

9:25am Send a reminder to the LIB 101 students registered for my seminar on Friday. Work out switching reference desk shifts because my Wednesday LIB 101 seminar conflicts with my regular Wednesday shift. Also send out a note requesting trades for next week’s shifts, since I’ll be away at ER&L.

9:40am Cleared all action items and to-do items, so now it’s time to dig into my current project — gathering 2009 use statistics.

10:30am Electronic resources workflow planning meeting for the next year with an eye towards the next five years.

11:00am Back to gathering use stats. I’ve been working on this for over two weeks, and I’m a little over half-way through. I’d be further along if I could dedicate all my time to it, but unfortunately, meetings, desk schedules, and other action items get in the way.

12:15pm Hunger overrides my obsessive hunt for stats. I brought my lunch with me today, but often I end up grabbing something on the go while I run errands.

1:10pm Process the email that has come in over the past two hours. Only two action items added (yay!), and both are responses to requests for information from this morning (yay!), so I’m happy to see them.

1:15pm Back to the side-yet-related project that I started on shortly before lunch. We have a bunch of journals in the “Multiple Vendors :: Single Journals” category in our ERMS, and I’m moving them over to their specific publisher listings if I can, checking to see if we have use stats for them, and requesting admin info when we don’t. There are only about 55 titles, so I’m hoping to get most of this done before my reference desk shift at 3.

3:00pm I’m only half-way through the side-yet-related project, but I have to set it down and go to my reference desk shift now. Answering many technology questions from a retired woman who is taking a class that requires her to use Blackboard and view PowerPoint files, all of which she finds highly confusing. Checking out netbooks to students and showing them how to scan documents to PDF using the copiers rather than making a bunch of copies. Also, catching up on RSS feeds between the questions.

5:00pm Desk shift over. I have just enough time to wrap up my projects for the day and prep for tomorrow, grab a quick bite to eat, and then I’m off to the other side of campus where I have choir rehearsal until 7pm.

Thank you for reading!

DILO: electronic resources librarian

9:00am Arrive at work. Despite getting to bed early, I still overslept. Great way to start a Monday, I tell you.

9:00-9:20am I was out of the office for most of last week, so I spent some time catching up with my assistant. This also gave my computer plenty of time to boot up.

9:20-9:30am Logged into the network, and then went to get some iced tea from the library coffee shop. It takes several minutes for all of the start-up programs to load, so that’s a perfect time to acquire my first dose of work-time caffeine.

9:30-9:35am Start this post.

9:35-10:20am Sifting through the 100+ new messages that arrived in my mailbox while I was gone. I followed up on the ones that looked urgent while I was out, but the rest were left for today. In the end, three messages went into the to-do category and a few more into the use statistics category. The rest were read and deleted.

10:20-10:45am Filled out an order form for a new database. The PDF form is print-only, so this required the use of a typewriter (my handwriting is marginally legible). I also discovered mid-process that I did not have all of the necessary information, which required further investigation and calculations.

10:45-11:05am Sent email reminders to the students in the LIB 101 class that I will be teaching on Friday. Created a class roster for all four sections I’m teaching this spring.

11:05-11:15am Mental break. Read Twitter and left a birthday greeting for a friend on Facebook.

11:15-11:20am Added use stats login info for a new resource to our ERM and to the shared spreadsheet of admin logins we have been using since before the ERM (we’re still implementing the ERM, so it’s best to record it in both places).

11:20-11:25am Processed incoming email.

11:25am-12:40pm Was going to run some errands over my lunch hour, but instead was snagged by some colleagues who were going out to my favorite Mexican restaurant.

12:40-1:00pm Sorting through the email that came in while I was gone. Answered a call from a publisher sales person.

1:00-3:00pm Main Service Desk shift, covering the reference side of it. During the slow times, I accessed my work station PC via remote desktop and worked on the scanned license naming standardization project I started last week. In the process, I’m also breaking apart multiple contracts that were accidentally scanned together. As usual, the busy times involved a sudden influx of in-person, email, and IM questions, most often at the same time.

3:00-3:15pm Got a refill of iced tea from the coffee shop, processed email, and read through the Twitter feed.

3:15-4:00pm Organized recently scanned license agreements and created labels for the folders. Filed the licenses in the file drawer next to my cubicle.

4:00-4:20pm Checked in with co-workers and revised my to-do list.

4:20-5:15pm Responded to email and followed up on action items related to the recent NASIG executive board meeting.

And that, my friends, is my rather unusual day in the life of an electronic resources librarian. Most of the time, I bounce between actual ER work, meetings, and email.

Read more DILOs like this one.

search your opac with firefox

Attention, systems administrators for libraries that use III’s Millennium or INNOPAC! If you haven’t heard about it already, there is a way to create a Firefox/Mozilla plugin that will make your catalog an option within the browser’s search box. Corey Seeman has the instructions posted on his website, along with a graphical overview as a slideshow turned PDF.
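
For the curious, a Firefox search plugin is essentially just an OpenSearch description file. Below is a minimal sketch that generates one; the catalog hostname and the /search/X?SEARCH= keyword-search URL pattern are assumptions to verify against your own catalog (Corey’s instructions are the authoritative version).

```python
# Sketch: write a minimal OpenSearch description file that Firefox can
# install as a search-box engine. The catalog hostname and the
# /search/X?SEARCH= keyword-search URL pattern are assumptions; check
# them against your own Millennium/INNOPAC catalog.

PLUGIN = """<?xml version="1.0" encoding="UTF-8"?>
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
  <ShortName>Library Catalog</ShortName>
  <Description>Keyword search of the library catalog</Description>
  <Url type="text/html"
       template="http://catalog.example.edu/search/X?SEARCH={searchTerms}"/>
  <InputEncoding>UTF-8</InputEncoding>
</OpenSearchDescription>
"""

with open("catalog-search.xml", "w", encoding="utf-8") as f:
    f.write(PLUGIN)
```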
