Android app recommendations

I’ve had my HTC Incredible for about 10 months now, and over that time I have added (and removed) quite a few apps. Here’s a list of the apps that I’m currently using on a regular basis and would recommend to other Android users:

Books & Reference

Communication

Finance

Games

Health & Fitness

Music

News

Photography

Productivity

Shopping

Social

Tools

Travel & Local

View this Android app list on AppBrain

LibFest: Telling your Story with Usage Statistics — Making data work

presenter: Jamene Brooks-Kieffer

She won’t be talking about complex tools or telling you to hire more staff. Rather, she’ll be looking at ways we can use the data and tools we already have more effectively.

Right now, we have too much data from too many sources, and we don’t have enough time or staff to deal with it. And, nobody cares about it anyway. Instead of feeling blue about this, change your attitude.

Start by looking at smaller chunks. Look at all of the data types and sources, then choose one to focus on. Don’t stress about the rest. How to pick which one? Select data that has been consistently collected over time. If it’s focused on a specific activity, it’ll be easier to create a story about it. And finally, the data should be both interesting and accessible to you.

By selecting only one source of data, you have reduced the demands on your time. You also need to acknowledge your limits in order to move forward. You can’t work miracles, but you can show enough impact to get others on board. Tie the data to your organizational goals. Analyze the data using the tools you already have (e.g. Excel), and then publicize the results of your work.

Why use Excel? It’s nearly universal, and there are free spreadsheet alternatives if you need them. Three useful Excel tools: importing and manipulating files of various formats (e.g. CSV files), consolidating similar information (e.g. totaling annual data from monthly worksheets), and conditional formatting (e.g. flagging cost-per-use values over a threshold).
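(For anyone who would rather script this step than click through Excel, here is a minimal sketch of the same workflow in Python with pandas – import monthly CSV usage files, consolidate them into annual totals, and flag high cost-per-use titles. The file names, column names, and the $5.00 threshold are all hypothetical, not from the talk.)

```python
# A rough pandas equivalent of the three Excel tools described above.
# All file names, column names, and the $5.00 threshold are hypothetical.
import glob

import pandas as pd

# 1. Import: read one CSV usage file per month into a single table.
monthly = pd.concat(
    (pd.read_csv(path) for path in glob.glob("usage_2010-*.csv")),
    ignore_index=True,
)

# 2. Consolidate: total the monthly download counts for each title.
annual = monthly.groupby("title", as_index=False)["downloads"].sum()

# 3. "Conditional formatting": compute cost-per-use and flag anything
#    over the threshold for closer review.
costs = pd.read_csv("subscription_costs.csv")  # columns: title, annual_cost
report = annual.merge(costs, on="title")
report["cost_per_use"] = report["annual_cost"] / report["downloads"]
report["flag_for_review"] = report["cost_per_use"] > 5.00

print(report.sort_values("cost_per_use", ascending=False).head(10))
```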

The spreadsheets are for you, not the stakeholders. Stop relying on them to communicate your data. The trouble with spreadsheets is that although they contain a lot of data, it’s challenging for those unfamiliar with the sources to understand the meaning of the data. Sending a summary/story will get your message across faster and more clearly.

Data has context, settings, complexities, and conflicts. One of the best ways of communicating it is through a story. Give stakeholders the context to hang the numbers on and a way to remember why they are important. Write what you know, focus on the important things, and keep it brief and meaningful. Here is an example: Data Stories: A dirty job.

Data stories are everywhere. They’re not strictly for usage or financial data. If you have a specific question you want answered through data, it is easier to compose the story.

Convince yourself to act; your actions will persuade others.

presenter: Katy Silberger

She will be showing three scenarios for observing user behavior through statistics: looking at the past with vendor-supplied statistics, assessing current user behavior with Google Analytics, and anticipating future user behavior with Google Analytics.

They started looking at usage patterns before and after implementing federated searching. It was hard to answer the question of how federated searching changed user behavior. They used vendor usage reports and website visits to calculate the number of articles retrieved per website visit and the number of articles retrieved per search. They found that the federated search tool generated an increase in articles retrieved per use. The ratios take into account fluctuations in the user population.
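(A back-of-the-envelope illustration of those ratios; every number below is invented, not from the presentation.)

```python
# Hypothetical before/after numbers showing how the ratios normalize
# for fluctuations in the user population.
periods = {
    "before federated search": {"articles": 30_000, "visits": 100_000, "searches": 50_000},
    "after federated search":  {"articles": 48_000, "visits": 110_000, "searches": 55_000},
}

for label, n in periods.items():
    per_visit = n["articles"] / n["visits"]
    per_search = n["articles"] / n["searches"]
    print(f"{label}: {per_visit:.2f} articles/visit, {per_search:.2f} articles/search")
```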

Google Analytics could be used to identify use from students abroad. It’s also helpful for identifying trends in mobile web access.

IL 2010: Personal Content Management

speaker: Gary Price

Giving generalities about mobile devices is challenging because there are so many options. If your library doesn’t already have a mobile website, go for a web app rather than something platform specific.

The cloud can be a good backup for when your devices fail, since you can access it from other places. But, choose a cloud service or backup service carefully – consider reputation and longevity. If you see something you want to preserve for future use, save it now because it could be gone later. Capture it yourself and keep it local.

Back up your computer (pay now or pay later). Price recommends Mozy and Carbonite. Also, pay attention to the restore options (internet vs. DVD).

[I kinda zoned out at this point, as I’m pretty sure he’s not going to talk about much of anything I don’t already know about or will read about on Lifehacker. Unfortunately, choosing a seat in the front row prevents me from politely leaving to attend a different session.]

WordCamp Richmond: Blogging for Business

moderator: Kate Hall
panelists: Dr. Arnold Kim, John Petersik, and Jason Guard

All three started blogging because they had a passion for the topic, and all were subsequently surprised by the popularity of their blogs. Both Kim and Petersik now blog full-time, but Guard doesn’t expect to make a significant income from his blog. Kim noted that there are many other blogs like his now, so what sets his apart is the community that has developed around it.

Many bloggers have commented that since they started tweeting, their blog writing has decreased. Hall is disappointed in herself about this, but also enjoys the interactivity with readers. Kim notes that if your job is to be a blogger, then anything else that takes time away from your blog should be approached with caution; however, Twitter can be a great tool for building a personal brand. For Petersik, it’s just another forum for connecting with their audience, much like Facebook.

How do you deal with the public sucker punches? People have opinions and sometimes they can be expressed strongly. It helps to have a comments policy to keep the conversation civil and not distracted by trolls. Guard tries to be provocative and push buttons, so he expects the sucker punches. Generally he lets the trolls fly their troll flags. Hall commented that some people are out there just to be haters.

it could be worse

Have you noticed the changes Google has been making to the way they display search results? Google Instant has been the latest, but before that, there was the introduction of the “Everything” sidebar. And that one in particular seems to have upset numerous Google search fans. If you do a search in Google for “everything sidebar,” the first few results are about removing or hiding it.

Not only that, but the latest offering from the Funny Music Project is a song all about hating the Google “Everything” sidebar. The creator, Jesse Smith, expresses a frustration that many of us can identify with, “It’s hard to find a product that does what it does really well. In a world of mediocrity, it’s the exception that excels. Then some jerk has to justify his job by tinkering and jiggering and messing up the whole thing.”

Tech folks like to tinker. We like making things work better, faster, or more intuitively. I’ll bet that there are a lot of Google users who didn’t know about the different kinds of content-specific searches that Google offered, or had never used the advanced search tools. And they’re probably happy with the introduction of the “Everything” sidebar.

But there’s another group of folks who are evidently very unhappy with it. Some say it takes up too much room on the screen, that it adds complexity, and that they just don’t like the way it looks.

Cue ironic chuckling from me.

Let’s compare the Google search results screen with search results from a few of the major players in libraryland:

Google

ProQuest

EBSCOhost

CSA Illumina

ISI Web of Knowledge

So, who’s going to write a song about how much they hate <insert library database platform of choice>?

RALC Lightning Round Micro-Conference: Morning Sessions

Andy Morton: “5-minute madness – The Madness Concept”
He’s on the desk at the moment, so he made a video.

Teresa Doherty: “Cool sounds for Aleph Circ Transactions”
Originally presented at ELUNA as a poster session. They use custom sounds and colors to indicate specific circulation transaction alerts, e.g. check-in/check-out alerts. The sounds were selected because they’re short and fairly expressive without being offensive to users who may hear them.

Amanda Hartman: “Reaching Millennials: Understanding and Teaching the Next Generation” 
Those born 1980-1996-ish. These are generalizations, so they don’t describe everyone fully. They’re special and sheltered, team- and goal-oriented, more likely to be involved in community service, digital natives (mainly mobile tech) who don’t necessarily understand all of the implications or functions, impatient, and multi-taskers. They consider themselves to be relatively savvy searchers, so they may be less likely to ask for help. They have certain expectations about tech that libraries often can’t keep up with. They want learning to be participatory and active, with opportunities to express themselves online, and they have a sense of entitlement – expecting good grades for hard work, not necessarily for the product of the work. Libraries should have a mobile website. Hire staff who can support tech questions. Provide group workspaces. Explain why, not just how.

Deborah Vroman: “Errors, errors, everywhere! Common citation errors in Literature Resources from Gale”
Until recently, Gale was giving incorrect page ranges in citations for articles reprinted in their collections. The problem has now been fixed by removing the page numbers.

Anna Creech: “Lies, Damn Lies, and Statistics”
Uh, that’s me.

Suzanne Sherry: “Goodreads: I read, you read, everybody READS”
Social networking site for readers. You start off with read, to-read, and currently reading, but you can add other tags that then form collections. Once you’ve read a book, you can rate it and write a review. While you’re reading the book, you can leave comments with updates of your progress. The social element is handy for recommending books to friends and discussing the books you read. There are tools for virtual book clubs and online communities for local book clubs.

Nell Chenault: “Scanning to Save or Send”
They have 12 scanning stations, both Mac and PC, including two slide scanners. Also, they have microform scanners instead of the old light box machines. In the past five years, they’ve seen use increase 325%.

Abiodun Solanke: “Netbooks or Laptops” 
In the last hardware replacement cycle, they replaced circulating laptops with netbooks. Cost, capabilities, and portability were the factors considered. Some specialized programs could not be loaded on the netbooks, but desktop computers were available as alternatives. Student reaction appeared to be divided along gender lines – male students thought they were too small, but female students liked them. They surveyed users borrowing the netbooks and found that negative comments decreased over time. They concluded that initial reactions to new things aren’t always indicative of their success. They would now like to add netbooks running Mac OS.

Darnell Law: “Up In The Air: Text-A-Librarian and Mobile Technologies at Johnston Memorial Library”
Implemented service at the end of the spring semester, so they haven’t seen much use yet. They’re using a service called Text a Librarian. Users enter a specific number and a short code at the beginning of the message. The questions are answered through the service website. The phone numbers are anonymized. Some of the advantages of this service include working with any carrier, not requiring a cell phone to answer the texts, relatively inexpensive (~$1100/yr), answer templates for quick responses, and promotional materials.

NASIG 2010: Serials Management in the Next-Generation Library Environment

Panelists: Jonathan Blackburn, OCLC; Bob Bloom (?), Innovative Interfaces, Inc.; Robert McDonald, Kuali OLE Project/Indiana University

Moderator: Clint Chamberlain, University of Texas, Arlington

What do we really mean when we are talking about a “next-generation ILS”?

It is a system that will need to be flexible enough to accommodate increasingly complex and constantly changing workflows. Things are changing so fast that systems can’t wait several years to release updates.

It also means different things to different stakeholders. The underlying theme is being flexible enough to manage both print and electronic resources, along with better reporting tools.

How are “next-generation ILSs” related to cloud computing?

Most of them have components in the cloud, and traditional ILS systems are partially there, too. Networking brings benefits (shared workloads).

What challenges are facing libraries today that could be helped by the emerging products you are working on?

Serials is one of the more mature modules in the ILS. Automation is going to keep improving as data from all information sources becomes more standardized.

One of the key challenges is to deal with things holistically. We get bogged down in the details sometimes. We need to be looking at things on the collection/consortia level.

We are all trying to do more with less funding. Improving flexibility and automation will offer better services for the users and allow libraries to shift their staff assets to more important (less repetitive) work.

We need better tools to demonstrate the value of the library to our stakeholders. We need ways of assessing resources beyond comparing costs.

Any examples of how next-gen ILS will improve workflow?

Libraries are increasing spending on electronic resources, and many are nearly eliminating their print serials spending. Next gen systems need reporting tools that not only provide data about electronic use/cost, but also print formats, all in one place.

A lot of workflow comes from a print-centric perspective. Many libraries still haven’t figured out how to adjust that to include electronic without saddling all of that on one person (or a handful). [One of the issues is that the staff may not be ready/willing/able to handle the complexities of electronic.]

Every purchase should be looked at independently of format, with the focus instead on the cost and process of acquiring it and making it available to the stakeholders.

[Not taking as many notes from this point on. Listening for something that isn’t fluffy pie in the sky. Want some solid direction that isn’t just pretty words to make librarians happy.]

NASIG 2010: Publishing 2.0: How the Internet Changes Publications in Society

Presenter: Kent Anderson, JBJS, Inc

Medicine 0.1: in dealing with the influenza outbreak of 1837, a physician administered leeches to the chest, James’s powder, and mucilaginous drinks, and it worked (much like “take two aspirin and call me in the morning”). All of this was written up in a medical journal as a way to share information with peers. Journals have been the primary source of communicating scholarship, but what the journal is has become more abstract with the addition of non-text content and metadata. Add in indexes and other portals to access the information, and readers have changed the way they access and share information in journals. “Non-linear” access of information is increasing exponentially.

Even as technology made publishing easier and more widespread, it was still producers delivering content to consumers. But, with the advent of Web 2.0 tools, consumers now have tools that in many cases are more nimble and accessible than the communication tools that producers are using.

Web 1.0 was a destination. Documents simply moved to a new home, and “going online” was a process separate from anything else you did. However, as broadband access increases, the web becomes more pervasive and less a destination. The web becomes a platform that brings people, not documents, online to share information, consume information, and use it like any other tool.

Heterarchy: a system of organization replete with overlap, multiplicity, mixed ascendancy, and/or divergent but coexistent patterns of relation

Apomediation: mediation by agents who are not interposed between users and resources, but stand by to guide a consumer to high-quality information without playing a role in the acquisition of the resources (e.g. Amazon product reviewers)

NEJM uses search terms from users to add related searches to article search results. They also bump popular articles up in search results as more people click on them. These tools improved their search results and reputation, all by using the people power of experts. In addition, they created a series of “results in” publications that highlight the popular articles.
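(The talk didn’t describe the mechanics, but a common way to implement that kind of click-based bump is to blend a base relevance score with a damped popularity signal. The weight and the numbers below are illustrative guesses, not NEJM’s algorithm.)

```python
# One plausible implementation of a click-popularity boost: combine a
# base text-relevance score with a log-damped click count so frequently
# chosen articles rise without completely swamping relevance.
import math

def boosted_score(relevance: float, clicks: int, weight: float = 0.3) -> float:
    """Combine text relevance with a damped click-count signal."""
    return relevance + weight * math.log1p(clicks)

# Hypothetical results: (article id, base relevance, click count).
results = [("a1", 2.0, 5), ("a2", 1.8, 400), ("a3", 1.9, 60)]
ranked = sorted(results, key=lambda r: boosted_score(r[1], r[2]), reverse=True)
print([article_id for article_id, _, _ in ranked])  # a2 climbs above a1
```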

It took a little over a year to get to a million Twitter authors, and about 600 years to get to the same number of book authors. And, these are literate, savvy users. Twitter & Facebook account for 1.45 million views of the New York Times (and this is a number from several years ago) — imagine what it can do for your scholarly publication. Oh, and NYT has a social media editor now.

Blogs are growing four times as fast as traditional media. The top ten media sites include blogs, and traditional media sources use blogs now as well. Blogs can be diverse or narrow, their coverage varies (and does not have to be immediate), they can be verifiably accurate, and they are interactive. Blogs level the media playing field, in part by watching the watchdogs. Blogs tend to investigate more than the mainstream media.

It took AOL five times as long to get to twenty million users as it did for the iPhone. Consumers are increasingly adding “toys” to their collection of ways to get to digital/online content. When the NEJM went on the Kindle, more than just physicians subscribed. Getting content into easy-to-access places and on the “toys” that consumers use will increase your reach.

Print digests are struggling because they teeter on the brink of the daily divide. Why wait for the news to get stale, collected, and delivered a week/month/quarter/year later? People are transforming. Our audiences don’t think of information as analogue, delayed, isolated, tethered, etc. Information has to evolve into something digital, immediate, integrated, and mobile.

From the Q&A session:

The article container will be here for a long time. Academics use the HTML version of the article, but the PDF (static) version is their security blanket and archival copy.

Where does the library fit as a source of funds when the focus is more on the end users? Publishers are looking for other sources of income as library budgets decrease (e.g. Kindle editions, product differentiation, etc.). They are looking to other purchasing centers at institutions.

How do publishers establish the cost of these 2.0 products? It’s essentially what the market will bear, with some adjustments. Sustainability is a grim way to frame the goal; flourishing is much more positive, and not necessarily any less realistic. Equity is not a concept that comes into pricing.

The people who bring the tremendous flow of information under control (i.e. offer filters) will be successful. One of our tasks is to make filters to help our users manage the flow of information.

NASIG 2010: Linked Data and Libraries

Presenter: Eric Miller, Zepheira, LLC

Nowadays, we understand what the web is and the impact it has had on information sharing, but before it was developed, it was in a “vague but exciting” stage and few understood it. When we got started with the web, we really didn’t know what we were doing, but more importantly, the web was being developed so that it was flexible enough for smarter and more creative people to do amazing things.

“What did your website look like when you were in the fourth grade?” Kids are growing up with the web and it’s hard for them to comprehend life without it. [Dang, I’m old.]

This talk will be about linked data, its legacy, and how libraries can lead linked data. We have a huge opportunity to weave libraries into the fabric of the web, and vice versa.

About five years ago, the BBC started making their content available in a service that allowed others to use and remix the delivery of the content in new ways. Rather than developing alternative platforms and creating new spaces, they focused on generating good content and letting someone else frame it. Other sources like NPR, the World Bank, and Data.gov are doing the same sorts of things. Within the library community, these things are happening as well. OCLC’s APIs are getting easier to use, and several national libraries are putting their OPACs on the web with APIs.

Obama’s open government initiative is another one of those “vague but exciting” things, and it charged agencies to come up with their own methods of making their content available via the web. Agencies are now struggling with the same issues and desires that libraries have been tackling for years. We need to recognize our potential role in moving this forward.

Linked data is a set of best practices for sharing and connecting data on the semantic web. Rather than leaving the data in their current formats, let’s put them together in ways they can be used on the wider web. It’s not the databases that make the web possible, it’s the web that makes the databases usable.

Human computation can be put to use in ways that assist computers to make information more usable. Captcha systems are great for blocking automated programs when needed, and by using human computation to decipher scanned text that is undecipherable by computers, ReCaptcha has been able to turn unusable data into a fantastic digital repository of old documents.

LEGOs have been around for decades, and their simple design ensures that new blocks work with old blocks. Most kids end up dumping all of their sets into one bucket, so no matter where the individual building blocks come from, they can be put together and rebuilt in any way you can imagine. We could do this with our blocks of data, if they are designed well enough to fit together universally.

Our current applications, for the most part, are not designed to allow for the portability of data. We need to rethink application design so that the data becomes more portable. Web applications have, by necessity, had to have some amount of portability. Users are becoming more empowered to use the data provided to them in their own way, and if they don’t get that from your service/product, then they go elsewhere.

Digital preservation repositories are discussing ways to open up their data so that users can remix and mashup data to meet their needs. This requires new ways of archiving, cataloging, and supplying the content. Allow users to select the facets of the data that they are interested in. Provide options for visualizing the raw data in a systematic way.
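(As a toy illustration of faceted selection, here is what it might look like over a handful of repository records; the records and field names are made up.)

```python
# Count facet values across records, then filter on a selected facet.
records = [
    {"title": "Map of Richmond",  "format": "image", "year": 1865},
    {"title": "City directory",   "format": "text",  "year": 1910},
    {"title": "Canal photograph", "format": "image", "year": 1910},
]

def facet_counts(records, facet):
    """Return how many records fall under each value of a facet."""
    counts = {}
    for record in records:
        counts[record[facet]] = counts.get(record[facet], 0) + 1
    return counts

print(facet_counts(records, "format"))  # {'image': 2, 'text': 1}
print([r["title"] for r in records if r["year"] == 1910])
```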

Linked data platforms create identifiers for every aspect of the data they contain, and these are the primary keys that join data together. Other content that is created can be combined to enhance the data generated by agencies and libraries, but we don’t share the identifiers well enough to allow others to properly link their content.
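(A minimal sketch of that idea using rdflib, a Python RDF library: two statements from notionally different sources join cleanly because they use the same identifier. The URIs below are hypothetical.)

```python
# Two datasets can describe the same thing and still join up, as long
# as they agree on the identifier. All URIs here are made up.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDFS

book = URIRef("http://example.org/id/book/moby-dick")  # the shared identifier
lib = Namespace("http://example.org/library/")

g = Graph()
g.add((book, RDFS.label, Literal("Moby-Dick")))  # e.g. from a national library
g.add((book, lib.holdings, Literal(3)))          # e.g. from a local catalog

print(g.serialize(format="turtle"))
```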

Web architecture starts with web identifiers. We can use URLs to identify things other than just documents, but we need to be consistent and we can’t change the URL structures if we want it to be persistent. A lack of trust in identifiers is slowing down linked data. Libraries have the opportunity to leverage our trust and data to provide control points and best practices for identifier curation.

A lot of this work is happening in the W3C. Libraries should be more involved in the conversation.

Enable human computation by providing the necessary identifiers back to data. Empower your users to use your data, and build a community around it. Don’t worry about creating the best system — wrap and expose your data using the web as a platform.
