NASIG 2012: Why the Internet is More Attractive Than the Library

Speaker: Dr. Lynn Silipigni Connaway, OCLC

Students, particularly undergraduates, find that Google search results make more sense than library database search results. In the past, these kinds of users had to work around our services, but now we need to make our resources fit their workflow.

Connaway has tried to compare 12 different user behavior studies in the UK and the US to draw some broad conclusions, and this has informed her talk today.

Convenience is number one, and it changes. Context and situation are very important, and we need to remember that when asking questions about our users. Sometimes they just want the answer, not instruction on how to do the research.

Most people power browse these days: scan small chunks of information, view first few pages, no real reading. They combine this with squirreling — short, basic searches and saving the content for later use.

Students prefer keyword searches. This is supported by looking at the kinds of terms used in the search. Experts use broad terms to cover all possible indexing; novices use specific terms. So why do we keep trying to get them to use the “advanced” search in our resources?

Students are confident with information discovery tools. They mainly use their common sense for determining the credibility of a site. If a site appears to have put some time into the presentation, then they are more likely to believe it.

Students are frustrated with navigating library websites and the inconvenience of communicating with librarians face to face, and they tend to associate libraries only with books, not with other information. They don’t recognize that it’s the library that provides them with access to online content like JSTOR and the things they find in Google Scholar.

Students and faculty often don’t realize they can ask a question of a librarian in person because we look “busy” staring at our screens at the desk.

Researchers don’t understand copyright, or what they have signed away. They tend to be self-taught in discovery, picking up the same patterns as their graduate professors. Sometimes they rely on their students to tell them about newer ways of finding information.

Researchers get frustrated with the lack of access to electronic backfiles of journals, difficulty discovering non-English content, and unavailable content in search results (dead links, access limitations). Humanities researchers feel there is a lack of good, specialized search engines for their fields (most are geared toward the sciences). They get frustrated when they go to the library because of poor usability (e.g., signage) and a lack of integration between resources.

Access is more important than discovery. They want a seamless transition from discovery to access, without a bunch of authentication barriers.

We should be improving our OPACs. Take a look at Trove and Westerville Public Library. We need to think more like startups.

tl;dr – everything you’ve heard or read about what our users really do and really need, but that we still haven’t addressed in the tools and services we offer them

ER&L 2012 – Between Physical and Digital: Understanding Cross-Channel User Experiences

[Photo: Andrea Resmini at UX Brighton 2011, by Katariina Järvinen]

speaker: Andrea Resmini

He starts with a brief description of the movie The Name of the Rose, which is a bit of a medieval murder mystery involving a monastery library. The “library” is actually a labyrinth, but only in the movie. (The book is a little different.)

The letters on the arches represent the names of the places in the world, and are placed in the library where they would be in the world as it relates to Europe. They didn’t exactly replicate the world, but they ordered it like good librarians.

If you don’t understand the organizational system, it’s just a labyrinth. The movie had to change this because it wouldn’t work to have room after room of books covering the walls. We have to see the labyrinth to be able to participate in the experience, which can be different depending on the medium (book or movie).

Before computers, we relied on experts (people), books, and mentors to learn. With computers, we have access to all of them, at any time. We are constantly connected (if we choose) to streams of data, and the access points are more and more portable.

“Cyberspace is not a place you go to but rather a layer tightly integrated into the world around us.” –Institute for the Future

This is not the future. It’s here now. Facebook, Twitter, Foursquare… our phones and mobile devices connect us.

Think about how you might send a message: email, text, handwritten note, smoke signals, ouija board… it’s the same task, but with many different mediums.

What if someone is looking for a book? They could go to the circ desk, but that’s becoming less common. They could go to a virtual bookshelf for the library. Or they could go to a competitor like Amazon. They could do this on a mobile phone. Or they could just start looking on the shelves themselves, whether they understand the classification/organization or not. The only thing that matters is the book. They don’t want to fight with mobile interfaces, search results in the millions, or creepy library stacks. They just want the book, when they want it, and how they want it.

The library is a channel, as is the labeling, circ desk, website, mobile interface, etc. Unfortunately, they don’t work together. We have silos of channels, not just silos of information.

Think about a bank. You can talk to a call center employee, but they can’t help you if your problem isn’t part of their scripted routines. And you can’t start a process online and finish it in a physical space (e.g., begin in online banking and finish at a local branch).

Entertainment now uses many channels to reach consumers. If you really want to understand the second and third Matrix movies, you have to be familiar with the accessory channels of information (comic books, video games, etc.). In cross-channel experiences, users constantly move between channels, and will not stay in any single one of them from start to finish.

More companies, like clothing stores, are breaking down the barriers to flow between their physical and virtual stores. You can shop online and return items to the physical store, for example.

Manifesto:

  1. Information architectures are becoming open ecologies: no artifacts stand alone — they are all a part of the user experience
  2. users are becoming intermediaries: participants in these ecosystems actively produce and re-mediate content and meaning
  3. static becomes dynamic: ecologies are perpetually unfinished, always changing, always open to further refinement and manipulation
  4. dynamic becomes hybrid: the boundaries separating media, channels, and genres get thinner
  5. horizontal prevails over vertical: intermediaries push for spontaneity, ephemeral structures of meaning and constant change
  6. products are becoming experiences: focus shifts from how to design single items to how to design experiences spanning multiple steps
  7. experiences become cross-channel experiences: experiences bridge multiple connected media, devices and environments into ubiquitous ecologies

it could be worse

Have you noticed the changes Google has been making to the way they display search results? Google Instant has been the latest, but before that, there was the introduction of the “Everything” sidebar. And that one in particular seems to have upset numerous Google search fans. If you do a search in Google for “everything sidebar,” the first few results are about removing or hiding it.

Not only that, but the latest offering from the Funny Music Project is a song all about hating the Google “Everything” sidebar. The creator, Jesse Smith, expresses a frustration that many of us can identify with, “It’s hard to find a product that does what it does really well. In a world of mediocrity, it’s the exception that excels. Then some jerk has to justify his job by tinkering and jiggering and messing up the whole thing.”

Tech folks like to tinker. We like making things work better, or faster, or be more intuitive. I’ll bet that there are a lot of Google users who didn’t know about the different kinds of content-specific searches that Google offered, or had never used the advanced search tools. And they’re probably happy with the introduction of the “Everything” sidebar.

But there’s another group of folks who are evidently very unhappy with it. Some say it takes up too much room on the screen, that it adds complexity, and that they just don’t like the way it looks.

Cue ironic chuckling from me.

Let’s compare the Google search results screen with search results from a few of the major players in libraryland:

[Screenshots compared: Google, ProQuest, EBSCOhost, CSA Illumina, ISI Web of Knowledge]

So, who’s going to write a song about how much they hate <insert library database platform of choice>?

NASIG 2010: Publishing 2.0: How the Internet Changes Publications in Society

Presenter: Kent Anderson, JBJS, Inc

Medicine 0.1: in dealing with the influenza outbreak of 1837, a physician administered leeches to the chest, James’s powder, and mucilaginous drinks, and it worked (much like take two aspirin and call in the morning). All of this was written up in a medical journal as a way to share information with peers. Journals have been the primary source of communicating scholarship, but what the journal is has become more abstract with the addition of non-text content and metadata. Add in indexes and other portals to access the information, and readers have changed the way they access and share information in journals. “Non-linear” access of information is increasing exponentially.

Even as technology made publishing easier and more widespread, it was still producers delivering content to consumers. But, with the advent of Web 2.0 tools, consumers now have tools that in many cases are more nimble and accessible than the communication tools that producers are using.

Web 1.0 was a destination. Documents simply moved to a new home, and “going online” was a process separate from anything else you did. However, as broadband access increases, the web becomes more pervasive and less a destination. The web becomes a platform that brings people, not documents, online to share information, consume information, and use it like any other tool.

Heterarchy: a system of organization replete with overlap, multiplicity, mixed ascendancy and/or divergent but coexistent patterns of relation

Apomediation: mediation by agents not interposed between users and resources, who stand by to guide a consumer to high-quality information without playing a role in the acquisition of the resources (e.g., Amazon product reviewers)

NEJM uses terms by users to add related searches to article search results. They also bump popular articles from searches up in the results as more people click on them. These tools improved their search results and reputation, all by using the people power of experts. In addition, they created a series of “results in” publications that highlight the popular articles.
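A click-weighted re-ranking along the lines described could be sketched roughly like this (all names are hypothetical — the talk didn't cover NEJM's actual implementation):

```python
from collections import defaultdict

class ClickBoostRanker:
    """Re-rank search results by blending base relevance with click popularity."""

    def __init__(self, boost_weight=0.3):
        self.clicks = defaultdict(int)  # article_id -> click count
        self.boost_weight = boost_weight

    def record_click(self, article_id):
        self.clicks[article_id] += 1

    def rerank(self, results):
        """results: list of (article_id, base_score); returns ids, best first."""
        max_clicks = max(self.clicks.values(), default=0) or 1

        def score(item):
            article_id, base = item
            boost = self.clicks[article_id] / max_clicks  # normalized popularity
            return (1 - self.boost_weight) * base + self.boost_weight * boost

        return [aid for aid, _ in sorted(results, key=score, reverse=True)]
```

With enough clicks, a heavily used article can overtake one with a slightly better base relevance score, which is the "bumping" behavior the notes describe.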

It took a little over a year to get to a million Twitter authors, and about 600 years to get to the same number of book authors. And these are literate, savvy users. Twitter & Facebook account for 1.45 million views of the New York Times (and this is a number from several years ago) — imagine what it can do for your scholarly publication. Oh, and the NYT has a social media editor now.

Blogs are growing four times as fast as traditional media. The top ten media sites include blogs, and the traditional media sources use blogs now as well. Blogs can be diverse or narrow, their coverage varies (and does not have to be immediate), they are verifiably accurate, and they are interactive. Blogs level the media playing field, in part by watching the watchdogs. Blogs tend to investigate more than the mainstream media.

It took AOL five times as long to get to twenty million users as it did for the iPhone. Consumers are increasingly adding “toys” to their collection of ways to get to digital/online content. When the NEJM went on the Kindle, more than just physicians subscribed. Getting content into easy-to-access places and on the “toys” that consumers use will increase your reach.

Print digests are struggling because they teeter on the brink of the daily divide. Why wait for the news to get stale, collected, and delivered a week/month/quarter/year later? Our audiences are transforming: they don’t think of information as analogue, delayed, isolated, tethered, etc. It has to evolve into something digital, immediate, integrated, and mobile.

From the Q&A session:

The article container will be here for a long time. Academics use the HTML version of the article, but the PDF (static) version is their security blanket and archival copy.

Where does the library fit in as a source of funds when the focus is more on end users? Publishers are looking for other sources of income as library budgets decrease (e.g., Kindle editions, product differentiation). They are looking to other purchasing centers at institutions.

How do publishers establish the cost of these 2.0 products? It’s essentially what the market will bear, with some adjustments. Sustainability is a grim perspective. Flourishing is much more positive, and not necessarily any less realistic. Equity is not a concept that comes into pricing.

The people who bring the tremendous flow of information under control (i.e. offer filters) will be successful. One of our tasks is to make filters to help our users manage the flow of information.

ER&L 2010: Where are we headed? Tools & Technologies for the future

Speakers: Ross Singer & Andrew Nagy

Software as a service saves the institution time and money because the infrastructure is hosted and maintained by someone else. Computing has gone from centralized, mainframe processing to an even mix of personal computers on a networked enterprise to once again a very centralized environment with cloud applications and thin clients.

Library resource discovery is, to a certain extent, already in the cloud. We use online databases and open web search, WorldCat, and next gen catalog interfaces. The next gen catalog places the focus on the institution’s resources, but it’s not the complete solution. (People see a search box and they want to run queries on it – doesn’t matter where it is or what it is.) The next gen catalog is only providing access to local resources, and while it looks like a modern interface, the back end is still old-school library indexing that doesn’t work well with keyword searching.

Web-scale discovery is a one-stop shop that provides increased access, enhances research, and provides an increased ROI for the library. Our users don’t use Google because it’s Google; they use it because it’s simple, easy, and fast.

How do we make our data relevant when administration doesn’t think what we do is as important anymore? Linked data might be one solution. Unfortunately, we don’t do that very well. We are really good at identifying things but bad at linking them.

If every component of a record is given identifiers, it’s possible to generate all sorts of combinations and displays and search results via linking the identifiers together. RDF provides a framework for this.

Also, once we start using common identifiers, then we can pull in data from other sources to increase the richness of our metadata. Mashups FTW!
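The identifier-linking idea can be shown with a toy example. The prefixes and identifiers below (`dc:`, `viaf:`, `foaf:`) are illustrative shorthand, not real resolved vocabularies or VIAF data:

```python
# Triples as (subject, predicate, object). Because both data sources use
# the same creator identifier, merging them enriches the book record.
triples = {
    ("ex:book/42", "dc:title", "Moby Dick"),
    ("ex:book/42", "dc:creator", "viaf:50566653"),
}

# A second data source describing the same creator identifier:
external = {
    ("viaf:50566653", "foaf:name", "Herman Melville"),
}

merged = triples | external  # shared identifiers make the merge meaningful

def describe(subject, graph):
    """All predicate/object pairs for one subject."""
    return {p: o for s, p, o in graph if s == subject}

# Follow the creator link from the book record into the external data:
creator = describe("ex:book/42", merged)["dc:creator"]
print(describe(creator, merged))  # {'foaf:name': 'Herman Melville'}
```

In practice this is what RDF and common identifiers buy you: the "mashup" is just a union of triples that happen to share URIs.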

ER&L 2010: Patron-driven Selection of eBooks – three perspectives on an emerging model of acquisitions

Speaker: Lee Hisle

They have the standard patron-driven acquisitions (PDA) model through Coutts’ MyiLibrary service. What’s slightly different is that they are also working on a pilot program with a three-college consortium with a shared collection of PDA titles. After the second use of a book, they are charged 1.2-1.6% of the list price of the book for a 4-SU, perpetual access license.

Issues with ebooks: fair use is replaced by license terms and software restrictions; ownership has been replaced by licenses, so if Coutts/MyiLibrary were to go away, they would have to renegotiate with the publishers; there is a need for an archiving solution for ebooks much like Portico for ejournals; ILL is neither feasible nor permissible; there is potential for exclusive distribution deals; and there are device limitations (computer screens vs. ebook readers).

Speaker: Ellen Safley

Her library has been using EBL on Demand. They are only buying 2008-current content within specific subjects/LC classes (history and technology). They purchase on the second view. Because they only purchase a small subset of what they could, the number of records they load fluctuates, but isn’t overwhelming.

After a book has been browsed for more than 10 minutes, the pay-per-view purchase is initiated. After eight months, they found that more people used books at the pay-per-view level than at the purchase level (i.e. more than once).
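The trigger logic described here (a 10-minute browse window, a charge on a long session, purchase on the second chargeable use) might be modeled like this — a simplification, not EBL's actual system:

```python
def classify_uses(sessions, browse_limit=600):
    """sessions: dwell times in seconds for one title.

    A session longer than the browse limit (10 minutes here) triggers a
    pay-per-view charge; a second chargeable use converts to a purchase,
    per the model described in the notes.
    """
    charges = [s for s in sessions if s > browse_limit]
    if len(charges) >= 2:
        return "purchase"
    if len(charges) == 1:
        return "pay-per-view"
    return "browse"
```

The finding in the notes — more pay-per-view uses than purchases — means most titles accumulated exactly one chargeable session.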

They’re also a pilot site for an Ebrary program. They had to deposit $25,000 for the six-month pilot, then select from over 100,000 titles. They found that the sciences used the books heavily, but there were indications that humanities titles were popular as well.

The difficulty with this program is an overlap between selector print order requests and PDA purchases. It’s caused a slight modification of their acquisitions flow.

Speaker: Nancy Gibbs

Her library had a pilot with Ebrary. They were cautious about jumping into this, but because it was coming from their approval plan vendor, it was easier to match it up. They culled the title list of 50,000 titles down to 21,408, loaded the records, and enabled them in SFX. But, they did not advertise it at all. They gave no indication of the purchase of a book on the user end.

Within 14 days of starting the project, they had spent all $25,000 of the pilot money. Of the 347 titles purchased, 179 were also owned in print, but those print copies had only 420 circulations combined. The most popular ebook purchased is also owned in print and has had only two circulations. The purchases leaned toward STM, political science, and business/economics, with some humanities.

The library tech services were a bit overwhelmed by the number of records in the load. The MARC records lacked OCLC numbers, which they would need in the future. They did not remove the records after the trial ended because of other more pressing needs, but that caused frustration with the users and they do not recommend it.

They were surprised by how quickly they went through the money. If they had advertised, she thinks they may have spent the money even faster. The biggest challenge they had was culling through the list, so in the future running the list through the approval plan might save some time. They need better match routines for the title loads, because they ended up buying five books they already have in electronic format from other vendors.
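The kind of match routine they were missing could start with something as simple as normalizing incoming ISBNs to ISBN-13 before comparing against holdings (an illustrative sketch — their actual load process isn't described):

```python
def isbn13(isbn):
    """Normalize an ISBN-10 or ISBN-13 string to a bare ISBN-13."""
    digits = isbn.replace("-", "").replace(" ", "").upper()
    if len(digits) == 10:
        core = "978" + digits[:9]  # drop the old ISBN-10 check digit
        total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(core))
        return core + str((10 - total % 10) % 10)  # new ISBN-13 check digit
    return digits

def already_held(incoming, holdings):
    """Return incoming ISBNs that match something already owned."""
    held = {isbn13(i) for i in holdings}
    return [i for i in incoming if isbn13(i) in held]
```

Real match routines also compare OCLC numbers and title/author pairs, since the same work often carries different ISBNs across vendors and formats — which is exactly how the five duplicate purchases slipped through.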

Ebrary needs to refine circulation models to narrow down subject areas. YBP needs to refine some BISAC subjects, as well. Publishers need to communicate better about when books will be made available in electronic format as well as print. The library needs to revise their funding models to handle this sort of purchasing process.

They added the records to their holdings on OCLC so that they would appear in Google Scholar search results. So, even though they couldn’t loan the books through ILL, there is value in adding the holdings.

They attempted to make sure that the books in the list were not textbooks, but there could have been some, and professors might have used some of the books as supplementary course readings.

One area of concern is the potential of compromised accounts that may result in ebook pirates blowing through funds very quickly. One of the vendors in the room assured us they have safety valves for that in order to protect the publisher content. This has happened, and the vendor reset the download number to remove the fraudulent downloads from the library’s account.

CIL 2009: What Have We Learned Lately About Academic Library Users

Speakers: Daniel Wendling & Neal K. Kaske, University of Maryland

How should we describe information-seeking behavior?

A little over a third of the students interviewed reported that they use Google in their last course-related search, and it’s about the same across all classes and academic areas. A little over half of the same students surveyed used ResearchPort (federated search – MetaLib), with a similar spread between classes and academic areas, although social sciences clearly use it more than the other areas. (survey tool: PonderMatic – copy of survey form in the conference book).

Their methodology was a combination of focus-group interviews and individual interviews, conducted away from the library to avoid bias. They used a coding sheet to standardize the responses for input into a database.

This survey gathering & analysis tool is pretty cool – I’m beginning to suspect that the presentation is more about it than about the results, which are also rather interesting.

 

Speaker: Ken Varnum

Will students use social bookmarking on a library website?

MTagger is a library-based tagging tool, borrowing concepts from resources like Delicious or social networking sites, and intended to be used to organize academic bookmarks. In the long term, the hope is that this will create research guides in addition to those supported by the librarians, and to improve the findability of the library’s resources.

Behind the scenes, they have preserved the concept of collections, which results in users finding similar items more easily. This is different from the commercial tagging tools that are not library-focused. Most tagging systems are tagger-centric (librarians are the exception). As a result, tag clouds are less informative, since most of the tags are individualized and there isn’t enough overlap to make them more visible.
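The overlap problem can be seen in a quick sketch: a cloud built only from tags applied by more than one person filters out the individualized ones, and in a tagger-centric system almost nothing survives the filter (data invented):

```python
from collections import Counter

def tag_cloud(taggings, min_count=2):
    """taggings: (user, tag) pairs; keep only tags applied by enough people."""
    counts = Counter(tag for _, tag in taggings)
    return {tag: n for tag, n in counts.items() if n >= min_count}

# Personal, idiosyncratic tags never overlap; shared vocabulary does.
personal = [("u1", "my-paper"), ("u2", "todo"), ("u3", "readlater")]
shared = [("u1", "chemistry"), ("u2", "chemistry"), ("u3", "chemistry")]
print(tag_cloud(personal + shared))  # {'chemistry': 3}
```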

From usability interviews, they found that personal motivations are stronger than social motivations, and that users wanted tags displayed alongside traditional search results. They don’t know why, but many users perceived tagging to be a librarian thing and not something they could do themselves.

One other thing that stood out in the usability interviews was the issue of privacy. Access is limited to network login, which has its benefits (your tags and you) and its problems (inappropriate terminology, information living in the system beyond your tenure, etc.).

They are redesigning the website to focus on outcomes (personal motivation) rather than on tagging as such.

thing 12: Rollyo

Blogcritics used Rollyo for a while a couple of years ago, and I was never happy with the search results or the way they were displayed. It could have been some setting that BC used, but I assumed it had more to do with the way Rollyo works.

When I was at Blogworld last fall, I chatted with the folks at the Lijit booth for a while and made a note to take a look at their product when I got home. Apparently so did Phillip Winn, the Blogcritics Chief Geek, because not long after, Lijit replaced Rollyo as the site’s search tool. It’s worked out well.

Rollyo’s web search is powered by Yahoo Search, so I can’t see why I would want to use it as a general search engine. I think that Rollyo’s best value is as a search engine that looks at a specific collection of websites. This might be handy in a library if you have, for example, a number of different digital collections being served up from different domains or subdomains. With a Rollyo (or similar) service, you could build a single search interface for them. That is, if you don’t mind sending your users to a site that mixes in six paid links for each page of ten results, in addition to side-bar advertisements.
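The "single search interface over a specific collection of websites" idea can be approximated with plain `site:` operators — a sketch of the concept, not how Rollyo itself works:

```python
def scoped_query(terms, sites):
    """Combine search terms with a set of domains using site: operators,
    in the style of a Rollyo "searchroll" (illustrative only)."""
    scope = " OR ".join(f"site:{s}" for s in sites)
    return f"{terms} ({scope})"

# e.g., one search box over several (hypothetical) digital collections:
print(scoped_query("civil war letters", ["archive.example.edu", "maps.example.org"]))
# civil war letters (site:archive.example.edu OR site:maps.example.org)
```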

CiL 2008: What’s New With Federated Search

Speakers: Frank Cervone & Jeff Wisniewski

Cervone gave a brief overview of federated searching, with Wisniewski giving a demonstration of how it works in the real world (aka the University of Pittsburgh library) using WebFeat. The UofP library has a basic search front and center on their home page, and then a more advanced searching option under Find Articles. They don’t have a Database A-Z list because users either don’t know what database means in this context or can’t pick from the hundreds available.

Cervone demonstrated the trends in using metasearch, which seems to go up and down, but overall is going up. The cyclical aspect due to quarter terms was fascinating to see — more dramatic than what one might find with semester terms. Searches go up toward mid-terms and finals, then drop back down afterwards.

According to a College & Research Libraries article from November 2007, federated search results were not much different from native database searches. It also found that faculty rated the results of federated searching much higher than librarians did, which raises the question: who are we trying to satisfy — faculty/students or librarians?

Part of why librarians are still unconvinced is that vendors are shooting themselves in the foot in the way they try to sell their products. Yes, federated search tools cannot search all possible databases, but our users are only concerned that they search the relevant databases they need. De-duplication is virtually impossible and depends on the quality of the source data. There are other ways that vendors promote their products that can be refuted, but the presenters didn’t spend much time on them.
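A rough sketch of why de-duplication depends so heavily on source data quality: normalizing titles catches cosmetic variation, but any difference the normalizer doesn't anticipate slips through (records invented for illustration):

```python
import re
import unicodedata

def normalize_title(title):
    """Collapse case, accents, and punctuation so near-identical titles match."""
    title = unicodedata.normalize("NFKD", title)
    title = "".join(c for c in title if not unicodedata.combining(c))
    title = re.sub(r"[^a-z0-9 ]", " ", title.lower())
    return " ".join(title.split())

def dedupe(records):
    """Keep the first record seen for each normalized title."""
    seen = {}
    for rec in records:
        seen.setdefault(normalize_title(rec["title"]), rec)
    return list(seen.values())

hits = [
    {"title": "Federated Search: A Review", "source": "DB One"},
    {"title": "Federated search -- a review", "source": "DB Two"},
]
print(len(dedupe(hits)))  # 1
```

If one database abbreviates the title or appends a subtitle, the keys no longer collide and the "duplicate" survives — which is why vendors can only ever de-dupe "as much as they can."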

The relationships between products and vendors are incestuous, and the options for federated searching are decreasing. There are a few open source options, though: LibraryFind, dbWiz, Masterkey, and Open Translators (which provides connectors to databases, but you have to create the interface). Part of why open source options are being developed is that commercial vendors aren’t responding quickly to library needs.

LibraryFind has a two-click find workflow, making it quicker to get to the full-text. It also can index local collections, which would be handy for libraries who are going local.

dbWiz is a part of a larger ERM tool. It has an older, clunkier interface than LibraryFind. It doesn’t merge the results.

Masterkey can search 100 databases at a time, processing and returning hits at the rate of 2000 records per second, de-duped (as much as it can) and ranked by relevance. It can also do faceted browsing by library-defined elements. The interface can be as simple or complicated as you want it to be.

Federated searching as a stand-alone product is becoming passé as new products for interfacing with the OPAC are being developed, which can incorporate other library databases. VuFind, WorldCat Local, Encore, Primo, and Aquabrowser are just a few of the tools available. NextGen library interfaces aim to bring all library content together. However, they don’t integrate article-level information with the items in your catalog and local collections very well.

Side note: Microsoft Enterprise Search is doing a bit more than Google in integrating a wide range of information sources.

Trends: The choice of vendors is rapidly shrinking. There is some progress in standards implementation. Visual search (like Grokker) is increasingly being used. There is some movement toward more holistic content discovery. Commercial products are becoming more affordable, making them available to institutions with budgets of all sizes.

Federated Search Blog for vendor-neutral info, if you’re interested.
