NASIG 2013: Collaboration in a Time of Change

CC BY 2.0 2013-06-10
“soccer practice” by woodleywonderworks

Speaker: Daryl Yang

Why collaborate?

Despite how popular Apple products are today, the company almost went bankrupt in the 1990s. Experts believe that, despite its innovation, its lack of collaboration led to this near-downfall. iTunes, iPod, iPad — these all required working with many outside developers, and that collaboration is a big part of why Apple came back.

Microsoft started off as very open to collaboration and innovation from outside of the company, but that is not the case now. In order to get back into the groove, they have partnered with Nokia to enter the mobile phone market.

Collaboration can create commercial success, innovation, synergies, and efficiencies.

What change?

The amount of information generated now is vastly more than was ever collected in the past. It is beyond our imagination.

How has library work changed? We still manage collections and access to information, but the way we do so has evolved with the ways information is delivered. We have had to increase our negotiation skills as every transaction is uniquely based on our customer profile. We have also needed to reorganize our structures and workflows to meet changing needs of our institutions and the information environment.

Deloitte identified ten key challenges faced by higher education: funding (public, endowment, and tuition), rivalry (competing globally for the best students), setting priorities (appropriate use of resources), technology (infrastructure & training), infrastructure (classroom design, offices), links to outcomes (graduation to employment), attracting talent (and retaining them), sustainability (practicing what we preach), widening access (MOOC, open access), and regulation (under increasing pressure to show how public funding is being used, but also maintaining student data privacy).

Libraries say they have too much stuff on shelves, more of it is available electronically, and it keeps coming. Do we really need to keep both print and digital when there is a growing pressure on space for users?

The British Library Document Supply Centre plays an essential role in delivering physical content on demand, but the demand is falling as more information is available online. And, their IT infrastructure needs modernization.

These concerns sparked conversations that created UK Research Reserve (UKRR) and an evaluation of print journal usage. Users prefer print for in-depth reading, and the humanities and social sciences still have a high usage of print materials compared to the sciences. At least, that was the case 5-6 years ago when UKRR was created.

Ithaka S+R, JISC, and RLUK sent out a survey to faculty about print journal use, and they found that this is still fairly true. They also discovered that even those who are comfortable with electronic journal collections would not be happy to see print collections discarded. There was clearly a demand that some library, if not their own, maintain a collection of hard copies of journals. Libraries don’t have to keep them, but SOMEONE has to.

It is hard to predict research needs in the future, so it is important to preserve content for that future demand, and make sure that you still own it.

UKRR’s initial objectives were to de-duplicate low-use journals, allowing their members to release space and realize savings/efficiencies, and to preserve research material and provide access for researchers. They also want to achieve cultural change — librarians and academics don’t like to throw things away.

So far, they have examined 60,700 holdings, and of those, only 16% have been retained. They intend to keep at least 3 copies among the membership, so there was a significant amount of overlap in holdings across all of the schools.

being a student is time-consuming


“What have I done!?” by Miguel Angel

I signed up for a Coursera class on statistics for social science researchers because I wanted to learn how to better make use of library data and also how to use the open source program for statistical computing, R. The course information indicated I’d need to plan for 4-6 hours per week, which seemed doable, until I got into it.

The course consists of several lecture videos, most of which include a short “did you get the main concepts” multiple-choice quiz at the end. Each week there is an assignment and graded quiz, and of course a midterm and final.

It didn’t help that I started off behind, getting through only a lecture or two before the end of the first week, and missing the deadline for having the first assignment and quiz graded. I scrambled to catch up the second week, but once again couldn’t make it through the lectures in time.

That’s when I realized that it was going to take much longer than projected to keep up with this course. A 20-30 min lecture would take me 45-60 min to get through because I was constantly having to pause and write notes before the lecturer went on to the next concept. And since I was using Microsoft OneNote to keep and organize my notes, anything that involved a formula took longer to copy down.

By the end of the third week, I was still a few lectures away from finishing the second week, and I could see that it would take more time than I had to keep going, but I decided to go another week and do what I could.

That was this week, and I haven’t had time to make any more progress than where I was last week. With no prospect of catching up before the midterm deadline, I decided to withdraw from the course.

This makes me disappointed both in myself and in the structure of the course. I hate quitting, and I really want to learn the stuff. But as I fell further and further behind, it became easier to put it off and focus on other overdue items on my task list, thus compounding the problem.

The instructor for the course was easy to follow, and I liked his lecture style, but when it came time to do the graded quiz and assignment, I realized I clearly had not understood everything, or he expected me to have more of a background in the field than a novice would. It also seemed like the content was geared toward a 12-week course, and with this one being only 8 weeks, rather than reducing the content accordingly, he was cramming it all into those 8 weeks.

Having deadlines was a great motivation to keep up with the course, which I haven’t had when I’ve tried to learn on my own. It was the volume of content to absorb between those deadlines that tripped me up. I need to find a happy medium between self-paced instruction and structured instruction.

IL 2010: Dashboards, Data, and Decisions

[I took notes on paper because my netbook power cord was in my checked bag that SFO briefly lost on the way here. This is an edited transfer to electronic.]

presenter: Joseph Baisano

Dashboards pull information together and make it visible in one place. They need to be simple, built on existing data, but expandable.

Baisano is at SUNY Stony Brook, and they opted to go with Microsoft SharePoint 2010 to create their dashboards. The content can be made visible and editable through user permissions. Right now, their data connections include their catalog, proxy server, JCR, ERMS, and web statistics, and they are looking into using the API to pull license information from their ERMS.

In the future, they hope to use APIs from sources that provide them (Google Analytics, their ERMS, etc.) to create mashups and more on-the-fly graphs. They’re also looking at an open source alternative to SharePoint called Pentaho, which already has many of the plugins they want and comes in free and paid support flavors.
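To make the mashup idea concrete, here is a rough sketch of pulling several sources into one structure that a dashboard page could render. It is only an illustration: the endpoint URLs and JSON shapes are hypothetical stand-ins, not anything Baisano described.

    import json
    from urllib.request import urlopen

    # Hypothetical endpoints standing in for real sources (Google
    # Analytics, an ERMS, web server logs, etc.), each with its own API.
    SOURCES = {
        "web_stats": "https://stats.example.edu/api/monthly",
        "erms_licenses": "https://erms.example.edu/api/licenses",
    }

    def fetch(url):
        # Fetch one JSON feed; return {} on failure so the dashboard
        # degrades gracefully instead of breaking outright.
        try:
            with urlopen(url, timeout=10) as resp:
                return json.load(resp)
        except (OSError, ValueError):
            return {}

    # Pull every source into a single structure one dashboard page can render.
    dashboard_data = {name: fetch(url) for name, url in SOURCES.items()}
    print(json.dumps(dashboard_data, indent=2))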

presenter: Cindi Trainor

[Trainor had significant technical difficulties with her Mac and the projector, which resulted in only 10 minutes of a slightly muddled presentation, but she had some great ideas for visualizations to share, so here’s as much as I captured of them.]

Graphs often tell us what we already know, so look at it from a different angle to learn something new. Gapminder plots data in three dimensions – comparing two components of each set over time using bubble graphs. Excel can do bubble graphs as well, but with some limitations.

In her example, Trainor showed reference transactions along the x-axis, the gate count along the y-axis, and the size of the circle represented the number of circulation transactions. Each bubble represented a campus library and each graph was for the year’s totals. By doing this, she was able to suss out some interesting trends and quirks to investigate that were hidden in the traditional line graphs.
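For anyone who wants to try this without Excel, a minimal matplotlib sketch of that kind of bubble graph might look like the following. The numbers are invented placeholders, not Trainor's data.

    import matplotlib.pyplot as plt

    # Invented placeholder data: one row per campus library, one chart per year.
    libraries = ["Main", "Science", "Music", "Law"]
    reference_transactions = [5200, 1800, 600, 1400]   # x-axis
    gate_count = [410000, 150000, 40000, 90000]        # y-axis
    circulation = [120000, 30000, 8000, 22000]         # bubble size

    # Scale circulation down so the bubbles fit the plot area.
    sizes = [c / 100 for c in circulation]

    plt.scatter(reference_transactions, gate_count, s=sizes, alpha=0.5)
    for x, y, name in zip(reference_transactions, gate_count, libraries):
        plt.annotate(name, (x, y))
    plt.xlabel("Reference transactions")
    plt.ylabel("Gate count")
    plt.title("Campus libraries, one year of totals (placeholder data)")
    plt.show()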

ER&L 2010: ERMS Success – Harvard’s experience implementing and using an ERM system

Speaker: Abigail Bordeaux

Harvard has over 70 libraries, and they are very decentralized. This implementation is for the central office that provides library systems services for all of the libraries. Ex Libris is their primary vendor for library systems, including the ERMS, Verde. They try to go with vended products and only develop in-house solutions if nothing else is available.

Success was defined as migrating data from the old system to the new one, improving workflows and efficiency, providing more transparency for users, and working around any problems they encountered. They did not expect to have an ideal system – there were bugs with both the system and their local data. There is no magic bullet. They identified the high-priority areas and worked towards their goals.

Phase I involved a lot of project planning with clearly defined goals/tasks and assessment of the results. The team included the primary users of the system, the project manager (Bordeaux), and a programmer. A key part of planning is scoping the project (Bordeaux provided a handout of the questions they considered in this process). They had a very detailed project plan using Microsoft Project, and at the very least, listing out the details made the interdependencies clearer.

The next stage of the project involved data review and clean-up. Bordeaux thinks that data clean-up is essential for any ERM implementation or migration. They also had to think about the ways the old ERM was used and if that is desirable for the new system.

The local system they created was very close to the DLF recommended fields, but even so, they still had several failed attempts to map the fields between the two systems. As a result, they had a cycle of extracting a small set of records, loading them into Verde, reviewing the data, and then deleting the test records from Verde. They did this several times with small data sets (10 or so records), and when they were comfortable with that, they increased the number of records.
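That cycle (load a small test batch, review it, delete it, and scale up only when it looks clean) is easy to picture as a loop. Here is a schematic sketch; the helper functions are hypothetical stand-ins, since Verde has no such Python API.

    # Hypothetical helpers standing in for whatever extract/load/delete
    # mechanism the migration actually used.
    def load_into_verde(batch):
        print(f"loading {len(batch)} test records")

    def review(batch):
        # Compare what landed in Verde against the source records and
        # return a list of field-mapping problems (empty in this sketch).
        return []

    def delete_from_verde(batch):
        print(f"deleting {len(batch)} test records")

    def test_migration(records, batch_sizes=(10, 10, 100, 1000)):
        start = 0
        for size in batch_sizes:
            batch = records[start:start + size]
            load_into_verde(batch)
            problems = review(batch)
            delete_from_verde(batch)   # test records never stay in Verde
            if problems:
                return problems        # fix the field mapping, then rerun
            start += size              # batch looked clean; try a bigger one
        return []

    test_migration([{"id": n} for n in range(2000)])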

They also did a lot of manual data entry. They were able to transfer a lot, but they couldn’t do everything, and some bits of data were not migrated because of the work involved compared to the value of the data. In some cases, though, they did want to keep the data, so they entered it manually. To visualize the mapping process, they created screenshots with notes that showed the field connections.

Prior to this project, they were not using Aleph to manage acquisitions. So, they created order records for the resources they wanted to track. The acquisitions workflow had to be reorganized from the ground up. Oddly enough, by having everything paid out of one system, the individual libraries have much more flexibility in spending and reporting. However, it took some public relations work to get the libraries to see the benefits.

As a result of looking at the data in this project, they got a better idea of gaps and other projects regarding their resources.

Phase two began this past fall, incorporating the data from the libraries that did not participate in phase one. They now have a small group with folks representing those libraries. This group is coming up with best practices for license agreements and for entering data into the fields.

NASIG 2009: Ambient Findability

Libraries, Serials, and the Internet of Things

Presenter: Peter Morville

He’s a librarian who fell in love with the web and moved into working with information architecture. When he first wrote the book Information Architecture, he and his co-author didn’t include a definition of information architecture. With the second edition, they had four definitions: the structural design of shared information environments; the combination of organization, labeling, search, and navigation systems in web sites and intranets; the art and science of shaping information products and experiences to support usability and findability; and an emerging discipline and community of practice focused on bringing principles of design and architecture to the digital landscape.

[at this point, my computer crashed, losing all the lovely notes I had taken so far]

Information systems need to use a combination of categories (paying attention to audience and taxonomy), in-text linking, and alphabetical indexes in order to make information findable. We need to start thinking about the information systems of the future. If we examine the trends through findability, we might have a different perspective. What are all the different ways someone might find ____? How do we describe it to make it more findable?

We are drowning in information. We are suffering from information anxiety. Nobel Laureate Economist Herbert Simon said, “A wealth of information creates a poverty of attention.”

Ambient devices are alternate interfaces that bring information to our attention, and Morville thinks this is a direction our information systems are moving towards. What can we do now that our devices know where we are? Now that we can do it, how do we want to use it, and in what contexts?

What are our high-value objects, and is it important to make them more findable? RFID can be used to track important but easily hidden physical items, such as wheelchairs in a hospital. What else can we do with it besides inventory books?

In a world where for every object there are thousands of similar objects, how do we describe the uniqueness of each one? Who’s going to do it? Not Microsoft, and not Donald Rumsfeld with his unknown unknowns. It’s librarians, of course. Nowadays, metadata is everywhere, turning everyone who creates it into librarians and information architects of sorts.

One of the challenges we have is determining what aspects of our information systems can evolve quickly and what aspects need more time.

Five to ten years from now, we’ll still be starting by entering a keyword or two into a box and hitting “go.” This model is ubiquitous, and it works because it acknowledges the human psychology of just wanting to get started. Search is not just about the software. It’s a complex, adaptive system that requires us to understand our users so that they not only get started, but also know how to take the next step once they get there.

Some examples of best and worst practices for search are on his Flickr. Some user-suggested improvements to search are auto-complete search terms, suggested links or best bets, and, for libraries, federated search that helps users know where to begin. Faceted navigation goes hand in hand with federated search, allowing users to formulate what in the past would have been very sophisticated Boolean queries. It also helps them understand the information space they are in by presenting a visual representation of the subset of information.
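To illustrate the point about facets and Boolean queries, here is a toy sketch of how a set of facet selections might be translated into a query string. The field names and syntax are invented for the example, not any particular vendor's.

    def facets_to_query(selected):
        # Values within one facet are ORed together...
        clauses = []
        for field, values in selected.items():
            ored = " OR ".join(f'{field}:"{v}"' for v in values)
            clauses.append(f"({ored})")
        # ...and the facets themselves are ANDed.
        return " AND ".join(clauses)

    print(facets_to_query({
        "format": ["peer-reviewed article"],
        "language": ["English", "French"],
        "year": ["2007", "2008"],
    }))
    # (format:"peer-reviewed article") AND (language:"English" OR
    # language:"French") AND (year:"2007" OR year:"2008")

Each click adds or removes one clause, which is how a user ends up running a query they would never have typed by hand.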

Morville referenced last year’s presentation by Mike Kuniavsky regarding ubiquitous computing, and he hoped that his presentation has complemented what Kuniavsky had to say.

Libraries are more than just warehouses of materials — they are cathedrals of information that inspire us.

PDF of his slides

thing 18: web applications

It has been a while since I seriously looked at Zoho Writer, preferring Google Docs mainly for the convenience (I always have Gmail open in a tab, so it’s easy to one-click open Google Docs from there). Zoho Writer seems to have more editing and layout tools, or at least, displays them more like MS Word.

I have been dabbling with web applications like document editors and spreadsheet creators mostly because I don’t like the ones that I purchased with my iMac. I probably would like the Mac versions more if I were more familiar with their quirks, but I’m so used to Microsoft Office products that remembering what I can and can’t do in the Mac environment is too frustrating. While Google Docs isn’t quite the same as Microsoft Office, it’s more so than iWork ’08.

Playing with Zoho Writer, however, reminded me that I need to work around my Google bias. Particularly since the Zoho products seem to have the productivity functions that make my life easier.

CiL 2008: What’s New With Federated Search

Speakers: Frank Cervone & Jeff Wisniewski

Cervone gave a brief overview of federated searching, with Wisniewski giving a demonstration of how it works in the real world (aka the University of Pittsburgh library) using WebFeat. The UofP library has a basic search front and center on its home page, and then a more advanced searching option under Find Articles. They don’t have a Database A-Z list because users either don’t know what “database” means in this context or can’t pick from the hundreds available.

Cervone demonstrated the trends in using metasearch, which seem to go up and down, but overall are going up. The cyclical aspect due to quarter terms was fascinating to see — more dramatic than what one might find with semester terms. Searches go up towards midterms and finals, then drop back down afterwards.

According to a College & Research Libraries article from November 2007, federated search results were not much different from native database searches. It also found that faculty rated the results of federated searching much higher than librarians did, which begs the question, “Who are we trying to satisfy — faculty/students or librarians?”

Part of why librarians are still unconvinced is that vendors are shooting themselves in the foot in the way they try to sell their products. Yes, federated search tools cannot search all possible databases, but our users are only concerned that they search the relevant databases they need. De-duplication is virtually impossible and depends on the quality of the source data. Vendors promote their products in other ways that can be refuted, but the presenters didn’t spend much time on them.
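A quick sketch shows why de-duplication is so fragile: matching on a normalized title catches exact duplicates but misses records that differ by a subtitle or a transcription error. The records below are invented examples, not real database output.

    import re

    def normalize(title):
        # Lowercase, replace punctuation with spaces, collapse whitespace.
        t = re.sub(r"[^a-z0-9 ]", " ", title.lower())
        return re.sub(r"\s+", " ", t).strip()

    def dedupe(records):
        seen, unique = set(), []
        for rec in records:
            key = normalize(rec["title"])
            if key not in seen:
                seen.add(key)
                unique.append(rec)
        return unique

    results = [
        {"title": "Federated Searching: A Review", "source": "Database A"},
        {"title": "Federated searching -- a review", "source": "Database B"},
        {"title": "Federated Searching: A Review of the Literature", "source": "Database C"},
    ]
    # A and B collapse into one record, but C sneaks through as "different"
    # even though it may describe the same article.
    print(dedupe(results))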

The relationships between products and vendors are incestuous, and the options for federated searching are decreasing. There are a few open source options, though: LibraryFind, dbWiz, Masterkey, and Open Translators (which provides connectors to databases, but you have to create the interface). Part of why open source options are being developed is that commercial vendors aren’t responding quickly to library needs.

LibraryFind has a two-click find workflow, making it quicker to get to the full text. It can also index local collections, which would be handy for libraries with local collections.

dbWiz is a part of a larger ERM tool. It has an older, clunkier interface than LibraryFind. It doesn’t merge the results.

Masterkey can search 100 databases at a time, processing and returning hits at the rate of 2000 records per second, de-duped (as much as it can) and ranked by relevance. It can also do faceted browsing by library-defined elements. The interface can be as simple or complicated as you want it to be.

Federated searching as a stand-alone product is becoming passé as new products for interfacing with the OPAC are being developed, which can incorporate other library databases. VuFind, WorldCat Local, Encore, Primo, and AquaBrowser are just a few of the tools available. Next-gen library interfaces aim to bring all library content together. However, they don’t integrate article-level information with the items in your catalog and local collections very well.

Side note: Microsoft Enterprise Search is doing a bit more than Google in integrating a wide range of information sources.

Trends: The choice of vendors is rapidly shrinking. There has been some progress in standards implementation. Visual search (like Grokker) is increasingly being used. There is some movement toward more holistic content discovery. Commercial products are becoming more affordable, making them available to institutions with budgets of all sizes.

Federated Search Blog for vendor-neutral info, if you’re interested.

straight talk


Straight Talk from the Heartland: Tough Talk, Common Sense, and Hope from a Former Conservative by Ed Schultz

Ed Schultz is a conservative turned liberal talk radio host. His show is syndicated on over 30 affiliate stations in the United States and Canada. The cover of his book, Straight Talk From the Heartland, proclaims that his is the fastest-growing talk radio show. Not being a talk radio listener, I missed out on the hoopla surrounding this guy. However, having read his book, I’m now interested in hearing what he has to say on a regular basis. In the midst of his at-times bombastic ranting (a trademark of talk radio), Schultz displays a keen intellect and an average-guy understanding of the socio-political and economic realities of life in the 21st-century world. Neocons will hate this book. Moderates will feel enlightened and emboldened. Liberals will enjoy the occasional pot-shots at Neocons and want more.

The book is divided into two parts. The first describes Schultz’s transformation from hard-line conservative to left-of-center talk radio host. He outlines the events that brought him to his current ideology and lays out criticism of leaders on the Left and the Right, but mainly the Right. The second part is Schultz’s vision of what holds us together as a country and how these “pillars” are becoming unstable. At the end of each pillar section, he reiterates his main points, making this a handy crib sheet for those who may not wish to read them in detail.

My copy of this book has a handful of paper scraps sticking out of the top, marking the pages that have a particularly insightful or amusing comment. Here are just a few:

On Homeland Security:
“Minnesota, which also shares a border with Canada, has two nuclear plants within thirty miles of Minneapolis. Do you know who lives in Minneapolis? Prince! I am willing to make some concessions for homeland security. I am not willing to sacrifice the funk.” p.73

On Corporate Malfeasance:
“We need Ashcroft to stop spying on the librarians of America, and start focusing on the criminals again. And I’m not talking about Martha Stewart. We need the Securities and Exchange Commission and the Federal Trade Commission to grow some fangs, and start going after the big guns.” p.131

On Class Warfare:
“…I want to make it clear that I’m not advocating class warfare. Every good job I ever had was working for a rich man. Mr. Gates, I don’t mind the big paycheck, but could you at least give me a computer that works? Anytime any company dominates its industry like Microsoft does, there’s little motivation for the company to improve and give the public cheaper and better products.” p.135

On the “Liberal Media”:
“A journalist has to know enough about a topic to explain it to his audience. If he gets it wrong, people will know. So these people see the inner workings of government. They see the problems, they witness the disasters, and pretty soon their experiences tell them things need to change. A liberal is a compassionate proponent of change. So if journalists are liberals, maybe it’s reasonable to assume it was their life experiences that changed them. That’s how it worked for me.” p.201

On Talk Radio:
“Nowadays, it’s all too easy to get caught up in media frenzy. It feels like a new disaster is breaking every hour or so. I know this firsthand: I live, and work, in the bullet-point culture, too. My show is fast-paced. We paint in broad strokes. I provide solid information and opinions, but there’s no time for nuance — even if the President did nuance. So is talk radio the best place for in-depth news? Nah. It’s news delivered with equal helpings of entertainment, advocacy, and opinion, to help the medicine go down. Not all media is created equal.” p.220

Article first published as Straight Talk From the Heartland by Ed Schultz on Blogcritics.org

digital Christie


I just read in the Powell’s newsletter that 80 of Agatha Christie’s books are being released in digital format this year. They will all be available for download in the Palm Reader, Adobe Reader, and Microsoft Reader formats, and the first five are available now for under $5 each (not bad considering paperback prices these days). Here are the titles currently available:

The Mysterious Affair at Styles (haven’t read it yet)
The Murder of Roger Ackroyd (read it)
The Murder at the Vicarage (read it)
The Body in the Library (read it)
They Came to Baghdad (haven’t read it yet)
