NASIG 2013: Collaboration in a Time of Change

2013-06-10
[Image: “soccer practice” by woodleywonderworks, CC BY 2.0]

Speaker: Daryl Yang

Why collaborate?

Despite how popular Apple products are today, the company almost went bankrupt in the 90s. Experts believe that, despite its innovation, its lack of collaboration led to this near-downfall. iTunes, the iPod, and the iPad all required working with many developers, and that collaboration is a big part of why Apple came back.

Microsoft started off as very open to collaboration and innovation from outside of the company, but that is not the case now. In order to get back into the groove, they have partnered with Nokia to enter the mobile phone market.

Collaboration can create commercial success, innovation, synergies, and efficiencies.

What change?

The amount of information generated now is vastly more than has ever been collected in the past. It is beyond our imagination.

How has library work changed? We still manage collections and access to information, but the way we do so has evolved with the ways information is delivered. We have had to increase our negotiation skills as every transaction is uniquely based on our customer profile. We have also needed to reorganize our structures and workflows to meet changing needs of our institutions and the information environment.

Deloitte identified ten key challenges faced by higher education: funding (public, endowment, and tuition), rivalry (competing globally for the best students), setting priorities (appropriate use of resources), technology (infrastructure & training), infrastructure (classroom design, offices), links to outcomes (graduation to employment), attracting talent (and retaining them), sustainability (practicing what we preach), widening access (MOOC, open access), and regulation (under increasing pressure to show how public funding is being used, but also maintaining student data privacy).

Libraries say they have too much stuff on shelves, more of it is available electronically, and it keeps coming. Do we really need to keep both print and digital when there is a growing pressure on space for users?

The British Library Document Supply Centre plays an essential role in delivering physical content on demand, but the demand is falling as more information is available online. And, their IT infrastructure needs modernization.

These concerns sparked the conversations that created UK Research Reserve (UKRR) and an evaluation of print journal usage. Users prefer print for in-depth reading, and the humanities and social sciences still have a high usage of print materials compared to the sciences. At least, that was the case 5-6 years ago when UKRR was created.

Ithaka S+R, JISC, and RLUK sent out a survey to faculty about print journal use, and they found that this is still fairly true. They also discovered that even those who are comfortable with electronic journal collections would not be happy to see print collections discarded. There was clearly a demand that some library, if not their own, maintain a collection of hard copies of journals. Libraries don’t have to keep them, but SOMEONE has to.

It is hard to predict research needs in the future, so it is important to preserve content for that future demand, and make sure that you still own it.

UKRR’s initial objectives were to de-duplicate low-use journals and allow their members to release space and realize savings/efficiency, and to preserve research material and provide access for researchers. They also want to achieve cultural change — librarians/academics don’t like to throw away things.

So far, they have examined 60,700 holdings, and of those, only 16% have been retained. They intend to keep at least 3 copies among the membership, so there was a significant amount of overlap in holdings across all of the schools.
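
To make that retention rule concrete, here is a minimal sketch of the kind of logic it implies, assuming holdings are compared title by title; the data structure and library names are invented for illustration.

```python
# Hypothetical sketch of UKRR-style de-duplication: keep at least
# three copies of each low-use journal across the membership and
# flag everything beyond that as a candidate for release.

MIN_COPIES = 3

def plan_retention(holdings):
    """holdings: dict mapping journal title -> list of member libraries holding it."""
    retain, release = {}, {}
    for title, members in holdings.items():
        retain[title] = members[:MIN_COPIES]    # first three offered copies are retained
        release[title] = members[MIN_COPIES:]   # the rest can be released to free shelf space
    return retain, release

if __name__ == "__main__":
    sample = {
        "Journal of Examples": ["Library A", "Library B", "Library C", "Library D"],
        "Annals of Samples": ["Library B", "Library C"],  # fewer than 3 copies: keep them all
    }
    retain, release = plan_retention(sample)
    print(retain)
    print(release)
```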

being a student is time-consuming

[Image: “What have I done!?” by Miguel Angel]

I signed up for a Coursera class on statistics for social science researchers because I wanted to learn how to better make use of library data and also how to use the open source program for statistical computing, R. The course information indicated I’d need to plan for 4-6 hours per week, which seemed doable, until I got into it.

The course consists of several lecture videos, most of which include a short “did you get the main concepts” multiple-choice quiz at the end. Each week there is an assignment and graded quiz, and of course a midterm and final.

It didn’t help that I started off behind, getting through only a lecture or two before the end of the first week, and missing the deadline for having the first assignment and quiz graded. I scrambled to catch up the second week, but once again couldn’t make it through the lectures in time.

That’s when I realized that it was going to take much longer than projected to keep up with this course. A 20-30 min lecture would take me 45-60 min to get through because I was constantly having to pause and write notes before the lecturer went on to the next concept. And since I was using Microsoft OneNote to keep and organize my notes, anything that involved a formula took longer to copy down.

By the end of the third week, I was still a few lectures away from finishing the second week, and I could see that it would take more time than I had to keep going, but I decided to go another week and do what I could.

That was this week, and I haven’t had time to make any more progress than where I was last week. With no prospect of catching up before the midterm deadline, I decided to withdraw from the course.

This makes me disappointed both in myself and in the structure of the course. I hate quitting, and I really want to learn the stuff. But as I fell further and further behind, it became easier to put it off and focus on other overdue items on my task list, which only compounded the problem.

The instructor for the course was easy to follow, and I like his lecture style, but when it came time to do the graded quiz and assignment, I realized I clearly had not understood everything, or that he expected me to have more background in the field than a novice would. It also seemed like the content was geared towards a 12-week course, and with this one being only 8 weeks, rather than reducing the content accordingly, he was cramming it all into those 8 weeks.

Having deadlines was a great motivation to keep up with the course, which I haven’t had when I’ve tried to learn on my own. It was the volume of content to absorb between those deadlines that tripped me up. I need to find a happy medium between self-paced instruction and structured instruction.

IL 2010: Dashboards, Data, and Decisions

[I took notes on paper because my netbook power cord was in my checked bag that SFO briefly lost on the way here. This is an edited transfer to electronic.]

presenter: Joseph Baisano

Dashboards pull information together and make it visible in one place. They need to be simple, built on existing data, but expandable.

Baisano is at SUNY Stony Brook, and they opted to go with Microsoft SharePoint 2010 to create their dashboards. The content can be made visible and editable through user permissions. Right now, their data connections include their catalog, proxy server, JCR, ERMS, and web statistics, and they are looking into using the API to pull license information from their ERMS.

In the future, they hope to use APIs from sources that provide them (Google Analytics, their ERMS, etc.) to create mashups and more on-the-fly graphs. They’re also looking at an open source alternative to SharePoint called Pentaho, which already has many of the plugins they want and comes in free and paid support flavors.
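
As a rough illustration of the pattern Baisano described, here is a sketch of pulling figures from several existing sources into one snapshot that a dashboard page could render; the fetch functions, field names, and numbers are placeholders, not the actual SharePoint, Pentaho, or ERMS APIs.

```python
# Hypothetical dashboard aggregation: combine per-source metrics into
# one simple, expandable data structure. Each fetch_* function stands
# in for a real data connection (catalog, proxy server, web stats).

from datetime import date

def fetch_catalog_stats():
    return {"records_added": 1250}

def fetch_proxy_stats():
    return {"remote_sessions": 8430}

def fetch_web_stats():
    return {"site_visits": 21904}

def build_dashboard_snapshot():
    """Merge all sources into a single dated snapshot for display."""
    snapshot = {"date": date.today().isoformat()}
    for source in (fetch_catalog_stats, fetch_proxy_stats, fetch_web_stats):
        snapshot.update(source())
    return snapshot

if __name__ == "__main__":
    print(build_dashboard_snapshot())
```

Adding a new data source is then just a matter of writing another small fetch function, which is what keeps the dashboard simple but expandable.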

presenter: Cindi Trainor

[Trainor had significant technical difficulties with her Mac and the projector, which resulted in only 10 minutes of a slightly muddled presentation, but she had some great ideas for visualizations to share, so here’s as much as I captured of them.]

Graphs often tell us what we already know, so look at it from a different angle to learn something new. Gapminder plots data in three dimensions – comparing two components of each set over time using bubble graphs. Excel can do bubble graphs as well, but with some limitations.

In her example, Trainor showed reference transactions along the x-axis, the gate count along the y-axis, and the size of the circle represented the number of circulation transactions. Each bubble represented a campus library and each graph was for the year’s totals. By doing this, she was able to suss out some interesting trends and quirks to investigate that were hidden in the traditional line graphs.
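
Here is a small sketch of that kind of bubble graph using matplotlib; the library names and numbers are invented, and the scaling factor for bubble size is an arbitrary choice.

```python
# Illustrative bubble graph along the lines Trainor described:
# reference transactions on x, gate count on y, bubble size scaled
# to circulation, one bubble per campus library for one year.

import matplotlib.pyplot as plt

libraries = ["Main", "Science", "Law", "Music"]
reference_transactions = [5200, 1800, 950, 400]
gate_count = [410000, 150000, 88000, 30000]
circulation = [120000, 45000, 20000, 9000]

# Scale circulation so bubbles are visible but not overwhelming.
sizes = [c / 200 for c in circulation]

plt.scatter(reference_transactions, gate_count, s=sizes, alpha=0.5)
for name, x, y in zip(libraries, reference_transactions, gate_count):
    plt.annotate(name, (x, y))

plt.xlabel("Reference transactions")
plt.ylabel("Gate count")
plt.title("One year of activity, one bubble per campus library")
plt.show()
```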

ER&L 2010: ERMS Success – Harvard’s experience implementing and using an ERM system

Speaker: Abigail Bordeaux

Harvard has over 70 libraries and they are very decentralized. This implementation is for the central office that provides the library systems services for all of the libraries. Ex Libris is their primary vendor for library systems, including the ERMS, Verde. They try to go with vended products and only develop in-house solutions if nothing else is available.

Success was defined as migrating data from the old system to the new, improving workflows and efficiency, increasing transparency for users, and working around any problems they encountered. They did not expect to have an ideal system – there were bugs with both the system and their local data. There is no magic bullet. They identified the high-priority areas and worked towards their goals.

Phase I involved a lot of project planning with clearly defined goals/tasks and assessment of the results. The team included the primary users of the system, the project manager (Bordeaux), and a programmer. A key part of planning includes scoping the project (Bordeaux provided a handout of the questions they considered in this process). They had a very detailed project plan using Microsoft Project, and at the very least, the listing out of the details made the interdependencies more clear.

The next stage of the project involved data review and clean-up. Bordeaux thinks that data clean-up is essential for any ERM implementation or migration. They also had to think about the ways the old ERM was used and if that is desirable for the new system.

The local system they created was very close to the DLF recommended fields, but even so, they still had several failed attempts to map the fields between the two systems. As a result, they had a cycle of extracting a small set of records, loading them into Verde, reviewing the data, and then deleting the test records out of Verde. They did this several times with small data sets (10 or so), and when they were comfortable with that, they increased the number of records.

They also did a lot of manual data entry. They were able to transfer a lot, but they couldn’t do everything. And some bits of data were not migrated because of the work involved compared to the value of it. In some cases, though, they did want to keep the data, so they entered it manually. To visualize the mapping process, they created screenshots with notes that showed the field connections.
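
A minimal sketch of that extract-load-review-delete cycle might look like the following; the field names and the load/delete hooks are hypothetical stand-ins, not Verde’s actual interfaces.

```python
# Hypothetical migration test cycle: map a small batch of local ERM
# records to the target system's field names, load them, hand them
# back for staff review, then delete the test records so the next
# (larger) round starts clean.

FIELD_MAP = {
    "local_title": "verde_title",
    "local_license_start": "verde_license_start",
    "local_access_note": "verde_public_note",
}

def map_record(local_record):
    """Translate one local record into the target system's field names."""
    return {FIELD_MAP[k]: v for k, v in local_record.items() if k in FIELD_MAP}

def test_cycle(records, load, delete, batch_size=10):
    """Run one small-batch test load; load and delete are callables
    supplied by whatever loader the target system provides."""
    batch = [map_record(r) for r in records[:batch_size]]
    loaded_ids = load(batch)   # load the mapped records into the target system
    delete(loaded_ids)         # remove the test records after review
    return batch               # return the mapped batch for staff to inspect
```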

Prior to this project, they were not using Aleph to manage acquisitions. So, they created order records for the resources they wanted to track. The acquisitions workflow had to be reorganized from the ground up. Oddly enough, by having everything paid out of one system, the individual libraries have much more flexibility in spending and reporting. However, it took some public relations work to get the libraries to see the benefits.

As a result of looking at the data in this project, they got a better sense of gaps and of other projects to pursue regarding their resources.

Phase two began this past fall, incorporating the data from the libraries that did not participate in phase one. They now have a small group with folks representing the libraries. This group is coming up with best practices for license agreements and entering data into the fields.

NASIG 2009: Ambient Findability

Libraries, Serials, and the Internet of Things

Presenter: Peter Morville

He’s a librarian who fell in love with the web and moved into working with information architecture. When he first wrote the book Information Architecture, he and his co-author didn’t include a definition of information architecture. With the second edition, they had four definitions: the structural design of shared information environments; the combination of organization, labeling, search, and navigation systems in web sites and intranets; the art and science of shaping information products and experiences to support usability and findability; and an emerging discipline and community of practice focused on bringing principles of design and architecture to the digital landscape.

[at this point, my computer crashed, losing all the lovely notes I had taken so far]

Information systems need to use a combination of categories (paying attention to audience and taxonomy), in-text linking, and alphabetical indexes in order to make information findable. We need to start thinking about the information systems of the future. If we examine the trends through findability, we might have a different perspective. What are all the different ways someone might find ____? How do we describe it to make it more findable?

We are drowning in information. We are suffering from information anxiety. Nobel Laureate Economist Herbert Simon said, “A wealth of information creates a poverty of attention.”

Ambient devices are alternate interfaces that bring information to our attention, and Morville thinks this is a direction that our information systems are moving towards. What can we now do when our devices know where we are? Now that we can do it, how do we want to use it, and in what contexts?

What are our high-value objects, and is it important to make them more findable? RFID can be used to track important but easily hidden physical items, such as wheelchairs in a hospital. What else can we do with it besides inventory books?

In a world where for every object there are thousands of similar objects, how do we describe the uniqueness of each one? Who’s going to do it? Not Microsoft, and not Donald Rumsfeld and his unknown unknowns. It’s librarians, of course. Nowadays, metadata is everywhere, turning everyone who creates it into librarians and information architects of sorts.

One of the challenges we have is determining what aspects of our information systems can evolve quickly and what aspects need more time.

Five to ten years from now, we’ll still be starting by entering a keyword or two into a box and hitting “go.” This model is ubiquitous and it works because it acknowledges human psychology of just wanting to get started. Search is not just about the software. It’s a complex, adaptive system that requires us to understand our users so that they not only get started, but also know how to take the next step once they get there.

Some examples of best and worst practices for search are on his Flickr. Some user-suggested improvements to search are auto-complete search terms, suggested links or best bets, and, for libraries, federated search that helps users know where to begin. Faceted navigation goes hand in hand with federated search, allowing users to formulate what in the past would have been very sophisticated Boolean queries. It also helps them understand the information space they are in by presenting a visual representation of the subset of information.
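
As a sketch of how faceted navigation can stand in for those Boolean queries, each facet can contribute an OR group of its selected values, with the groups ANDed together; the field names and query syntax below are illustrative, not tied to any particular discovery system.

```python
# Illustrative translation of facet selections into a Boolean query:
# within a facet the chosen values are ORed, and the facets themselves
# are ANDed onto the user's keywords.

def facets_to_query(keywords, facets):
    """facets: dict mapping facet name -> list of selected values."""
    parts = [keywords]
    for field, values in facets.items():
        if values:
            group = " OR ".join(f'{field}:"{v}"' for v in values)
            parts.append(f"({group})")
    return " AND ".join(parts)

if __name__ == "__main__":
    print(facets_to_query(
        "information architecture",
        {"format": ["Book", "eBook"], "language": ["English"]},
    ))
    # information architecture AND (format:"Book" OR format:"eBook") AND (language:"English")
```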

Morville referenced last year’s presentation by Mike Kuniavsky regarding ubiquitous computing, and he hoped that his presentation has complemented what Kuniavsky had to say.

Libraries are more than just warehouses of materials — they are cathedrals of information that inspire us.

PDF of his slides