ER&L 2012: Working For You — NISO Standards and Best Practices

[Photo: wood, water, and stone in the courtyard of the conference hotel]

Speaker: Nettie Legace

NISO is more than just the people at the office — it’s the community that comes together to address the problems that we have, and they’re always soliciting new work items. What are your pain points? Who needs to be involved? How does it relate to existing efforts?

A NISO standard is more formal and goes through a rigorous adoption process. A NISO recommended practice is less formal and more discretionary. A lot of standards and best practices have not been adopted; adoption often depends on the feasibility of implementation.

Get involved if you care about standards and best practices!

Speakers: Jamene Brooks-Kieffer & John Law

They’re talking about the Open Discovery Initiative. Discovery systems have exploded, and there are now opportunities to come together to create some efficiencies through standards. The working group includes librarians, publishers, and discovery service providers.

The project has three main goals: identify stakeholder needs and requirements, create recommendations and tools to streamline the process, and provide effective means of assessment. Deliverables include a standard vocabulary, which the group has found it needs just to communicate, and a NISO recommended practice. They plan to have the initial draft by January 2013 and the final draft by May 2013.

There will be an open teleconference on Monday.

Speaker: Oliver Pesch

SUSHI was one of the first standards initiatives to use NISO’s more agile development process, which allows for review and revision sooner than the traditional 5-7 year cycle. The downside of a fixed standard is that you have to anticipate every possible outcome, because you may not get a chance to address it again, and with electronic standards you have to be able to move quickly. So they focused on the core problem and allowed the standard to evolve through ongoing maintenance.

SUSHI support is now a requirement for COUNTER compliance, and it has been adopted by approximately 40 content providers. The MISO client is a free tool you can download and use to harvest SUSHI reports.
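
Since SUSHI is a SOAP web service, harvesting boils down to POSTing a report request and saving the response. Here is a rough Python sketch, assuming a hypothetical endpoint and placeholder requestor/customer IDs; the exact element names and namespaces vary by SUSHI release and provider, so treat this as illustrative rather than canonical:

```python
# Rough sketch of a SUSHI harvest: POST a SOAP ReportRequest to a
# provider's SUSHI server and save the COUNTER report that comes back.
# Endpoint URL and IDs are placeholders; check your provider's docs.
import requests

SUSHI_ENDPOINT = "https://sushi.example.com/SushiService"  # placeholder

envelope = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
               xmlns:s="http://www.niso.org/schemas/sushi">
  <soap:Body>
    <s:ReportRequest Created="2012-04-01T00:00:00Z" ID="req-001">
      <s:Requestor>
        <s:ID>my-requestor-id</s:ID>
        <s:Name>Example Library</s:Name>
        <s:Email>eresources@example.edu</s:Email>
      </s:Requestor>
      <s:CustomerReference>
        <s:ID>my-customer-id</s:ID>
      </s:CustomerReference>
      <s:ReportDefinition Name="JR1" Release="3">
        <s:Filters>
          <s:UsageDateRange>
            <s:Begin>2012-01-01</s:Begin>
            <s:End>2012-01-31</s:End>
          </s:UsageDateRange>
        </s:Filters>
      </s:ReportDefinition>
    </s:ReportRequest>
  </soap:Body>
</soap:Envelope>"""

response = requests.post(
    SUSHI_ENDPOINT,
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "SushiService:GetReportIn"},
    timeout=60,
)
response.raise_for_status()

# The COUNTER report comes back wrapped in the SOAP response body.
with open("jr1-2012-01.xml", "wb") as out:
    out.write(response.content)
```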

Part of SUSHI’s success comes from being part of NISO and from the integration with COUNTER, but they’re not done. They’re doing a lot of work to promote interoperability and have published the SUSHI Server Test Mode recommended practice. They’re preparing for release 4 of COUNTER, making adjustments as needed, and publishing a COUNTER SUSHI Implementation Profile to standardize the interpretation of implementation across providers.

Questions/Comments:
Major concern about content providers who also have web-scale discovery solutions that don’t share content with each other — will that ever change? That is not in the scope of the ODI’s work, which is more about how players work together in better ways than about whether or not content providers participate.

Why is SUSHI not more commonly available? Some really big players aren’t there yet. How fast is this going to happen? A lot of work is in development but not live yet. What drives development is us. [Except we keep asking for it, and nothing happens.] If you allow it to be okay for it not to be there, then it won’t be.

JISC created a template letter for member libraries to send to publishers, and that is making a difference.

NASIG 2010: Serials Management in the Next-Generation Library Environment

Panelists: Jonathan Blackburn, OCLC; Bob Bloom (?), Innovative Interfaces, Inc.; Robert McDonald, Kuali OLE Project/Indiana University

Moderator: Clint Chamberlain, University of Texas, Arlington

What do we really mean when we are talking about a “next-generation ILS”?

It is a system that will need to be flexible enough to accommodate rapidly changing and increasingly complex workflows. Things are changing so fast that systems can’t wait several years between updates.

It also means different things to different stakeholders. The common thread is being flexible enough to manage both print and electronic, along with better reporting tools.

How is the “next-generation ILS” related to cloud computing?

Most of them have components in the cloud, and traditional ILS systems are partially there, too. Networking brings benefits (shared workloads).

What challenges are facing libraries today that could be helped by the emerging products you are working on?

Serials is one of the more mature areas of the ILS. Automation, driven by the standardization of data from all information sources, is going to keep improving.

One of the key challenges is to deal with things holistically. We get bogged down in the details sometimes. We need to be looking at things on the collection/consortia level.

We are all trying to do more with less funding. Improving flexibility and automation will offer better services for the users and allow libraries to shift their staff assets to more important (less repetitive) work.

We need better tools to demonstrate the value of the library to our stakeholders. We need ways of assessing resources beyond comparing costs.

Any examples of how next-gen ILS will improve workflow?

Libraries are increasing spending on electronic resources, and many are nearly eliminating their print serials spending. Next-gen systems need reporting tools that provide use and cost data not only for electronic resources but also for print formats, all in one place.

A lot of workflow comes from a print-centric perspective. Many libraries still haven’t figured out how to adjust that to include electronic without saddling all of that on one person (or a handful). [One of the issues is that the staff may not be ready/willing/able to handle the complexities of electronic.]

Every purchase should be evaluated independently of format, with the focus on the cost and process of acquiring it and making it available to the stakeholders.

[Not taking as many notes from this point on. Listening for something that isn’t fluffy pie in the sky. Want some solid direction that isn’t just pretty words to make librarians happy.]

NASIG 2009: Moving Mountains of Cost Data

Standards for ILS to ERMS to Vendors and Back

Presenter: Dani Roach

Acronyms you need to know for this presentation: National Information Standards Organization (NISO), Cost of Resource Exchange (CORE), and Draft Standard For Trial Use (DSFTU).

CORE was started by Ed Riding from SirsiDynix, Jeff Aipperspach from Serials Solutions, and Ted Koppel from Ex Libris (and now Auto-Graphics). They saw a need to be able to transfer acquisitions data between systems, so they began working on it. After talking with various related parties, they approached NISO in 2008. Once they realized the scope, it grew from being just an ILS-to-ERMS transfer to also including data from vendors, agents, consortia, etc., but without duplicating existing standards.

Library input is critical in defining the use cases and the data exchange scenarios. There was also a need for a data dictionary and XML schema in order to make sure everyone involved understood each other. The end result is the NISO CORE DSFTU Z39.95-200x.
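
To make the idea concrete, here is a purely invented sketch of the kind of ILS-to-ERMS payment message CORE aims to standardize. None of these element names come from the actual CORE schema; the real vocabulary lives in the data dictionary and XML schema mentioned above:

```xml
<!-- Purely illustrative: a made-up payload showing the kind of
     acquisitions data CORE is meant to carry between systems. The
     real element names are defined by the NISO CORE draft standard. -->
<costExchange>
  <resource>
    <title>Journal of Example Studies</title>
    <identifier type="issn">1234-5678</identifier>
  </resource>
  <payment>
    <orderNumber>PO-2008-0142</orderNumber>
    <fund>SERIALS-SCI</fund>
    <amount currency="USD">1250.00</amount>
    <fiscalYear>2009</fiscalYear>
  </payment>
</costExchange>
```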

CORE could be awesome, but in the meantime we need a solution. Roach has a few suggestions for what we can do.

Your ILS has a pile of data fields. Your ERMS has a pile of data fields. They don’t exactly overlap. Roach focused on only eight of the elements: title, match point (code), record order number, vendor, fund, what was paid for, amount paid, and something else she can’t remember right now.

She developed Access tables with output from her ILS and templates from her ERMS, then ran a query to match them up and uploaded the acquisitions data to her ERMS.

For the database record match, she chose the Serials Solutions three-letter database code, which was put into an unused MARC variable field. For the journals, she used the SSID from the MARC records Serials Solutions supplies.
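
A minimal sketch of that match-and-merge step, assuming both systems can export and import CSV. The column names here are invented stand-ins for the eight elements above, keyed on the database-code match point:

```python
# Sketch of the match-and-merge: join an ILS payment export to ERMS
# template rows on a shared match point, then write an upload file.
# Column names ("db_code", "fund", etc.) are hypothetical.
import csv

# Load the ERMS template rows, keyed on the chosen match point.
with open("erms_template.csv", newline="") as f:
    erms_rows = {row["db_code"]: row for row in csv.DictReader(f)}

# Walk the ILS payment export and copy acquisitions data onto matches.
with open("ils_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        match = erms_rows.get(row["db_code"])
        if match:
            match["fund"] = row["fund"]
            match["amount_paid"] = row["amount_paid"]
            match["order_number"] = row["order_number"]

# Write the merged rows back out for upload to the ERMS.
with open("erms_upload.csv", "w", newline="") as f:
    fieldnames = ["db_code", "title", "fund", "amount_paid", "order_number"]
    writer = csv.DictWriter(f, fieldnames=fieldnames, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(erms_rows.values())
```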

Things you need to decide in advance: How do you handle multiple payments in a single fiscal year (what are you doing currently, and do you need to continue doing it)? What about resources that share costs? How will you handle one-time vs. ongoing purchases? How will you maintain the integrity of the match point you’ve chosen?

The main thing to keep in mind is that you need to document your decisions and processes, particularly for when systems change and CORE or some other standard becomes a reality.

CIL 2009: ERM… What Do You Do With All That Data, Anyway?

This is the session that I co-presented with Cindi Trainor (Eastern Kentucky University). The slides don’t convey all of the points we were trying to make, so I’ve also included a cleaned-up version of those notes.

  1. Title
  2. In 2004, the Digital Library Federation (DLF) Electronic Resources Management Initiative (ERMI) published their report on the electronic resource management needs of libraries, and provided some guidelines for what data needed to be collected in future systems and how that data might be organized. The report identifies over 340 data elements, ranging from acquisitions to access to assessment.

    Libraries that have implemented commercial electronic resource management systems (ERMS) have spent many staff hours entering data from old storage systems, or recording those data for the first time, and few, if any, have filled out each data element listed in the report. But that is reasonable, since not every resource will have relevant data attached to it that would need to be captured in an ERMS.

    However, since most libraries do not have an infinite number of staff to focus on this level of data entry, the emphasis should instead be placed upon capturing data that is necessary for managing the resources as well as information that will enhance the user experience.

  3. On the staff side, ERM data is useful for upcoming renewal notifications, generating collection development reports that explain cost-per-use (based on publisher-provided use statistics and library-maintained acquisitions data), managing trials, noting electronic ILL & reserves rights, and tracking the uptime & downtime of resources.
  4. Most libraries already have access management systems (link resolvers, A-Z lists, MARC records).
  5. User issues have shifted from the multiple copy problem to a “which copy?” problem. Users have multiple points of access, including: journal packages (JSTOR, Muse); A&I databases, with and without full text (which constitute e-resources in themselves); the library website (particularly “Electronic Resources” or “Databases” lists); the OPAC; the A-Z list (typically populated by an OpenURL link resolver); Google/Google Scholar; article and paper references and footnotes; course reserves; course management systems (Blackboard, Moodle, WebCT, Angel, Sakai); citation management software (RefWorks, EndNote, Zotero); LibGuides and course guides; and bookmarks.
  6. Users want…
  7. Google
  8. Worlds collide! What elements from the DLF ERM spec could enhance the user experience, and how? Information inside an ERMS can enhance access management systems or discovery: subject categorization within the ERM could group similar resources and present them alongside the resource someone is currently using; statuses could be used to group and display items, e.g., a trial status within the ERM could automatically populate a new-resources page or an RSS feed, making it easy for the library to group and publicize even a 30-day trial (a rough sketch of that idea follows this list). ERMSs also need to do a better job of helping to manage the resource lifecycle: when they are built to track resources through that lifecycle, discovery is updated by extension because resources are managed well, increasing uptime and availability and decreasing the time from identification of a potential new resource to its accessibility for our users.
  9. How about turning ERM data into a discovery tool? Exposing information about the accessibility of resources to reference management systems like EndNote, RefWorks, or Zotero, along with key details about using those individual resources with such tools, could at least enable more sophisticated use of those resources, if not increased discovery.

    (You’ve got your ERM in my discovery interface! No, you got your discovery interface in my ERM! Er… guess that doesn’t quite translate.)

  10. Flickr Mosaic: Phyllotaxy (cc:by-nc-sa); Librarians-Haunted-Love (cc:by-nc-sa); Square Peg (cc:by-nc-sa); The Burden of Thought (cc:by-nc)
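
As promised in point 8, here is a minimal sketch of turning ERM trial statuses into an RSS feed. The trial records are faked as an inline list; in practice they would come from an ERMS export or API (hypothetical):

```python
# Minimal sketch: publish ERM "trial" records as an RSS feed so new and
# trial resources are easy to announce. Records are hard-coded here;
# a real ERMS would supply them via an export or API.
from xml.sax.saxutils import escape

trials = [
    {"title": "Example A&I Database", "url": "https://example.com/trial",
     "note": "Trial access through 2009-05-30"},
]

items = []
for t in trials:
    items.append(
        "    <item>\n"
        f"      <title>{escape(t['title'])}</title>\n"
        f"      <link>{escape(t['url'])}</link>\n"
        f"      <description>{escape(t['note'])}</description>\n"
        "    </item>"
    )

feed = (
    '<?xml version="1.0"?>\n'
    '<rss version="2.0">\n'
    "  <channel>\n"
    "    <title>Library Trials and New Resources</title>\n"
    "    <link>https://library.example.edu/trials</link>\n"
    "    <description>Resources currently on trial</description>\n"
    + "\n".join(items) + "\n"
    "  </channel>\n"
    "</rss>\n"
)

with open("trials.xml", "w", encoding="utf-8") as f:
    f.write(feed)
```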

CIL 2009: Open Source Library Implementations

Speakers: Karen Kohn and Eric McCloy

Preparing to move from a traditional ILS to Koha.

They were frustrated with not being able to get at the data in their system, and it was cost-prohibitive to gain access. The user interface was locked into a particular design that was difficult to modify or make ADA-compliant. Staff clients had to be updated on each computer, which was a time-consuming process.

They have a strong IT-library partnership, which meant they knew they could work with a system that needs that kind of support.

How they did it (Fall 2008): dropped the "discovery layer" product from their ILS, used the savings to get their federated search working, started doing nightly dumps of records from the ILS to Koha (using the ILS as the back end and Koha as the user interface), designed a web interface (Drupal), and set up a Z39.50 interface to search Koha. Eventually they will open Koha up for native searching. Currently they are testing and debugging, with plans to roll out the migration this summer (2009).
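
A minimal sketch of what such a nightly sync might look like, assuming the legacy ILS has a command-line export (invented here as ils-export) and that Koha's stock bulkmarcimport.pl loader is used; paths and flags should be checked against your own Koha install:

```python
# Sketch of a nightly ILS-to-Koha record sync, run from cron. The ILS
# export command is hypothetical; the Koha loader path is the typical
# package-install location and may differ on your system.
import subprocess
from datetime import date

dump_file = f"/tmp/bibs-{date.today():%Y%m%d}.mrc"

# 1. Export all bibliographic records from the legacy ILS as MARC.
subprocess.run(["ils-export", "--all-bibs", "--out", dump_file], check=True)

# 2. Load the dump into Koha, which serves as the public interface.
subprocess.run(
    ["/usr/share/koha/bin/migration_tools/bulkmarcimport.pl",
     "-b",               # bibliographic records
     "-file", dump_file],
    check=True,
)
```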

Every once in a while, librarians need to be reminded that [insert tech glitch] is only temporary and everything will be corrected eventually, so don’t fret over the details to the point of getting distracted from the goal.

[Commentary: My library looked at Koha last year, but we need something that can handle acquisitions and serials, and Koha wasn’t at that point yet. It did not occur to me, and I don’t think it occurred to anyone else, that we could use our current ILS for staff and administrative things and make Koha our user interface, which is where it currently excels.]

NASIG 2008: Next Generation Library Automation – Its Impact on the Serials Community

Speaker: Marshall Breeding

Check & update your library’s record on lib-web-cats — Breeding uses this data to track the ILS and ERMS systems used by libraries world-wide.

The automation industry is consolidating, with several library products dropped or no longer supported. External financial investors increasingly control the direction of the industry. And the OPAC sucks. Libraries and users are continually frustrated with the products they are forced to use and are turning to open source solutions.

The innovation presented by automation companies falls below the expectations of libraries (not so sure about users). The conventional ILS needs to be updated to incorporate the modern blend of digital and print collections.

We need to be more thoughtful in our incorporation of social tools into traditional library systems and infrastructures, integrating those Web 2.0 tools into existing delivery options. The next generation of automation tools should have collaborative features built in.

Open source software isn’t free — it’s just a different cost model (pay for maintenance and setup vs. pay for software). We need more robust open source software for libraries. Alternatively, systems need to open up so that data can be moved in and out easily. Systems need APIs that allow local coders to enhance them to meet the needs of local users. Open source ERMS knowledge bases haven’t been seriously developed, although there is a need.

The drive towards open source solutions has often been motivated by disillusionment with current vendors. However, we need to be cautious, since open source isn’t necessarily the golden key that will unlock the door to paradise (e.g., Koha still needs to add serials and acquisitions modules, as well as EDI capabilities).

The open source movement motivates the vendors to make their systems more open for us. This is a good thing. In the end, we’ll have a better set of options.

Open Source ILS options: Koha (commercial support from LibLime), used mostly by small to medium libraries; Evergreen (commercial support from Equinox Software), tested and proven for small to medium libraries in a consortial setting; and OPALS (commercial support from Media Flex), used mostly by K-12 schools.

In making the case for open source ILS, you need to compare the total cost of ownership, the features and functionality, and the technology platform and conceptual models. Are they next-generation systems or open source versions of legacy models?

Evaluate your RFPs for new systems. Are you asking for the things you really need or are you stuck in a rut of requiring technology that was developed in the 70s and may no longer be relevant?

Current open source ILS products lack serials and acquisitions modules. The initial wave of open source ILS commitments happened in the public library arena, but the recent activity has been in academic libraries (the WALDO consortium going from Voyager to Koha, the University of Prince Edward Island going from Unicorn to Evergreen in about a month). Do the current open source ILS products provide a new model of automation, or an open source version of what we already have?

Looking forward to the day when there is a standard XML format for all ILSes that will allow libraries to manipulate their data in any way they need to.

We are working towards a new model of library automation in which monolithic legacy architectures are replaced by a fabric of service-oriented architecture applications with comprehensive management.

The traditional ILS is diminishing in importance in libraries. Electronic content management is being done outside of core ILS functions. Library systems are becoming less integrated because the traditional ILS isn’t keeping up with our needs, so we find work-around products. Non-integrated automation is not sustainable.

ERMS — isn’t this what the acquisitions module is supposed to do? Instead of enhancing that to incorporate the needs of electronic resources, we had to get another module or work-around that may or may not be integrated with the rest of the ILS.

We are moving beyond metadata searching to searching the actual items themselves. Users want to be able to search across all products and packages. NextGen federated searching will harvest and index subscribed content so that it can be searched and retrieved more quickly and seamlessly.

Opportunities for serials specialists:

  • Be aware of the current trends
  • Be prepared for accelerated change cycles
  • Help build systems based on modern business process automation principles. What is your ideal serials system?
  • Provide input
  • Ensure that new systems provide better support than legacy systems
  • Help drive current vendors towards open systems

How will we deliver serials content through discovery layers?

Reference:

  • “It’s Time to Break the Mold of the Original ILS,” Computers in Libraries, Nov/Dec 2007.

CiL 2008: Catalog Effectiveness

Speaker: Rebekah Kilzer

The Ohio State University Libraries have used Google Analytics to assess the use of the OPAC. It’s free for sites with up to five million page views per month — OSU has 1-2 million page views per month. Libraries would want to use this because most integrated library systems offer little in the way of use statistics, and what they do have isn’t very… useful. You will need to add a snippet of tracking code that renders on every OPAC page.
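
For reference, the Google Analytics tracking snippet of that era (ga.js) looked roughly like this; UA-XXXXXXX-X is a placeholder for the library's own account ID, and the snippet goes in the OPAC's common page template:

```html
<!-- Classic ga.js tracking snippet; place in the OPAC's shared page
     template so it appears on every page. Replace UA-XXXXXXX-X with
     the library's own Google Analytics account ID. -->
<script type="text/javascript">
  var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
  document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));
</script>
<script type="text/javascript">
  try {
    var pageTracker = _gat._getTracker("UA-XXXXXXX-X");
    pageTracker._trackPageview();
  } catch (err) {}
</script>
```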

Getting details about how users interact with your catalog can help with making decisions about enhancements. For example, knowing how many dial-up users interact with the site could determine whether or not you want to develop style sheets specifically for them. You can also track which links are being followed, which can contribute to discussions about link real estate.

There are several libraries that are mashing up Google Analytics information with other Google tools.


Speakers: Cathy Weng and Jia Mi

The OPAC is a data-centered, card-catalog-style retrieval system that is good for finding known items, but not so good as an information discovery tool. It’s designed for librarians, not users. Librarians’ perceptions of users (as forgetful and impatient) prevent them from recognizing changes in user behavior and ineffective OPAC design.

In order to see how current academic libraries present and use OPAC systems, they studied 123 ARL libraries’ public interfaces, search capabilities, and bibliographic displays. Two-thirds of the libraries use “keyword” as the default search option and the other third use “title.” The study also looked at whether the keyword search was a true keyword search with an automatic “and” or whether the search was treated as a phrase. Few libraries used relevancy ranking as the default sort for search results.

There are some great disparities in OPAC quality. Search terms and search boxes are not retained on the results page, post-search limit functions are not always readily available, item status is not shown on the search results page, and search keywords are not highlighted. These are things the most popular non-library search engines do, and they are what our users expect the library OPAC to do.

Display labels are straight MARC mapping, not intuitive. Some labels are suitable for certain types of materials but not all (e.g., proper name labels for items “authored” by conferences). They are potentially confusing (LCSH & MeSH) and occasionally inaccurate. The study found varying levels of effort put into making the labels more user-friendly and less full of library jargon.

In addition to label displays, OPACs also suffer from the way records are displayed. The order of bibliographic elements affects how users find the relevant information for determining whether or not the item found is what they need.

Three factors contribute to the problem of the OPAC: system limitations, libraries not exploiting the full functionality of their ILS, and MARC standards that are not well suited to online bibliographic display. We want a system that doesn’t need to be taught, that trusts users as co-developers, and that maximizes and creatively utilizes the system’s functionality.

The presentation gave great examples of why the OPAC sucks, but few concrete examples of solutions beyond the lipstick-on-a-pig catalog overlay products. I would have liked to have a list of suggestions for label names, record display, etc., since we were given examples of what doesn’t work or is confusing.

CiL 2008: Woepac to Wowpac

Moderator: Karen G. Schneider – “You’re going to go un-suck your OPACs, right?”


Speaker: Roy Tennant

Tennant has spent the last ten years trying to kill off the term OPAC.

The ILS is your back-end system, which is different from the discovery system (which doesn’t replace the ILS). Both of these systems can be locally configured or hosted elsewhere. WorldCat Local is a particular kind of discovery system that Tennant will talk about if he has time.

Traditionally, users would search the ILS to locate items, but now the discovery system searches the ILS and other sources and presents the results to the user in a less “card catalog” way. Things to consider: Do you want to replace your ILS or just your public interface? Can you consider open source options (Koha, Evergreen, VuFind, LibraryFind, etc.)? Do you have the technical expertise to set it up and maintain it? Are you willing to regularly harvest data from your catalog to power a separate user interface?


Speaker: Kate Sheehan

Speaking from her experience of being at the first library to implement LibraryThing for Libraries.

The OPAC sucks, so we look for something else, like LibraryThing. The users of LibraryThing want to be catalogers, which Sheehan finds amusing (and so did the audience) because so few librarians want to be catalogers. “It’s a bunch of really excited curators.”

LibraryThing for Libraries takes the information available in LibraryThing (images, tags, etc.) and drops it into the OPAC (platform independent). The display includes other editions of books owned by the library, recommendations based on what people actually read, and a tag cloud. The tag cloud links to a tag browser that opens on top of the catalog and allows users to explore other resources in the catalog based on natural language tags rather than just subject headings. Using a Greasemonkey script in your browser, you can also incorporate user reviews pulled from LibraryThing. Statistics show the library averaging around 30 tag clicks and 18 recommendations per day, which is pretty good for a library that size.
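
The “drops it into the OPAC” part is essentially a script include plus placeholder elements in the OPAC’s record template, which is why it is platform independent. The URL and element IDs below are invented for illustration; the real ones are supplied with a LibraryThing for Libraries account:

```html
<!-- Hypothetical sketch of a LibraryThing for Libraries embed: a
     script include plus placeholder divs in the OPAC full-record
     template. Actual URLs and element IDs come from the LTFL account
     settings, not from this example. -->
<div id="ltfl_tags"></div>      <!-- tag cloud renders here -->
<div id="ltfl_similars"></div>  <!-- recommendations render here -->
<script type="text/javascript"
        src="https://ltfl.librarything.com/widget.js?id=YOUR-LIBRARY-ID"></script>
```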

“Arson is fantastic. It keeps your libraries fresh.” — Sheehan joking about an unusual form of collection weeding (Danbury was burnt to the ground a few years ago)

Data doesn’t grow on trees. Getting a bunch of useful information dropped into the catalog saves staff time and energy. LibraryThing for Libraries didn’t ask for a lot from patrons, and it gave them a lot in return.


Speaker: Cindi Trainor

Are we there yet? No. We can buy products or use open source programs, but they are still not the solution.

Today’s websites consist of content, community (interaction with other users), interactivity (single-user customization), and interoperability (mashups). RSS feeds are the intersection of interactivity and content. A few websites sit in the sweet spot in the middle of all of these: Amazon (26/32)*, Flickr (26/32), Pandora (20/32), and Wikipedia (21/32) are a few examples.

Where are the next generation catalog enhancements? Each product has a varying degree of each element. Using a scoring system with 8 points for each of the four elements, these products were ranked: Encore (10/32), LibraryFind (12/32), Scriblio (14/32), and WorldCat Local (16/32). Trainor looked at whether the content lived in the system or elsewhere and the degree to which it pulled information from sources not in the catalog. Library products still have a long way to go – Voyager scored a 2/32.

*Trainor’s scoring system as described in paragraph three.
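
The rubric arithmetic, spelled out: four elements, each scored 0-8, summed to a total out of 32. The per-element numbers below are invented to show the computation; the talk reported only the totals:

```python
# Trainor's rubric as described: four elements, each scored 0-8, summed
# to a score out of 32. These per-element values are made up to show
# the arithmetic; only the totals (e.g., Amazon 26/32) were reported.
elements = {"content": 7, "community": 6,
            "interactivity": 6, "interoperability": 7}
total = sum(elements.values())         # 26
print(f"{total}/{8 * len(elements)}")  # -> 26/32
```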


Speaker: John Blyberg

When we talk about OPACs, we tend to fetishize them. In theory, it’s not hard to create a Wowpac. The difficulty is in creating the system that lives behind it. We have lost touch with the ability to empower ourselves to fix the problems we have with integrated library systems and our online public access catalogs.

The OPAC is a reflection of the health of the system. The OPAC should be spilling out onto our website and beyond, mashing it up with other sites. The only way that can happen is with a rich API, which we don’t have.

The title of systems librarian is becoming redundant because we all have a responsibility and role in maintaining the health of library systems. In today’s information ecology, there is no destination — we’re online experiencing information everywhere.

There is no way to predict how the information ecology will change, so we need systems that are flexible and can grow and change over time. (SOPAC 2.0 will be released later this year for libraries that want to do something different with their OPACs.) Containers will fail. Containers are temporary. We cannot hang our hat on one specific format — we need systems that permit portability of data.

Nobody in libraries talks about “the enterprise” like they do in the corporate world. Design and development of the enterprise cannot be done by a committee, unless they are simply advisors.

The 21st century library remains un-designed – so let’s get going on it.
