ER&L 2015 – All Things Distributed: Collaborations Beyond Infrastructure

[Photo: "Collaboration" by Chris Lott]

Speaker: Robert McDonald, Indiana University

Never use a fad term for your talk because it will make you look bad ten years later.

We’re no longer tied to building our own hardware stacks. That infrastructure is now available through other services.

[insert trip down memory lane of the social media and mobile phones of 2005-2006]

We need to rethink how we procure IT in the library. We need better ways of thinking it through before committing to something we may not be able to get out of. Market shifts happen.

Server interfaces are much more user friendly now, particularly when you’re using something like AWS. However, bandwidth is still a big cost. Similar infrastructures, though, can mean better sharing of tools and services across institutions.

How much does your library invest in IT? What percentage of your overall budget is that? How do you count that number? How much do we invest in open source or collaborative ventures that involve IT?

Groups have a negativity bias, which can have an impact on meetings. The outcomes need to be positive in order to move an organization forward.

Villanova opted to spend their funds on a locally developed discovery layer (VUFind), rather than dropping that and more on a commercial product. The broader community has benefitted from it as well.

Kuali OLE has received funding and support from many institutions. GOKb is a sister project to develop a better knowledge base for managing electronic resources, partnering with NCSU and drawing on their expertise in building an ERMS.

[Some stuff about HathiTrust, which is a members-only club my institution is not a part of, so I kind of tuned out.]

Something something Hydra and Avalon media system and Sufia.

Forking an open source project means that no one else can really use the fork, and that the local institution is on its own for maintaining it.

In summary, consider whether you can spend money investing in these kinds of projects rather than buying existing vendor products.

NASIG 2013: Adopting and Implementing an Open Access Policy — The Library’s Role

[Photo: "Open Access promomateriaal" by biblioteekje, CC BY-NC-SA 2.0, 2013-06-10]

Speaker: Brian Kern

Open access policy was developed late last year and adopted/implemented in March. They have had it live for 86 days, so he’s not an expert, but has learned a lot in the process.

His college is small; he expects fewer than 40 publications submitted a year, and they are using the institutional repository to manage them.

They have cut about 2/3 of their journal collections over the past decade, preferring publisher package deals and open access publications. They have identified the need to advocate for open access as a goal of the library. They are using open source software where they can, hosted and managed by a third party.

The policy borrowed heavily from others, and it is a rights-retention mandate in the style of Harvard. One piece of advice they had was to not focus on the specifics of implementation within the policy.

The policy states that the license will be automatically granted, but waivers are available for embargoes or publisher prohibitions. There are no restrictions on where faculty can publish, and they are encouraged to remove restrictive language from contracts and author addenda. Even with waivers, all articles are deposited to at least a “closed” archive. The policy stipulates that they are only interested in peer-reviewed articles, and are not concerned with which version of the article is deposited. Anything published or contracted to be published before the adoption date is not required to comply, but it can if the author wants.

The funding, as one may expect, was left out. The library is going to cover the open access fees, with matching funds from the provost. Unused funds will be carried over year to year.

This was presented to the faculty as a way to ensure that their rights were being respected when they publish their work. Nothing was said about the library's traditional concerns about saving money and opening access to local research output.

The web hub will include the policy, a FAQ, recommended author addenda by publisher, funding information, and other material related to the process. The faculty will be self-depositing, with review/editing by Kern.

They have a monthly newsletter/blog to let the campus know about faculty and student publications, so they are using this to identify materials that should be submitted to the collection. He’s also using Stephen X. Flynn’s code to identify OA articles via SHERPA/RoMEO to find the ones already published that can be used to populate the repository.
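Flynn's actual scripts aren't reproduced here, but the kind of lookup they do can be sketched. The snippet below parses a simplified, hypothetical response modeled on the legacy SHERPA/RoMEO XML API and reports the publisher's archiving permissions; the element names and the sample publisher are illustrative assumptions, not the real API contract.

```python
import xml.etree.ElementTree as ET

# Simplified response modeled on the legacy SHERPA/RoMEO XML API;
# element names and the publisher are illustrative assumptions.
SAMPLE_RESPONSE = """
<romeoapi>
  <publishers>
    <publisher>
      <name>Example Press</name>
      <preprints><prearchiving>can</prearchiving></preprints>
      <postprints><postarchiving>restricted</postarchiving></postprints>
    </publisher>
  </publishers>
</romeoapi>
"""

def archiving_policy(response_xml):
    """Return (preprint, postprint) archiving permissions for the first publisher."""
    root = ET.fromstring(response_xml)
    publisher = root.find(".//publisher")
    pre = publisher.findtext(".//prearchiving", default="unknown")
    post = publisher.findtext(".//postarchiving", default="unknown")
    return pre, post

print(archiving_policy(SAMPLE_RESPONSE))  # ('can', 'restricted')
```

A real script would fetch the response per journal ISSN and use the permissions to decide which already-published articles are eligible for the repository.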

They are keeping the senior projects closed in order to keep faculty/student collaborations private (and faculty research data offline until they publish).

They have learned that the policy depends on faculty seeing open access as a reality and on the library keeping faculty informed of the issues. They were not prepared for how fast the policy would get through and how quickly submissions would begin. Don’t expect faculty to be copyright lawyers. Keep the submission process as simple as possible, and allow them to use alternatives like email or paper.

LITA 2008: Hi-Fi-Sci-Fi-Library: Technology, Convergence, Content, Community, Ubiquity and Library Futures

Presenter: Michael Porter, WebJunction

Hi-fi is usually associated with audio equipment, but fidelity is very much related to our work: interoperability, compatibility, quality of the document, etc.

When you distill what libraries are and what they do, it comes down to content and community, and this is what libraries will still be in the future. Star Trek’s LCARS stands for Library Computer Access and Retrieval System — even those folks thought that the “library” would be that integrated into everything in the future.

The line between hardware and software is blurring now, particularly with software that can emulate hardware. Costs for technology are decreasing, computing power is increasing, and battery life is getting longer. There are newer and better methods of creating content, and competition for content provision is getting fierce. And, you can find community all over the Internet.

The Google Android phone is actually just software that is open source and can be used by any wireless phone manufacturer, and can be hacked by any coders who want to enhance the functionality. The Bug is hardware that comes in components that can be hooked together to create whatever you need, like a digital camera or portable computer.

Audio test for the video section – Rickroll!

The Time Machine: Computer interface in the library is represented as a human hologram. Also, the reference interview was… a bit rude. Books were represented as being behind preserved glass, and the students carried hand-held pads to download content.

Star Trek IV: Human-computer interaction in the Star Trek future uses voice recognition, but in 1986, that wasn’t possible.

Star Trek IV: Spock is working with three monitors, each presenting different problems. He uses a mixture of voice and tactile inputs to respond.

Futurama: 1000 years in the future, we will still have books and the Dewey Decimal System.

I, Robot: “ban the Internet to keep the libraries open”

Futurama: Will be able to get physical things from the Internet. We already have printers that can print in 3-D!

How William Shatner Changed the World: TNG wanted us to get the notion that we should not be afraid of technology.

Minority Report: Manipulates computer visuals using hi-tech gloves.

How William Shatner Changed the World: Modern-day physicist uses his knowledge to examine the realistic possibilities of Star Trek.

Zardoz: Ring that projects data.

Futurama: Librarians hold the keys to power, but it doesn’t always appear that way.

NASIG 2008: Next Generation Library Automation – Its Impact on the Serials Community

Speaker: Marshall Breeding

Check & update your library’s record on lib-web-cats — Breeding uses this data to track the ILS and ERMS systems used by libraries world-wide.

The automation industry is consolidating, with several library products dropped or no longer supported. External financial investors are increasingly controlling the direction of the industry. And, the OPAC sucks. Libraries and users are continually frustrated with the products they are forced to use and are turning to open source solutions.

The innovation presented by automation companies falls below the expectations of libraries (not so sure about users). Conventional ILS need to be updated to incorporate the modern blend of digital and print collections.

We need to be more thoughtful in our incorporation of social tools into traditional library systems and infrastructures. Integrate those Web 2.0 tools into existing delivery options. The next NextGen automation tools should have collaborative features built into them.

Open source software isn’t free — it’s just a different model (pay for maintenance and setup v. pay for software). We need more robust open source software for libraries. Alternatively, systems need to open up so that data can be moved in and out easily. Systems need APIs that allow local coders to enhance systems to meet the needs of local users. Open source ERMS knowledge bases haven’t been seriously developed, although there is a need.

The drive towards open source solutions has often been motivated by disillusionment with current vendors. However, we need to be cautious, since open source isn’t necessarily the golden key that will unlock the door to paradise (e.g., Koha still needs to add serials and acquisitions modules, as well as EDI capabilities).

The open source movement motivates the vendors to make their systems more open for us. This is a good thing. In the end, we’ll have a better set of options.

Open Source ILS options: Koha (commercial support from LibLime) used mostly by small to medium libraries, Evergreen (commercial support from Equinox Software) tested and proven for small to medium libraries in a consortia setting, and OPALS (commercial support from Media Flex) used mostly by k-12 schools.

In making the case for open source ILS, you need to compare the total cost of ownership, the features and functionality, and the technology platform and conceptual models. Are they next-generation systems or open source versions of legacy models?

Evaluate your RFPs for new systems. Are you asking for the things you really need or are you stuck in a rut of requiring technology that was developed in the 70s and may no longer be relevant?

Current open source ILS products lack serials and acquisitions modules. The initial wave of open source ILS commitments happened in the public library arena, but the recent activity has been in academic libraries (WALDO consortia going from Voyager to Koha, University of Prince Edward Island going from Unicorn to Evergreen in about a month). Do the current open source ILS products provide a new model of automation, or an open source version of what we already have?

Looking forward to the day when there is a standard XML format for all ILSes that will allow libraries to manipulate their data in any way they need to.

We are working towards a new model of library automation where monolithic legacy architectures are replaced by a fabric of service-oriented architecture applications with comprehensive management.

The traditional ILS is diminishing in importance in libraries. Electronic content management is being done outside of core ILS functions. Library systems are becoming less integrated because the traditional ILS isn’t keeping up with our needs, so we find work-around products. Non-integrated automation is not sustainable.

ERMS — isn’t this what the acquisitions module is supposed to do? Instead of enhancing that to incorporate the needs of electronic resources, we had to get another module or work-around that may or may not be integrated with the rest of the ILS.

We are moving beyond metadata searching to searching the actual items themselves. Users want to be able to search across all products and packages. NextGen federated searching will harvest and index subscribed content so that it can be searched and retrieved more quickly and seamlessly.
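The harvest-and-index model Breeding describes can be illustrated with a toy inverted index: content is harvested ahead of time and indexed locally, so queries hit the index instead of being fanned out live to every source. The records and terms below are invented for illustration.

```python
from collections import defaultdict

# Toy "harvested" records; titles are invented for illustration.
records = {
    1: "open source library systems",
    2: "serials management in academic libraries",
    3: "open access and budget cuts",
}

def build_index(docs):
    """Map each term to the set of record ids containing it (the harvest/index step)."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, *terms):
    """Return ids of records containing every query term (AND search over the index)."""
    sets = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*sets) if sets else set()

index = build_index(records)
print(sorted(search(index, "open")))             # [1, 3]
print(sorted(search(index, "open", "library")))  # [1]
```

A production discovery layer adds relevance ranking, stemming, and incremental re-harvesting, but the speed advantage over live federated search comes from exactly this precomputed index.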

Opportunities for serials specialists:

  • Be aware of the current trends
  • Be prepared for accelerated change cycles
  • Help build systems based on modern business process automation principles. What is your ideal serials system?
  • Provide input
  • Ensure that new systems provide better support than legacy systems
  • Help drive current vendors towards open systems

How will we deliver serials content through discovery layers?

Reference:

  • “It’s Time to Break the Mold of the Original ILS,” Computers in Libraries, Nov/Dec 2007.