Speakers: Roën Janyk (Okanagan College) & Emma Lawson (Langara College)
Two new-ish librarians talk about applying their LIS training to the real world, and using the Core Competencies as a framework for identifying the gaps they encountered. They wanted to determine if the problem is training or if eresources/serials management is just really complicated.
Collection development, cataloging (both MARC and Dublin Core), records management, and digital management were covered in their classes. Needed more on institutional repository management.
Their programs did not cover licensing at all, so everything they learned was on the job, comparing different documents. They also learned that the things librarians look for in contracts are not what college administrators are concerned about. In addition, the details of budgeting, and where that information should be stored, were fuzzy, and it took some time to gather that in their jobs. And, as with many positions, if institutional memory (and logins) is not passed on, a lot of time is spent recreating it. From their LIS programs, they wish they had gotten more detail about use statistics and their application, as well as resource format types and the quirks that come with them.
They had classes about information technology design and bigger-picture topics, but not enough about the relationship between the library and IT or the kinds of information technology in libraries now. Some courses focused on less relevant technology and the history of technology, and the higher-level courses had too steep a learning curve to attract LIS students.
For the core competency on research analysis and application, we need to be able to gather appropriate data and present the analysis to colleagues and superiors in a way they can understand. In applying this, they ran into questions about comparing eresources to print, deciding when to keep a low-use resource, and other common criteria for comparing collections besides cost per use. In addition, more needs to be taught about managing a budget, determining when to make cancellation or format-change decisions, alternatives to subscriptions, and communicating all of this outside of the library.
Effective communication touches on everything that we do. It requires that you frame situations from someone else’s viewpoint. You need to document everything and be able to clearly describe the situation in order to trouble-shoot with vendors. Be sympathetic to the frustrations of users encountering the problems.
Staff supervision may range from teams with no managerial authority to staff who report to you. ER librarians have to be flexible and work within a variety of departmental/project frameworks, and even if they do have management authority, they will likely have to manage projects that involve staff from other departments/divisions/teams. They did not find the library management course very applicable; the project management class was much more useful. One main challenge is staff who have worked in the library for a long time, so change management or leadership training would be very valuable, as would conversations about working with unionized staff.
In the real world being aware of trends in the profession involves attending conferences, participating in webinars/online training, and keeping up with the literature. They didn’t actually see an ERMS while in school, nor did they work with any proprietary ILS. Most of us learn new things by talking to our colleagues at other institutions. MLS faculty need to keep up with the trends as well, and incorporate that into classes — this stuff changes rapidly.
They recommend that ILS and ERMS vendors collaborate with MLS programs so that students have some real-world applications they can take with them to their jobs. Keep courses current (what is actually being used in libraries) and constantly be evaluating the curriculum, which is beyond what ALA requires for accreditation. More case studies and real-world experiences in applied courses. Collection development course was too focused on print collection analysis and did not cover electronic resources.
As a profession, we need more sessions at larger, general conferences that focus on electronic resources so that we’re not just in our bubble. More cross-training in the workplaces. MLS programs need to promote eresources as a career path, instead of just the traditional reference/cataloger/YA divides.
If we are learning it all on the job, then why are we required to get the degrees?
Updates from Serials Solutions – mostly Resource Manager (Ashley Bass):
Keep up to date with ongoing enhancements for management tools (quarterly releases) by following answer #422 in the Support Center, and via training/overview webinars.
Populating and maintaining the ERM can be challenging, so they focused a lot of work this year on that process: license template library, license upload tool, data population service, SUSHI, offline date and status editor enhancements (new data elements for sort & filter, new logic, new selection elements, notes), and expanded and additional fields.
Workflow, communication, and decision support enhancements: in-context help linking, contact tool filters, navigation, new COUNTER reports, more information about vendors, a COUNTER summary page, etc. Her favorite new feature is “deep linking” functionality (aka persistent links to records in SerSol). [I didn’t realize that wasn’t there before — been doing this for my own purposes for a while.]
Next up (in two weeks, 4th quarter release): new alerts, a resource renewals feature (reports! and a checklist!, which will inherit from Admin data), Client Center navigation improvements (e.g. keyword searching for databases, system performance optimization), new license fields (images, public performance rights, training materials rights) and a few more, COUNTER updates, SUSHI updates (making customizations to deal with vendors who aren’t strictly following the standard), gathering stats for Springer (YTD won’t be available after Nov 30 — up to Sept available now), and online DRS form enhancements.
In the future: license API (could allow libraries to create a different user interface), contact tools improvements, interoperability documentation, new BI tools and reporting functionality, and improving the Client Center.
Also, they are building a new KB (2014 release) and a web-scale management solution (Intota, also coming in 2014). They are looking to gain more internal efficiencies by rebuilding the KB, and it will include information from Ulrich’s, metadata for new content types (e.g., A/V), metadata standardization, industry data, etc.
Summon Updates (Andrew Nagy):
I know very little about Summon functionality, so just listened to this one and didn’t take notes. Take-away: if you haven’t looked at Summon in a while, it would be worth giving it another go.
360 Link Customization via JavaScript and CSS (Liz Jacobson & Terry Brady, Georgetown University):
Goal #1: Allow users to easily link to full-text resources. Solution: Go beyond the out-of-the-box 360 Link display.
Goal #2: Allow users to report problems or contact library staff at the point of failure. Solution: eresources problem report form
They created the eresources problem report form using Drupal. The fields include contact information, description of the resource, description of the problem, and the ability to attach a screenshot.
When they evaluated the slightly customized out-of-the-box 360 Link page, they determined that it was confusing to users, with too many options and confusing links. So, they took some inspiration from other libraries (Matthew Reidsma’s GVSU jQuery code available on GitHub) and developed a prototype that uses custom JavaScript and CSS to walk the user through the process.
Some enhancements included: making the full-text links (article & journal) into buttons, hiding additional help information behind hover-over text, parsing the citation into the problem report page, and moving the citation below the full-text links. For journal citations with no full-text, they made the links to the catalog search large buttons with more text detail in them.
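To give a concrete sense of the approach, here is a minimal sketch of the kind of jQuery involved. This is my own illustration, not Georgetown’s actual code; selectors like .article-link, .help-text, and .citation are placeholders you would swap out after inspecting the markup 360 Link actually generates.

```javascript
// Minimal sketch (not Georgetown's production code). The class names used as
// selectors are placeholders for whatever the real 360 Link page emits.
jQuery(function ($) {
  // Style article- and journal-level full-text links as large buttons.
  $('.article-link, .journal-link').addClass('fulltext-button');

  // Hide secondary help text and reveal it only on hover over a help icon.
  $('.help-text').hide();
  $('.help-icon').hover(
    function () { $(this).siblings('.help-text').show(); },
    function () { $(this).siblings('.help-text').hide(); }
  );

  // Move the citation block below the full-text links.
  $('.citation').insertAfter('.fulltext-links');

  // Pass the citation along to the problem report form as a query parameter.
  var citation = encodeURIComponent($('.citation').text());
  $('a.problem-report').attr('href', function (i, href) {
    return href + '?citation=' + citation;
  });
});
```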
One of the challenges of implementing these changes was the lack of a test environment, because of the limited preview capabilities in 360 Link. Any changes made required an overnight refresh before going live, opening up the risk of 24-hour windows of broken resource links. So, they created their own test environment by turning test scenarios into static HTML files and wrapping them in their own custom PHP to mimic the live pages without having to touch the live pages.
[At this point, it got really techy and lost me. Contact the presenters for details if you’re interested. They’re looking to go live with this as soon as they figure out a low-use time that will have minimal impact on their users.]
Customizing 360 Link menu with jQuery (Laura Wrubel, George Washington University)
They wanted to give better visual cues for users, emphasize the full-text, have more local control over links, and achieve visual integration with other library tools so the experience is more seamless for users.
They started with Reidsma’s code, then forked off from it. They added a problem-report link to a Google form, fixed ebook chapter links and citation formatting, created conditional links to the catalog, and linked to their other library’s link resolver.
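For illustration, passing the citation into a prefilled Google form can be done with URL parameters. This sketch is mine, not GWU’s code; the form URL and entry IDs are invented placeholders, though the entry.<fieldId> prefill mechanism itself is how Google Forms works.

```javascript
// Sketch only: the form URL and entry IDs below are hypothetical placeholders.
jQuery(function ($) {
  var formBase = 'https://docs.google.com/forms/d/e/EXAMPLE_FORM_ID/viewform';
  var citation = $('.citation').text().trim();
  var pageUrl  = window.location.href;

  var problemUrl = formBase +
    '?entry.111111=' + encodeURIComponent(citation) +   // citation field
    '&entry.222222=' + encodeURIComponent(pageUrl);     // page URL field

  // Append a "Report a problem" link near the full-text options.
  $('<a/>', { href: problemUrl, text: 'Report a problem with this link' })
    .appendTo('.fulltext-links');
});
```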
They hope to continue to tweak the language on the page, particularly for ILL suggestion. The coverage date is currently hidden behind the details link, which is fine most of the time, but sometimes that needs to be displayed. They also plan to load the print holdings coverage dates to eliminate confusion about what the library actually has.
In the future, they would rather use the API and blend the link resolver functionality with catalog tools.
Custom document delivery services using 360 Link API (Kathy Kilduff, WRLC)
They facilitate inter-consortial loans (Consortium Loan Service), and originally requests were only done through the catalog. When they started using SFX, they added a link there, too. Now that they have 360 Link, they still have a link there, but now the request form is prepopulated with all of the citation information. In the background, they are using the API to gather the citation information, as well as checking to see if there are terms of use, and then checking to see if there are ILL permissions listed. They provide a link to the full-text in the staff client developed for the CLS if the terms of use allow for ILL of the electronic copy. If there isn’t a copy available in WRLC, they forward the citation information to the user’s library’s ILL form.
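Roughly, the lookup step works like this. The sketch below is my own, not WRLC’s staff client: the host-name pattern and OpenURL parameters are how I remember the 360 Link XML API working, so verify them against the official API documentation, and the client code “example” is a placeholder.

```javascript
// Illustrative lookup against a 360 Link-style XML API (endpoint shape is an
// assumption; check your own Client Center/API documentation).
async function lookupCitation(issn, volume, issue, spage) {
  const base = 'http://example.openurl.xml.serialssolutions.com/openurlxml';
  const params = new URLSearchParams({
    version: '1.0',
    url_ver: 'Z39.88-2004',
    'rft.genre': 'article',
    'rft.issn': issn,
    'rft.volume': volume,
    'rft.issue': issue,
    'rft.spage': spage,
  });

  const response = await fetch(`${base}?${params}`);
  const xml = await response.text();

  // A real staff client would parse the XML for citation fields, holdings
  // coverage, and article/journal-level URLs, then check license terms
  // (e.g., ILL permissions) before offering the full text.
  return xml;
}

lookupCitation('0028-0836', '472', '7342', '163').then(console.log);
```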
License information for course reserves for faculty (Shanyun Zhang, Catholic University)
They included course reserves in the license information, but it then became an issue to convey that information to faculty who were used to negotiating it with publishers directly. Most faculty prefer to use Blackboard for course readings and handle it themselves, but the library needs to figure out how to be incorporated into that workflow. Looking for suggestions from the group.
Advanced Usage Tracking in Summon with Google Analytics (Kun Lin, Catholic University)
In order to tweak the user experience, you need to know who, what, when, how, and most importantly, what were they thinking. Google Analytics can help figure those things out in Summon. URL parameters are an easy way to track facets, and you can use the data from Google Analytics to piece together the story from them. Tracking things the “hard way,” you can use the conversion/goal function of Google Analytics, but you’ll need to know a little about coding to make it work, because you have to add some JavaScript to your Summon pages.
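A minimal example of the kind of JavaScript involved: recording facet clicks as events, using the classic ga.js async syntax that was current at the time. The .facet-value selector is a placeholder for whatever element wraps Summon’s facet links, and the property ID is obviously fake.

```javascript
// Illustrative only; Universal Analytics would use ga('send', 'event', ...)
// instead of the classic _gaq queue shown here.
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-XXXXXXX-1']);
_gaq.push(['_trackPageview']);

jQuery(function ($) {
  // Record each facet refinement as a GA event so you can see which facets
  // (content type, discipline, date range, etc.) users actually touch.
  $(document).on('click', '.facet-value', function () {
    _gaq.push(['_trackEvent', 'Summon facets', 'refine', $(this).text().trim()]);
  });
});
```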
Use of ERM/KB for collection analysis (Mitzi Cole, NASA Goddard Library)
She used the overlap analysis to compare print holdings with electronic and downloaded the report. A partial overlap can actually be a full overlap if the coverage dates aren’t formatted the same way, but otherwise it’s a decent report. She incorporated license data from Resource Manager and print collection usage pulled from her ILS. This allowed her to create a decision tool (a spreadsheet) that breaks print usage into 5-year increments, dropping the previous 5 years of use with each increment (this showed a drop in use over time for the titles of concern).
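My reading of the 5-year increments, sketched out below. This is my interpretation, not her spreadsheet; usageByYear is a hypothetical object of checkouts keyed by year, pulled from the ILS.

```javascript
// Bucket print circulation into successive five-year windows, dropping the
// more recent window each step, so a title whose use is fading shows
// progressively different totals across the windows.
function fiveYearWindows(usageByYear, endYear, windows) {
  const result = [];
  for (let w = 0; w < windows; w++) {
    const windowEnd = endYear - 5 * w;
    const windowStart = windowEnd - 4;
    let total = 0;
    for (let y = windowStart; y <= windowEnd; y++) {
      total += usageByYear[y] || 0;
    }
    result.push({ years: `${windowStart}-${windowEnd}`, checkouts: total });
  }
  return result;
}

// Example: a title with declining recent use, most recent window first.
console.log(fiveYearWindows({ 2010: 2, 2008: 1, 2003: 6, 1999: 9 }, 2012, 3));
```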
Discussion of KnowledgeWorks Management/Metadata (Ben Johnson, Lead Metadata Librarian, SerialsSolutions)
After they get the data from the provider or it is made available to them, they have a system to automatically process the data so it fits their specifications, and then it is integrated into the KB.
They deal with a lot of bad data. 90% of databases change every month. Publishers have their own editorial policies that display the data in certain ways (e.g., title lists) and deliver inconsistent, and often erroneous, metadata. The KB team tries to catch everything, but some things still slip through. Throughout the data ingestion process, they apply rules based on past experience with the data source. After that, the data is normalized so that various title/ISSN/ISBN combinations can be associated with the authority record. Finally, the data is incorporated into the KB.
Authority rules are used to correct errors and inconsistencies. Rules automatically and consistently correct holdings, and they are often used to correct vendor reporting problems. Rules are codified per provider and database, with 76,000+ rules applied to thousands of databases, and 200+ new rules added each month.
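Conceptually, a rule is just a provider- and database-scoped correction applied to incoming holdings rows. The sketch below is purely illustrative and is not SerialsSolutions’ actual rule engine; the provider, database, and ISSNs are invented.

```javascript
// A toy version of provider/database-scoped correction rules.
const rules = [
  {
    provider: 'Example Host',
    database: 'Example Journals Complete',
    // The provider reports this title under a stale ISSN; map it to the
    // ISSN on the authority record.
    match: (row) => row.issn === '0000-0000',
    fix: (row) => ({ ...row, issn: '1234-5678' }),
  },
];

function applyRules(provider, database, rows) {
  return rows.map((row) => {
    for (const rule of rules) {
      if (rule.provider === provider && rule.database === database && rule.match(row)) {
        row = rule.fix(row);
      }
    }
    return row;
  });
}
```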
Why does it take two months for KB data to be corrected when I report it? Usually it’s because they are working with the data providers, and some respond more quickly than others. They are hoping that involvement with initiatives like KBART will help fix the data at the provider end, so they don’t have to worry about correcting it for us, and will also make those corrections easier by relying on standards.
Client Center ISSN/ISBNs don’t always work in 360 Link, which may have something to do with the authority record, but it’s unclear. It’s possible that some data in the Client Center hasn’t been normalized, which could cause this disconnect. And sometimes the provider doesn’t send both the print and electronic ISSN/ISBN.
What is the source for authority records for ISSN/ISBN? LC, Bowker, ISSN.org, but he’s not clear. Clarification: Which field in the MARC record is the source for the ISBN? It could be the source of the normalization problem, according to the questioner. Johnson isn’t clear on where it comes from.
You will not hear the magic rationale that will allow you to cancel all your A&I databases. The last three years of analysis at her institution have resulted in only two cancellations.
Background: she was a science librarian before becoming an administrator, and has a great appreciation for A&I searching.
Scenario: a subject-specific database with low use had been accessed on a per-search basis, but going forward it would be sole-sourced and subscription based. Given that, their cost per search was going to increase significantly. They wanted to know if Summon would provide a significant enough overlap to replace the database.
Arguments: it’s key to the discipline, specialized search functionality, unique indexing, etc… but there’s no data to support how these unique features are being used. Subject searches in the catalog were only 5% of what was being done, and most of them came from staff computers. So, are our users actually using the controlled vocabularies of these specialized databases? Finally, librarians think they just need to promote these more, but sadly, that ship has already sailed.
Beyond usage data, you can also look at overlap with your discovery service, and also identify unique titles. For those, you’ll need to consider local holdings, ILL data, impact factors, language, format, and publication history.
Once they did all of that, they found that 92% of the titles were indexed in their discovery service. The depth of the backfile may be an issue, depending on the subject area. Also, you may need to look at the level of indexing (cover-to-cover vs. selective). For the 8% of titles not included, they owned most of them in print and they were rather old. 15% of that 8% had impact factors, which may or may not be relevant, but it is something to consider. And most of those titles were non-English. They also found that there were no ILL requests for the non-owned unique titles, and less than half were scholarly and currently being published.
The US ISSN Center (part of the Library of Congress) is primarily responsible for assigning ISSN and metadata to journals. They work with the international ISSN network as well as answering questions from libraries and publishers.
R.R. Bowker is a subdivision of ProQuest. They create metadata for libraries and publishers and make several products, like Ulrich’s. They have one employee working in the ISSN Center who assigns ISSNs, creates CONSER records, screens incoming requests, and solves problems. Initially, that person worked mostly with the Ulrich’s database, and now works with the SerialsSolutions database.
The relationship began over metadata. Bowker wanted data for Ulrich’s, and the ISSN Center needed more staff to do the work. The contract is non-exclusive, but to date, no other company has been involved. LoC would like to develop more relationships like this, in part to reduce duplication of effort.
The benefit for publishers is a one-stop shop for ISSN registration, Ulrich’s inclusion, and CONSER records in WorldCat. The serials community benefits from having a standard number for each publication, fuller records, and an advocate for gathering that metadata.
Some of the challenges have been technological (firewalls, different computers, etc.) and administrative (different bosses, holidays, etc.). They are looking for better ways of coordinating the ISSN record creation and the Ulrich’s record creation.
Standards matter. Common data elements can be mapped.
More detail in the current issue of Serials Review.
When creating a digital repository, start small. They started initially with getting faculty publications up. This required developing strong relationships with them.
While you may be starting small, you need to dream big as well. What else can you do?
Get support. Go to the offices of the people you want to work with. Get familiar with the administrative assistants. But, be ready to do everything yourself.
Plan ahead, but write it in pencil. The organizational structure of your repository needs to be flexible enough to handle outside changes.
Once you get going, take time to assess how it’s doing. Get familiar with the reports available in your system, or use tools like Google Analytics. Don’t rely only on anecdotes. Use both numbers and stories.
I’ve been seeing many of my friends and peers jump ship and move their social/online bookmarks to other services (both free and paid) since the Yahoo leak about Delicious being in the sun-setting category of products. Given the volume of outcry over this, I was pretty confident that either Yahoo would change their minds or someone would buy Delicious or someone would replicate Delicious. So, I didn’t worry. I didn’t freak out. I haven’t even made a backup of my bookmarks, although I plan to do that soon just because it’s good to have backups of data.
Now the word is that Delicious will be sold, which is probably for the best. Yahoo certainly didn’t do much with it after they acquired it some years ago. But, honestly, I’m pretty happy with the features Delicious has now, so really don’t care that it hasn’t changed much. However, I do want it to go to someone who will take care of it and continue to provide it to users, whether it remains free or becomes a paid service.
I looked at the other bookmark services out there, and in particular those recommended by Lifehacker. Frankly, I was unimpressed. I’m not going to pay for a service that isn’t as good as Delicious, and I’m not going to use a bookmarking service that isn’t integrated into my browser. I didn’t have much use for Delicious until the Firefox extension, and now it’s so easy to bookmark and tag things on the fly that I use it quite frequently as a universal capture tool for websites and gift/diy ideas.
The technorati are a fickle bunch. I get that. But I can’t help feeling disappointed in how quickly they jumped ship and stayed on the raft even when it became clear that it was just a leaky faucet and not a hole in the hull.
Speakers: Mike Ridley, Donna Scheeder, & Jim Peterson (moderated by Jane Dysart)
Ridley sees his job as leveraging information and economics to move the institution forward. Scheeder combines information management and technology to support their users. Peterson is from a small, rural library system where he manages all of the IT needs. (regarding his director: “I’m the geek, she’s the wallet.”)
Ridley
“I just want to remind you that if you think my comments are a load of crap, that’s a good thing.” Mike Ridley, referencing yesterday’s keynote about the hidden treasure of bat guano in libraries.
Information professionals have ways of thinking about how we do what we do, but our user populations have different perspectives. The tribal identities can be challenging when it comes to communicating effectively.
The information age is over. We’ve done that. But we’re still hanging on to it, even though everyone is in the information business. We need to leave that metaphor behind.
This is the age of imagination. What can we do differently? How will we change the rules to make a better world?
Open organizations are the way to go. Command and control organizations won’t get us to where we need to be in this age of imagination. We need to be able to fail. We are completely ignorant of how this will play out, and that opens doors of possibilities that wouldn’t otherwise be there.
Scheeder
It’s challenging to balance the resource needs of diverse user groups. You can add value to information by deeply understanding your users, your resources, and the level of risk that is acceptable.
There’s a big movement towards teleworking in the government. This can change your culture and the way you deliver services. Also, the proliferation of mobile devices among the users creates challenges in delivering content to them.
There’s a constant push and pull among the disciplines to get what they want.
Finally, security requirements make outside collaboration difficult. They want to be open, but they also have to protect the assets they were entrusted with.
Peterson
We all have computers, servers, and patrons, so under the hood we’re all the same.
The ability that IT has to cut power consumption costs can really help you out. Technology upgrades will increase productivity and decrease energy costs. In general, if it’s generating heat, it’s wasting electricity. Open source software can save on those costs, particularly if you have tech support that can manage it.
IT is more than just the geek you call when you have a tech problem. We’re here to help you save money.
Dysart’s questions
What’s the future of libraries?
Scheeder: The screen is the library now, so the question is where do we want the library. The library should be where people have their “dwell time.”
Ridley: The internet is going to get so big that it will disappear as a separate entity. Libraries will be everywhere, no matter what you’re doing. The danger is that libraries may disappear, so we need to think about value in that sphere.
Peterson: Libraries of the future are going to be most valuable as efficient information providers.
Tips for financing resources?
Peterson: Show a solid business model for the things you need.
Scheeder: Figure out how the thing you want to do aligns with the greater good of the organization. Identify how the user experience will improve. Think like the decision-makers and identify the economic reality of the organization.
Ridley: Prefers “participant” to “user”. Make yourself visible to everyone in your organization. Bridge the gap between tribes.
Anything else?
Peterson: If we don’t talk to our legislators then we won’t have a voice and they won’t know our needs.
Scheeder: Information professionals have the opportunity to optimize content so it is findable by search engines, create taxonomies, and manage the digital lifecycle. We need to do better about preserving the digital content being created every moment.
Ridley: Go out and hire someone like Peterson. We need people who can understand technology and bridge the divide between IT and users.
Presenters: Jason Price, Claremont Colleges Library and SCELC Consortium
KBART stands for Knowledge Bases and Related Tools (a joint NISO/UKSG working group). Standards and best practices are challenging to move forward, so why should we “back this horse” versus something else?
This group is a collection of publishers, aggregators, knowledge base vendors, and librarians who want to create a universally acceptable holdings data format. Phase one of the report came out in January of this year, and the endorsement phase begins this month.
KBART expresses title level coverage by date and volume/issue. It’s a single solution for sharing holdings data across the scholarly communications supply chain. Essentially, it’s a simple metadata exchange format.
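For a sense of how simple the format is, here is one hypothetical title expressed with the Phase I fields as I recall them. An actual KBART file is just one tab-delimited row per title with these columns in order; check the published recommended practice for the authoritative field list, and note that all of the values below are invented for illustration.

```javascript
// One KBART row, shown as an object for readability; in the real file these
// are tab-separated columns in this order.
const kbartRow = {
  publication_title: 'Journal of Example Studies',
  print_identifier: '1234-5678',        // print ISSN
  online_identifier: '2345-6789',       // online ISSN
  date_first_issue_online: '1995-01-01',
  num_first_vol_online: '1',
  num_first_issue_online: '1',
  date_last_issue_online: '',           // blank = coverage is current
  num_last_vol_online: '',
  num_last_issue_online: '',
  title_url: 'http://www.example.com/jes',
  first_author: '',                     // typically blank for serials
  title_id: 'jes',
  embargo_info: '',                     // e.g., a rolling embargo code
  coverage_depth: 'fulltext',
  coverage_notes: '',
  publisher_name: 'Example Press',
};
```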
It wasn’t a simple process to get to this schema. They thought about all of the data in knowledge bases, how data is transferred to and from other sources, and the role of licensing in this process. When a publisher produces content, it flows to hosts/databases then gateways then knowledge bases and then catalogs/lists/guides.
When a user has a citation, they initiate a process that queries the knowledge base, which returns a list of access points. However, this breaks down when the holdings information is incorrect or even worse, when it’s missing. We get stuck with a lot of inaccuracies and manual work. At some point, it gets to be too much to keep up with.
Everyone is working with the same kind of data, albeit slightly customized at a local level. If we can begin to move toward a standard way of distributing the data, we can then look at automating this process.
KBART is the end to our role as translators – no more badgering publishers for complete lists, no more teasing out title changes (including former titles and ISSNs), no more waiting for the knowledge base data team to translate the data, and no more out of date access lists.
What can librarians do? Learn more about KBART. Insist on “knowing” what you’re buying (require annual delivery of a usable holdings list before you pay). Enable publisher sales staff to make the case to their companies – show them that use goes up when holdings are accurately represented in link resolvers. Follow up with continued requests as necessary.
The American Institute of Physics implemented the KBART standard on their own, and they’ve now officially joined the group. On the other hand “A Big Publisher” recognizes the problem, but they need to establish the priority of the change, which includes getting their hosting service to make appropriate changes. So, they need to hear from all of us about this.
Publishers who are interested should review the requirements and format the content availability data to meet those requirements. Check your work and make it available to customers. And, of course, register as a KBART member.
Vision for KBART: Currently in phase one of standardizing. Phase two is more content type coverage. Phase three is a dream of incorporating metadata distribution for consortia and institutional level holdings based on what is accessible from a particular IP.
Speakers: “Collector in Chief” David Ferriero interviewed by Paul Holdengräber
Many people don’t know what the archivist does. They often think that the National Archives are a part of the Library of Congress. In fact, the agency is separate.
Ferriero is the highest-ranking librarian in the administration; the post usually goes to a historian or someone with connections to the administration. He was surprised to get the appointment and had been expecting to head the IMLS instead.
He is working to create a community around the records and how they are being used; his blog talks about creating citizen archivists. In addition, he is working to declassify 100 million documents a year. There is an enormous backlog of these documents going back to WWII. Each record must be reviewed by the agency that initially classified it, and there are 2,400 classification guides that are supposed to be reviewed every five years, but around 50% of them have not been.
You can’t have an open government if you don’t have good records. When records are created, they need to be ready to migrate formats as needed. There will be a meeting between the chief information officers and the record managers to talk about how to tackle this problem. These two groups have historically not communicated very well.
He’s also working to open up the archives to groups that we don’t often think of being archive users. There will be programs for grade school groups, and more than just tours.
Large digitization projects with commercial entities lock up content for periods of time, including national archives. He recognizes the value that commercial entities bring to the content, but he’s concerned about the access limitations. This may be a factor in what is decided when the contract with Ancestry.com is up.
“It’s nice having a boss down the street, but not, you know, in my face.” (on having not yet met President Obama)
Ferriero thinks we need to save smarter and preserve more digital content.
Three options: do it yourself, gather and format to upload to a vendor’s collection database, or have the vendor gather the data and send a report (Harrassowitz e-Stats). Surprisingly, the second solution was actually more time-consuming than the first because the library’s data didn’t always match the vendor’s data. The third is the easiest because it’s coming from their subscription agent.
Evaluation: review cost data; set cut-off point ($50, $75, $100, ILL/DocDel costs, whatever); generate list of all resources that fall beyond that point; use that list to determine cancellations. For citation databases, they want to see upward trends in use, not necessarily cyclical spikes that average out year-to-year.
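The cut-off step is simple enough to sketch. The resource list here is invented sample data, and the $75 threshold is just one of the options mentioned above.

```javascript
// Compute cost per use and list every resource above a chosen cut-off.
function overThreshold(resources, cutoff) {
  return resources
    .map((r) => ({ ...r, costPerUse: r.cost / Math.max(r.uses, 1) }))
    .filter((r) => r.costPerUse > cutoff)
    .sort((a, b) => b.costPerUse - a.costPerUse);
}

const resources = [
  { title: 'Journal A', cost: 1200, uses: 8 },
  { title: 'Journal B', cost: 900, uses: 300 },
  { title: 'Database C', cost: 5000, uses: 45 },
];

// Using $75 as the cut-off (or an ILL/doc-delivery cost, per the notes above).
console.log(overThreshold(resources, 75));
```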
Future: Need more turnaway reports from publishers, specifically journal publishers. COUNTER JR5 will give more detail about article requests by year of publication. COUNTER JR1 & BR1 combined report – don’t care about format, just want download data. Need to have download information for full-text subscriptions, not just searches/sessions.
Speaker: Benjamin Heet, librarian
He is speaking about the University of Notre Dame’s statistics philosophy. They collect JR1 full-text downloads – they’re not into database statistics, mostly because federated searching messes them up. Impact factors and Eigenfactors are hard to evaluate. He asks, “can you make questionable numbers meaningful by adding even more questionable numbers?”
At first, he was downloading the spreadsheets monthly and making them available on the library website. He started looking for a better way, whether that was to pay someone else to build a tool or do it himself. He went with the DIY route because he wanted to make the numbers more meaningful.
Avoid junk in, junk out: whether users get HTML or PDF downloads depends on the platform setup. Pay attention to outliers and watch for spikes that might indicate unusual use by an individual. The reports often contain bad or duplicated data.
CORAL Usage Statistics – a locally developed program that gives them a central location to store usernames & passwords. He downloads reports quarterly now, and the public interface allows other librarians to view the stats in readable reports.
Speaker: Justin Clarke, vendor
Harvesting reports takes a lot of time and requires some administrative costs. SUSHI is a vehicle for automating the transfer of statistics from one source to another. However, you still need to look at the data. Your subscription agent has a lot more data about the resources than just use, and can combine the two together to create a broader picture of the resource use.
Harrassowitz starts with acquisitions data and matches the use statistics to that. They also capture things like publisher changes and title changes. Cost per use is not as easy as simple division – packages confuse the matter (one possible allocation approach is sketched below).
High use could be the result of class assignments or of hackers/hoarders. Low use might reflect politically motivated purchases or support for a new department. You need a reference point for cost. Pricing from publishers seems to have no rhyme or reason, and your price is not necessarily the list price. Multi-year analysis and subject-based analysis look at local trends.
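One possible way to handle the package problem (my sketch, not necessarily how Harrassowitz or anyone else actually does it) is to allocate the package price across titles in proportion to their list prices and then compute cost per use from each title’s allocated share.

```javascript
// Allocate a package price across titles by list-price share, then compute a
// per-title cost per use from the allocated amounts. All figures are invented.
function packageCostPerUse(packagePrice, titles) {
  const totalList = titles.reduce((sum, t) => sum + t.listPrice, 0);
  return titles.map((t) => {
    const allocated = packagePrice * (t.listPrice / totalList);
    return {
      title: t.title,
      allocatedCost: Number(allocated.toFixed(2)),
      costPerUse: Number((allocated / Math.max(t.uses, 1)).toFixed(2)),
    };
  });
}

console.log(packageCostPerUse(20000, [
  { title: 'Journal X', listPrice: 12000, uses: 1500 },
  { title: 'Journal Y', listPrice: 8000, uses: 300 },
  { title: 'Journal Z', listPrice: 4000, uses: 15 },
]));
```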
Rather than usage statistics, we need useful statistics.