Speakers: Roën Janyk (Okanagan College) & Emma Lawson (Langara College)
Two new-ish librarians talk about applying their LIS training to the real world, and using the Core Competencies as a framework for identifying the gaps they encountered. They wanted to determine if the problem is training or if eresources/serials management is just really complicated.
Collection development, cataloging (both MARC and Dublin Core), records management, and digital management were covered in their classes. They needed more on institutional repository management.
They did not cover licensing at all, so everything they learned was on the job, comparing different documents. They also learned that the things librarians look for in contracts are not what college administrators are concerned about. In addition, the details of budgeting information and where that information should be stored were fuzzy, and it took some time to gather that in their jobs. And, as with many positions, if institutional memory (and logins) is not passed on, a lot of time will be spent recreating it. For LIS programs, they wish they had more information about the details of use statistics and their application, as well as resource format types and the quirks that come with them.
They had classes about information technology design and bigger-picture things, but not enough about relationships between the library and IT or the kinds of information technology in libraries now. Some courses focused on less relevant technology and the history of technology, and the higher-level courses had too steep a learning curve to attract LIS students.
For the core competency on research analysis and application, we need to be able to gather appropriate data and present the analysis to colleagues and superiors in a way that they can understand it. In applying this, they ran into questions about comparing eresources to print, deciding when to keep a low-use resource, and other common criteria for comparing collections besides cost/use. In addition, there needs to be more taught about managing a budget, determining when to make cancellation or format change decisions, alternatives to subscriptions, and communicating all of this outside of the library.
Effective communication touches on everything that we do. It requires that you frame situations from someone else’s viewpoint. You need to document everything and be able to clearly describe the situation in order to troubleshoot with vendors. Be sympathetic to the frustrations of users encountering the problems.
Staff supervision may range from teams with no managerial authority to staff who report to you. ER librarians have to be flexible and work within a variety of departmental/project frameworks, and even if they do have management authority, they will likely have to manage projects that involve staff from other departments/divisions/teams. They did not find that the library management course was very applicable. The project management class was much more useful. One main challenge is staff who have worked in the library for a long time, and change management or leadership training would be very valuable, as well as conversations about working with unionized staff.
In the real world being aware of trends in the profession involves attending conferences, participating in webinars/online training, and keeping up with the literature. They didn’t actually see an ERMS while in school, nor did they work with any proprietary ILS. Most of us learn new things by talking to our colleagues at other institutions. MLS faculty need to keep up with the trends as well, and incorporate that into classes — this stuff changes rapidly.
They recommend that ILS and ERMS vendors collaborate with MLS programs so that students have some real-world applications they can take with them to their jobs. Keep courses current (what is actually being used in libraries) and constantly evaluate the curriculum, which goes beyond what ALA requires for accreditation. More case studies and real-world experiences in applied courses. The collection development course was too focused on print collection analysis and did not cover electronic resources.
As a profession, we need more sessions at larger, general conferences that focus on electronic resources so that we’re not just in our bubble. More cross-training in the workplace. MLS programs need to promote eresources as a career path, instead of just the traditional reference/cataloger/YA divides.
If we are learning it all on the job, then why are we required to get the degrees?
Updates from Serials Solutions – mostly Resource Manager (Ashley Bass):
Keep up to date with ongoing enhancements for management tools (quarterly releases) by following answer #422 in the Support Center, and via training/overview webinars.
Populating and maintaining the ERM can be challenging, so they focused a lot of work this year on that process: license template library, license upload tool, data population service, SUSHI, offline date and status editor enhancements (new data elements for sort & filter, new logic, new selection elements, notes), and expanded and additional fields.
Workflow, communication, and decision support enhancements: in-context help linking, contact tool filters, navigation, new COUNTER reports, more information about vendors, COUNTER summary page, etc. Her favorite new feature is “deep linking” functionality (aka persistent links to records in SerSol). [I didn’t realize that wasn’t there before — been doing this for my own purposes for a while.]
Next up (in two weeks, 4th quarter release): new alerts, resource renewals feature (reports! and checklist!, will inherit from Admin data), Client Center navigation improvements (i.e. keyword searching for databases, system performance optimization), new license fields (images, public performance rights, training materials rights) & a few more, COUNTER updates, SUSHI updates (making customizations to deal with vendors who aren’t strictly following the standard), gathering stats for Springer (YTD won’t be available after Nov 30 — up to Sept avail now), and online DRS form enhancements.
In the future: license API (could allow libraries to create a different user interface), contact tools improvements, interoperability documentation, new BI tools and reporting functionality, and improving the Client Center.
Also, building a new KB (2014 release) and a web-scale management solution (Intota, also coming 2014). They are looking to have more internal efficiencies by rebuilding the KB, and it will include information from Ulrich’s, new content types metadata (e.g. A/V), metadata standardization, industry data, etc.
Summon Updates (Andrew Nagy):
I know very little about Summon functionality, so just listened to this one and didn’t take notes. Take-away: if you haven’t looked at Summon in a while, it would be worth giving it another go.
Goal #1: Allow users to easily link to full-text resources. Solution: Go beyond the out-of-the box 360 Link display.
Goal #2: Allow users to report problems or contact library staff at the point of failure. Solution: eresources problem report form
They created the eresources problem report form using Drupal. The fields include contact information, description of the resource, description of the problem, and the ability to attach a screenshot.
Some enhancements included: making the links for full-text (article & journal) buttons, hiding additional help information and giving some hover-over information, parsing the citation into the problem report page, and moving the citation below the links to full-text. For journal citations with no full-text, they made the links to the catalog search large buttons with more text detail in them.
One of the challenges of implementing these changes was the lack of a test environment, given the limited preview capabilities in 360 Link. Any changes made required an overnight refresh before going live, opening the risk of 24-hour windows of broken resource links. So, they created their own test environment by converting test scenarios into static HTML files and wrapping them in their own custom PHP to mimic the live pages without having to work with the live pages.
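The presenters built their harness in PHP, but the idea is general. Here is a minimal sketch of the same approach in Python, with made-up markup and file names (nothing below is from the talk): a saved 360 Link results fragment, one per test scenario, gets wrapped in the shared page chrome so CSS/JavaScript customizations can be previewed without touching the live resolver.

```python
# Sketch only: mimic the live link-resolver page around static test fixtures.
# PAGE_TEMPLATE stands in for the header/footer chrome of the real 360 Link
# page; each scenario is a static HTML fragment saved from a live result.

PAGE_TEMPLATE = """<html>
<head>
  <link rel="stylesheet" href="custom.css">
  <script src="custom.js"></script>
</head>
<body>{body}</body>
</html>"""

def wrap_fixture(fixture_html):
    """Wrap a saved results fragment in the shared page chrome."""
    return PAGE_TEMPLATE.format(body=fixture_html)

# Hypothetical scenario: a record with one full-text link
fixture = '<div class="results"><a href="#">Full text online</a></div>'
page = wrap_fixture(fixture)
```

The wrapped pages can then be opened locally (or served by any static web server) to exercise the custom CSS and scripts against each scenario before the overnight refresh pushes changes live.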
[At this point, it got really techy and lost me. Contact the presenters for details if you’re interested. They’re looking to go live with this as soon as they figure out a low-use time that will have minimal impact on their users.]
Customizing 360 Link menu with jQuery (Laura Wrubel, George Washington University)
They wanted to give users better visual cues, emphasize the full-text, have more local control over links, and visually integrate with other library tools so it’s more seamless for users.
They started with Reidsma’s code, then forked off from it. They added a problem link to a Google form, fixed ebook chapter links and citation formatting, created conditional links to the catalog, and linked to their other library’s link resolver.
They hope to continue to tweak the language on the page, particularly for ILL suggestion. The coverage date is currently hidden behind the details link, which is fine most of the time, but sometimes that needs to be displayed. They also plan to load the print holdings coverage dates to eliminate confusion about what the library actually has.
In the future, they would rather use the API and blend the link resolver functionality with catalog tools.
Custom document delivery services using 360 Link API (Kathy Kilduff, WRLC)
License information for course reserves for faculty (Shanyun Zhang, Catholic University)
They included course reserve terms in the license information, but then it became an issue to convey that information to faculty who were used to negotiating it with publishers directly. Most faculty prefer to use Blackboard for course readings, and handle it themselves. But, they need to figure out how to incorporate the library in the workflow. Looking for suggestions from the group.
Advanced Usage Tracking in Summon with Google Analytics (Kun Lin, Catholic University)
Use of ERM/KB for collection analysis (Mitzi Cole, NASA Goddard Library)
Used the overlap analysis to compare print holdings with electronic and downloaded the report. The partial overlap can actually be a full overlap if the coverage dates aren’t formatted the same way, but otherwise it’s a decent report. She incorporated license data from Resource Manager and print collection usage pulled from her ILS. This allowed her to create a decision tool (a spreadsheet) that breaks print usage into 5-year increments, dropping the previous 5 years’ use with each increment (this showed a drop in use over time for titles of concern).
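A rough sketch of that spreadsheet logic, with hypothetical checkout counts (not data from the talk): print usage is summed in consecutive 5-year windows, dropping the oldest 5 years with each step, so a declining title shows shrinking totals.

```python
# Sketch of the decision-tool idea: sum yearly print checkouts in
# consecutive 5-year windows, oldest years first, so each increment
# drops the previous 5 years of use.

def usage_by_increment(yearly_use, window=5):
    """yearly_use: checkout counts ordered oldest -> newest.
    Returns the total for each consecutive window."""
    return [sum(yearly_use[i:i + window])
            for i in range(0, len(yearly_use) - window + 1, window)]

# Hypothetical 15 years of use for a "title of concern":
checkouts = [9, 8, 7, 6, 5, 4, 3, 3, 2, 2, 1, 1, 0, 0, 0]
print(usage_by_increment(checkouts))  # [35, 14, 2] — use drops over time
```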
Discussion of KnowledgeWorks Management/Metadata (Ben Johnson, Lead Metadata Librarian, SerialsSolutions)
After they get the data from the provider or it is made available to them, they have a system to automatically process the data so it fits their specifications, and then it is integrated into the KB.
They deal with a lot of bad data. 90% of databases change every month. Publishers have their own editorial policies that display the data in certain ways (e.g., title lists) and deliver inconsistent, and often erroneous, metadata. The KB team tries to catch everything, but some things still slip through. Through the data ingestion process, they apply rules based on past experience with the data source. After that, the data is normalized so that various title/ISSN/ISBN combinations can be associated with the authority record. Finally, the data is incorporated into the KB.
Authority rules are used to correct errors and inconsistencies. Rules automatically and consistently correct holdings, and they are often used to correct vendor reporting problems. Rules are codified per provider and database, with 76,000+ applied to thousands of databases, and 200+ new rules are added each month.
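As one illustration of the kind of check such a pipeline might apply before matching a vendor-supplied number to an authority record (this code is mine, not from the talk): normalizing an ISSN and validating its standard ISO 3297 check digit, which catches many of the typos and transpositions in publisher title lists.

```python
# Illustrative normalization rule: clean up a vendor-supplied ISSN and
# verify its check digit. The ISO 3297 check digit weights the first
# seven digits 8..2, sums them mod 11, and encodes a remainder of 10 as 'X'.

def normalize_issn(raw):
    """Return the canonical NNNN-NNNC form, or None if invalid."""
    s = raw.replace("-", "").replace(" ", "").upper()
    if len(s) != 8 or not s[:7].isdigit() or s[7] not in "0123456789X":
        return None
    total = sum(int(d) * w for d, w in zip(s[:7], range(8, 1, -1)))
    check = (11 - total % 11) % 11
    expected = "X" if check == 10 else str(check)
    return f"{s[:4]}-{s[4:]}" if s[7] == expected else None

print(normalize_issn("0028 0836"))  # Nature's ISSN -> "0028-0836"
print(normalize_issn("0028-0837"))  # bad check digit -> None
```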
Why does it take two months for KB data to be corrected when I report it? Usually it’s because they are working with the data providers, and some respond more quickly than others. They are hoping that involvement with initiatives like KBART will help fix data at the provider so they don’t have to correct it for us, while also making those corrections easier through the use of standards.
Client Center ISSN/ISBN doesn’t always work in 360 Links, which may have something to do with the authority record, but it’s unclear. It’s possible that there are some data in the Client Center that haven’t been normalized, and could cause this disconnect. And sometimes the provider doesn’t send both print and electronic ISSN/ISBN.
What is the source for authority records for ISSN/ISBN? LC, Bowker, ISSN.org, but he’s not certain. Clarification: which field in the MARC record is the source for the ISBN? According to the questioner, it could be the source of the normalization problem. Johnson isn’t sure where it comes from.
You will not hear the magic rationale that will allow you to cancel all your A&I databases. The last three years of analysis at her institution has resulted in only two cancellations.
Background: she was a science librarian before becoming an administrator, and has a great appreciation for A&I searching.
Scenario: a subject-specific database with low use had been accessed on a per-search basis, but going forward it would be sole-sourced and subscription based. Given that, their cost per search was going to increase significantly. They wanted to know if Summon would provide a significant enough overlap to replace the database.
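Hypothetical numbers (not from the talk) make the pricing problem concrete: per-search pricing scales with actual use, while a flat subscription for a low-use resource drives the effective cost per search way up.

```python
# Illustrative only — all figures are made up to show the shape of the problem.
searches_per_year = 400              # hypothetical low use
per_search_fee = 2.50                # hypothetical pay-per-search rate
subscription = 5000.00               # hypothetical flat subscription price

pay_per_search_total = searches_per_year * per_search_fee     # $1,000/year
effective_cost_per_search = subscription / searches_per_year  # $12.50/search

print(pay_per_search_total, effective_cost_per_search)
```

At those made-up numbers, the same usage costs five times as much under the subscription, which is why overlap with the discovery service became worth investigating.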
Arguments: it’s key to the discipline, specialized search functionality, unique indexing, etc. But there’s no data to support how these unique features are being used. Subject searches in the catalog were only 5% of what was being done, and most of them came from staff computers. So, are our users actually using the controlled vocabularies of these specialized databases? Finally, librarians think they just need to promote these more, but sadly, that ship has already sailed.
Beyond usage data, you can also look at overlap with your discovery service, and also identify unique titles. For those, you’ll need to consider local holdings, ILL data, impact factors, language, format, and publication history.
Once they did all of that, they found that 92% of the titles were indexed in their discovery service. The depth of the backfile may be an issue, depending on the subject area. Also, you may need to look at the level of indexing (cover-to-cover vs. selective). In the end, of the 8% of titles not indexed, they owned most of them in print and they were rather old. 15% of that 8% had impact factors, which may or may not be relevant, but it is something to consider. And most of the titles were non-English. They also found that there were no ILL requests for the non-owned unique titles, and fewer than half were scholarly and currently being published.
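The nested percentages are easier to follow with hypothetical counts (say the A&I database indexes 1,000 titles; these numbers are illustrative, not from the talk):

```python
# Working through the nested percentages with a made-up title count.
titles = 1000
indexed_in_discovery = round(titles * 0.92)     # 92% overlap -> 920 titles
not_indexed = titles - indexed_in_discovery     # the unique 8% -> 80 titles
with_impact_factor = round(not_indexed * 0.15)  # "15% of the 8%" -> 12 titles

print(indexed_in_discovery, not_indexed, with_impact_factor)
```

So the whole impact-factor question ends up being about a dozen titles out of a thousand, which helps explain why so few analyses lead to cancellations.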
The US ISSN Center (part of the Library of Congress) is primarily responsible for assigning ISSN and metadata to journals. They work with the international ISSN network as well as answering questions from libraries and publishers.
R.R. Bowker is a subdivision of ProQuest. They create metadata for libraries and publishers and several products like Ulrich’s. They have one employee working in the ISSN Center assigning ISSNs, creating CONSER records, screening incoming requests, and solving problems. Initially, they worked mostly with the Ulrich’s database, and now are working with the SerialsSolutions database.
The relationship began over metadata. Bowker wanted data for Ulrich’s, and the ISSN Center needed more staff to do it. The contract is non-exclusive, but to date, no other company has been involved. LoC would like to develop more relationships like this, in part to reduce duplication of effort.
The benefit for publishers is a one-stop shop for ISSN registration, Ulrich’s inclusion, and CONSER records in WorldCat. The serials community benefits from having a standard number for each publication, fuller records, and an advocate for gathering that metadata.
Some of the challenges have been technological (firewalls, different computers, etc.) and administrative (different bosses, holidays, etc.). They are looking for better ways to collaborate between the ISSN record creation and the Ulrich’s record creation.
Standards matter. Common data elements can be mapped.
When creating a digital repository, start small. They started initially with getting faculty publications up. This required developing strong relationships with them.
While you may be starting small, you need to dream big as well. What else can you do?
Get support. Go to the offices of the people you want to work with. Get familiar with the administrative assistants. But, be ready to do everything yourself.
Plan ahead, but write it in pencil. The organizational structure of your repository needs to be flexible to handle outside changes.
Once you get going, take time to assess how it’s doing. Get familiar with the reports available in your system, or use tools like Google Analytics. Don’t rely only on anecdotes. Use both numbers and stories.
I can’t help feeling disappointed in how quickly folks jumped ship and stayed on the raft even when it became clear that it was just a leaky faucet and not a hole in the hull.
I’ve been seeing many of my friends and peers jump ship and move their social/online bookmarks to other services (both free and paid) since the Yahoo leak about Delicious being in the sun-setting category of products. Given the volume of outcry over this, I was pretty confident that either Yahoo would change their minds or someone would buy Delicious or someone would replicate Delicious. So, I didn’t worry. I didn’t freak out. I haven’t even made a backup of my bookmarks, although I plan to do that soon just because it’s good to have backups of data.
Now the word is that Delicious will be sold, which is probably for the best. Yahoo certainly didn’t do much with it after they acquired it some years ago. But, honestly, I’m pretty happy with the features Delicious has now, so really don’t care that it hasn’t changed much. However, I do want it to go to someone who will take care of it and continue to provide it to users, whether it remains free or becomes a paid service.
I looked at the other bookmark services out there, and in particular those recommended by Lifehacker. Frankly, I was unimpressed. I’m not going to pay for a service that isn’t as good as Delicious, and I’m not going to use a bookmarking service that isn’t integrated into my browser. I didn’t have much use for Delicious until the Firefox extension, and now it’s so easy to bookmark and tag things on the fly that I use it quite frequently as a universal capture tool for websites and gift/diy ideas.
The technorati are a fickle bunch. I get that. But I can’t help feeling disappointed in how quickly they jumped ship and stayed on the raft even when it became clear that it was just a leaky faucet and not a hole in the hull.