Charleston 2012: bX Usage-Based Services

“Blogger recommend Sign” by Davich Klinadung

Speaker: Christine Stohn, bX project manager

There are two components — the recommender and hot articles.

This began in 2009 with the article recommender, and as of this year, it’s used by over 1100 institutions. This year they added the hot articles service, with “popularity reports,” and there is a mobile app for the hot articles service. Behind the scenes, there is the bX Data Lab, where they run experiments and quality control. They’re also interested in data mining researchers who might want to take the data and use it for their own work.

The data for bX comes from SFX users who actively contribute the data from user clicks at their institutions. It’s content-neutral, coming from many institutions.

bX is attempting to add some serendipity to searches that by definition require some knowledge of what you are looking for. When you find something from your searching, the bX recommender will find other relevant articles for you, based on what other people have used in the past. The hot articles component will list the most used articles from the last month that are on the same topic as your search result.
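The session doesn’t spell out the algorithm behind the recommender, but the idea of suggesting articles based on what other people have used together can be sketched as a simple co-occurrence count over anonymized usage sessions. Everything below (the session format, the DOI-style identifiers) is a hypothetical stand-in for the SFX click data bX actually aggregates, not Ex Libris’s implementation.

```python
from collections import Counter, defaultdict

def build_cooccurrence(sessions):
    """Count how often pairs of articles appear in the same usage session.

    `sessions` is an iterable of sets of article identifiers -- a made-up
    stand-in for the anonymized SFX click-through data bX collects."""
    co = defaultdict(Counter)
    for articles in sessions:
        for a in articles:
            for b in articles:
                if a != b:
                    co[a][b] += 1
    return co

def recommend(co, article_id, top_n=5):
    """Return the articles most often co-used with `article_id`."""
    return [article for article, _ in co[article_id].most_common(top_n)]

sessions = [
    {"doi:10.1000/a", "doi:10.1000/b", "doi:10.1000/c"},
    {"doi:10.1000/a", "doi:10.1000/b"},
    {"doi:10.1000/b", "doi:10.1000/d"},
]
print(recommend(build_cooccurrence(sessions), "doi:10.1000/a"))
# ['doi:10.1000/b', 'doi:10.1000/c'] -- b co-occurs twice, c once
```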

It currently works only with articles, but they are collecting data on ebooks that may eventually lead to the ability to recommend them as well.

The hot articles component is based on HILCC (Hierarchical Interface to Library of Congress Classification) subjects that have been assigned to journal titles, so it’s not as precise as the article-level recommender.
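Because the subject lives on the journal title rather than the article, a rough sketch of the hot articles ranking is just a usage count filtered by subject and by a one-month window. The event tuples and the ISSN-to-subject map below are assumed structures, not bX’s actual data model.

```python
from collections import Counter
from datetime import datetime, timedelta

def hot_articles(usage_events, journal_subjects, subject, days=30, top_n=10):
    """Rank the most-used articles in one HILCC subject over the last month.

    `usage_events` is an iterable of (timestamp, article_id, journal_issn)
    tuples and `journal_subjects` maps ISSN -> HILCC subject; both are
    invented stand-ins for bX's aggregated usage data."""
    cutoff = datetime.now() - timedelta(days=days)
    counts = Counter(
        article_id
        for timestamp, article_id, issn in usage_events
        if timestamp >= cutoff and journal_subjects.get(issn) == subject
    )
    return [article for article, _ in counts.most_common(top_n)]
```

Every article inherits the subject of its journal here, which is exactly why the results are coarser than the recommender’s article-level co-usage.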

You can choose to limit the recommendations to only your own holdings, but that limits discovery. Alternatively, you can display indicators that show whether each item is available locally or not.
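The two configurations amount to a filter versus an annotation. A minimal sketch, assuming a hypothetical set of locally held identifiers rather than any real link-resolver lookup:

```python
def apply_holdings(recommendations, local_holdings, restrict=False):
    """Either drop non-held items or annotate each with an availability flag.

    `local_holdings` is a hypothetical set of identifiers the institution
    holds; it stands in for whatever holdings check the real service does."""
    if restrict:
        # Narrower list, but the user never sees what they are missing.
        return [(item, True) for item in recommendations if item in local_holdings]
    # Full list, each item flagged as locally available or not.
    return [(item, item in local_holdings) for item in recommendations]
```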

It’s available in SFX, Primo, Scopus, and the ScienceDirect platform. Hot articles can be embedded in LibGuides.

Altmetrics will probably be incorporated to enhance the recommender service.

They are looking at article metrics calculated as a percentile rank per topic, which is more relevant today than the citations that may come five years down the road. It’s based on usage through SFX and bX, but not direct links or DOI links.
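As a rough illustration of a percentile rank per topic, the function below converts raw usage counts into the share of articles in the same topic with lower usage. The nested-dictionary input is an assumed shape for the SFX/bX usage data, not anything Ex Libris describes.

```python
from bisect import bisect_left

def percentile_rank_per_topic(usage_by_topic):
    """Turn raw usage counts into percentile ranks within each topic.

    `usage_by_topic` maps topic -> {article_id: usage_count}; this is an
    assumed shape, not the real bX data structure."""
    ranks = {}
    for topic, counts in usage_by_topic.items():
        ordered = sorted(counts.values())
        total = len(ordered)
        ranks[topic] = {
            # percentage of articles in the topic with strictly lower usage
            article: 100.0 * bisect_left(ordered, count) / total
            for article, count in counts.items()
        }
    return ranks

usage = {"chemistry": {"a1": 3, "a2": 10, "a3": 7}}
print(percentile_rank_per_topic(usage)["chemistry"])
# {'a1': 0.0, 'a2': 66.66..., 'a3': 33.33...}
```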

ER&L 2010: ERMS Success – Harvard’s experience implementing and using an ERM system

Speaker: Abigail Bordeaux

Harvard has over 70 libraries, and they are very decentralized. This implementation is for the central office that provides library systems services for all of the libraries. Ex Libris is their primary vendor for library systems, including the ERMS, Verde. They try to go with vended products and only develop in-house solutions if nothing else is available.

Success was defined as migrating data from the old system to the new one, improving workflow efficiency, providing more transparency for users, and working around any problems they encountered. They did not expect to have an ideal system – there were bugs in both the system and their local data. There is no magic bullet. They identified the high-priority areas and worked towards their goals.

Phase I involved a lot of project planning with clearly defined goals/tasks and assessment of the results. The team included the primary users of the system, the project manager (Bordeaux), and a programmer. A key part of planning includes scoping the project (Bordeaux provided a handout of the questions they considered in this process). They had a very detailed project plan using Microsoft Project, and at the very least, the listing out of the details made the interdependencies more clear.

The next stage of the project involved data review and clean-up. Bordeaux thinks that data clean-up is essential for any ERM implementation or migration. They also had to think about the ways the old ERM was used and if that is desirable for the new system.

The local system they created was very close to the DLF recommended fields, but even so, they still had several failed attempts to map the fields between the two systems. As a result, they settled into a cycle of extracting a small set of records, loading them into Verde, reviewing the data, and then deleting the test records from Verde. They did this several times with small data sets (10 or so records), and when they were comfortable with that, they increased the number of records.
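That extract / load / review / delete cycle is essentially an iterative dry run with growing batch sizes. A minimal sketch of the loop, where `load`, `review`, and `delete` are placeholder callables rather than any real Verde or Aleph API:

```python
def trial_migration(source_records, load, review, delete,
                    batch_sizes=(10, 50, 250)):
    """Test a field mapping by loading, reviewing, and removing small batches.

    `load`, `review`, and `delete` stand in for the real load into Verde,
    the manual data review, and the cleanup of test records; nothing here
    touches an actual Ex Libris system."""
    for size in batch_sizes:
        batch = source_records[:size]
        load(batch)               # push the test batch into the target system
        problems = review(batch)  # check how the mapped fields came through
        delete(batch)             # remove the test records before the next round
        if problems:
            return problems       # fix the mapping before scaling up further
    return []
```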

They also did a lot of manual data entry. They were able to transfer a lot, but they couldn’t do everything, and some bits of data were not migrated because the work involved outweighed their value. In some cases, though, they did want to keep the data, so they entered it manually. To help visualize the mapping process, they created screenshots with notes that showed the field connections.

Prior to this project, they were not using Aleph to manage acquisitions. So, they created order records for the resources they wanted to track. The acquisitions workflow had to be reorganized from the ground up. Oddly enough, by having everything paid out of one system, the individual libraries have much more flexibility in spending and reporting. However, it took some public relations work to get the libraries to see the benefits.

Looking closely at the data for this project also gave them a better idea of gaps in their resource information and of other potential projects.

Phase two began this past fall, incorporating the data from the libraries that did not participate in phase one. They now have a small group with representatives from those libraries, which is coming up with best practices for license agreements and for entering data into the fields.
