apathy in our fourth (fifth?) decade of the serials crisis

The 2018 periodicals price survey has been published, and it’s not going to tell you anything you didn’t know already if you have been paying any attention to the scholarly publishing industry. It is a gratifying read only in that it conveys the mix of pessimism, despair, and apathy that I feel at this point when we talk about the unsustainable pricing models for subscription resources in libraries. Or when I am using this data to support our annual budget request that I know will not be enough even if they grant it.

Sometimes I want to burn it all to the ground. Cancel everything with a price increase above CPI-W. But I can’t, because the only people it will hurt are students (faculty can and do get copies of anything they want from colleagues elsewhere). And the publishers know this. And they gleefully take more money from us.

quantified self, an addendum

Digital Body Fat Weight Scale by Balance

Yesterday I shared a list of apps and tools I’m using to monitor and track things, mainly health-related. Well, my Amazon packages arrived last night, and I now have a new scale. The old one started acting weird a week or two ago, coinciding with what appeared to be a three or four pound gain in weight in a week. The new scale indicates my weight is right around where it was before my old scale went haywire. So, that’s reassuring.

My new scale also measures body fat %, muscle mass %, bone density %, and water weight %. As I mentioned yesterday, I already have a hand-held body fat monitoring tool, but I was curious whether the electrical impulses running from foot to foot would produce different readings than those running from hand to hand. Sure enough, my body fat % is much higher on the scale than with the hand-held device. For my own tracking purposes, I’m recording the average between the two.

the quantified self

Over the past few years, I’ve been using a variety of apps and devices to keep track of all sorts of things about myself, primarily related to my health and well-being. It’s been on my mind lately that you might be interested in these as well, so here’s a brief run-down of what I’m using today.

Fitbit One

The Fitbit One is no longer manufactured by Fitbit, which is too bad. I’ve had one of these devices for a little over five years (thanks to Marie), and I’m on my third one with a fourth in reserve. It’s small enough to fit in my pocket, even with the holder/clip. It tracks my steps fairly accurately (I’ve tested it periodically), as well as distance covered and the quality of my sleep. It has been my primary health app/device for most of the time I’ve been measuring myself, and though it has its limitations, I still appreciate the core functions. [Side note: I have tried one of the wristband style trackers and I didn’t like it. The neoprene strap made my wrist sweaty, and the step counts seemed less accurate. I liked the heart-rate monitor aspect, but not enough to deal with the annoyances of a thing on my wrist. How I managed to wear a wristwatch for most of the first quarter of my life, I can’t imagine now.]

SyncSolver app icon

Fitbit decided to not play nicely with the Apple Health universe, but another app developer built SyncSolver to fix that. Since I have my Fitbit on me more than my phone (and it seems to be more accurate than the built-in pedometer on the iPhone), I use this to sync my steps to Apple Health for other apps to read. More on those below.

SleepCycle regular sleep graph

A little over a year ago, I became concerned about the quality of my sleep. I downloaded an app that I no longer use and can’t remember the name of to track my snoring, which was far more frequent and vociferous than I thought. I began experimenting with things to improve my sleep quality, from nasal strips (not helpful) to a contoured memory foam pillow (helpful). In the process, I ran across the SleepCycle app. It’s a smart alarm that listens to your sleep and, based on its programming, can determine where you are in your sleep cycle throughout the night. As it gets closer to when the alarm is set (within half an hour, to be exact), if it sounds like you are in a lighter part of your sleep cycle, it will play the sound or song you selected to wake you up. It can also note (and record) when you are snoring, and it assesses the overall quality of your sleep. I have two cats, and it seems to know when the noise is coming from me versus them, which is both amazing and kinda weirds me out. Anyway, it’s been useful for figuring out what I need to do to sleep better. I can sync the sleep data to Apple Health, where it can be read by other apps as needed.

Strides app screenshot

Last fall, as a part of my ongoing effort to get better sleep, I was looking for a tool to help encourage me to go to bed on time and wake up when my alarm goes off, rather than staying up too late and hitting snooze or turning off the alarm altogether. A regular schedule is generally believed to be helpful for sleeping better. I started using an app called Strides to keep track of my progress. In January, I added a workouts tracker to provide me with an easy overview of how often I’m working out this year and how close I can get to the 218-in-2018 goal. At the current rate (37 out of 112 days), it will be more like 118, but it’s better than sitting on the couch. None of this syncs with Apple Health, but I don’t need it to.

Fitocracy app screenshot mid-workout

I’ve been a member of the Fitocracy website for several years now, but only in the past couple years did it become useful to me thanks to a more functional app. I prefer to do strength training rather than cardio at the gym (though I make myself do some cardio), and this app is very helpful for keeping track of what I did the last time and guiding me through the workout this time. There’s also a community/social aspect, as well as gamification (you get points for each exercise depending on how challenging it is to do), if that’s your thing. Since so few of my friends use it regularly, I don’t focus on those features much. As you’re going through your workout, you can edit the weight, reps, and sets if you end up doing more or less than you planned. If there is an example video, it will be at the top of the screen, and clicking on it will make the video play. This is helpful if you want to make sure you’re using proper form or need to remember an exercise you haven’t done in a while. It also indicates how many more exercises you have planned (the circles at the bottom of the screen) and what the next exercise set will be. None of the data from this app will sync with anything else, but it’s so useful in and of itself, I don’t mind.

MyFitnessPal screenshot

I reinstalled MyFitnessPal in mid-January and have been diligently tracking my food, water, and cardio minutes. This syncs with Apple Health and Fitbit, which is useful for keeping tabs on key nutrients, since Fitbit’s food logger is not great. I used this app a few years ago, and as with others, got frustrated because it was so hard to be precise without measuring out every morsel I consumed. This time it seems to be easier, in part because the food database has expanded, and in part because I’ve let myself not care about the details too much. Part of changing my diet means eating less convenience food and eating more whole foods, preferably that I cooked myself. The challenge is that convenience foods also conveniently have their nutritional content displayed on the package with a barcode to save me even the effort of typing. I’m taking a horseshoes and hand-grenades approach this time — close enough will work. Also, I’m trying to focus more on the macro goals in addition to the caloric limits. What I’ve re-learned in all of this is that lean protein is not nearly as appealing as fatty protein, and I tend to eat very dense foods such that I can consume a lot of calories without feeling like I’m over-eating.

Happy Scale app screenshot

Lastly (and most recently), on the recommendation of my friend Jenica, I’ve started using the Happy Scale app to track my weight trends. It takes a long-term goal and breaks it down into smaller, more immediately achievable goals, showing progress along the way. Although I am logging my weight every day (immediately after I wake up and use the toilet), it’s focused more on averages than that specific day’s weight. The data syncs with Apple Health, which is how other apps like MyFitnessPal and Fitbit get updated. The scale and this app only measure my entire body mass, which isn’t the whole focus of my fitness goals, so I also have a hand-held body fat monitor that I check periodically, usually only when the scale numbers have moved. Since I’m strength training, the scale numbers might go up with the addition of muscle mass that is denser than fat mass. At some point, I should do tape measurements, but for now I’m relying on the fit of my clothing to let me know if things are changing there.
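
For the curious, the “averages rather than the day’s number” idea boils down to smoothing. Here’s a minimal sketch of a 7-day moving average over made-up weigh-ins (not necessarily how Happy Scale does its math):

```python
# Sketch: smooth noisy daily weigh-ins with a simple 7-day moving average.
# The weights below are made-up data; the window size is an arbitrary choice.
daily_weights = [182.4, 183.1, 181.8, 182.9, 182.0, 181.5, 182.2, 181.1]

window = 7
trend = []
for i in range(len(daily_weights)):
    recent = daily_weights[max(0, i - window + 1): i + 1]   # up to the last 7 entries
    trend.append(round(sum(recent) / len(recent), 1))

print(trend)   # each point reflects the recent trend, not a single day's bounce
```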

thoughts on the Banff Mountain Film Festival World Tour 2018

Banff Centre for Arts and Creativity

This past weekend I sat in too-narrow auditorium seats at a local high school with several hundred other people and watched short “adventure” films that were part of the annual week-long Banff Centre Mountain Film & Book Festival held this past fall.

This was my second year attending the local event, and it was as enjoyable as last year’s. Below, in no particular order, are trailers for the films I saw and especially enjoyed.

DreamRide 2 – Promo Clip (Long) from Banff Mountain Film Festival on Vimeo.

Edges – Promo Clip (Long) from Banff Mountain Film Festival on Vimeo.

The Frozen Road (Trailer) from Ben Page Films on Vimeo.

Imagination: Tom Wallisch – Promo Clip (Long) from Banff Mountain Film Festival on Vimeo.

La Casita Wip Trailer from Afuera Producciones on Vimeo.

My Irnik Teaser from François Lebeau on Vimeo.

Planet Earth II – Mountain Ibex – Promo Clip (Long) from Banff Mountain Film Festival on Vimeo.

Stumped – Promo Clip (Long) from Banff Mountain Film Festival on Vimeo.

Where The Wild Things Play – Promo Clip (Long) from Banff Mountain Film Festival on Vimeo.

a values conundrum

Scales
photo by Charles Thompson (CC BY 2.0)

‘Tis the season when I spend a lot of time gathering and consolidating usage reports for the previous calendar year (though next year not as many if my SUSHI experiment goes well). Today, as I was checking and organizing some of the reports I had retrieved last week, I noticed a journal that had very little use in the 2017 YOP (or 2016, for that matter), so I decided to look into it a bit more.

The title has a one-year embargo, and then the articles are open access. Our usage is very low (an average of 3.6 downloads per year), and most of it, according to the JR5 report (with the JR1 GOA as confirmation), is coming from the open access portion, not the closed-access content we pay for.

The values conundrum I have is multifaceted. This is a small society publisher, and we have only the one title from them. They are making the content open access after one year, and I don’t think they are making authors pay for this, though I could be wrong. These are market choices I want to support. And yet….

How do I demonstrate fiscal responsibility when we are paying ~$300/download? Has the research and teaching shifted such that this title is no longer needed and that’s why usage is so low? Is this such a seminal title we would keep it regardless of whether it’s being used?
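
For what it’s worth, the arithmetic behind that cost-per-download figure is simple division. A minimal sketch, using a hypothetical subscription price roughly consistent with ~$300/download at 3.6 downloads per year (not our actual invoice):

```python
# Cost per use: annual subscription price divided by downloads of the
# paid (non-open-access) content in the same year.
# The price below is a hypothetical placeholder, not an actual invoice.
subscription_price = 1100.00   # assumed annual price in USD
paid_downloads = 3.6           # average downloads per year for this title

cost_per_use = subscription_price / paid_downloads
print(f"${cost_per_use:,.2f} per download")   # ≈ $305.56
```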

Collection development decisions are not easy when there are conflicting values.

what’s the big deal?

house of cards
photo by Erin Wilson (CC BY-NC-ND 2.0)

I’ve been thinking about Big Deals again lately, particularly as there are more reports of institutions breaking them (and then later having to pick them up again) because the costs are unsustainable. It’s usually just the money that is the issue. No one has a problem with buying huge journal (and now book) bundles in general because they tend to be used heavily and reduce friction in the research process. No, it’s usually about the cost increases, which happen annually, generally at higher rates than library collections budgets increase. That’s not new.

The reality of breaking a Big Deal is not pleasant, and it often does not result in cost savings without a severe loss of access to scholarly research. I’m not at a research institution, and yet, every time I have run the numbers, our Big Deals still cost less than individual subscriptions to the titles that get used more than the ILL threshold, and even if I bump it up to, say, 20 downloads a year, we’re still better off paying for the bundle than list price for individual titles. I can only imagine this is even more true at R1 schools, though their costs are likely exponentially higher than ours, and they may be bearing a larger burden per FTE.
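
For the curious, the comparison I run looks roughly like the sketch below. Every number here is an invented placeholder (a real Big Deal spans hundreds or thousands of titles), but the logic is the same: add up the list prices of the titles used above the threshold and compare that to the bundle price.

```python
# Rough sketch of the Big Deal vs. a-la-carte comparison described above.
# All prices, titles, and usage counts are invented placeholders.
big_deal_cost = 9_500.00   # hypothetical annual bundle price
ill_threshold = 20         # downloads/year above which we'd subscribe individually

# title -> (annual downloads, individual list price)
titles = {
    "Journal A": (412, 3_200.00),
    "Journal B": (35, 1_850.00),
    "Journal C": (12, 2_400.00),   # below threshold: we'd rely on ILL instead
    "Journal D": (87, 5_600.00),
}

a_la_carte = sum(price for use, price in titles.values() if use >= ill_threshold)
print(f"A la carte for well-used titles: ${a_la_carte:,.2f}")    # $10,650.00
print(f"Big Deal bundle:                 ${big_deal_cost:,.2f}") # $9,500.00
# In this toy data (as with our real numbers), the bundle still comes out cheaper.
```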

That gets at one factor of the Big Deal that is not good — the lack of transparency or equity in pricing. One publisher’s Big Deal pricing is based on your title list prior to the Big Deal, which can result in vastly different costs for different institutions for essentially the same content. Another publisher many years ago changed their pricing structure and, in more polite terms, told my consortium at the time that we were not paying enough (i.e. we had negotiated too good of a contract), and that we would see hefty annual increases until we reached whatever amount they felt we should be paying. This is what happens in a monopoly, and scholarly publishing is a monopoly in practice if not in legal terms.

We need a different model (and Open Access as it is practiced now is not going to save us). I don’t know what it is, but we need to figure that out soon, because I am seeing the impending crash of some Big Deals, and the fallout is not going to be pretty.

giving SUSHI another try

(It's just) Kate's sushi!
photo by Cindi Blyberg

I’m going to give SUSHI another try this year. I had set it up for some of our stuff a few years back with mixed results, so I removed it and have been continuing to manually retrieve and load reports into our consolidation tool. I’m still doing that for the 2017 reports, because the SUSHI harvesting tool I have won’t let me pull earlier months retroactively, only new months moving forward.

I’ve spent a lot of time making sure titles in reports matched up with our ERMS so that consolidation would work (it’s matching on title, ugh), and despite my efforts, any reports generated still need cleanup. What is the value of my effort there? Not much anymore. Especially since ingesting cost data for journals/books is not a simple process to maintain, either. So, if all of that matters little to none, I might as well take whatever junk is passed along in the SUSHI feed and save myself some time for other work in 2019.
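
For anyone wondering what the automated side looks like, here is a minimal sketch of harvesting a journal report over the REST-style COUNTER_SUSHI API that Release 5 defines (earlier releases used a SOAP-based protocol). The base URL and credentials are placeholders; every vendor publishes its own endpoint and requirements.

```python
# Minimal sketch: pull a COUNTER Release 5 journal report via SUSHI (REST)
# and total up full-text requests per title. The endpoint and credentials
# are placeholders; adjust for each vendor's published SUSHI service.
import requests

BASE_URL = "https://sushi.example-vendor.com/counter/r5"   # hypothetical endpoint
params = {
    "customer_id": "YOUR_CUSTOMER_ID",     # placeholder credential
    "requestor_id": "YOUR_REQUESTOR_ID",   # placeholder credential
    "begin_date": "2018-01",
    "end_date": "2018-12",
}

resp = requests.get(f"{BASE_URL}/reports/tr_j1", params=params, timeout=60)
resp.raise_for_status()
report = resp.json()

for item in report.get("Report_Items", []):
    total_requests = sum(
        instance["Count"]
        for perf in item.get("Performance", [])
        for instance in perf.get("Instance", [])
        if instance.get("Metric_Type") == "Total_Item_Requests"
    )
    print(f"{item.get('Title')}: {total_requests} total item requests")
```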

resisting my inevitable death

black and white photo of a kettle bell weight and two medicine balls, along with part of a human leg and sneaker-shod foot

I’m getting older. It’s hard to avoid. My body isn’t as resilient as it was fifteen years ago when I started this blog. As my income increased, so did my pant size, and being in a sedentary job didn’t help.

January began as January often begins, with a renewed commitment to stay as physically active as I can and work on getting stronger. For the first two weeks, I managed to get out and hike/walk/gym every day but three. Then my choir rehearsals began, things picked up again with new music being sent to the radio station, and I was reminded why I don’t spend two hours at the gym every day.

One of my favorite blogs is Fit is a Feminist Issue, and several of the bloggers over there are talking about a 218 workouts in 2018 challenge. I missed jumping on from the start, but I’ve been keeping track for other reasons and I’m up to 23 so far. Not bad. Could be much better — there was one week in there with zero. If I’m going to hit that goal, I’ll need to be doing 4-5 workouts a week, not the average 3-4 I’ve been doing so far.

I’ve also been keeping track of the food I eat. I’ve done this in the past with mixed success, but I’m finding the tool less frustrating this time. (Or maybe I just care less about being absolutely precise?) I haven’t approached this with the intent to prescribe some sort of diet regimen, but the data has been useful for making tweaks. Since I’m also weight training, I’ve been paying closer attention to macros and increasing protein without blowing up the fat percentage, too.

I’ve also discovered how easy it is for me to consume a massive amount of calories and not even realize it — it simply doesn’t seem like that much food, and by weight, it isn’t, but the nutritional composition is very densely packed with caloric energy. So, I need to out-think my survival brain that compels me towards high energy foods my body can store for later use in the lean times that will never come.

My goals are simple: get stronger, avoid physical injuries, lose some weight to relieve stress on my joints, and get ready for prime softball/baseball/hiking season. Oh, and delay my inevitable death.

notes from #OpenConVA

Open Access Principles and Everyday Choices

Speaker: Hilda Bastian, National Center for Biotechnology Information

It’s not enough to mean well — principles and effects matter.

Everyday choices build culture. You can have both vicious circles and virtuous spirals of trust.

There is a fine line between idealism and becoming ideological.

Principles can clash — and be hard to live up to.

One of the unintended consequences of OA is the out-of-control explosion of APC costs for institutions whose principles call upon researchers to publish OA. APC costs have not decreased as expected.

http://blogs.plos.org/absolutely-maybe/2017/08/29/bias-in-open-science-advocacy-the-case-of-article-badges-for-data-sharing/

Take critics seriously. What can you learn from them? But also, make sure you take care of yourself — it can be overwhelming. If you can’t learn anything useful from the criticism, only then can you dismiss it.


Lightning Talks

  1. Ian Sullivan, Center for Open Science – how to pitch openness to pragmatists: Openness is seen as (and oftentimes presented as) extra work. Reframe it as time-shifting. It’s work you’re already doing (documenting, moving material to somewhere else, etc.). Think about it as increasing efficiency and reducing frustration if you plan for it early on. Version control or document management will save time later, and includes all the documentation through the process that you’ll need in the end, anyway. Open practices are not going away, and they are increasingly being required by grant agencies and publication outlets. If you can become the “open expert” in your lab, that will build your reputation.
  2. Anita Walz, Virginia Tech – what is open in the context of OER: Lower cost educational materials; broader format types for learning materials; increasing impact for scholars/authors; collaboration; identity/ego? Maybe. Values of open that benefit higher education: access is for everyone (inclusive), providing feedback (collaborating); sharing new ideas about teaching & research; embracing the use of open licenses; giving credit where it’s due, even when not expected; outward facing, thinking about audience; using it as a lever to positively effect change in your work and the world around you.
  3. Pamela Lawton and Cosima Storz, Virginia Commonwealth University – incorporating art in the community: handed out a zine about zines, with the project of making a zine ourselves. Zines are collaborative, accessible, and community-driven.
  4. Eric Olson, ORCID – increasing research visibility: Names are complicated when it comes to finding specific researchers. One example is a lab that has two people with the same rather common name or first initial and last name (this is not as uncommon as you might think). A unique ID will help disambiguate this if the IDs are included with the researcher’s publications. ORCID is a system that makes this possible if authors and publications/indexes connect to it (see the sketch after this list).
  5. Beth Bernhardt, UNC Greensboro – how to support open scholarship with limited library resources: created a grant that offers time rather than money — web developers, staff with expertise on OA, digitization, metadata, etc. The grant awards the equivalent of .5 FTE. In the end, they found they needed to give more staff time than originally planned to fully execute the projects.
  6. Kate Stilton, North Carolina A&T State University – open educational resources at a mid-sized university: 85% of students receive need-based financial aid, so they definitely need help with the cost of educational materials, in part because it’s a STEM-focused institution. They have to be realistic about what they can offer — the library is understaffed and underfunded. They are focusing on adoption of OER materials, and less on creating their own. They’re also looking at what other schools in the area are doing and how those efforts could be applied locally, as well as leaning on the expertise of early adopters.
  7. Jason Dean Henderso, Oklahoma State University – OERx, a custom installation of the open source content management system MODx: received a donation to create OER content, which meant they had to find a way to host and distribute it. They’ve used Open Journal Systems, but there isn’t great documentation for Public Knowledge Project’s Open Monograph Press software, so they customized MODx for their own purposes to make something easier to use out of the box. They’ve cross-listed the OER books with the course offerings for faculty to make use of them if they wish.
  8. Braddlee, NOVA Annandale – adoption of OER in VCCS’s Zx23: surveyed faculty who participated in the program across all of the VCCS schools. As you might expect, faculty still don’t see librarians in outreach or institutional leadership roles.
  9. Sue Erickson, Virginia Wesleyan College, and Gardner Campbell, Virginia Commonwealth University – Open Learning ’18: online course about open learning starting in February. Hypothes.is is an annotation tool that will be used and is a favorite of Campbell.
  10. Nina Exner, Virginia Commonwealth University – data reuse: When we talk about sharing data, we don’t mean you need to ignore other obligations like privacy of research subjects (IRB) or copyright restrictions you’ve agreed to. You don’t need to share every single piece of data generated — just the data associated with a specific finding you’ve published or received funding for. FAIR principles come into play at this point, which are generally good practices, anyway. Where you store data isn’t as important as whether it’s accessible and reusable. If you’re a librarian, please don’t talk about “scholarly communications” with non-librarians. Use terms like public access, supporting data, data availability, reproducibility, and rigor.
  11. Jason Meade, Virginia Commonwealth University – Safecast example of crowdsourcing scientific data: Created in response to the Fukushima Daiichi nuclear power plant disaster in 2011. Handed out mini Geiger counter kits, and the data was uploaded to a central site for anyone to see. The initial group to receive the kits were the hardcore skeptics. He is quite impressed with the volume of data created over a short amount of time with very little cost. This model could be used in many other fields to increase data generation at reduced costs, with increased buy-in and awareness among the public.
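
As a concrete illustration of the disambiguation point in Eric Olson’s talk, here is a small sketch of resolving an ORCID iD to a researcher record via ORCID’s public API. The iD is ORCID’s well-known example record (the fictitious researcher Josiah Carberry), and the API version in the path is an assumption that may differ from what was current at the time of the talk.

```python
# Sketch: resolve an ORCID iD to a name via the ORCID public API.
# The iD below is ORCID's documented example record; the "v3.0" path segment
# is an assumption and may need adjusting to the current API version.
import requests

orcid_id = "0000-0002-1825-0097"   # example iD (Josiah Carberry)
resp = requests.get(
    f"https://pub.orcid.org/v3.0/{orcid_id}/record",
    headers={"Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()
record = resp.json()

name = record["person"]["name"]
print(name["given-names"]["value"], name["family-name"]["value"])
# Publications carrying this iD can be attributed unambiguously to this
# researcher, no matter how common the name is.
```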

Student Voices in Open Education

Speakers: info coming soon

Business faculty member at Virginia Tech decided to revamp what a textbook would be, and the end result is more dynamic and useful for that particular course than any offered through traditional sources. It’s also open.

VCU language faculty agreed that teaching 200-level courses is the worst. They decided to create WordPress sites where the 201 students curate content that is more engaging than traditional language pedagogy. The second part of the project was to have the 202 students create OER scaffolded projects from the curated collections. The students are finding this much more engaging than the expensive textbooks.

Student says she has to choose between an older edition that is more affordable but means she may struggle more in class, and the current edition that is more expensive. Another student says that for how much they spend on the books, they can sometimes be surprisingly out of date.

Faculty are concerned about inclusion and equity, and the cost of materials can have an inequitable impact on learning between students from different economic backgrounds. There is also concern about the texts having relevance to current culture (i.e., Madonna references aren’t great in 2017), so they need to be regularly updated, but that can increase the costs. Additionally, supplemental tools require access code purchases, but are often used sub-optimally. When fields are changing rapidly, textbooks are out of date before they are even adopted.

Language faculty working with students on this project have learned a lot more about how students learn, despite what their own training in pedagogy told them. The students were quite frank about what worked and what didn’t.

Student says that the curation project has given her tools for lifelong language learning and application.


Predatory Publishing: Separating the Good from the Bad

Speakers: info coming soon

Predatory, parasitic, vanity, disreputable — these are journals that are not interested in scholarly communication, just in making money. They lack peer review (i.e. they say they do, but it takes 24 hours), charge fees for submissions, and they want to retain all copyright.

Open Access has been tainted by predatory publishing, but they aren’t the same thing. Look out for: a lack of clearly defined scope (or a bunch of SEO-oriented keywords), a small editorial board and/or no contact information, lack of a peer review process, article submission fees, and the publisher retaining all copyright. Other factors are not necessarily indicators of predatory publishing, but are kind of murky regarding credibility: lack of an impact factor, geographical location (one of the issues with Beall’s list), article processing charges (to publish, not to submit), and poor quality.

If you’re still uncertain about a specific journal: ask your colleagues; see if it’s indexed where the journal claims to be indexed; if it’s OA, see if it is listed in DOAJ; and see if the publisher belongs to OASPA or COPE.

Other tools:
Think. Check. Submit.
COPE principles of transparency & best practices in scholarly publishing
ODU LibGuide

Watch out for predatory conferences. They will fake speakers, locations, schedules, etc., just to get your registration money.

Sometimes it’s hard to tell if a new journal is legitimate because there are a lot of characteristics that overlap with predatory publishers. Check with the editorial board members — do they even know they are on the editorial board?


Open in the Age of Inequality

Speaker: Tressie McMillan Cottom, Virginia Commonwealth University

She’s been at VCU for three years, and one of the first things she and her colleagues tackled was revamping the digital sociology program. In part, there was an effort to make it more open/accessible. Open is inherently critical, and her perspective about sociology is that it’s critical of every aspect of systems and institutions, including the one you exist within.

The program is mostly made up of professionals, so part of it involved developing specific types of skills. They needed to learn professional digital practice, sociological critique of digital life, and the analysis and creation of digital data.

They wanted to practice what they were preaching: open, accessible, rigorous, and critical. They had access to OER materials and SocArXiv (social sciences open archive).

VCU faculty were incentivized to use eportfolios, but no one really knows how to do it well. The tool is a blog. Because it was inconsistently required, the students get the impression it’s not important. However, it’s supposed to show growth over time and potentially be used for getting a job after graduating.

To fix this, they started by shifting to a cohort model. This meant switching to a fall-only enrollment. The second thing they did was to create a course sequence that all students must follow. This meant that faculty could build assignments based on previous assignments. The cohort structure emphasized theory-building and problem solving.

What/why worked: leadership that was willing to embrace the changes; trust among the faculty teaching in the program; approaches to teaching had to be restructured with different cohorts, which required a lot of communication.

What kinda worked: open data was easier to implement than OER (quality and rigor varied tremendously, not much was available in critical sociology at the graduate level, and most of the important topics from the past 30 years were not included); OER resources lacked the critical sociology content they were interested in, such as race, gender, class, and intersectionality.

What chafed: accretion (five offices are in charge of “online”, with different staff and objectives; often they don’t know who does what); market logics (why we are supposed to adopt open as a model — things aren’t less expensive when you consider the faculty time it takes to implement them); working without a model (they had to develop everything they use from scratch, such as how the eportfolios would be assessed, protecting students’ identities online, and adopting open access products from for-profit sources).

OER can be created by people who have institutionally supported time, the cumulative advantage of tenure, and digital skills, without an immediate need for pay, job security, mobility, or prestige. What happens is that those who can do it tend to be homogeneous, which is not what critical sociology is interested in; in fact, their institutions are often the topics of critical sociology.

They are working on figuring out how to have online classes that protect students who may be vulnerable to critique/attack online. They are trying to build a community around this — it’s very labor-intensive and can’t be done by a small group.

They are trying to reuse the student work as much as possible, generally with data rather than theory work (it’s not really up to par — they’re graduate students). They need to constantly revisit what colleagues have taught or how the syllabus shifted in response to a particular cohort as they are planning the next semester of work.

There is a big concern about where to put the data for reuse, but not for reuse by for-profit agencies wanting to create better targeted ads, for example. For now, it’s restricted to use by students at VCU.

The “pay to play” model of OA journals/books is neo-liberal open access. How is the open model simply repackaging capitalist systems? This is also something they need to be incorporating into a critical study of digital sociology.

Online is treated as a way to generate revenue, not as a learning tool. Marketing/communications departments have far too much power over how faculty use online platforms.

Charleston 2016: COUNTER Release 5 — Consistency, Clarity, Simplification and Continuous Maintenance

Speakers: Lorraine Estelle (Project COUNTER), Anne Osterman (VIVA – The Virtual Library of Virginia), Oliver Pesch (EBSCO Information Services)

COUNTER has had very minimal updates over the years, and it wasn’t until release 4 that things really exploded with report types and additional useful data. Release 5 attempts to reduce complexity so that all publishers and content providers are able to achieve compliance.

They are seeking consistency in report layout, between formats, and in vocabulary, as well as clarity in metric types and qualifying actions, processing rules, and formatting expectations.

The standard reports will be fewer, but more flexible. The expanded reports will introduce more data, but with flexibility.

A transaction will have different attributes recorded depending on the item type. They are also trying to get at intent — items investigated (abstract) vs. items requested (full-text). Searches will now distinguish between whether it was on a selected platform, a federated search, a discovery service search, or a search across a single vendor platform. Unfortunately, the latter data point will only be reported on the platform report, and still does not address teasing that out at the database level.

The access type attribute will indicate when the usage is of Open Access or other free content as well as licensed content. There will be a year of publication (YOP) attribute, which was not in any of the book reports and was previously only included in Journal Report 5.

Consistent, standard header for each report, with additional details about the data. Consistent columns for each report. There will be multiple rows per title to cover all the combinations, making it more machine-friendly, but you can create filters in Excel to make it more human-friendly.
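
As an example of that filtering, here is a small sketch (assuming pandas and a hypothetical saved report file) that reshapes the machine-friendly long format, one row per title and metric type, into a human-friendly title-by-metric table. The file name, header length, and column names are assumptions based on the layout described above and may differ in the final release.

```python
# Sketch: pivot a long-format COUNTER R5 title report (one row per
# title/metric-type combination) into a wide, human-friendly table.
# File name, header length, and column names are assumptions.
import pandas as pd

# R5 tabular reports open with a multi-row header block before the column names.
df = pd.read_csv("tr_j1.csv", skiprows=13)

wide = (
    df.pivot_table(
        index="Title",
        columns="Metric_Type",
        values="Reporting_Period_Total",
        aggfunc="sum",
    )
    .fillna(0)
    .astype(int)
)
print(wide.head())   # one row per title, one column per metric type
```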

They expect to have release 5 published by July 2017 with compliance required by January 2019.

Q&A
Q: Will there eventually be a way to account for anomalies in data (abuse of access, etc.)?
A: They are looking at how to address use triggered by robot activity. They also need to be sensitive to privacy issues.

Q: Current book reports do not include zero use entitlements. Will that change?
A: Providing KBART reports is encouraged to get around that. The challenge is that DDA/PDA collections are huge, which makes delivering reports cumbersome. Zero-use reporting on journals will also be dropped.

Q: Using DOI as a unique identifier, but not consistently provided in reports. Any advocacy to include unique identifiers?
A: There is an initiative associated with KBART to make sure that data is shared, knowledge bases are updated, users find the content, and there are fewer zero-use titles. Publishers have motivation to do this.

Q: How do you distinguish between unique uses?
A: Session-based data. Assign a session ID to activity. If there is no session tracking, use a combination of IP address and user agent. The user agent is helpful when multiple users are coming through one IP address via a proxy server.
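
A rough sketch of that counting logic, as I understood the answer (the surrogate-session fallback here is a paraphrase of what was described, not the official algorithm):

```python
# Sketch: counting unique item requests per session. Use the session ID when
# one exists; otherwise fall back to a surrogate of IP address + user agent.
# (A real implementation would also bound the surrogate by a time window.)
from collections import defaultdict

log = [
    # (session_id, ip, user_agent, item_id) -- illustrative log entries
    ("s1", "10.0.0.5", "Mozilla/5.0", "doi:10.1234/a"),
    ("s1", "10.0.0.5", "Mozilla/5.0", "doi:10.1234/a"),   # repeat within a session
    (None, "192.0.2.7", "EZproxy",    "doi:10.1234/b"),
    (None, "192.0.2.7", "Wget/1.19",  "doi:10.1234/b"),   # same proxy IP, different agent
]

unique_per_session = defaultdict(set)
for session_id, ip, agent, item in log:
    session = session_id or (ip, agent)   # surrogate session when no ID exists
    unique_per_session[session].add(item)

unique_item_requests = sum(len(items) for items in unique_per_session.values())
print(unique_item_requests)   # 3 -- the repeat within session "s1" counts once
```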

Slides
