what’s the big deal?

house of cards
photo by Erin Wilson (CC BY-NC-ND 2.0)

I’ve been thinking about Big Deals again lately, particularly as there are more reports of institutions breaking them (and then later having to pick them up again) because the costs are unsustainable. It’s usually just the money that is the issue. No one has a problem with buying huge journal (and now book) bundles in general because they tend to be used heavily and reduce friction in the research process. No, it’s usually about the cost increases, which happen annually, generally at higher rates than library collections budgets increase. That’s not new.

The reality of breaking a Big Deal is not pleasant, and often does not result in cost savings without a severe loss of access to scholarly research. I’m not at a research institution, and yet, every time I have run the numbers, our Big Deals still cost less than individual subscriptions to the titles that get used more than the ILL threshold, and even if I bump it up to, say, 20 downloads a year, we’re still better off paying for the bundle than list price for individual titles. I can only imagine this is even more true at R1 schools, though their costs are likely exponentially higher than ours and they may be bearing a larger burden per FTE.
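That break-even math is simple to run. A minimal sketch, with entirely made-up titles, prices, and thresholds: sum the list prices of just the titles used above your ILL threshold and compare against the bundle price.

```python
# Hypothetical numbers: compare a bundle price against subscribing
# individually to only the titles used above a usage threshold.
titles = [
    {"name": "Journal A", "downloads": 340, "list_price": 2800},
    {"name": "Journal B", "downloads": 25, "list_price": 1500},
    {"name": "Journal C", "downloads": 8, "list_price": 900},  # below threshold
]
bundle_price = 3900
threshold = 20  # downloads/year below which ILL is cheaper than subscribing

# Cost of unbundling: subscribe only to titles used above the threshold
unbundled = sum(t["list_price"] for t in titles if t["downloads"] > threshold)

unbundled                  # 4300: Journals A and B alone exceed the bundle price
bundle_price < unbundled   # True: the bundle wins even at this small scale
```

With these invented figures, dropping the low-use title still leaves individual subscriptions costing more than the bundle, which is the pattern described above.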

That gets at one factor of the Big Deal that is not good — the lack of transparency or equity in pricing. One publisher’s Big Deal pricing is based on your title list prior to the Big Deal, which can result in vastly different costs for different institutions for essentially the same content. Another publisher many years ago changed their pricing structure, and in more polite terms told my consortium at the time that we were not paying enough (i.e. we had negotiated too good of a contract), and we would see hefty annual increases until we reached whatever amount they felt we should be paying. This is what happens in a monopoly, and scholarly publishing is a monopoly in practice if not in legal terms.

We need a different model (and Open Access as it is practiced now is not going to save us). I don’t know what it is, but we need to figure that out soon, because I am seeing the impending crash of some Big Deals, and the fallout is not going to be pretty.

giving SUSHI another try

(It's just) Kate's sushi! photo by Cindi Blyberg

I’m going to give SUSHI another try this year. I had set it up for some of our stuff a few years back with mixed results, so I removed it and have been continuing to manually retrieve and load reports into our consolidation tool. I’m still doing that for the 2017 reports, because the SUSHI harvesting tool I have won’t let me go back and pull earlier months — it can only harvest monthly moving forward.

I’ve spent a lot of time making sure titles in reports matched up with our ERMS so that consolidation would work (it’s matching on title, ugh), and despite my efforts, any reports generated still need cleanup. What is the value of my effort there? Not much anymore. Especially since ingesting cost data for journals/books is not a simple process to maintain, either. So, if all that effort matters little to none, I might as well take whatever junk is passed along in the SUSHI feed and save myself some time for other work in 2019.
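Title matching is fragile precisely because the same journal appears with different casing, punctuation, and leading articles across reports and the ERMS. A minimal sketch of the kind of normalization that reduces (but never eliminates) the cleanup — the titles here are invented, and a real consolidation tool would need more than this:

```python
import re

def normalize_title(title):
    """Normalize a journal title for fuzzy matching between a
    usage report and an ERMS title list."""
    t = title.lower().strip()
    t = re.sub(r"[^\w\s]", " ", t)         # replace punctuation (&, :, -, etc.)
    t = re.sub(r"^(the|a|an)\s+", "", t)   # drop a leading article
    return re.sub(r"\s+", " ", t).strip()  # collapse runs of whitespace

# Titles that differ only in casing, punctuation, and articles now match:
normalize_title("The Journal of Foo & Bar Studies")  # -> "journal of foo bar studies"
normalize_title("Journal of foo-bar studies")        # -> "journal of foo bar studies"
```

Even so, normalized string matching can’t fix genuinely different title strings (former titles, abbreviations), which is why an identifier like ISSN is a better match key when the reports actually include it.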

resisting my inevitable death

black and white photo of a kettle bell weight and two medicine balls, along with part of a human leg and sneaker-shod foot

I’m getting older. It’s hard to avoid. My body isn’t as resilient as it was fifteen years ago when I started this blog. As my income increased, so did my pant size, and being in a sedentary job didn’t help.

January began as January often begins, with a renewed commitment to stay as physically active as I can and work on getting stronger. For the first two weeks, I managed to get out and hike/walk/gym every day but three. Then my choir rehearsals began and things picked up again with new music being sent to the radio station, and I was reminded why I don’t spend two hours at the gym every day.

One of my favorite blogs is Fit is a Feminist Issue, and several of the bloggers over there are talking about a 218 workouts in 2018 challenge. I missed jumping on from the start, but I’ve been keeping track for other reasons and I’m up to 23 so far. Not bad. Could be much better — there was one week in there with zero. If I’m going to hit that goal, I’ll need to be doing 4-5 workouts a week, not the average 3-4 I’ve been doing so far.

I’ve also been keeping track of the food I eat. I’ve done this in the past with mixed success, but I’m finding the tool less frustrating this time. (Or maybe I just care less about being absolutely precise?) I haven’t approached this with the intent to prescribe some sort of diet regimen, but the data has been useful for making tweaks. Since I’m also weight training, I’ve been paying closer attention to macros and increasing protein without blowing up the fat percentage, too.

I’ve also discovered how easy it is for me to consume a massive amount of calories and not even realize it — it simply doesn’t seem like that much food, and by weight, it isn’t, but the nutritional composition is very densely packed with caloric energy. So, I need to out-think my survival brain that compels me towards high energy foods my body can store for later use in the lean times that will never come.

My goals are simple: get stronger, avoid physical injuries, lose some weight to relieve stress on my joints, and get ready for prime softball/baseball/hiking season. Oh, and delay my inevitable death.

notes from #OpenConVA

Open Access Principles and Everyday Choices

Speaker: Hilda Bastian, National Center for Biotechnology Information

It’s not enough to mean well — principles and effects matter.

Everyday choices build culture. You can have both vicious circles and virtuous spirals of trust.

There is a fine line between idealism and becoming ideological.

Principles can clash — and be hard to live up to.

One of the unintended consequences of OA is the out-of-control explosion of APC costs for institutions whose principles call upon researchers to publish OA. APC costs have not decreased as expected.


Take critics seriously. What can you learn from them? But also, make sure you take care of yourself — it can be overwhelming. If you can’t learn anything useful from the criticism, only then can you dismiss it.

Lightning Talks

  1. Ian Sullivan, Center for Open Science – how to pitch openness to pragmatists: Openness is seen (and often presented as) extra work. Reframe it as time-shifting. It’s work you’re already doing (documenting, moving material to somewhere else, etc.). Think about it as increasing efficiency and reducing frustration if you plan for it early on. Version control or document management will save time later, and includes all the documentation through the process that you’ll need in the end, anyway. Open practices are not going away, and they are increasingly being required by grant agencies and publication outlets. If you can become the “open expert” in your lab, that will build your reputation.
  2. Anita Walz, Virginia Tech – what is open in the context of OER: Lower cost educational materials; broader format types for learning materials; increasing impact for scholars/authors; collaboration; identity/ego? Maybe. Values of open that benefit higher education: access is for everyone (inclusive), providing feedback (collaborating); sharing new ideas about teaching & research; embracing the use of open licenses; giving credit where it’s due, even when not expected; outward facing, thinking about audience; using it as a lever to positively effect change in your work and the world around you.
  3. Pamela Lawton and Cosima Storz, Virginia Commonwealth University – incorporating art in the community: handed out a zine about zines, with the project of making a zine ourselves. Zines are collaborative, accessible, and community-driven.
  4. Eric Olson, ORCID – increasing research visibility: Names are complicated when it comes to finding specific researchers. One example is a lab that has two people with the same rather common name or first initial and last name (this is not as uncommon as you might think). A unique ID will help disambiguate this if the IDs are included with the researcher’s publications. ORCID is a system that makes this possible if authors and publications/indexes connect to it.
  5. Beth Bernhardt, UNC Greensboro – how to support open scholarship with limited library resources: created a grant that offers time rather than money — web developers, staff with expertise on OA, digitization, metadata, etc. The grant awards the equivalent of .5 FTE. In the end, they found they needed to give more staff time than originally planned to fully execute the projects.
  6. Kate Stilton, North Carolina A&T State University – open educational resources at a mid-sized university: 85% of students receive need-based financial aid, so they definitely need help acquiring educational resources, particularly since it’s a STEM-focused institution with expensive textbooks. They have to be realistic about what they can offer — the library is understaffed and underfunded. They are focusing on adoption of OER materials, and less on creating their own. They’re also looking at what other schools in the area are doing and how those approaches could be applied locally, as well as leaning on the expertise of early adopters.
  7. Jason Dean Henderso, Oklahoma State University – OERx, a custom installation of the open source content management system MODx: received a donation to make OER content, which meant they had to find a way to host and distribute it. They’ve used open journal systems, but there isn’t great documentation for Public Knowledge Project’s Open Monograph Press software, so they built their own platform to make something easier to use out of the box. They’ve cross-listed the OER books with the course offerings for faculty to make use of them if they wish.
  8. Braddlee, NOVA Annandale – adoption of OER in VCCS’s Zx23: surveyed faculty who participated in the program across all of the VCCS schools. As you might expect, faculty still don’t see librarians in outreach or institutional leadership roles.
  9. Sue Erickson, Virginia Wesleyan College, and Gardner Campbell, Virginia Commonwealth University – Open Learning ’18: online course about open learning starting in February. Hypothes.is is an annotation tool that will be used and is a favorite of Campbell.
  10. Nina Exner, Virginia Commonwealth University – data reuse: When we talk about sharing data, we don’t mean you need to ignore other obligations like privacy of research subjects (IRB) or copyright restrictions you’ve agreed to. You don’t need to share every single piece of data generated — just the data associated with a specific finding you’ve published or received funding for. FAIR principles come into play at this point, which are generally good practices, anyway. Where you store data isn’t as important as whether it’s accessible and reusable. If you’re a librarian, please don’t talk about “scholarly communications” with non-librarians. Use terms like public access, supporting data, data availability, reproducibility, and rigor.
  11. Jason Meade, Virginia Commonwealth University – Safecast example of crowdsourcing scientific data: Created in response to the Fukushima Daiichi nuclear power plant disaster in 2011. Handed out mini Geiger counter kits, and the data was uploaded to a central site for anyone to see. The initial group to receive the kits were the hardcore skeptics. He is quite impressed with the volume of data created over a short amount of time with very little cost. This model could be used in many other fields to increase data generation at reduced costs, with increased buy-in and awareness among the public.
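The ORCID iDs mentioned in talk 4 are built for exactly this kind of disambiguation, and they carry a built-in integrity check: per ORCID’s documentation, the final character is an ISO 7064 MOD 11-2 check digit computed from the first 15 digits. A minimal sketch of validating one:

```python
def orcid_check_digit(base_digits):
    """Compute the ORCID checksum character (ISO 7064 MOD 11-2)
    from the first 15 digits of an ORCID iD."""
    total = 0
    for d in base_digits:
        total = (total + int(d)) * 2
    result = (12 - total % 11) % 11
    return "X" if result == 10 else str(result)

def is_valid_orcid(orcid):
    """Check a hyphenated ORCID iD like 0000-0002-1825-0097."""
    chars = orcid.replace("-", "")
    return len(chars) == 16 and orcid_check_digit(chars[:15]) == chars[15]

is_valid_orcid("0000-0002-1825-0097")  # ORCID's documented example iD -> True
```

This only verifies that an iD is well formed — confirming it belongs to a particular researcher still means looking up the record in the ORCID registry.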

Student Voices in Open Education

Speakers: info coming soon

Business faculty member at Virginia Tech decided to revamp what a textbook would be, and the end result is more dynamic and useful for that particular course than any offered through traditional sources. It’s also open.

VCU language faculty agreed that teaching 200 level courses is the worst. They decided to create WordPress sites for the 201 students to create curated content that was more engaging than traditional language pedagogy. The second part of the project was to have the 202 students create OER scaffolded projects from the curated collections. The students are finding this much more engaging than the expensive textbooks.

Student says she has to choose between an older edition that is more affordable but means she may struggle more in class, and the current edition that is more expensive. Another student says that for how much they spend on the books, they can sometimes be surprisingly out of date.

Faculty are concerned about inclusion and equity, and the cost of materials can have inequitable impact on learning between students from different economic backgrounds. There is also concern about the texts having relevance to current culture (i.e. Madonna references aren’t great in 2017), so they need to be regularly updated, but that can increase the costs. Additionally, supplemental tools require access code purchases, but often are used sub-optimally. When fields are changing rapidly, textbooks are out of date before they are even adopted.

Language faculty working with students on this project have learned a lot more about how they learn, despite what their own training about pedagogy told them. The students were quite frank about what worked and what didn’t.

Student says that the curation project has given her tools for lifelong language learning and application.

Predatory Publishing: Separating the Good from the Bad

Speakers: info coming soon

Predatory, parasitic, vanity, disreputable — these are journals that are not interested in scholarly communication, just in making money. They lack peer review (i.e. they say they do, but it takes 24 hours), charge fees for submissions, and they want to retain all copyright.

Open Access has been tainted by predatory publishing, but they aren’t the same thing. Look out for: a lack of clearly defined scope (or a bunch of SEO-oriented keywords), small editorial board and/or no contact information, lack of peer review process, article submission fees, and the publisher retaining all copyright. Not necessarily related, but are kind of murky regarding credibility: lack of impact factor, geographical location (one of the issues with Beall’s list), article processing charges (to publish, not to submit), and poor quality.

If you’re still uncertain about a specific journal: ask your colleagues; see if it’s indexed where the journal claims to be indexed; if it’s OA, see if it is listed in DOAJ, see if the publisher belongs to OASPA or COPE.

Other tools:
Think. Check. Submit.
COPE principles of transparency & best practices in scholarly publishing
ODU LibGuide

Watch out for predatory conferences. They will fake speakers, locations, schedules, etc., just to get your registration money.

Sometimes it’s hard to tell if a new journal is legitimate because there are a lot of characteristics that overlap with predatory publishers. Check with the editorial board members — do they even know they are on the editorial board?

Open in the Age of Inequality

Speaker: Tressie McMillan Cottom, Virginia Commonwealth University

She’s been at VCU for three years, and one of the first things she and her colleagues tackled was revamping the digital sociology program. In part, there was an effort to make it more open/accessible. Open is inherently critical, and her perspective about sociology is that it’s critical of every aspect of systems and institutions, including the one you exist within.

The program is mostly made up of professionals, so part of it involved developing specific types of skills. They needed to learn professional digital practice, being sociological with their critique of digital life, and analysis of digital data and the creation of that data.

They wanted to practice what they were preaching: open, accessible, rigorous, and critical. They had access to OER materials and SocArXiv (social sciences open archive).

VCU faculty were incentivized to use eportfolios, but no one really knows how to do it well. The tool is a blog. Because it was inconsistently required, the students get the impression it’s not important. However, it’s supposed to show growth over time and potentially be used for getting a job after graduating.

To fix this, they started by shifting to a cohort model. This meant switching to a fall-only enrollment. The second thing they did was to create a course sequence that all students must follow. This meant that faculty could build assignments based on previous assignments. The cohort structure emphasized theory-building and problem solving.

What/why worked: leadership that was willing to embrace the changes; trust among the faculty teaching in the program; approaches to teaching had to be restructured with different cohorts, which required a lot of communication.

What kinda worked: open data was easier to implement than OER (quality and rigor varied tremendously, not much is available in critical sociology at the graduate level, and most of the important topics from the past 30 years were not included); OER resources lacked the critical sociology content they were interested in, such as race, gender, class, intersectionality.

What chafed: accretion (five offices are in charge of “online”, with different staff and objectives; often they don’t know who does what); market logics (why we are supposed to adopt open as a model — things aren’t less expensive when you consider the faculty time it takes to implement them); working without a model (had to develop everything they use from scratch, such as how the eportfolios would be assessed, how to protect students’ identities online, and whether to adopt open access products from for-profit sources).

OER can be created by people with institutional support, time, the cumulative advantage of tenure, and digital skills, without an immediate need for pay, job security, mobility, or prestige. What happens is that those who can do it tend to be homogeneous, which is not what critical sociology is interested in — and in fact, their institutions are often the topics of critical sociology.

They are working on figuring out how to have online classes that protect students who may be vulnerable to critique/attack online. They are trying to build a community around this — it’s very labor-intensive and can’t be done by a small group.

They are trying to reuse the student work as much as possible, generally with data rather than theory work (it’s not really up to par — they’re graduate students). They need to constantly revisit what colleagues have taught or how syllabus shifted in response to that particular cohort as they are planning the next semester of work.

There is a big concern about where to put the data for reuse, but not for reuse by for-profit agencies wanting to create better targeted ads, for example. For now, it’s restricted to use by students at VCU.

The “pay to play” model of OA journals/books is neo-liberal open access. How is the open model simply repackaging capitalist systems? This is also something they need to be incorporating into a critical study of digital sociology.

Online is treated as a way to generate revenue, not as a learning tool. Marketing/communications departments have far too much power over how faculty use online platforms.

Charleston 2016: COUNTER Release 5 — Consistency, Clarity, Simplification and Continuous Maintenance

Speakers: Lorraine Estelle (Project COUNTER), Anne Osterman (VIVA – The Virtual Library of Virginia), Oliver Pesch (EBSCO Information Services)

COUNTER has had very minimal updates over the years, and it wasn’t until release 4 that things really exploded with report types and additional useful data. Release 5 attempts to reduce complexity so that all publishers and content providers are able to achieve compliance.

They are seeking consistency in the report layout, between formats, and in vocabulary. Clarity in metric types and qualifying action, processing rules, and formatting expectations.

The standard reports will be fewer, but more flexible. The expanded reports will introduce more data, but with flexibility.

A transaction will have different attributes recorded depending on the item type. They are also trying to get at intent — items investigated (abstract) vs. items requested (full-text). Searches will now distinguish between whether it was on a selected platform, a federated search, a discovery service search, or a search across a single vendor platform. Unfortunately, the latter data point will only be reported on the platform report, and still does not address teasing that out at the database level.

The access type attribute will indicate when the usage is on various Open Access or free content as well as licensed content. There will be a year of publication (YOP) attribute, which was not in any of the book reports and was only included in Journal Report 5.

Consistent, standard header for each report, with additional details about the data. Consistent columns for each report. There will be multiple rows per title to cover all the combinations, making it more machine-friendly, but you can create filters in Excel to make it more human-friendly.
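That multiple-rows-per-title layout is easy to work with programmatically. A minimal sketch, using invented titles and metric names shaped like a Release 5 title report, of collapsing the long format back to one total per title — the programmatic equivalent of the Excel filter mentioned above:

```python
from collections import defaultdict

# Hypothetical long-format rows: one row per title + metric type combination,
# loosely shaped like a COUNTER Release 5 title report.
rows = [
    {"title": "Journal A", "metric_type": "Total_Item_Requests", "count": 120},
    {"title": "Journal A", "metric_type": "Unique_Item_Requests", "count": 95},
    {"title": "Journal B", "metric_type": "Total_Item_Requests", "count": 40},
    {"title": "Journal B", "metric_type": "Unique_Item_Requests", "count": 33},
]

def totals_by_title(rows, metric_type):
    """Sum usage per title for a single metric type."""
    totals = defaultdict(int)
    for row in rows:
        if row["metric_type"] == metric_type:
            totals[row["title"]] += row["count"]
    return dict(totals)

totals_by_title(rows, "Unique_Item_Requests")
# -> {"Journal A": 95, "Journal B": 33}
```

Keeping one metric per row is what makes the reports machine-friendly: any combination of metric type, access type, or YOP can be filtered or summed without parsing a bespoke column layout.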

They expect to have release 5 published by July 2017 with compliance required by January 2019.

Q: Will there eventually be a way to account for anomalies in data (abuse of access, etc.)?
A: They are looking at how to address use triggered by robot activity. Need to also be sensitive of privacy issues.

Q: Current book reports do not include zero use entitlements. Will that change?
A: Publishers are encouraged to provide KBART reports to get around that. The challenge is that DDA/PDA collections are huge, which makes delivering reports cumbersome. Zero-use reporting will also be dropped for journals.

Q: Using DOI as a unique identifier, but not consistently provided in reports. Any advocacy to include unique identifiers?
A: There is an initiative associated with KBART to make sure that data is shared so that knowledge bases are updated, so that users find the content, so that there are fewer zero-use titles. Publishers have motivation to do this.

Q: How do you distinguish between unique uses?
A: Session-based data. Assign a session ID to activity. If there is no session tracking, use a combination of IP address and user agent. The user agent is helpful when multiple users come through one IP via the proxy server.
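The fallback described in that answer can be sketched in a few lines. This is an illustration of the idea only — the function name and key format here are invented, not from the COUNTER specification, which defines its own precise rules for attributing unique activity:

```python
import hashlib

def session_key(session_id=None, ip=None, user_agent=None):
    """Build a key for counting unique activity: prefer a real session ID,
    otherwise fall back to hashing IP address + user agent, as described
    in the Q&A above. Hashing also avoids storing raw IPs (privacy)."""
    if session_id:
        return session_id
    raw = f"{ip}|{user_agent}"
    return hashlib.sha256(raw.encode()).hexdigest()

# Two requests through the same proxy IP but with different browsers
# get different keys, so they count as different users:
a = session_key(ip="10.0.0.1", user_agent="Mozilla/5.0 (Windows)")
b = session_key(ip="10.0.0.1", user_agent="Mozilla/5.0 (Mac)")
a != b  # True
```

This is why the user agent matters behind a proxy server: without it, every patron coming through the proxy would collapse into a single apparent user.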