what’s the big deal?

[Image: house of cards. Photo by Erin Wilson (CC BY-NC-ND 2.0)]

I’ve been thinking about Big Deals again lately, particularly as there are more reports of institutions breaking them (and then later having to pick them up again) because the costs are unsustainable. It’s usually just the money that is the issue. No one has a problem with buying huge journal (and now book) bundles in general because they tend to be used heavily and reduce friction in the research process. No, it’s usually about the cost increases, which happen annually, generally at higher rates than library collections budgets increase. That’s not new.

The reality of breaking a Big Deal is not pleasant, and often does not result in cost savings without a severe loss of access to scholarly research. I’m not at a research institution, and yet, every time I have run the numbers, our Big Deals still cost less than individual subscriptions to the titles that get used more than the ILL threshold. Even if I bump that threshold up to, say, 20 downloads a year, we’re still better off paying for the bundle than paying list price for individual titles. I can only imagine this is even more true at R1 schools, though their costs are likely far higher than ours, and they may be bearing a larger burden per FTE.
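For context, the comparison behind “running the numbers” is really just a sum and a threshold. Here is a minimal Python sketch of that back-of-the-envelope math; every title, price, and threshold below is a made-up placeholder for illustration, not our actual usage or pricing data.

# Hypothetical back-of-the-envelope comparison of a Big Deal bundle price
# against list-price subscriptions for only the titles used above a threshold.
# All figures below are invented placeholders, not real pricing data.

# (title, annual downloads, list price for an individual subscription)
usage = [
    ("Journal A", 312, 1850.00),
    ("Journal B", 45, 920.00),
    ("Journal C", 12, 640.00),   # below the threshold: cheaper to handle via ILL
    ("Journal D", 160, 2400.00),
]

BIG_DEAL_PRICE = 5000.00   # hypothetical annual bundle cost
ILL_THRESHOLD = 20         # downloads/year above which ILL stops making sense

# Cost of subscribing individually to every title used more than the threshold
unbundled_cost = sum(price for _, downloads, price in usage
                     if downloads > ILL_THRESHOLD)

print(f"Individual subscriptions above threshold: ${unbundled_cost:,.2f}")
print(f"Big Deal bundle:                          ${BIG_DEAL_PRICE:,.2f}")
print("Bundle wins" if BIG_DEAL_PRICE < unbundled_cost else "Unbundling wins")

In practice the title list and download counts would come from usage reports and the prices from publisher quotes, but the shape of the calculation stays the same.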

That gets at one factor of the Big Deal that is not good: the lack of transparency or equity in pricing. One publisher’s Big Deal pricing is based on your title list prior to the Big Deal, which can result in vastly different costs for different institutions for essentially the same content. Another publisher many years ago changed their pricing structure and, in more polite terms, told my consortium at the time that we were not paying enough (i.e. we had negotiated too good of a contract), and that we would see hefty annual increases until we reached whatever amount they felt we should be paying. This is what happens in a monopoly, and scholarly publishing is a monopoly in practice if not in legal terms.

We need a different model (and Open Access as it is practiced now is not going to save us). I don’t know what it is, but we need to figure that out soon, because I am seeing the impending crash of some Big Deals, and the fallout is not going to be pretty.

NASIG 2015 – Somewhere To Run To, Nowhere To Hide

[Image: info free fridge. Information wants to be free?]

Speaker: Stephen Rhind-Tutt, President, Alexander Street Press

His perspective is that of a publisher of primary source collections, mostly video and audio, produced by a small company of 100 or so people.

There are billions of photos, videos, and audio files being added to the Internet every year, and the volume keeps growing. We’re going to need a bigger boat.

He reviewed past presentations at NASIG and found recurring nightmares: OA replacing publishers, Wikipedia replacing reference sources, vendors bypassing libraries to go directly to faculty, online learning replacing universities, and so on.

All technologies evolve and die. Many worry about the future, many hold onto the past, and we’re not responding quickly enough to the user. Dispense with the things that are less relevant. Users don’t want to search, they want to find.

You can project the future, and not just by guessing. You don’t have to know how it’s going to happen, but you can look at what people want and project from that.

Even decades after the motor car was developed, we were still framing it within the context and limitations of the horse-drawn carriage. We’re doing that with our ebooks and ejournals today. If we look to the leaders in the consumer space, we can guess where the information industry is heading.

If we understand the medium, we can understand how best to use it. Louis Kahn says, “Honor the material you use.” The medium of electronic publications favors small pieces (articles, clips) and is infinitely pliable, which means it can be layered and made more complex. Everything is interconnected with links, and the links are more important than the destination. We are fighting against the medium when we put DRM on content, limit the simultaneous use, and hide the metadata.

“I don’t know how long it will take, but I truly believe information will become free.”

Video is a terrible medium for information if you want it fast: the content of 30 minutes of video can be read in about 5 minutes. ASP has noticed that use of the text content is on par with use of the associated video content.

Mobile is becoming very important.

Linking needs to work both going out and coming in. The metadata for linking must be made free so that it can be used broadly and lead users to the content.

The researcher wants every piece of information created on every topic, for free. From where he sits as a publisher, he is seeing better content move more and more to open access. As a result, ASP is developing an open music library that will point to both fee-based and free content, to make it shareable with other researchers.

In the near future, publishers will be able to make far more money developing the research process ecosystem than by selling one journal.

SSP/NASIG – How One Publisher Is Responding to the Changing World of Scholarly Communication

[Image: money. Saving up for this year’s price increases.]

Speaker: Jayne Marks, Vice President of Global Publishing, LWW Journals, Wolters Kluwer

We have to be adaptable and willing to change to respond to the market. Online platforms are only 19 years old (ejournals are 22), and they have changed a lot in that time.

What will journals look like in 2025? What about books?

Some assumptions people have had: Open Access is the answer to everything; move publishing back to institutions; journals cost less when they aren’t printed; textbooks would be cheaper online; self-publishing will replace publishers; publishers don’t add any value to the educational process.

STM journal output continues to grow while library budgets remain flat at best. The number of researchers in the world is growing as well, and output correlates with the number of researchers. Library budgets in North America are growing the least. There has been a huge rise in papers submitted from China in recent years, and not all of them can be published.

The trend in STM is to make the article a hub that links out to other content: data, podcasts, video, etc. Data is becoming a significant piece of research. There are also more tools available to manage the research reputation of scholars, which is becoming more important.

Print is not dead. It’s in decline, but still needed. Physicians want print as an option as well as online, tablet, and smart phone versions. Students still want print textbooks, as well as online resources from the library. Pharma advertising is focused solely on print.

Access and archiving mandates for open access are increasing, though mostly from institutional mandates rather than from funding sources. Open access is here to stay, and as far as publishing is concerned, it’s another business model. OA use differs by discipline: medicine and biology focus on Gold, but most other sciences focus on Green. There is also an increase in mandates requiring data to be made publicly available, and with that come questions about what data is needed and where it is stored and delivered.

Publishers see pressure on costs and revenue, with demands for content in multiple formats and an increasing number of submissions and requests for new journals. There are more formats, with various apps and delivery models. Requirements for usage metrics across all the formats put pressure on internal systems and speed of development. OA mandates require creative business models. Increasing US regulations are driving pharma revenue out of medical publishing.

Publishers are responding by assessing the needs of their target demographics through extensive market research. The challenge after that is taking the diverse wants and needs and developing something that can be implemented universally and automated.

New business models: gold or hybrid OA; platinum OA, where a society or institution pays all publication costs so that neither author nor reader pays (generally only in emerging markets); new journals launched with OA, blended, or bundled models. Advertising-funded publishing is not working.

New editorial models: post-publication commentary, open peer review, community-based peer review, and independent companies providing peer review services; services for non-English-speaking authors, such as translation, detailed content editing, compliance and ethics policy assistance, and recommendation engines.

There is experimentation all over the market, with new startups and services being rolled out all the time. Publishers are working with them, and there are varying degrees of adoption by authors and readers.

Listen, question, engage — it’s all about understanding the customers. Publishers need to engage more in the scholarly communication process.

Public policies are driving change in scholarly communication, spurring innovation and experimentation. Free drives engagement; revenue will come from new places. Correctly identifying and solving customer problems will drive opportunities. Journals and books will become content to be used in new ways.

Critical Moments: Chance, Choice, and Change in Scholarly Publishing

[Image: Dr. Katherine Skinner at NASIG 2014]

Speaker: Dr. Katherine Skinner, Educopia Institute

How do we better make sure that there are connections between all of the players in this scholarly space?

The Educopia Institute advances cultural, scientific, and scholarly institutions by catalyzing networks and collaborative communities. They want to preserve digital scholarly information for the long-term, in a neutral, lightweight solution.

In the early 2000s, digital scholarship was uncharted. The frontier depends on your perspective, since it’s not really as blank a slate as it appears on the surface. Establishing a settlement in the digital frontier is difficult because traditional business practices work to reinforce each other. We have to respect traditions in order to adapt them.

How do new fields come into being? Sociology looks at the factors that have to be in place to make change happen. Skinner focused her dissertation on the creation of new styles of music, and those questions led her to become deeply involved in scholarly publishing. She wants to revolutionize the way scholarship is produced.

Field formation: principle 1
Beware changes in the modes of communication — new fields and practices emerge.

Example: The printing press revolutionized the modes of communication in the Church, causing it to splinter into different groups and allowing the people, not just the leaders, to interpret the content.

Field formation: principle 2:
Innovations don’t come from the center, they come from unexpected locations.

Example: When the phonograph was created, 150 record companies sprang up. By the time the Great Depression hit, the market for records had declined, but it was revived by the creation of the jukebox. Jukebox records were changed out weekly, creating a larger demand for new content. The jukebox also exposed more white Americans to black music.

Field formation: principle 3:
Cultural processes of production, distribution, and reception depend upon networks of people.

Example: Castellers in Barcelona are teams of people who “build” towers several stories high, standing on each other’s shoulders. We need closely integrated chains of dependence in our communities.

The internet has shifted the nature of communications. Our media is conservative, operating within a business model developed and established over centuries. We’re starting to notice some of the innovations around us (example: Elsevier’s acquisition of Mendeley) and bringing them in from the fringes.

System-wide change requires system-wide involvement. We can’t change one tiny aspect and expect the rest to shift — it has to happen across the system. We can’t do this in silos. We have to work with all the players, and remind them why we’re here and why we matter.

Our mission as librarians is to support and sustain access to cultural, political, and scientific knowledge. That’s radical and wonderful. We need to start committing our resources to efforts to make change, like DPLA. It’s not just about content types, it’s about networks of people supporting the broader public good.

Chance keeps us very interested, because we never know how it’s going to turn out. Theoretically. Sometimes it can be rigged. Every time we turn around, there are game changers like computers, scanners, the Internet, mobile devices, etc. These provide us moments when massive changes in communication can occur.

Some of the entities in these moments have rigged the system to their own advantage. Over and over, we have made a choice to go along with them.

Publishers are motivated differently from librarians, and the changes in communication have had a huge impact on their ability to do what they do. Some have folded or been absorbed by larger companies. Publishers are seeking survival right now, and many are motivated to support the academic environment, but they come at it from a different perspective than librarians. They act out of self-interest not because they are bad, but because they are seeking survival in an uncertain marketplace where libraries are seen as a stable source of income. They can’t survive in the common marketplace with 10% annual increases, but they know libraries will absorb them, so they turn to us.

Why are we choosing to let a marketplace impinge on our mission as libraries? Every time we turn from permanent collections to rented collections, we are failing our mission. The recession has hit us hard, and it’s going to hit us harder before it’s over. Higher education is starting to downsize, which includes the money the publishers expect to get from us.

Flexibility is the trademark of survival. We can turn this situation around. We could resist change by doing the same thing the same way, or embrace change by being on the bleeding edge, or go at a moderate pace. But none of these will work as individuals. We have to do them as a community in a network to make real changes.

Trends to watch: library publishing (Library Publishing Coalition), web archiving (netpreserve.org), preservation, and open-access funding.

Audience member suggests that schools who pay OA fees should get rebates on their subscriptions for those journals in order to increase parity in the system, since institutions that have actively publishing scholars end up paying for the rest of us to get it free.

NASIG 2012: Is the Journal Dead? Possible Futures for Serial Scholarship

Speaker: Rick Anderson, University of Utah

He started with an anecdote about a picture of his dog that he thought made her look like Jean-Paul Sartre. He then went to find a picture of Sartre on Google, and had absolutely no doubt he’d not only find one quickly, but that he would find one with the same expression. In a world where he can find that picture in less than a minute, it is absurd for us to think we can keep doing serial scholarship the way we have always done it.

The latest version of Siri can identify a reference-type question and go to Wolfram-Alpha to find the answer. How far away are we from this kind of thing doing very specific article identification and retrieval?

When budgets are falling or flat, there is rising impatience with waste in libraries. One of the most egregious wastes is that we have always bought, and continue to buy, stuff that nobody wants, and we still hold onto those things.

Market saturation is becoming an increasing issue as more and more articles are being submitted, and rejecting them or publishing them costs more money. A landslide of data is being created, with more coming every year. Open access mandates (whether seriously enforced or not) are forcing authors to think about copyright, putting pressure on the existing scholarly communications structure.

The Google books case, the Hathi Trust case, and the Georgia State ruling will all have impacts on copyright law and the traditional model of scholarly communication. The ground is soft — we can make changes now that may not have been possible 5 years ago, and may not be possible 2 years from now. Moving documents from print to digital is not a revolutionary change, but moving from a non-networked to a networked environment is. Distribution is at the heart of publishing, and is obviated if everyone has access to a document in a central location.

Before iTunes and the internet, we had to hope that the record store would carry the music we were interested in. Now, we can access any music from anywhere, and that’s the kind of thing that is happening to scholarly communications.

The environment is changing. The Digital Public Library of America and Google Books are changing the conversation. Patron-driven acquisitions and print on demand are only possible because of the networked environment. As we move towards this granular collecting, the whole dynamic of library collections is going to change.

This brings up some serious questions about the Big Deal and the Medium Deal. Anderson calls individual title subscriptions the Medium Deal: you buy a bunch of articles you don’t need in order to ensure that you get the ones you do need at a better price per download.

Anderson believes there is little likelihood that open access will become the main publishing model for scholarly communication in the foreseeable future, but it will become an increasingly significant niche in the marketplace.

What does the journal do for us that is still necessary? What problem is solved by each element of the article citation? Volume, issue, and page number are not really necessary in the networked age. Our students don’t necessarily think about journals; they think about sources. The journal matters as a branding mechanism for articles and gives us an idea of the reliability of the article. It matters who the author is. It matters when it was published. The article title tells us what the article is about, and the journal title lends it authority. But the volume and issue numbers don’t really tell you anything, and have more to do with the economics of print distribution. Finally, the DOI matters, so you can retrieve the article. So why is the publisher missing? Because it doesn’t matter for identifying, retrieving, or selecting the article.

There really is no such thing as “serials” scholarship. There are articles, but they aren’t serials. They may be in journals or a collection/server/repository. Typically there isn’t anything serial about a book, a review, a report, but… blog postings might be serial. What’s really interesting are the new categories of publication, such as data sets (as by-products of research or as an intentional product) and book+ (ongoing updated monographic publications, or monographs that morph into databases).

A database (or article or book) can be a “flow site,” such as Peggy Battin’s The Ethics of Suicide, which she has been working on for a decade. It will be published as both a book and a website with ever-growing content and data. It’s no longer a static thing, and it gives us the benefit of currency at the cost of stability. How do you quote it? What is the version of record?

The people we serve have access to far more content than ever before, and they are more able to access it outside of the services we provide. So how do we stay relevant in this changing environment?

Definitions will get fuzzier, not clearer. This will be a tremendous boon to researchers. What emerges will be cool, exciting, incredibly useful and productive, and hard to manage. If we try to force our traditional methods of control onto the emerging models of scholarship, we will frustrate not only ourselves but also our scholars. It is our job to internalize complexity, so that we are the ones experiencing it and our users don’t have to.

social & scholarly communications, mixing it up

Scientific publisher Springer has been doing several things lately that make me sit up and pay attention. Providing DRM-free PDF files of their ebooks is one, and now I see they are providing rather useful bits of scholarly information in a rather social media format.

Springer Realtime gives currently trending topics and downloads for content they are serving out to subscribers around the world. The only thing that’s missing is a way to embed these nifty widgets elsewhere, like on subject guide pages.

get off my lawn…er…library


The librarian community (at least, those in higher education) is all abuzz over a recent article in The Chronicle by social science and humanities librarian Daniel Goldstein. He makes several damning statements about the trend in libraries towards access over ownership and “good enough” over perfect.

Before reading the byline at the end of the article, I had the sense that the author was a well-meaning if ill-informed professor, standing up for what he thinks libraries should be. Needless to say, I was surprised to learn that he’s a librarian who ought to know better.

Yes, librarians should be making careful decisions about collections that guide users to the best resources, but at the same time we are facing increasing demands for more, and more expensive, content than we already provide. And yes, we should be instructing users on how to carefully construct searches in specialized bibliographic databases, but we’re also facing increased class sizes with decreased staff.

There is no easy answer, and going back to some idealized vision of the way things were won’t solve the problem, either. If you do go read this article, I highly recommend reading the comments as well. At least the first few do an excellent job of pointing out the flaws in Goldstein’s either-or argument.

NASIG 2010: It’s Time to Join Forces: New Approaches and Models that Support Sustainable Scholarship

Presenters: David Fritsch, JSTOR and Rachel Lee, University of California Press

JSTOR has started working with several university presses and other small scholarly publishers to develop sustainable options.

UC Press is one of the largest university presses in the US (36 journals in the humanities, biological, and social sciences), publishing both UC titles and society titles. Their prices range from $97 to $422 for annual subscriptions, and they are SHERPA Green. One of the challenges they face on their own platform is keeping up with libraries’ expectations.

ITHAKA is a merger of JSTOR, Ithaka, Portico, and Aluka, so JSTOR is now a service rather than a separate company. Most everyone here knows what the JSTOR product/service is, and that hasn’t changed much with the merger.

Scholars’ use of information is moving online, and if it’s not online, they’ll use a different resource, even if it’s not as good. And if things aren’t discoverable by Google, they are often overlooked. More complex content is emerging, including multimedia and user-generated content. Mergers and acquisitions in publishing are consolidating content under a few umbrellas, and this threatens smaller publishers and university presses that can’t keep up with the costs at a smaller scale.

The serials crisis has impacted smaller presses more than larger ones. Despite good relationships with societies, it is difficult to retain popular society publications when larger publishers can offer them more. It’s also harder to offer the deep discounts expected by libraries in consortial arrangements. University presses and small publishers are in danger of becoming the publisher of last resort.

UC Press and JSTOR have had a long relationship, with JSTOR providing long-term archiving that UC Press could not have afforded to maintain on their own. Not all of the titles are included (only 22), but they are the most popular. They’ve also participated in Portico. JSTOR is also partnering with 18 other publishers that are mission-driven rather than profit-driven, with experience at balancing the needs of academia and publishing.

By partnering with JSTOR for their new content, UC Press will be able to take advantage of the expanded digital platform, sales teams, customer service, and seamless access to both archive and current content. There are some risks, including the potential loss of identity, autonomy, and direct communication with libraries. And then there is the bureaucracy of working within a larger company.

The Current Scholarship Program seeks to provide a solution to the problems outlined above that university presses and small scholarly publishers are facing. The shared technology platform, Portico preservation, sustainable business model, and administrative services potentially free up these small publishers to focus on generating high-quality content and furthering their scholarly communication missions.

Libraries will be able to purchase current subscriptions either through their agents or through JSTOR (which will not charge a service fee). Archive content, however, will be purchased directly from JSTOR. JSTOR will handle all of the licensing, and current JSTOR subscribers will simply have a rider adding the titles to their existing licenses. For libraries that purchase JSTOR collections through consortial arrangements, it will be possible to add title-by-title subscriptions without going through the consortium if a consortial agreement doesn’t make sense for those purchase decisions. They will offer both single-title purchases and collections, with the latter being more useful for large libraries, consortia, and those who want current content for titles in their JSTOR collections.

They still don’t know what they will do about post-cancellation access. Big red flag here for potential early adopters, but hopefully this will be sorted out before the program really kicks in.

Benefits for libraries: reasonable pricing, more efficient discovery, a single license, and meaningful COUNTER-compliant statistics for the full run of a title. Renewal subscriptions will maintain access to what libraries already have, and new subscriptions will come with access from the first online year provided by the publisher, which may not be volume one, but is certainly as comprehensive as what most publishers offer now.

UC Press plans to start transitioning in January 2011. New orders, claims, etc. will be handled by JSTOR (including print subscriptions), but UC Press will be setting their own prices. Their platform, Caliber, will remain open until June 30, 2011, but after that the content will be available only on the JSTOR platform. UC Press expects to move to online-only in the next few years, particularly as the number of print subscriptions dwindles to the point where it is cost-prohibitive to produce the print issues.

There is some interest from the publishers to add monographic content as well, but JSTOR isn’t ready to do that yet. They will need to develop some significant infrastructure in order to handle the order processing of monographs.

Some in the audience are concerned about the cost of developing platform enhancements and other tools, mostly that these costs will be passed on in subscription prices. They will be, to a certain extent, in that the publishers will be contributing to the development and setting the prices; but because it is a shared system, the costs will be spread out and will likely impact libraries no more than they do already.

One big challenge all will face is unlearning the mindset that JSTOR is only archive content and not current content.

Learning 2008 Keynote: Networked Academic Conversations and the Liberal Arts


Presenter: Ruben R. Puentedura

The creation of knowledge through conversation is the core of liberal arts education.

According to research from the past 5-10 years, blended learning (face-to-face + online) is becoming more relevant and necessary on residential campuses. These studies show that truly blended courses where the face-to-face and online components are comparable in magnitude will fix some of the problems with both face-to-face and online courses.

Face-to-face learning is good for:

  • establishing a local presence
  • discursive task definition
  • generation of ideas

Online learning is good for:

  • sustaining social presence
  • discursive task execution
  • evaluation & development of ideas

[side note: I am seeing truth in the above thanks to online social networks like Twitter, Facebook, and the Library Society of the World, which are responsible for both sustaining and growing the connections I make at conferences.]

Prior to the development of the tools and technology that led to Web 2.0, we did not have the ability to see bi-directional conversations on the Web. Web 2.0 has re-defined the Web as a platform for small pieces, loosely joined. Web 2.0 is the architecture of participation, with remixable data sources and data transformations, harnessing collective intelligence.

Conversations as continuous partial attention
Twitter is both asynchronous and synchronous at the same time. Conversations can be both instantaneous and over time, and there are no expectations that you will read every single update from everyone you follow.

Conversations surrounding production/consumption
Flickr has taken the static image on a website and enhanced it with conversational elements like comments, groupings, tags, and notes on photos. Partially because the content is self-produced, this has created a supportive community and a culture of intolerance for troll-like behavior. In contrast, YouTube, which offers similar features for moving images, is filled with content not created by the sharer, and the community is unfriendly compared to Flickr.

Ustream contains user-generated live streaming video, and should have a culture of users similar to Flickr; however, it appears to lean more towards the YouTube culture. Swivel is a site for sharing data and creating visualizations from that data, and it straddles the line between a supportive culture and one that is prone to troll-like behavior.

All of this is to say that if you choose to use these tools in your classroom, you need to be aware of the baggage that comes with them.

Conversations mapping the terrain
del.icio.us is a social bookmarking service that can be an information discovery tool as well as a conversation. The process of adding a new bookmark tells you something about the URL by showing how others have added it (leaning on the expertise of others). The network of users and tags can show connections outside of defined groups.

Conversations based on shared creation
Most blogs include comment functionality, which allows readers to participate on equal footing. Trackbacks show links from other locations, branching the conversation out beyond the boundaries of the solitary blog. The blog has also caused the rediscovery of forms of discourse such as the exploratory essay, epistolary conversation, and public scholarly societies (scholarly societies that are visible and present in the public eye as authorities on subjects).

Wikis provide a forum for discussion with a historical archive of past conversations. Through the interaction between scholars and non-scholars on wikis such as Wikipedia, the articles become better, more comprehensible explorations of topics. A student project using wikis could be one in which students write a scholarly essay for a topic that lacks one on Wikipedia and submit it, thus gaining the experience of creating scholarship in the public eye and contributing to the greater good of the whole.

SIMILE Timeline is another tool for creating content relevant to a course that provides a forum for discussion.

Conversations about conversations
Ning allows you to create a social network with tools like those on MySpace or Facebook but without the culture and baggage. You can do similar things in traditional academic tools such as course management software, but Ning is more attractive and functional.

What’s next? Puentedura suggests the SAMR model. As we move from substitution to augmentation to modification to redefinition in the way we use technology and tools in the classroom, we move from basic enhancement with little buy-in or value to a complete transformation of the learning process that is a true academic conversation between the student and the professor.

Resources:
The Horizon Report
ELI: 7 Things You Should Know About…
50 Web 2.0 Ways to Tell a Story

Harvard & the Open Access movement

A colleague called the Harvard faculty’s decision on making all of their works available in an institutional repository a “bold step towards online scholarship and open access.” I thought about this for a bit, and I’m not so sure it’s the right step, depending on how this process is done. Initially, I thought the resolution called for depositing articles before they are published, which would be difficult to enforce and likely result in the non-publication of said articles. However, upon further reflection and investigation, it seems that the resolution simply limits the outlets for faculty publication to those journals that allow for pre- or post-publication versions to be deposited in institutional repositories. Many publishers are moving in that direction, but it’s still not universal, and is unlikely to be so in the near future.

I am concerned that the short-term consequences will be increased difficulty in junior faculty getting their work published, thus creating another unnecessary barrier to tenure. I like the idea of a system that retains the scholarship generated at an institution, but I’m not sure if this is the right way to do it. Don’t get me wrong — repositories are a great way to collect the knowledge of an institution’s researchers, but they aren’t the holy grail solution to the scholarly communication crisis. Unless faculty put more of a priority on making their scholarship readily available to the world than on the prestige of the journal in which it is published, there will be little incentive to exclusively submit articles to publishers that allow them to be deposited in institutional repositories beyond mandatory participation. There are enough hungry junior faculty in the world to keep the top-shelf journal publishers in the black for years to come.
