what’s the big deal?

house of cards
photo by Erin Wilson (CC BY-NC-ND 2.0)

I’ve been thinking about Big Deals again lately, particularly as there are more reports of institutions breaking them (and then later having to pick them up again) because the costs are unsustainable. It’s usually just the money that is the issue. No one has a problem with buying huge journal (and now book) bundles in general because they tend to be used heavily and reduce friction in the research process. No, it’s usually about the cost increases, which happen annually, generally at higher rates than library collections budgets increase. That’s not new.

The reality of breaking a Big Deal is not pleasant, and often does not result in cost savings without a severe loss of access to scholarly research. I’m not at a research institution, and yet every time I have run the numbers, our Big Deals still cost less than individual subscriptions to the titles that get used more than the ILL threshold. Even if I bump the threshold up to, say, 20 downloads a year, we’re still better off paying for the bundle than paying list price for individual titles. I can only imagine this is even more true at R1 schools, though their costs are likely exponentially higher than ours, and they may be bearing a larger burden per FTE.
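For what it’s worth, the back-of-the-envelope math I do here is simple enough to script. A minimal sketch, with invented prices, titles, and usage numbers (the threshold is the downloads-per-year cutoff below which we’d rely on ILL instead of subscribing):

```python
# Compare a Big Deal bundle price against subscribing individually to
# only the titles used above the ILL threshold. All numbers are invented.
def bundle_vs_titles(bundle_cost, title_costs, usage, threshold):
    """Return (a la carte total, a la carte total minus bundle cost)."""
    a_la_carte = sum(cost for title, cost in title_costs.items()
                     if usage.get(title, 0) > threshold)
    return a_la_carte, a_la_carte - bundle_cost

title_costs = {"J. Foo": 1200, "J. Bar": 900, "J. Baz": 2500}  # list prices
usage = {"J. Foo": 250, "J. Bar": 15, "J. Baz": 80}            # downloads/year

# At a threshold of 20 downloads/year, only J. Foo and J. Baz qualify,
# so a $3,000 bundle still beats the $3,700 a la carte total.
a_la_carte, difference = bundle_vs_titles(3000, title_costs, usage, 20)
```

Swap in real title lists and the comparison takes minutes, which is why I keep re-running it every renewal season.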

That gets at one factor of the Big Deal that is not good — the lack of transparency or equity in pricing. One publisher’s Big Deal pricing is based on your title list prior to the Big Deal, which can result in vastly different costs for different institutions for essentially the same content. Another publisher many years ago changed their pricing structure, and in more polite terms told my consortium at the time that we were not paying enough (i.e. we had negotiated too good of a contract), and that we would see hefty annual increases until we reached whatever amount they felt we should be paying. This is what happens in a monopoly, and scholarly publishing is a monopoly in practice if not in legal terms.

We need a different model (and Open Access as it is practiced now is not going to save us). I don’t know what it is, but we need to figure that out soon, because I am seeing the impending crash of some Big Deals, and the fallout is not going to be pretty.

giving SUSHI another try

(It's just) Kate's sushi! photo by Cindi Blyberg

I’m going to give SUSHI another try this year. I had set it up for some of our stuff a few years back with mixed results, so I removed it and have been continuing to manually retrieve and load reports into our consolidation tool. I’m still doing that for the 2017 reports, because the SUSHI harvesting tool I have won’t let me pull historical data, only monthly reports moving forward.

I’ve spent a lot of time making sure titles in reports matched up with our ERMS so that consolidation would work (it’s matching on title, ugh), and despite my efforts, any reports generated still need cleanup. What is the value of my effort there? Not much anymore. Especially since ingesting cost data for journals/books is not a simple process to maintain, either. So, if all that effort matters little to none, I might as well take whatever junk is passed along in the SUSHI feed and save myself some time for other work in 2019.
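Part of why I’m willing to try again is that the harvested data itself is easy to reshape once it arrives. A hypothetical sketch of flattening one report item from a COUNTER Release 5 SUSHI JSON response into consolidation-ready rows (the nesting follows the COUNTER_SUSHI API structure; the title and counts are invented):

```python
# Flatten one SUSHI JSON report item into (title, month, metric, count) rows.
def flatten_item(item):
    rows = []
    title = item["Title"]
    for perf in item["Performance"]:
        month = perf["Period"]["Begin_Date"][:7]  # e.g. "2018-01"
        for inst in perf["Instance"]:
            rows.append((title, month, inst["Metric_Type"], inst["Count"]))
    return rows

# Invented sample shaped like one entry in a report's Report_Items list.
sample = {
    "Title": "Journal of Examples",
    "Performance": [
        {"Period": {"Begin_Date": "2018-01-01", "End_Date": "2018-01-31"},
         "Instance": [{"Metric_Type": "Total_Item_Requests", "Count": 42}]},
    ],
}

rows = flatten_item(sample)
```

Whatever junk comes through the feed, at least getting it into rows for the consolidation tool isn’t the hard part.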

notes from #OpenConVA

Open Access Principles and Everyday Choices

Speaker: Hilda Bastian, National Center for Biotechnology Information

It’s not enough to mean well — principles and effects matter.

Everyday choices build culture. You can have both vicious circles and virtuous spirals of trust.

There is a fine line between idealism and becoming ideological.

Principles can clash — and be hard to live up to.

One of the unintended consequences of OA is the out-of-control explosion of APC costs for institutions whose principles call upon researchers to publish OA. APC costs have not decreased as expected.

http://blogs.plos.org/absolutely-maybe/2017/08/29/bias-in-open-science-advocacy-the-case-of-article-badges-for-data-sharing/

Take critics seriously. What can you learn from them? But also, make sure you take care of yourself — it can be overwhelming. If you can’t learn anything useful from the criticism, only then can you dismiss it.


Lightning Talks

  1. Ian Sullivan, Center for Open Science – how to pitch openness to pragmatists: Openness is seen (and often presented as) extra work. Reframe it as time-shifting. It’s work you’re already doing (documenting, moving material to somewhere else, etc.). Think about it as increasing efficiency and reducing frustration if you plan for it early on. Version control or document management will save time later, and includes all the documentation through the process that you’ll need in the end, anyway. Open practices are not going away, and they are increasingly required by grant agencies and publication outlets. If you can become the “open expert” in your lab, that will build your reputation.
  2. Anita Walz, Virginia Tech – what is open in the context of OER: Lower cost educational materials; broader format types for learning materials; increasing impact for scholars/authors; collaboration; identity/ego? Maybe. Values of open that benefit higher education: access is for everyone (inclusive), providing feedback (collaborating); sharing new ideas about teaching & research; embracing the use of open licenses; giving credit where it’s due, even when not expected; outward facing, thinking about audience; using it as a lever to positively effect change in your work and the world around you.
  3. Pamela Lawton and Cosima Storz, Virginia Commonwealth University – incorporating art in the community: handed out a zine about zines, with the project of making a zine ourselves. Zines are collaborative, accessible, and community-driven.
  4. Eric Olson, ORCID – increasing research visibility: Names are complicated when it comes to finding specific researchers. One example is a lab that has two people with the same rather common name or first initial and last name (this is not as uncommon as you might think). A unique ID will help disambiguate this if the IDs are included with the researcher’s publications. ORCID is a system that makes this possible if authors and publications/indexes connect to it.
  5. Beth Bernhardt, UNC Greensboro – how to support open scholarship with limited library resources: created a grant that offers time rather than money — web developers, staff with expertise on OA, digitization, metadata, etc. The grant awards the equivalent of .5 FTE. In the end, they found they needed to give more staff time than originally planned to fully execute the projects.
  6. Kate Stilton, North Carolina A&T State University – open educational resources at a mid-sized university: 85% of students receive need-based financial aid, so they definitely need help with acquiring educational resources, in part because it’s a STEM-focused institution. They have to be realistic about what they can offer — the library is understaffed and underfunded. They are focusing on adoption of OER materials, and less on creating their own. They’re also looking at what other schools in the area are doing and how that could be applied locally, as well as leaning on the expertise of early adopters.
  7. Jason Dean Henderso, Oklahoma State University – OERx, a custom installation of the open source content management system MODx: received a donation to create OER content, which meant they had to find a way to host and distribute it. They’ve used Open Journal Systems, but there isn’t great documentation for Public Knowledge Project’s Open Monograph Press software, so they modified it for their own purposes to make something easier to use out of the box. They’ve cross-listed the OER books with the course offerings for faculty to make use of them if they wish.
  8. Braddlee, NOVA Annandale – adoption of OER in VCCS’s Zx23: surveyed faculty who participated in the program across all of the VCCS schools. As you might expect, faculty still don’t see librarians in outreach or institutional leadership roles.
  9. Sue Erickson, Virginia Wesleyan College, and Gardner Campbell, Virginia Commonwealth University – Open Learning ’18: online course about open learning starting in February. Hypothes.is is an annotation tool that will be used and is a favorite of Campbell.
  10. Nina Exner, Virginia Commonwealth University – data reuse: When we talk about sharing data, we don’t mean you need to ignore other obligations like privacy of research subjects (IRB) or copyright restrictions you’ve agreed to. You don’t need to share every single piece of data generated — just the data associated with a specific finding you’ve published or received funding for. FAIR principles come into play at this point, which are generally good practices, anyway. Where you store data isn’t as important as whether it’s accessible and reusable. If you’re a librarian, please don’t talk about “scholarly communications” with non-librarians. Use terms like public access, supporting data, data availability, reproducibility, and rigor.
  11. Jason Meade, Virginia Commonwealth University – Safecast example of crowdsourcing scientific data: Created in response to the Fukushima Daiichi nuclear power plant disaster in 2011. Handed out mini Geiger counter kits, and the data was uploaded to a central site for anyone to see. The initial group to receive the kits were the hardcore skeptics. He is quite impressed with the volume of data created over a short amount of time with very little cost. This model could be used in many other fields to increase data generation at reduced costs, with increased buy-in and awareness among the public.

Student Voices in Open Education

Speakers: info coming soon

Business faculty member at Virginia Tech decided to revamp what a textbook would be, and the end result is more dynamic and useful for that particular course than any offered through traditional sources. It’s also open.

VCU language faculty agreed that teaching 200-level courses is the worst. They decided to have the 201 students create WordPress sites of curated content that was more engaging than traditional language pedagogy. The second part of the project was to have the 202 students create OER scaffolded projects from the curated collections. The students are finding this much more engaging than the expensive textbooks.

Student says she has to choose between an older edition that is more affordable but means she may struggle more in class, and the current edition that is more expensive. Another student says that for how much they spend on the books, they can sometimes be surprisingly out of date.

Faculty are concerned about inclusion and equity, and the cost of materials can have an inequitable impact on learning between students from different economic backgrounds. There is also concern about the texts having relevance to current culture (i.e. Madonna references aren’t great in 2017), so they need to be regularly updated, but that can increase the costs. Additionally, supplemental tools require access code purchases, but are often used sub-optimally. When fields are changing rapidly, textbooks are out of date before they are even adopted.

Language faculty working with students on this project have learned a lot more about how they learn, despite what their own training about pedagogy told them. The students were quite frank about what worked and what didn’t.

Student says that the curation project has given her tools for lifelong language learning and application.


Predatory Publishing: Separating the Good from the Bad

Speakers: info coming soon

Predatory, parasitic, vanity, disreputable — these are journals that are not interested in scholarly communication, just in making money. They lack peer review (i.e. they say they do, but it takes 24 hours), charge fees for submissions, and they want to retain all copyright.

Open Access has been tainted by predatory publishing, but they aren’t the same thing. Look out for: a lack of clearly defined scope (or a bunch of SEO-oriented keywords), small editorial board and/or no contact information, lack of peer review process, article submission fees, and the publisher retaining all copyright. Not necessarily related, but are kind of murky regarding credibility: lack of impact factor, geographical location (one of the issues with Beall’s list), article processing charges (to publish, not to submit), and poor quality.

If you’re still uncertain about a specific journal: ask your colleagues; see if it’s indexed where the journal claims to be indexed; if it’s OA, see if it is listed in DOAJ, see if the publisher belongs to OASPA or COPE.

Other tools:
Think. Check. Submit.
COPE principles of transparency & best practices in scholarly publishing
ODU LibGuide

Watch out for predatory conferences. They will fake speakers, locations, schedules, etc., just to get your registration money.

Sometimes it’s hard to tell if a new journal is legitimate because there are a lot of characteristics that overlap with predatory publishers. Check with the editorial board members — do they even know they are on the editorial board?


Open in the Age of Inequality

Speaker: Tressie McMillan Cottom, Virginia Commonwealth University

She’s been at VCU for three years, and one of the first things she and her colleagues tackled was revamping the digital sociology program. In part, there was an effort to make it more open/accessible. Open is inherently critical, and her perspective about sociology is that it’s critical of every aspect of systems and institutions, including the one you exist within.

The program is mostly made up of professionals, so part of it involved developing specific types of skills. They needed to learn professional digital practice, being sociological with their critique of digital life, and analysis of digital data and the creation of that data.

They wanted to practice what they were preaching: open, accessible, rigorous, and critical. They had access to OER materials and SocArXiv (social sciences open archive).

VCU faculty were incentivized to use eportfolios, but no one really knows how to do it well. The tool is a blog. Because it was inconsistently required, the students get the impression it’s not important. However, it’s supposed to show growth over time and potentially be used for getting a job after graduating.

To fix this, they started by shifting to a cohort model. This meant switching to a fall-only enrollment. The second thing they did was to create a course sequence that all students must follow. This meant that faculty could build assignments based on previous assignments. The cohort structure emphasized theory-building and problem solving.

What/why worked: leadership that was willing to embrace the changes; trust among the faculty teaching in the program; approaches to teaching had to be restructured with different cohorts, which required a lot of communication.

What kinda worked: open data was easier to implement than OER (quality and rigor varied tremendously, not much was available in critical sociology at the graduate level, and most of the important topics from the past 30 years were not included); OER resources lacked the critical sociology content they were interested in, such as race, gender, class, intersectionality.

What chafed: accretion (five offices are in charge of “online”, with different staff and objectives; often they don’t know who does what); market logics (why we are supposed to adopt open as a model — things aren’t less expensive when you consider the faculty time it takes to implement them); working without a model (had to develop everything they use from scratch, such as how the eportfolios would be assessed, protect the student’s identities online, adopting open access products from for-profit sources).

OER can be created by people with institutional support time, the cumulative advantage of tenure, and digital skills, without an immediate need for pay, job security, mobility, or prestige. What happens is that those who can do it tend to be homogeneous, which is not what critical sociology is interested in; in fact, their institutions are often the topics of critical sociology.

They are working on figuring out how to have online classes that protect students who may be vulnerable to critique/attack online. They are trying to build a community around this — it’s very labor-intensive and can’t be done by a small group.

They are trying to reuse the student work as much as possible, generally with data rather than theory work (it’s not really up to par — they’re graduate students). They need to constantly revisit what colleagues have taught or how the syllabus shifted in response to that particular cohort as they are planning the next semester of work.

There is a big concern about where to put the data for reuse, but not for reuse by for-profit agencies wanting to create better targeted ads, for example. For now, it’s restricted to use by students at VCU.

“Pay to play” mode of OA journals/books is neo-liberal open access. How is the open model simply repackaging capitalist systems? This is also something they need to be incorporating into a critical study of digital sociology.

Online is treated as a way to generate revenue, not as a learning tool. Marketing/communications departments have far too much power over how faculty use online platforms.

Charleston 2016: COUNTER Release 5 — Consistency, Clarity, Simplification and Continuous Maintenance

Speakers: Lorraine Estelle (Project COUNTER), Anne Osterman (VIVA – The Virtual Library of Virginia), Oliver Pesch (EBSCO Information Services)

COUNTER has had very minimal updates over the years, and it wasn’t until release 4 that things really exploded with report types and additional useful data. Release 5 attempts to reduce complexity so that all publishers and content providers are able to achieve compliance.

They are seeking consistency in report layout, between formats, and in vocabulary, as well as clarity in metric types and qualifying actions, processing rules, and formatting expectations.

The standard reports will be fewer, but more flexible. The expanded reports will introduce more data, but with flexibility.

A transaction will have different attributes recorded depending on the item type. They are also trying to get at intent — items investigated (abstract) vs. items requested (full-text). Searches will now distinguish whether it was on a selected platform, a federated search, a discovery service search, or a search across a single vendor platform. Unfortunately, the latter data point will only be reported on the platform report, and still does not address teasing that out at the database level.

The access type attribute will indicate when the usage is on various Open Access or free content as well as licensed content. There will be a year of publication (YOP) attribute, which was not in any of the book reports and was only included in Journal Report 5.

Consistent, standard header for each report, with additional details about the data. Consistent columns for each report. There will be multiple rows per title to cover all the combinations, making it more machine-friendly, but you can create filters in Excel to make it more human-friendly.
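Those multi-row reports should be easy to collapse outside of Excel, too. A hypothetical sketch, assuming R5-style field names and invented data, that reduces a title report with one row per metric to a single requests total per title:

```python
# Collapse an R5-style multi-row title report (one row per metric type)
# down to one Total_Item_Requests count per title. Field names are
# assumed from the Release 5 drafts; the rows are invented.
from collections import defaultdict

rows = [
    {"Title": "J. Foo", "Metric_Type": "Total_Item_Requests", "Reporting_Period_Total": 120},
    {"Title": "J. Foo", "Metric_Type": "Unique_Item_Requests", "Reporting_Period_Total": 95},
    {"Title": "J. Bar", "Metric_Type": "Total_Item_Requests", "Reporting_Period_Total": 30},
]

totals = defaultdict(int)
for row in rows:
    if row["Metric_Type"] == "Total_Item_Requests":
        totals[row["Title"]] += row["Reporting_Period_Total"]
```

The same filter-then-aggregate pattern covers the other metric types, which is the machine-friendly flexibility they are promising.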

They expect to have release 5 published by July 2017 with compliance required by January 2019.

Q&A
Q: Will there eventually be a way to account for anomalies in data (abuse of access, etc.)?
A: They are looking at how to address use triggered by robot activity. They also need to be sensitive to privacy issues.

Q: Current book reports do not include zero use entitlements. Will that change?
A: Encouraged to provide KBART reports to get around that. The challenge is that DDA/PDA collections are huge, which makes the reports cumbersome to deliver. Zero-use reporting will also be dropped for journals.

Q: Using DOI as a unique identifier, but not consistently provided in reports. Any advocacy to include unique identifiers?
A: There is an initiative associated with KBART to make sure that data is shared so that knowledgebases are updated, users find the content, and there are fewer zero-use titles. Publishers have motivation to do this.

Q: How do you distinguish between unique uses?
A: Session-based data. Assign a session ID to activity. If there is no session tracking, use a combination of IP address and user agent. The user agent is helpful when multiple users are coming through one IP via the proxy server.
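That fallback is straightforward to sketch. A hypothetical example (the log records and field names are invented) that keys each hit on the session ID when present, and otherwise on IP address plus user agent within an hour window:

```python
# Approximate unique sessions: prefer a real session ID; otherwise fall
# back to (IP, user agent, hour) so proxied users on one IP still
# separate by user agent. Records and field names are invented.
def session_key(record):
    return record.get("session_id") or (
        record["ip"], record["user_agent"], record["timestamp"][:13]  # hour
    )

records = [
    {"ip": "10.0.0.1", "user_agent": "proxy-A", "timestamp": "2016-11-03T10:05", "session_id": None},
    {"ip": "10.0.0.1", "user_agent": "proxy-B", "timestamp": "2016-11-03T10:20", "session_id": None},
    {"ip": "10.0.0.1", "user_agent": "proxy-A", "timestamp": "2016-11-03T10:40", "session_id": None},
]

# Two distinct user agents behind one proxy IP count as two sessions.
unique_sessions = {session_key(r) for r in records}
```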

Slides

Charleston 2016: Rolling with the Punches… and Punching Back: The Millennial Librarian’s Approach to Library Budgets and Acquisitions

Are you one of the Millennials who are ruining everything?

Speakers: Ashley Krenelka Chase (Stetson University College of Law), Lindsay Cronk (University of Houston), Ellen Frentzen (Boston University School of Law), and Christine Weaver-Pieh (Medina County District Library)

If pesky whipper-snappers seem to be moving up the ranks really fast, it’s probably because they don’t have any damn money.

Pesky whipper-snappers are team-oriented, which drives GenXers crazy. They are the best-educated generation so far. [Many other characteristics were described, but these two stand out to me.]

Vendor relations
Sending questions/requirements in writing to vendors has helped them take it more seriously that this person knows what they are doing. There’s a desire to develop a relationship with the individuals working for a vendor in order to have a better conduit for feedback. The communication needs to happen both ways to be productive.

Coworker relations
Take responsibility for your own actions and present how you might do something better if it fails. Know where your weaknesses are. Get beyond “we’ve always done it this way” — spend time regularly assessing workflows and processes to make sure they’re still necessary and appropriately distributed. Kindness and a willingness to approach the work as a team go a long way.

Collection development
Collections is going to need to be fundamentally re-imagined, but we’re going to have to continue with the models we have as well. Don’t need to buy everything we’ve ever bought — hold off until it’s requested (i.e. standing orders). Decisions based on hard data about usage. Working with estates for endowed funds to shift the gift requirements from monographs only. Cutting the stuff nobody looks at may drive usage up for the things they do. Shifting from journal subscriptions to article purchases.

Collections budget
Previous managers held back money to make sure that all assumed needs were covered in a fiscal period – shifted to focusing on the kinds of things people are asking for, and providing them when they do. Fewer books is not a problem if people are finding what they need. Shifting from buying everything of a particular type to buying in targeted areas that are relevant to most of the programs supported by the library.

Missing in skill set?
Need to know more about Banner. Need to know more about faculty politics, but can’t do that until allowed in the room.

Budget priorities
Getting everything to zero by the end of the year. Don’t waste money — tracking usage — but making sure it’s spent. Staff professional development, public-facing services, and the tools to do their work (e.g. office supplies).

Identifying audiences
Stakeholders are not the same as the users (i.e. Provost/Dean, alumni, accrediting bodies). Creating personas. Identifying the shift in majors by population.

Compensating for shortfalls
Reduced sharing of materials outside the institution to keep materials there for the users. Strategic planning to identify potential cuts if they become necessary. Play it close to the vest until what is happening is actually happening, because things can change on a dime as leadership looks at big picture and shifts. Communicate budget information in a clear visual to the decision-makers, particularly to make a point of what funds are available versus what has either been spent or encumbered.

Q&A
Q: What do you do with people who don’t take responsibility?
A: Pesky Whipper-Snappers might be gun-shy about taking responsibility for things they don’t feel confident about. Use this as a teachable moment.

Charleston 2016: You Can’t Preserve What You Don’t Have – Or Can You? Libraries as Infrastructure for Perpetual Access to Intellectual Output

Plenary Sessions of the Charleston Conference at the Gaillard Center (Charleston, South Carolina) - November 3, 2016
Anja Smit at Charleston Conference

Speaker: Anja Smit, Utrecht University

Ancient scholars would not recognize our modern libraries. There are new services (via the internet) that replace some of the services of the library, and we need to continually re-evaluate what value we are adding.

For example, we are putting a lot of effort into locally managed discovery services, and yet a majority of sources referring users to content are Google and Google Scholar. For some disciplines, the library plays a very small role in discovery of content, so the Dutch have focused on providing access to content over discovery.

But, what if OA becomes the publication model of the future? What if Google does digitize all the books? What if users organize access themselves?

The Dutch consortium is flipping some pricing models. In two of the licenses they currently hold, they are paying for the cost of publication rather than the rights for access, and they are making the Dutch scholarly work OA globally. However, they have found perpetual access, or preservation, has not been an easy thing to negotiate or prioritize.

Librarians have been trying to find a solution for long-term preservation since the dawn of digital publication. There are some promising initiatives.

France has built a repository that includes access (not just a dark archive). How do we scale this kind of thing globally? Funding is local. We will never have a global system, so we need local systems based on a standard that will connect them.

Libraries do not own the digital content. We can collect it, but we tend to collect what our community needs rather than the output of our researchers.

Libraries can put things on the agenda of other stakeholders. OA and Open Science is on the agenda of politicians and governments because of libraries.

To-do:

  1. Make perpetual access to knowledge the top priority on our agenda.
  2. Get perpetual access to knowledge on the agenda of relevant stakeholders as quickly as possible. Collectively.
  3. Find partners to develop longer term preservation infrastructure.

We can leave the rest to Google.

Q&A

Q: Dutch presidency of EU and Dutch proposals for OA – what do you think of the Dutch policies in this area?
A: We are all trying to find solutions to further and advance access to knowledge. That is our common goal. This is such a complicated issue — all the stakeholders have to work together to do this.

Q: Libraries have not done as well a job of preserving media. Not as concerned about the availability of scholarly journals and books in the future — what happens to the emails and other media forms that are getting lost?
A: Documented knowledge is at the core of libraries. The other areas have much bigger problems. That is such a huge area that she would not presume to have ideas or suggestions for solutions.

Q: Libraries are being pressured to collect and manage raw faculty research, without additional support, so it’s taking away from collecting in traditional areas.
A: Some say that this will become the new knowledge — data will trump publication. Libraries are best positioned to help researchers manage their data in a consultancy role, and let IT handle the storage of the data. We could spend a little less on collection development to do this.

Q: What will happen when Google is no longer freely accessible and there’s a cost?
A: It doesn’t help if we keep pointing people to local collections. Our users use Google, so we need to help them find what they are not able to find there themselves.


social justice librarianship

Barbara Fister’s latest Library Babel Fish essay is on point for me in so many ways.

It’s not easy to write this well, to combine edge-of-your-seat narrative momentum with scholarly rigor. Not only is it not easy, but we’re schooled to write in an inaccessible style, as if our ideas are somehow better if written in a hard-to-decipher script that only the elite can decode because if people who haven’t been schooled that way can understand it, it’s somehow base and common, not valuable enough.

Yes. So much this. I think it’s possibly one of the reasons why librar* blogs burned so brightly and fiercely before other social media sites took on that space. It gave us a platform to share our thoughts and work in ways that were not stifling like the journals that normally published librar* scholarship. Bloggers who could write eloquently and pointedly about the issues of the day and what they thought of them gained quite a bit of attention (and still do, for those that have continued to write in this type of forum). I certainly read them more consistently and thoroughly than any professional publication filled with strict form and complex sentence structures.

…it’s immoral to study poor people and publish the results of that study in journal run by a for-profit company that charges more for your article than what the household you studied has to buy food this week. I cannot think of any valid excuse for publishing social research this way.

Many of the economic arguments for open access have grown stale, but this one is fresh and new to me, and it hits hard. Much like when those of us in library acquisitions roles submit articles to closed publications, we are choosing the expectations of our peers for tenure requirements over our professional ethics. If we want the contents of scholarly journals to be accessible to all who need them, then we need to make sure our own house is in order before we go out and ask faculty to do the same.

You can reserve the right to share your work, and we’re finding sustainable ways to fund public knowledge. Will it take a little more of your time? Yeah, it’s a cultural shift, which is obviously complex, and you’re so busy.

But if you actually think your research matters, if you think research could make people’s lives better, if you use the phrase “social justice” when you describe your work, you should take that time. It’s unethical not to.

new platforms! eek!

What does it say about library systems and tools that the initial response to trying a new thing is a general groan about having to teach a new platform? Our students are used to hopping on a new social media platform with minimal to no “instruction” every 6-8 months. Our systems and tools should be that intuitive. We shouldn’t need to “teach” them. They should be discovered and used without our active intervention.

ER&L 2016: Access Denied!

“outliers” by Robert S. Donovan

Speakers: Julie Linden, Angela Sidman, and Sarah Tudesco, Yale University

Vendors often use the data from COUNTER turnaway reports as marketing tools to convince a library to purchase new content.

How are users being turned away from the content? How are they finding it in the first place? Google/Google Scholar, PubMed, and publisher platforms that don’t allow for limiting to your content only are generally the sources.

Look for patterns in the turnaway data. Does it match the patterns in your use data and the academic year? Corroborate with examples from access issue reports. This can lead to a purchase decision. Or not.

Look for outliers in the turnaway data. What could have caused this? Platform changes, site outages (particularly for content you do license but appears on the turnaway report), reported security breaches, etc. You can ask for more granular data from the vendor such as turnaways by day or week, as well as IP address. You can ask the vendor for contextual information such as platform changes/issues, and more pointedly, do they think the turnaways are coming from real users.

Combine the data from the turnaway reports with ILL requests. Do they match up? This might mean that those titles are really in demand. However, bear in mind that many users will just give up and look for something else that’s just as good but available right now.

Analysis checklist:
IF you see a steady pattern:

  • Check holdings for access to the content
  • Consider the access model (MU/SU)

IF you see outliers:

  • Consider outside events

ASK the vendor for more information

  • Can you provide more granular data?
  • Can you provide contextual information?
  • Do you think this represents real users?
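The outlier check in that list can be as simple as flagging months more than two standard deviations above the mean. A sketch with invented monthly turnaway counts (the two-sigma threshold is just one reasonable choice):

```python
# Flag months whose turnaway counts sit more than two standard
# deviations above the mean. Counts are invented; the 2-sigma cutoff
# is an illustrative convention, not a COUNTER rule.
from statistics import mean, stdev

monthly = {"Jan": 12, "Feb": 15, "Mar": 11, "Apr": 14, "May": 13,
           "Jun": 10, "Jul": 9, "Aug": 140, "Sep": 16, "Oct": 12}

mu, sigma = mean(monthly.values()), stdev(monthly.values())
outliers = [m for m, n in monthly.items() if n > mu + 2 * sigma]
```

Anything flagged this way is what you take back to the vendor with the “can you explain this?” questions above.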

Audience Q&A:

Journal turnaways can include archival years for current subscriptions that aren’t included.

One very aggressive vendor used the library’s purchase request form to flood them with requests from users that don’t exist.

How are the outliers documented? It’s hard to do. Vendors certainly hang on to them, even when they acknowledge the use isn’t legitimate.

ER&L 2016: Using the Scrum Project Management Methodology to Create a Comprehensive Collection Assessment Framework

“Scrum” by Curtis Pennell

Speaker: Galadriel Chilton, University of Connecticut

They used Scrum for an assessment project for their electronic resources collection. They wanted to make sure that all library staff in collection development would be able to manage annual reviews of eresources.

SCRUM: A breathtakingly brief and agile introduction by Chris Sims & Hillary Louise Johnson

First you put together a team, then you create the stories you want to build from the deliverables. Once you have your story, you have a sprint planning meeting for the following two-week period, which will take about four hours. This planning takes the deliverables and the story, and then develops the tasks needed to accomplish them. You’ll also need to factor in available time, because the daily work still needs to be done. Each task gets an estimated time (determined by consensus). Tasks are assigned based on availability and skill set.

The sprint story board is a physical item. You document the story, then three columns of not started, in progress, and done. Each day of the sprint you have a check-in to report on the previous day’s work, problems, and the work that will be done that day.

One of the downsides is that they are a small team, and by the second or third sprint, they were getting exhausted by it. They had other jobs that needed to be done during this as well.

It worked really well for balancing the work against the other tasks of each person, and for avoiding burnout or a sense of imbalance.

Q: What other projects would be useful for this method?
A: Moving proxy services; mass communication with vendors to update mailing addresses and contacts; tracking time and deliverables for annual reporting; projects where you don’t know ahead of time what you’ll have to do.

Tracking time: a Chrome plugin; post-it notes; a spreadsheet of time managed with a time-tracker.

Q: Minimum number of people?
A: Four.
