fail whale and phoney stark

I joined Twittr in 2007 — some time before they put the ‘e’ in the name. Some colleagues at Blogcritics thought it would be a useful tool for sharing what we were working on. That use fell away pretty quickly. As more library folks joined the service, it became a really useful tool for keeping up with the profession, following conference content, and shitposting.

Now it looks like the new owner either is intentionally destroying it or is just too incompetent to lead. Some folks are hanging on to the bitter end, some are jumping ship to Mastodon or Tumblr or only using Facebook/Instagram. As much as that place had become a bit of a hellscape, it was a tool that connected me to a lot of my colleagues and friends from all over. I’ll miss that aspect.

I joined Mastodon in 2018 at the general suggestion of Ruth Kitchin Tillman. She’s written a great introduction for new users. The glammr.us instance was pretty quiet until rumors of the Twitter sale surfaced, and then over the past few weeks it’s gotten quite busy. Up to that point, it was a small community I’d look in on occasionally, but did not engage with that much.

The Mastodon culture I’ve encountered so far has been much more community oriented, with more 1:1 engagement. Folks there make use of content warnings and add alt text to images. It’s not without its issues (black folks say it’s inherently a white space; there is a bit of a learning curve), but so far I’m fine with building a different community and kind of engagement in that space.

I took Facebook off my phone…

…and I’m kind of surprised and pleased by what happened.

Admittedly, it’s only been two days, and I’ve done this before (for different reasons), so I know I might eventually add it back. But for now, it’s doing what I had hoped and more.

I don’t have an endless scroll of posts and links and memes and videos to occupy my brain in the down times. I still have other social media apps, so there are plenty of things to occupy that space, but they aren’t nearly as prolific. Also, although I’m still on Twitter, I barely read it and usually only a subset of content when I do. (Come find me on Mastodon, if that’s your thing, though I’m not much more active there.)

The thing that surprised me, though, was a resurgence of the use of Pocket. I’ve started throwing links to essays and articles there for later reading as I peruse the scroll of social media elsewhere. Then, when I’m waiting in line, or have a few minutes before the next thing, or eating a meal alone, I have some handy reading material that I actually want to see.

I get “all caught up” on Instagram more quickly than I used to, and I’m trying to browse the Flickr app regularly, too, but it’s not as well designed.

When I do look at FB, it’s on a desktop browser with a plugin that filters out certain content. I mean, I know that one cousin loves right wing media and posting racist/homophobic memes, but thanks to the filter, I can remain ignorant on the details.

I find that in my time away from FB, not much has actually happened that needs my attention. I hope eventually I can settle back into the apathetic disinterest I had for it years ago.

I still have Messenger, though. Too many people I like use it instead of texting or email.

quantified self, an addendum

Digital Body Fat Weight Scale by Balance

Yesterday I shared a list of apps and tools I’m using to monitor and track things, mainly health-related. Well, my Amazon packages arrived last night, and I now have a new scale. The old one started acting weird a week or two ago, coinciding with what appeared to be a three or four pound gain in weight in a week. The new scale indicates my weight is right around where it was before my old scale went haywire. So, that’s reassuring.

My new scale also measures body fat %, muscle mass %, bone density %, and water weight %. As I mentioned yesterday, I already have a hand-held body fat monitoring tool, but I was curious to know whether the electrical impulses running from foot to foot would produce different readings than those running from hand to hand. Sure enough, my body fat % is much higher on the scale than with the hand-held device. For my own tracking purposes, I’m recording the average of the two.

the quantified self

Over the past few years, I’ve been using a variety of apps and devices to keep track of all sorts of things about myself, primarily related to my health and well-being. It’s been on my mind lately that you might be interested in these as well, so here’s a brief run-down of what I’m using today.

Fitbit One

The Fitbit One is no longer manufactured by Fitbit, which is too bad. I’ve had one of these devices for a little over five years (thanks to Marie), and I’m on my third one with a fourth in reserve. It’s small enough to fit in my pocket, even with the holder/clip. It tracks my steps fairly accurately (I’ve tested it periodically), as well as distance covered and the quality of my sleep. It has been my primary health app/device for most of the time I’ve been measuring myself, and though it has its limitations, I still appreciate the core functions. [Side note: I have tried one of the wristband style trackers and I didn’t like it. The neoprene strap made my wrist sweaty, and the step counts seemed less accurate. I liked the heart-rate monitor aspect, but not enough to deal with the annoyances of a thing on my wrist. How I managed to wear a wristwatch for most of the first quarter of my life, I can’t imagine now.]

SyncSolver app icon

Fitbit decided to not play nicely with the Apple Health universe, but another app developer built SyncSolver to fix that. Since I have my Fitbit on me more than my phone (and it seems to be more accurate than the built-in pedometer on the iPhone), I use this to sync my steps to Apple Health for other apps to read. More on those below.

SleepCycle regular sleep graph

A little over a year ago, I became concerned about the quality of my sleep. I downloaded an app that I no longer use and can’t remember the name of to track my snoring, which was far more frequent and vociferous than I thought. I began experimenting with things to improve my sleep quality, from nasal strips (not helpful) to a contoured memory foam pillow (helpful). In the process, I ran across the SleepCycle app. It’s a smart alarm that listens to your sleep and, based on its programming, can determine where you are in your sleep cycle throughout the night. As it gets closer to when the alarm is set (within half an hour, to be exact), if it sounds like you are in a lighter part of your sleep cycle, it will play the sound or song you selected to wake you up. It can also note (and record) when you are snoring, and it assesses the overall quality of your sleep. I have two cats, and it seems to know when the noise is coming from me versus them, which is both amazing and kinda weirds me out. Anyway, it’s been useful for figuring out what I need to do to sleep better. I can sync the sleep data to Apple Health, where it can be read by other apps as needed.

Strides app screenshot

Last fall, as a part of my ongoing effort to get better sleep, I was looking for a tool to help encourage me to go to bed on time and wake up when my alarm goes off, rather than staying up too late and hitting snooze or turning off the alarm altogether. A regular schedule is generally believed to be helpful for sleeping better. I started using an app called Strides to keep track of my progress. In January, I added a workouts tracker to provide me with an easy overview of how often I’m working out this year and how close I can get to my goal of 218 workouts in 2018. At the current rate (37 out of 112 days), it will be more like 118, but it’s better than sitting on the couch. None of this syncs with Apple Health, but I don’t need it to.
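For the curious, that projection is just straight-line extrapolation from the pace so far; a quick sketch using the numbers from this post (the exact result shifts a little depending on which day you start counting from, which is why this sketch and my rougher in-head estimate of 118 differ slightly):

```python
def projected_total(done: int, days_elapsed: int, days_in_year: int = 365) -> int:
    """Extrapolate the workout count so far across a full year."""
    return round(done / days_elapsed * days_in_year)

print(projected_total(37, 112))  # → 121 at this pace, well short of 218
```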

Fitocracy app screenshot mid-workout

I’ve been a member of the Fitocracy website for several years now, but only in the past couple of years did it become useful to me, thanks to a more functional app. I prefer to do strength training rather than cardio at the gym (though I make myself do some cardio), and this app is very helpful for keeping track of what I did last time and guiding me through the workout this time. There’s also a community/social aspect, as well as gamification (you get points for each exercise depending on how challenging it is), if that’s your thing. Since so few of my friends use it regularly, I don’t focus on those features much. As you’re going through your workout, you can edit the weight, reps, and sets if you end up doing more or less than you planned. If there is an example video, it will be at the top of the screen, and clicking on it will make the video play. This is helpful if you want to make sure you’re using proper form or need to remember an exercise you haven’t done in a while. It also indicates how many more exercises you have planned (the circles at the bottom of the screen) and what the next exercise set will be. None of the data from this app will sync with anything else, but it’s so useful in and of itself, I don’t mind.

MyFitnessPal screenshot

I reinstalled MyFitnessPal in mid-January and have been diligently tracking my food, water, and cardio minutes. This syncs with Apple Health and Fitbit, which is useful for keeping tabs on key nutrients, since Fitbit’s food logger is not great. I used this app a few years ago, and as with others, got frustrated because it was so hard to be precise without measuring out every morsel I consumed. This time it seems to be easier, in part because the food database has expanded, and in part because I’ve let myself not care about the details too much. Part of changing my diet means eating less convenience food and eating more whole foods, preferably that I cooked myself. The challenge is that convenience foods also conveniently have their nutritional content displayed on the package with a barcode to save me even the effort of typing. I’m taking a horseshoes and hand-grenades approach this time — close enough will work. Also, I’m trying to focus more on the macro goals in addition to the caloric limits. What I’ve re-learned in all of this is that lean protein is not nearly as appealing as fatty protein, and I tend to eat very dense foods such that I can consume a lot of calories without feeling like I’m over-eating.

Happy Scale app screenshot

Lastly (and most recently), on the recommendation of my friend Jenica, I’ve started using the Happy Scale app to track my weight trends. It takes a long-term goal and breaks it down into smaller, more immediately achievable goals, showing progress along the way. Although I am logging my weight every day (immediately after I wake up and use the toilet), it’s focused more on averages than on that specific day’s weight. The data syncs with Apple Health, which is how other apps like MyFitnessPal and Fitbit get updated. The scale and this app only measure my entire body mass, which isn’t the whole focus of my fitness goals, so I also have a hand-held body fat monitor that I check periodically, usually only when the scale numbers have moved. Since I’m strength training, the scale numbers might go up with the addition of muscle mass, which is denser than fat mass. At some point, I should do tape measurements, but for now I’m relying on the fit of my clothing to let me know if things are changing there.
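I don’t know the actual smoothing algorithm Happy Scale uses, but the general idea of trusting averages over any single day’s reading can be sketched with a simple moving average; the weigh-ins below are made-up sample data:

```python
def moving_average(weights, window=7):
    """Average each day's weigh-in with up to `window` previous days,
    so one-day swings (water weight, a big meal) don't dominate."""
    out = []
    for i in range(len(weights)):
        chunk = weights[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# A week of hypothetical morning weigh-ins:
daily = [180.0, 181.5, 179.8, 180.2, 181.0, 179.5, 180.4]
print(round(moving_average(daily)[-1], 2))  # → 180.34: the trend, not the last reading
```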

notes from #OpenConVA

Open Access Principles and Everyday Choices

Speaker: Hilda Bastian, National Center for Biotechnology Information

It’s not enough to mean well — principles and effects matter.

Everyday choices build culture. You can have both vicious circles and virtuous spirals of trust.

There is a fine line between idealism and becoming ideological.

Principles can clash — and be hard to live up to.

One of the unintended consequences of OA is the out-of-control explosion of APC costs for institutions whose principles call upon researchers to publish OA. APC costs have not decreased as expected.

http://blogs.plos.org/absolutely-maybe/2017/08/29/bias-in-open-science-advocacy-the-case-of-article-badges-for-data-sharing/

Take critics seriously. What can you learn from them? But also, make sure you take care of yourself — it can be overwhelming. If you can’t learn anything useful from the criticism, only then can you dismiss it.


Lightning Talks

  1. Ian Sullivan, Center for Open Science – how to pitch openness to pragmatists: Openness is seen (and often presented as) extra work. Reframe it as time-shifting. It’s work you’re already doing (documenting, moving material to somewhere else, etc.). Think about it as increasing efficiency and reducing frustration if you plan for it early on. Version control or document management will save time later, and includes all the documentation through the process that you’ll need in the end, anyway. Open practices are not going away, and they are increasingly required by grant agencies and publication outlets. If you can become the “open expert” in your lab, that will build your reputation.
  2. Anita Walz, Virginia Tech – what is open in the context of OER: Lower cost educational materials; broader format types for learning materials; increasing impact for scholars/authors; collaboration; identity/ego? Maybe. Values of open that benefit higher education: access is for everyone (inclusive), providing feedback (collaborating); sharing new ideas about teaching & research; embracing the use of open licenses; giving credit where it’s due, even when not expected; outward facing, thinking about audience; using it as a lever to positively effect change in your work and the world around you.
  3. Pamela Lawton and Cosima Storz, Virginia Commonwealth University – incorporating art in the community: handed out a zine about zines, with the project of making a zine ourselves. Zines are collaborative, accessible, and community-driven.
  4. Eric Olson, ORCID – increasing research visibility: Names are complicated when it comes to finding specific researchers. One example is a lab that has two people with the same rather common name, or the same first initial and last name (this is not as uncommon as you might think). A unique ID will help disambiguate this if the IDs are included with the researchers’ publications. ORCID is a system that makes this possible if authors and publications/indexes connect to it.
  5. Beth Bernhardt, UNC Greensboro – how to support open scholarship with limited library resources: created a grant that offers time rather than money — web developers, staff with expertise on OA, digitization, metadata, etc. The grant awards the equivalent of .5 FTE. In the end, they found they needed to give more staff time than originally planned to fully execute the projects.
  6. Kate Stilton, North Carolina A&T State University – open educational resources at a mid-sized university: 85% of students receive need-based financial aid, so they definitely need help with acquiring educational resources since, in part, it’s a STEM-focused institution. They have to be realistic about what they can offer — the library is understaffed and underfunded. They are focusing on adoption of OER materials, and less about creating their own. They’re also looking at what other schools in the area are doing and how they could be applied locally, as well as leaning on the expertise of early adopters.
  7. Jason Dean Henderso, Oklahoma State University – OERx, custom installation of an open source content management system MODx: received a donation to make OER content, which meant they had to find a way to host and distribute them. They’ve used open journal systems, but there isn’t great documentation for Public Knowledge Project’s Open Monograph Press software, so they modified it for their own purposes to make something easier to use out of the box. They’ve cross-listed the OER books with the course offerings for faculty to make use of them if they wish.
  8. Braddlee, NOVA Annandale – adoption of OER in VCCS’s Zx23: surveyed faculty who participated in the program across all of the VCCS schools. As you might expect, faculty still don’t see librarians in outreach or institutional leadership roles.
  9. Sue Erickson, Virginia Wesleyan College, and Gardner Campbell, Virginia Commonwealth University – Open Learning ’18: online course about open learning starting in February. Hypothes.is is an annotation tool that will be used and is a favorite of Campbell.
  10. Nina Exner, Virginia Commonwealth University – data reuse: When we talk about sharing data, we don’t mean you need to ignore other obligations like privacy of research subjects (IRB) or copyright restrictions you’ve agreed to. You don’t need to share every single piece of data generated — just the data associated with a specific finding you’ve published or received funding for. FAIR principles come into play at this point, which are generally good practices, anyway. Where you store data isn’t as important as whether it’s accessible and reusable. If you’re a librarian, please don’t talk about “scholarly communications” with non-librarians. Use terms like public access, supporting data, data availability, reproducibility, and rigor.
  11. Jason Meade, Virginia Commonwealth University – Safecast example of crowdsourcing scientific data: Created in response to the Fukushima Daiichi nuclear power plant disaster in 2011. Handed out mini Geiger counter kits, and the data was uploaded to a central site for anyone to see. The initial group to receive the kits were the hardcore skeptics. He is quite impressed with the volume of data created over a short amount of time with very little cost. This model could be used in many other fields to increase data generation at reduced costs, with increased buy-in and awareness among the public.

Student Voices in Open Education

Speakers: info coming soon

Business faculty member at Virginia Tech decided to revamp what a textbook would be, and the end result is more dynamic and useful for that particular course than any offered through traditional sources. It’s also open.

VCU language faculty agreed that teaching 200 level courses is the worst. They decided to create WordPress sites for the 201 students to create curated content that was more engaging than traditional language pedagogy. The second part of the project was to have the 202 students create OER scaffolded projects from the curated collections. The students are finding this much more engaging than the expensive textbooks.

Student says she has to choose between an older edition that is more affordable but means she may struggle more in class, and the current edition that is more expensive. Another student says that for how much they spend on the books, they can sometimes be surprisingly out of date.

Faculty are concerned about inclusion and equity, and the cost of materials can have an inequitable impact on learning between students from different economic backgrounds. There is also concern about the texts having relevance to current culture (i.e., Madonna references aren’t great in 2017), so they need to be regularly updated, but that can increase the costs. Additionally, supplemental tools require access code purchases, but are often used sub-optimally. When fields are changing rapidly, textbooks are out of date before they are even adopted.

Language faculty working with students on this project have learned a lot more about how they learn, despite what their own training about pedagogy told them. The students were quite frank about what worked and what didn’t.

Student says that the curation project has given her tools for lifelong language learning and application.


Predatory Publishing: Separating the Good from the Bad

Speakers: info coming soon

Predatory, parasitic, vanity, disreputable — these are journals that are not interested in scholarly communication, just in making money. They lack peer review (i.e. they say they do, but it takes 24 hours), charge fees for submissions, and they want to retain all copyright.

Open Access has been tainted by predatory publishing, but they aren’t the same thing. Look out for: a lack of clearly defined scope (or a bunch of SEO-oriented keywords), small editorial board and/or no contact information, lack of peer review process, article submission fees, and the publisher retaining all copyright. Not necessarily related, but are kind of murky regarding credibility: lack of impact factor, geographical location (one of the issues with Beall’s list), article processing charges (to publish, not to submit), and poor quality.

If you’re still uncertain about a specific journal: ask your colleagues; see if it’s indexed where the journal claims to be indexed; if it’s OA, see if it is listed in DOAJ; and see if the publisher belongs to OASPA or COPE.

Other tools:
Think. Check. Submit.
COPE principles of transparency & best practices in scholarly publishing
ODU LibGuide

Watch out for predatory conferences. They will fake speakers, locations, schedules, etc., just to get your registration money.

Sometimes it’s hard to tell if a new journal is legitimate because there are a lot of characteristics that overlap with predatory publishers. Check with the editorial board members — do they even know they are on the editorial board?


Open in the Age of Inequality

Speaker: Tressie McMillan Cottom, Virginia Commonwealth University

She’s been at VCU for three years, and one of the first things she and her colleagues tackled was revamping the digital sociology program. In part, there was an effort to make it more open/accessible. Open is inherently critical, and her perspective about sociology is that it’s critical of every aspect of systems and institutions, including the one you exist within.

The program is mostly made up of professionals, so part of it involved developing specific types of skills. They needed to learn professional digital practice, being sociological with their critique of digital life, and analysis of digital data and the creation of that data.

They wanted to practice what they were preaching: open, accessible, rigorous, and critical. They had access to OER materials and SocArXiv (social sciences open archive).

VCU faculty were incentivized to use eportfolios, but no one really knows how to do it well. The tool is a blog. Because it was inconsistently required, the students get the impression it’s not important. However, it’s supposed to show growth over time and potentially be used for getting a job after graduating.

To fix this, they started by shifting to a cohort model. This meant switching to a fall-only enrollment. The second thing they did was to create a course sequence that all students must follow. This meant that faculty could build assignments based on previous assignments. The cohort structure emphasized theory-building and problem solving.

What worked, and why: leadership that was willing to embrace the changes; trust among the faculty teaching in the program; approaches to teaching had to be restructured with different cohorts, which required a lot of communication.

What kinda worked: open data was easier to implement than OER (quality and rigor varied tremendously; not much is available in critical sociology at the graduate level, and most of the important topics from the past 30 years were not included); OER resources lacked the critical sociology content they were interested in, such as race, gender, class, and intersectionality.

What chafed: accretion (five offices are in charge of “online”, with different staff and objectives; often they don’t know who does what); market logics (why we are supposed to adopt open as a model — things aren’t less expensive when you consider the faculty time it takes to implement them); working without a model (had to develop everything they use from scratch, such as how the eportfolios would be assessed, protecting the students’ identities online, and adopting open access products from for-profit sources).

OER tends to be created by people with institutionally supported time, the cumulative advantage of tenure, and digital skills, who have no immediate need for pay, job security, mobility, or prestige. What happens is that those who can do it tend to be homogeneous, which is not what critical sociology is interested in; in fact, their institutions are often the topics of critical sociology.

They are working on figuring out how to have online classes that protect students who may be vulnerable to critique/attack online. They are trying to build a community around this — it’s very labor-intensive and can’t be done by a small group.

They are trying to reuse the student work as much as possible, generally with data rather than theory work (it’s not really up to par — they’re graduate students). They need to constantly revisit what colleagues have taught or how syllabus shifted in response to that particular cohort as they are planning the next semester of work.

There is a big concern about where to put the data for reuse, but not for reuse by for-profit agencies wanting to create better targeted ads, for example. For now, it’s restricted to use by students at VCU.

“Pay to play” mode of OA journals/books is neo-liberal open access. How is the open model simply repackaging capitalist systems? This is also something they need to be incorporating into a critical study of digital sociology.

Online is treated as a way to generate revenue, not as a learning tool. Marketing/communications departments have far too much power over how faculty use online platforms.

Charleston 2014 – How Does Ebook Adoption Vary By Discipline? What Humanists, Social Scientists and Scientists Say They Want (A LibValue Study)

ebook
“ebook” by Daniel Sancho

Speakers: Tina Chrzastowski, Santa Clara University; Lynn Wiley, University of Illinois at Urbana-Champaign

Ebook use in their survey follows the definition of COUNTER BR2. They cannot get all ebook publishers to provide that data, and not all of the ones that do provide it use COUNTER.

They saw a spike in use in FY14. Two things happened: ebooks get used more over time, and they’ve got 8 years of data now. And, they implemented Primo. Discovery has a huge impact on ebook use.

UIUC science faculty love ebooks. They didn’t do a DDA program for them because they already buy just about everything.

It was easier to buy non-science ebooks in packages, though this caused complications with trying to figure out the funding from many different pots. They would prefer to buy new ebook content title-by-title.

They used eBrary with a mix of STL and DDA. After the third STL, each priced at 10-15% of list, they purchased the ebook. This meant they didn’t buy everything that was used, but they ended up spending much less overall as a result.
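The purchase rule can be sketched roughly as follows; the 12% STL rate and $100 list price are illustrative stand-ins within the 10-15% range mentioned above, not actual figures from the program:

```python
def dda_cost(stl_uses: int, list_price: float, stl_rate: float = 0.12,
             purchase_after: int = 3) -> float:
    """Total spent on one title under the pilot's rule: pay a short-term
    loan fee per use, then buy the book outright after the third STL."""
    loans = min(stl_uses, purchase_after)
    cost = loans * stl_rate * list_price
    if stl_uses > purchase_after:
        cost += list_price  # a fourth use triggers the purchase
    return round(cost, 2)

print(dda_cost(2, 100.0))  # → 24.0 (two loan fees, no purchase)
print(dda_cost(5, 100.0))  # → 136.0 (three loan fees plus list price)
```

Lightly used titles cost a fraction of list price, which is where the overall savings came from.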

They loaded the records, including those they already owned in print. They were alerted of any STL use. Purchased titles were also checked to see if there was other availability such as the print copy. They do book delivery for faculty, so they were interested in which version would be used when both are fairly easy and fast.

There was a lot of good use across the disciplines, but relatively small numbers of titles were used enough to trigger the purchase (more than 3 STLs). These were all multi-user titles, so for each STL triggered, many people could use it during that 24 hour period. On average, there were around 4-5 user sessions per title for both the Humanities and Social Science pilots.

For the Social Sciences, they found that 67% of the STLs were owned in print, with 73% of them available to be requested if the user wanted that format. For the Humanities, 80% were owned in print and 71% of those titles were available.

Based on the metrics from eBrary, they could draw some conclusions about what the users were doing in that content. “Quick dips” were fewer than 9 pages looked at, printed, or copied, with no downloads. “Low” was 10-25 pages viewed, printed, etc., with no downloads. “Moderate” was 26-45 pages used, with a chapter download. “High” was up to 299 pages with chapter downloads. “Deep” meant significant views or a whole-book download. They might want to combine deep and high.
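The tiers can be sketched as a simple classifier; the boundaries in my notes are fuzzy (e.g., exactly where a quick dip ends and low begins), so treat the cutoffs below as one plausible reading rather than eBrary’s actual rules:

```python
def usage_tier(pages: int, chapter_dl: bool = False, book_dl: bool = False) -> str:
    """Bucket one ebook session into the usage tiers described above."""
    if book_dl or pages >= 300:
        return "deep"        # significant views or a whole-book download
    if pages > 45:
        return "high"        # up to 299 pages, chapter downloads
    if chapter_dl or pages > 25:
        return "moderate"    # 26-45 pages with a chapter download
    if pages > 9:
        return "low"         # 10-25 pages, no downloads
    return "quick dip"       # a handful of pages, no downloads

print(usage_tier(5))                    # → quick dip
print(usage_tier(30, chapter_dl=True))  # → moderate
```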

In the social sciences, 80% of the use came from quick and low, with no real deep reading. In the humanities, about 46% of the use came from quick and low, with the remaining coming from moderate and high use, and two books fell into the deep category.

They followed up the DDA program with surveys of the faculty and graduate students for the books triggered in the program. They used SurveyMonkey and gift cards for incentives.

They had questions about the perceptions of ebooks, and used skip logic to direct them to specific books in their discipline to use and then respond to questions about that book. It took about 20 minutes to complete. Around 15% of the Humanities responded and 25% of the Social Sciences responded.

They included a question about ejournals to put them in the frame of mind of other electronic things they use. Given the options for book formats, though, they found that most of the respondents would prefer to use mostly print with sometimes ebook.

Every discipline expects that they would be able to download most or all of an ebook, and that the ebook will always be accessible and available (translating to unlimited simultaneous users, no DRM, etc.).

The humanities haven’t quite reached the tipping point of the shift towards ebook use, but the social sciences think that in 5 years, most of their monographic use will be electronic.

Availability and accessibility are the tipping point for choosing a preferred format. If the print book is unavailable, then they are more likely to use the ebook than to ask the library to buy or borrow another copy.

In the end, though, it’s still just a “big ol’ hassle” to work with ebooks compared to how we’re used to using books. For many, the note-taking features, or the technology needed to make better use of ebooks, were a hindrance.

What do all disciplines want? More ebooks, more current ebook titles, fewer restrictions on copying and printing.

Image reproduction and copyright are big issues — ebooks need all the content that is in the print book. People want consistency between platforms.

We’re still in the early evolution of ebooks. Many changes are yet to come, including copyright changes, tablet and reader evolution, platform consolidation, and things we have not yet thought of.

Readers and scholars are ready for the ebook revolution. How will the library respond?

2014 Parsec Awards finalists are announced!

DragonCon 2013 - Parsec Awards
photo by Kyle Nishioka

As some of you may know, I’ve been on the steering committee for the Parsec Awards for several years now. The awards celebrate the best in speculative fiction podcasting. If you have an interest in audio fiction of the science fiction, fantasy, horror, and steampunk flavors (just to name a few), then I can recommend nothing better than the current and past lists of finalists and winners.

It took a bit longer than usual for us to listen through and evaluate this year’s round of nominee samples, so I’m happy to announce the finalists for 2014! Check out these podcasts for stories, audio dramas, science behind the stories, and geeking out about favorite speculative fiction content.

why I type my notes

her hands
“her hands” by Vyacheslav Bondaruk

When done with pen and paper, that act involves active listening, trying to figure out what information is most important, and putting it down. When done on a laptop, it generally involves robotically taking in spoken words and converting them into typed text. –Joseph Stromberg, Vox Magazine

Many of you long-time readers know that I take notes of presentations at conferences and post them here. I get lots of thank yous from folks for doing it, and that’s the main reason why I keep posting them publicly. It’s probably obvious, but just to be sure, you should know that I don’t handwrite them and then transcribe them later. I type them out on some sort of mobile computing device (laptop or iPad) and publish them after I do a look-see to make sure there aren’t any egregious errors.

What I don’t do with my typed notes is try to capture every word the speaker says, which, I think, is the kind of digital note-taking the author of the article linked above is critiquing. Instead, I actively listen to the speaker and quickly synthesize their point into a sentence or two.

Sometimes I will quote directly if the phrasing or word choice is particularly poignant, but that’s hard if they are a fast speaker, because I end up missing a lot of what they say next in my attempt to capture it accurately. However, if I wait until they pause before their next point, I usually have enough time to quickly type out the point they just made.

This was an active choice on my part some years ago. I used to take notes with lots of bullet points and half-formed phrases, but they were virtually useless to me later on, and certainly not helpful to anyone who wasn’t there. When I take notes, I think about the audience who will read them later, even if it’s myself.

Which is another reason why I type. My handwriting is terrible, and it gets worse the longer and faster I write. If I want to know what I wrote more than a few hours ago, I need to type it.

So yes, students might get distracted by their neighbor’s laptop, but I think certain researchers will always find some classroom thing that distracts students and recommend we go back to the good old days. Instead, I think we need to work on the skills students (and future meeting attendees) will need in order to use their tools effectively and maintain focus.

If I can do it, surely they can, too.

ER&L 2014 — More Licenses, More Problems: How To Talk To Your Users About Why Ebooks Are Terrible

“DRM PNG 900 2” by listentomyvoice

Speakers: Meghan Ecclestone (York University) & Jacqueline Whyte Appleby (OCUL)

Ebooks aren’t terrible. Instead, we’d like to think of them as teenagers. They’re always changing, often hard to find, and difficult to interact with. Lots of terrible teenagers turn into excellent human beings. There is hope for ebooks.

Scholar’s Portal is a repository for purchased ebooks. It used to be mostly DRM-free, but in 2013 they purchased books from two sources that came with DRM and other restrictions. One of those sources was Canadian university presses, whose titles really needed to be viable for course adoption (read: sell many copies to students instead of one to the library). The organization wanted everything, so they agreed to the terms.

In adding this content, with very different usability, they had to determine how they were going to manage it: loan periods, Adobe Digital Editions, and really, how much did they want to have to explain to the users?

One of the challenges is not having control over the ADE layout and the wording of alerts and error messages. You can’t use a public computer easily, since a user ID can be authorized on at most six devices.

Faculty can’t use the books in their class. Communicating this to them is… difficult.

Ecclestone did a small usability test. Tried to test both a user’s ability to access a title and their perception of borrowable ebooks. The failure rate was 100%.

Lessons learned: Adobe = PDFs (they don’t get that ADE is not the same); .exe files are new to students, or potentially viruses; returning ebooks manually is never going to happen; and terms like “borrow” and “loan” are equated with paying.

The paradox is that it’s really challenging the first time around, but once they have the right software and have gone through the download process, it’s easier and they have a better opinion.

Suggestions for getting ready to deal with DRM ebooks: Train the trainer. Test your interface.

They put error messages in LibAnswers and provide solutions that way in case the user is searching for help with the error.

NASIG 2013: Knowledge and Dignity in the Era of Big Data

“Big Data” by JD Hancock (CC BY 2.0)

Speaker: Siva Vaidhyanathan

Don’t try to write a book about fast-moving subjects.

He was trying to capture the nature of our relationship to Google. It provides us with services that are easy to use, fairly dependable, and well designed. However, that level of success can breed hubris. He was interested in how this drives the company toward its audacious goals.

It strikes him that what Google claims to be doing is what librarians have been doing for hundreds of years already. He found himself turning to the core practices of librarians as a guideline for assessing Google.

Why is Google interested in so much stuff? What is the payoff to organizing the world’s information and making it accessible?

Big data is not a phrase that they use much, but the notion is there. More and faster equals better. Google is in the prediction/advertising business. The Google books project is their attempt to reverse engineer the sentence. Knowing how sentences work, they can simulate how to interpret and create sentences, which would be a simulation of artificial intelligence.

The NSA’s deals that give them a backdoor to our data services creates data insecurity, because if they can get in, so can the bad guys. Google keeps data about us (and has to turn it over when asked) because it benefits their business model, unlike libraries who don’t keep patron records in order to protect their privacy.

Big data means more than a lot of data. It means that we have so many instruments to gather data: cheap, ubiquitous cameras and microphones, GPS devices that we carry with us, credit card records, and more. All of these channels feed into huge servers that can store the data, paired with powerful algorithms that can analyze it. Despite all of this, there is no policy surrounding it, nor conversation about the best ways to manage it in light of its impact on personal privacy. There is no incentive to curb big data activities.

Scientists are generally trained to understand that correlation is not causation. We seem to be happy enough to draw pictures with correlation and move on to the next one. With big data, it is far too easy to stop at correlation. This is a potentially dangerous way of understanding human phenomena. We are autonomous people.
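The correlation trap gets worse the more variables you search. A minimal Python sketch (my own illustration, not from the talk) of how pure noise produces an apparently strong correlation when you test enough candidate predictors:

```python
import random

random.seed(42)


def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


# A small "outcome" sample, as in many big-data fishing expeditions.
outcome = [random.random() for _ in range(30)]

# 1,000 candidate "predictors" that are nothing but noise. The best
# of them will still correlate noticeably with the outcome by chance.
best = max(
    pearson([random.random() for _ in range(30)], outcome)
    for _ in range(1000)
)
print(f"strongest chance correlation: {best:.2f}")
```

None of the predictors has any relationship to the outcome, yet the best one looks meaningful, which is exactly why stopping at correlation is dangerous.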

The panopticon was supposed to keep prisoners from misbehaving because they assumed they were always being watched. Foucault described the modern state in the 1970s as the panopticon. However, at this point, it doesn’t quite match. We have a cryptopticon, because we aren’t allowed to know when we are being watched. Unlike the panopticon, it wants us on our worst behavior, because that is when our data is most revealing. How can we inject transparency and objectivity into this cryptopticon?

Those who can manipulate the system will, but those who don’t know how or that it is happening will be negatively impacted. If bad credit can get you on the no-fly list, what else may be happening to people who make poor choices in one aspect of their lives that they don’t know will impact other aspects? There is no longer anonymity in our stupidity. Everything we do, or nearly so, is online. Mistakes of teenagers will have an impact on their adult lives in ways we’ve never experienced before. Our inability to forget renders us incapable of looking at things in context.

Mo Data, Mo Problems
