Don’t try to write a book about fast moving subjects.
He was trying to capture the nature of our relationship to Google. It provides us with services that are easy to use, fairly dependable, and well designed. However, that level of success can breed hubris. He was interested in how this drives the company to its audacious goals.
It strikes him that what Google claims to be doing is what librarians have been doing for hundreds of years already. He found himself turning to the core practices of librarians as a guideline for assessing Google.
Why is Google interested in so much stuff? What is the payoff to organizing the world’s information and making it accessible?
Big data is not a phrase that they use much, but the notion is there. More and faster equals better. Google is in the prediction/advertising business. The Google Books project is their attempt to reverse engineer the sentence. Knowing how sentences work, they can simulate how to interpret and create sentences, which would amount to a simulation of artificial intelligence.
The NSA’s deals that give them a backdoor to our data services create data insecurity, because if they can get in, so can the bad guys. Google keeps data about us (and has to turn it over when asked) because it benefits their business model, unlike libraries, which don’t keep patron records precisely in order to protect patron privacy.
Big data means more than a lot of data. It means that we have so many instruments to gather data: cheap/ubiquitous cameras and microphones, GPS devices that we carry with us, credit card records, and more. All of these data streams feed into huge servers that can store the data, paired with powerful algorithms that can analyze it. Despite all of this, there is no policy surrounding big data, nor conversations about the best ways to manage it in light of its impact on personal privacy. There is no incentive to curb big data activities.
Scientists are generally trained to understand that correlation is not causation. We seem to be happy enough to draw pictures with correlation and move on to the next one. With big data, it is far too easy to stop at correlation. This is a potentially dangerous way of understanding human phenomena. We are autonomous people.
The panopticon was supposed to keep prisoners from misbehaving because they assumed they were always being watched. Foucault described the modern state in the 1970s as the panopticon. However, at this point, it doesn’t quite match. We have a cryptopticon, because we aren’t allowed to know when we are being watched. It wants us to be on our worst behavior. How can we inject transparency and objectivism into this cryptopticon?
Those who can manipulate the system will, but those who don’t know how or that it is happening will be negatively impacted. If bad credit can get you on the no-fly list, what else may be happening to people who make poor choices in one aspect of their lives that they don’t know will impact other aspects? There is no longer anonymity in our stupidity. Everything we do, or nearly so, is online. Mistakes of teenagers will have an impact on their adult lives in ways we’ve never experienced before. Our inability to forget renders us incapable of looking at things in context.
NITLE does a lot of research for undergraduate liberal arts schools. One of the things that he does is publish a monthly newsletter covering trends in higher education, which may be worth paying some attention to (Future Trends). He is not a librarian, but he is a library fanboy.
What is mobile computing doing to the world, and what will it do in the future?
Things have changed rapidly in recent years. We’ve gone from needing telephone rooms at hotels to having phones in every pocket. The icon for computing has gone from desktop to laptop to anything/nothing — computing is all around us in many forms now. The PC is still a useful tool, but there are now so many other devices to do so many other things.
Smartphones are everywhere now, in many forms. We use them for content delivery and capture, and to interact with others through social tools. Over half of Americans now have a smartphone, with less than 10% remaining who have no cell phone, according to Pew. The mobile phone is now the primary communication device for the world. Think about this when you are developing publishing platforms.
The success of the Kindle laid the groundwork for the iPad. Netbooks/laptops now range in size and function.
Clickers are used extensively in the classroom, with great success. They can be used for feedback as well as prompting discussion. They are slowly shifting to using phones instead of separate devices.
Smartpens digitally capture written content as you write, and can record audio at the same time. One professor annotates notes on scripts while his students perform, and then provides them with the audio.
Marker-based augmented reality fumbled for a while in the US, but is starting to pick up in popularity. Now that more people have smartphones, QR codes are more prevalent.
The mouse and keyboard have been around since the 1960s, and they are being dramatically impacted by recent changes in technology: touch screens (e.g. the iPad), handheld controllers (e.g. the Wii), and no controller at all (e.g. the Kinect).
If the federal government is using it, it is no longer bleeding edge. Ebooks have been around for a long time, in all sorts of formats. Some of the advantages of ebooks include ease of correcting errors, flexible presentation (e.g. font size), and a faster publication cycle. Some disadvantages include DRM, cost, and distribution by libraries.
Gaming has had a huge impact in the past few years. The median age of gamers is 35 or so. The industry size is comparable to music, and has impacts on hardware, software, interfaces, and other industries. There is a large and growing diversity of platforms, topics, genres, niches, and players.
Mobile devices let us make more microcontent (photo, video clip, text file), which leads to the problem of archiving all this stuff. These devices allow us to cover the world with a secondary layer of information. We love connecting with people, and rather than separating us, technology has allowed us to do that even more (except when we focus on our devices more than the people in front of us).
We’re now in a world of information on demand, although it’s not universal. Coverage is spreading, and the gaps are getting smaller.
When it comes to technology, Americans are either utopian or dystopian in our reactions. We’re not living in a middle ground very often. There are some things we don’t understand about our devices, such as multitasking and how that impacts our brain. There is also a generational divide, with our children being more immersed in technology than we are, and having different norms about using devices in social and professional settings.
The ARIS engine allows academics to build games with learning outcomes.
Augmented reality takes data and pins it down to the real world. It’s the inverse of virtual reality. Libraries are going to be the AR engine of the future. Some examples of AR include museum tours, GPS navigators, and location services (Yelp, Foursquare). Beyond that, there are applications that provide data overlaying images of what you point your phone at, such as real estate information and annotations. Google Goggles tries to provide information about objects based on images taken by a mobile device. You could have a virtual art gallery physically tied to a spot, but only displayed when viewed with an app on your phone.
Imagine what the world will be like transformed by the technology he’s been talking about.
I. Phantom Learning: Schools are rare and less needed. The number of people physically enrolled in schools has gone down. Learning on demand is now the thing. Institutions exist to supplement content (adjuncts), and libraries are the media production sites. Students are used to online classes, and un-augmented locations are weird.
II. Open World: Open content is the norm and is very web-centric. Global conversations increase, with more access and more creativity. Print publishers are nearly gone, authorship is mysterious, tons of malware, and privacy is fictitious. The internet has always been open and has never been about money. Identities have always been fictional.
III. Silo World: Most information is experienced in vertical stacks. Open content is almost like public access TV. Intellectual property intensifies, and campuses reorganize around the silos. Students identify with brands and think of “open” as radical and old-fashioned.
She sees herself as a community builder for the greater benefit of the profession as a whole.
There is a lot going on in libraries, and it can be overwhelming. At the same time, it’s an exciting time to be a librarian. If we embrace the challenge of the change and see the opportunities, we will be okay.
We are at “the incunabula period of the digital age.” -T. Scott Plutchak
The network changes everything. See also: Networked by Lee Rainie & Barry Wellman
This can be good, or it can be bad (see also: Google Buzz). We have the opportunity to reach beyond our home institutions to have a broader impact.
Data has many different facets. We are talking about data-driven decision making, research data, data curation, linked (open) data, and library collections as data.
When we started digitizing our collections, we had a very library/museum portal view that was very prescribed. The DPLA wanted to avoid this, letting folks pull data through APIs and display it however they want. When we start freeing our stuff from the containers, we start seeing some new research questions and scholarship.
“Local collections are the dark matter of a linked data world.” -Susan Hildreth, Director of IMLS
Catalog and pay attention to the unique things that are at your institution. We need original catalogers back. This is the golden age for catalogers. We need to reinvent the way we process the hard and difficult things to describe. It’s about the services, not the stuff.
If the car was developed in the library, it would have been called the e-horse. Please don’t hire a data curation librarian or eresearch librarian or … data and local content is everyone’s job. The silos have to come down in our services, too. By silo-ing off the jobs, we’re not harnessing the power of the network.
Print-based societies needed the buildings, but in the digital society, it’s more about the connections. We should talk about what librarians do, not what libraries do. Do we want to serve our buildings or serve our communities? We cannot allow the care and feeding of our buildings to define us. The mission is what defines us.
Our mission is greater than our job. “Our mission is to improve society through facilitating knowledge creation in their communities.” (R. David Lankes) If this isn’t why you show up every day, then maybe it’s time to reassess your life and career choice.
We are a community, with permeable borders, and room at the table for everyone. But, this causes a lot of fear and anxiety, and can raise the spectre of the snark. This is detrimental to open community development.
Snark: “I really wish the DPLA would do ___.”
Frick: “The DPLA is you! Show up!”
If we come with our 10lb hammer to smack down every new idea, we will not be able to move forward.
Vulnerability is “the courage to show up and allow ourselves to be seen.” (Dr. Brené Brown) Be open to feedback — it is a function of respect. Admitting a vulnerability builds strength and trust, and a culture of shared struggle/experience.
We need to hang out with not the usual suspects. If this is the 10th time in a row that you’ve attended a particular conference, maybe you need to try something new. We need to think of librarianship outside of our normal communities.
The hacker epistemology says to adopt a problem-solving mindset: the truth is what works. Our “we’ve always done it this way” will not translate to the networked world. The #ideadrop house was a wild success. People wanted to share their ideas with librarians!
Jason Griffey created LibraryBox, a small portable device with storage and its own wifi network that lets anyone nearby access and download the content. They put them everywhere at SXSW — pedicabs, volunteers carrying them around, etc.
How do you communicate your ideas to people outside of your community?
In this world of networked individualism, our success is up to us. We have to have a personal responsibility to the longevity and success of our profession. This golden moment for librarianship is brief, so we have to act now. Be engaged. Be there.
How do you lead? Leadership is not being an AUL or head of a department. We lead by example, no matter where you are.
The stuff that’s easy to count really isn’t important. We need to have a national holiday from performance metrics.
Dare a little. Be more open. Take more risks, even if they’re small. Be easy on yourself.
It’s safe to say that discovery products have not received a positive response from the librarians who are expected to use them. We always talk about the users, but we forget that librarians are users too, and probably spend more time in these tools than the typical freshman. They are new, and new things can be scary.
OSU has Summon, which they brought up in 2010. She thinks that even though this is mostly about her experience with Summon, it can be applied to other discovery tools and libraries. They had a federated search from 2003-2010, but toward the latter years, librarians had stopped teaching it, so when discovery services came along, they went that way.
Initially, librarians had a negative view of the one search box because of their negative experience with federated searching. Through the process of the implementation, they gathered feedback from the librarians, and the librarians were hopeful that this might live up to the promise that federated search did not. They also surveyed librarians outside of OSU, and found a broad range from love it to not over my dead body, but most lived in the middle, where it depended on the context.
Most librarians think they will use a discovery tool in teaching lower-division undergraduates, when it’s appropriate. The promise of a discovery tool is that librarians don’t have to spend so much time teaching different tools, so they could spend more time talking about evaluating sources and the iterative process of research. Some think they actually will do that, but for now, they have simply added the discovery tool to the mix.
Participation in the implementation process is key for getting folks on board. When librarians are told “you must,” it doesn’t go over very well. Providing training and instruction is essential. There might be some negative feedback from the students until they get used to it, and librarians need to be prepared for that. Librarians need to understand how it works and where the limitations fall. Don’t underestimate the abilities of librarians to work around you.
These tools are always changing. Make sure that folks know that it has improved if they don’t like it at first. Make fun (and useful) tools, and make sure the librarians know how to create scoped tools that they can use for specific courses. If you have a “not over my dead body,” team teaching might be a good approach to show them that it could be useful.
Initially there were mixed perceptions, but more are starting to incorporate it into their instruction. With so many products out there, we really need to move away from teaching all of them and spending more time on good research/search skills.
Students “get” discovery services faster if it is introduced as the Google of library stuff.
Move away from teaching sources and towards teaching the process. Enhance the power of boolean searching with faceted searching. Shift from deliberate format searching (book, article, etc.) toward mixed format results that are most relevant to the search.
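To make the boolean-to-faceted shift concrete, here is a toy sketch (my own illustration, not from the talk) of how faceted narrowing works: each selected facet/value pair filters the remaining records, which is what lets a student start from mixed-format results and refine, instead of committing to a format up front.

```javascript
// Toy illustration of faceted narrowing (not from the talk): each selected
// facet/value pair filters the result set further.
function applyFacets(records, selected) {
  return records.filter((rec) =>
    Object.entries(selected).every(([facet, value]) => rec[facet] === value)
  );
}

const results = [
  { format: "book", subject: "history" },
  { format: "article", subject: "history" },
  { format: "article", subject: "biology" },
];

// Start with mixed-format results, then narrow by facet.
const articles = applyFacets(results, { format: "article" });
const historyArticles = applyFacets(results, { format: "article", subject: "history" });
```

Selecting a second facet narrows the set further, which mirrors the iterative refinement the speakers want students to practice.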
Moderator: Dan Tonkery Panel: Roger Schonfeld (Ithaka S+R), Jon Law (ProQuest), Amira Aaron (Northeastern University), Brian Duncan (EBSCO), & Susan Stearns (Ex Libris)
What features of discovery services do students prefer? What ones do they dislike?
Law: The search box is intuitive and familiar, and their expectations of speed are set by web search engines. Being able to quickly scan the abstract to see if it is relevant, and then quickly retrieve the content when they want it.
Stearns: Needs to be flexible and reflective of different user types and the environment they are in. Contextual searching based on who they are and how they look for information. Students also expect to access related content about their relationship with libraries (i.e. materials checked out, notices).
Duncan: Finding relevant results on the first page, or at worst the second. Metadata and relevancy are important.
What impact is open access having on discovery?
Aaron: Depends on the model of OA. Not really sure if it has an impact on discovery systems yet. It has and will have an impact on discovery in general, but not sure if it’s impacting library discovery systems any more or less than open web searches.
Law: Our customers are turning OA links on in the discovery service.
Stearns: It’s easy to make the OA content available, but are you managing it? How does this impact back-office workflows?
Will discovery services replace the online catalog?
Stearns: It’s been painful for some libraries, but yes. There is no OPAC in next generation library systems, it’s all about discovery. And we need to get over it. Discovery services need to have the functionality of the OPAC (things librarians like). This is an opportunity to rethink workflows and what you do with metadata in a discovery environment.
What are the advantages of selling both a family of databases and a discovery service?
Duncan: Users have automatic full-text because it’s built into the system and doesn’t need to go through OpenURL. Thinking a lot about how to make this simpler for students and integrating high-quality metadata from A&I sources along with the full-text.
Aaron: That’s fine for the vendor, but it takes away the librarian’s choice about where to send the user.
Law: We want our discovery service to be content-provider neutral.
What impact can libraries reasonably expect discovery services to have on traffic patterns?
Schonfeld: We see the majority of traffic coming from Google and Google Scholar, at least for JSTOR. If the objective is to change where users are starting their research, then we need different ways of measuring that and determining success.
Stearns: Our customers are thinking about not only having the one search box on the web page, but also where else can you embed linking and making sure the connections work, particularly when users come in from different sources.
Aaron: Success is not measured by how many people come to your website and start there, it’s how they get to the content from wherever they go.
What metrics do librarians expect from discovery services?
Aaron: Search statistics aren’t very meaningful in the context of discovery services. Click-through, content sources — those are the important metrics.
Schonfeld: This is not just a new product – it replaces old products, so we need to think about it differently. Libraries might want to know what share of their users is coming from what sources (i.e. discovery services, Wikipedia, Google, etc.). It’s still early days to be able to come to any strong conclusions.
Duncan: Need to measure searches that don’t result in any click-throughs as well.
Does your discovery product provide title-level information to the user community and how often is it updated?
Law: How do you measure your collection? We need some definition around this in order to know how to tell libraries how much of it is indexed in our discovery service. We are starting to do more collection analysis for libraries.
Duncan: The title list doesn’t equate to the deep metadata of an A&I database. If we don’t have the deep metadata, we don’t say we have the same coverage as that database. Full text searching is not a replacement for controlled vocabulary and metadata, it’s just a component of it.
Stearns: We also want to make sure the collections we expose are actually the ones the users access, by looking at historical usage information.
Aaron: It’s important to have the deep metadata, and it’s troubling that the content providers aren’t playing well together. I should be able to display content we purchase to our users in whatever interface I want. If I can’t, I may not continue to purchase or lease that content. It’s the same problem we had with link resolvers years ago. If you really care about the user and libraries, then start playing together.
[Missed the last question because I was still flying high from Aaron’s call-out, but it was something dull about how much customization is available in the discovery system, or something like that. Couldn’t tell from the responses. Go read product information for the answers.]
Libraries have been relatively quietly collecting ebooks for years, but it wasn’t until the Kindle came out that public interest in ebooks was sparked. Users’ exposure to and expectations of ebooks have risen, with notable impact on academic libraries. From 2010-2011, the number of ebooks in academic libraries doubled.
Wellesley is platform agnostic — they look for the best deal with the best content. Locally, they have seen an overall increase in unique titles viewed, a dramatic increase in pages viewed, a modest decrease in pages printed, and a dramatic increase in downloads.
In February 2012, they sent a survey to all of their users, with incentives (iPad, gift cards, etc.) and a platform (Zoomerang) provided by Springer. They had a 57% response rate (likely iPad-influenced), and 71% have used ebooks (51% used ebooks from the Wellesley College Library). If the survey respondent had not used ebooks, they were skipped to the end of the survey, because they were only interested in data from those who have used ebooks.
A high percent of the non-library ebooks were from free sources like Google Books, Project Gutenberg, Internet Archive, etc. Most of the respondents ranked search within the text and offline reading or download to device among the most important functionality aspects, even higher than printing.
Most of the faculty respondents found ebooks to be an acceptable option, but prefer to use print. Fewer students found ebooks an acceptable option, and they preferred print more than faculty did. There is a reason for this that will become apparent later in the talk.
The sciences preferred ebooks more than other areas, and found them generally more acceptable than other areas, but the difference is slight. Nearly all faculty who used ebooks would continue to, ranging from preferring them to reluctant acceptance.
Whether they love or hate ebooks, most users skimmed/searched and read a small number of consecutive pages or a full chapter. However, ebook haters almost never read an entire book, and most of the others did so infrequently. Nearly everyone read ebooks on a computer/laptop. Ebook lovers used devices, and ebook haters were more likely to have printed the content out. Most would prefer not to use their computer/laptop, and the ebook lovers would rather use their devices.
Faculty are more likely to own or plan to purchase a device than students, which may be why faculty find ebooks more acceptable than students. Maybe providing devices to them would be helpful?
For further research:
How does the robustness of ebook collections affect use and attitudes?
Is there a correlation between tablet/device use and attitudes?
Are attitudes toward shared ebooks (library) different from attitudes toward personal ebooks?
Speakers: Jennifer Bazeley (Miami University) & Nancy Beals (Wayne State University)
Despite all the research on what we need/want, no one is building commercial products that meet all our needs and address the impediments of cost and dwindling staff.
Beals says that the ERM is not used for workflow, so they needed other tools, with a priority on project management and Excel proficiency. They use an internal listserv, UKSG Transfer, Trello (project management software), and a blog, to keep track of changes in eresources.
Other tools for professional productivity and collaboration: iPads with Remember the Milk or Evernote, Google spreadsheets (project portfolio management organization-wide), and LibGuides.
Bazeley stepped into the role of organizing eresources information in 2009, with no existing tool or hub, which gave her room to experiment. For documentation, they use PBWiki (good for version tracking, particularly to correct errors) with an embedded departmental Google calendar. For communication, they use LibGuides for internal documents, in which you can embed RSS, Google Docs, Yahoo Pipes aggregating RSS feeds, Google forms for eresource access issues, links to Google spreadsheets with usage data, etc. For login information, they use KeePass Password Safe. Rather than claiming in the ILS, they’ve moved to using the claim checker tool from the subscription agent.
What the “Google Generation” Says About Using Library & Information Collections, Services, and Systems in the Digital Age
Speaker: Michael Eisenberg, University of Washington Information School
We’ve moved from scarcity to abundance to overload. We have so many great resources our students don’t know where to begin. They’re overwhelmed.
Think about how our computing technology has evolved over the past 30 years, shrinking in both size and price while increasing in power. Where will we be 20 years from now?
We live in a parallel information universe that is constantly feeding information back to us. The library is anywhere anytime, so how can we best meet the information needs of our users?
Project Information Literacy seeks to answer what it means to be a student in the digital age. They have been assessing different types of students on how they find and use information to get generalized pictures of who they are.
Why, when you have an information need, do you turn to Google first and not research databases?
Students ignore faculty warnings about Wikipedia. They still use it, but they just don’t cite it.
Students aren’t really procrastinators, they’re just busy. They are working to the last minute because every minute is highly scheduled. Have we changed our staffing or the nature of our services to help them at point of need?
Students don’t think of librarians as people who can help them with their research, they think of them as people who can help them with resources. They are more likely to go to their instructors and classmates before librarians during the research process. The hardest part for them is getting started and defining the topic (and narrowing it down). They don’t think librarians can help them with that, even though we can, and do (or should if we aren’t already).
Students are more practiced at writing techniques than research strategies. Professors complain that students can’t write, but maybe writing shouldn’t be the only method of expression.
Most students don’t fully understand the research process and what is expected. They need clarity on the nature and scope of assignments, and they aren’t used to critical thinking (“just tell me what you want and I’ll give it to you”). Most handouts from profs don’t explain this well, focusing more on mechanics and sending students to the library shelves (and not to databases or online resources). Rarely do they suggest talking to the librarian.
Students are not the multi-taskers we think they are, particularly during crunch time. Often they will use the library and library computers to force themselves to limit the distractions and focus. They use Facebook breaks as incentives to get things done.
After they graduate, former students are good with technology, but not so good with low-tech, traditional research/information discovery skills.
Information literacy needs are more important than ever, but they are evolving. Search to task to use to synthesis to evaluation — students need to be good at every stage. The library is shifting from the role of information to space, place, and equipment. Buying the resources is less of an emphasis (although not less in importance), and the needs change with the academic calendar.
What do we do about all this?
Infuse high quality, credible resources and materials into courses and classes. Consider resources and collections in relation to Wikipedia. Infuse information literacy learning opportunities into resources, access systems, facilities, and services (call it “giving credit,” which they understand more than citing). Provide resources, expertise, and services related to assignments. Re-purpose staff and facilities related to calendar and needs. Offer to work with faculty to revise handouts — emphasize the quality of resources not the mechanics. Offer flexible and collaborative spaces with a range of capabilities and technology, less emphasis on print collection development. Consider school-to-work transitions in access systems, resources, services, and instruction.
Beyond formal instruction, what are the ways we can help students gain the essential information literacy skills they need? That is the challenge for eresources librarians.
Updates from Serials Solutions – mostly Resource Manager (Ashley Bass):
Keep up to date with ongoing enhancements for management tools (quarterly releases) by following answer #422 in the Support Center, and via training/overview webinars.
Populating and maintaining the ERM can be challenging, so they focused a lot of work this year on that process: license template library, license upload tool, data population service, SUSHI, offline date and status editor enhancements (new data elements for sort & filter, new logic, new selection elements, notes), and expanded and additional fields.
Workflow, communication, and decision support enhancements: in context help linking, contact tool filters, navigation, new Counter reports, more information about vendors, Counter summary page, etc. Her favorite new feature is “deep linking” functionality (aka persistent links to records in SerSol). [I didn’t realize that wasn’t there before — been doing this for my own purposes for a while.]
Next up (in two weeks, 4th quarter release): new alerts, resource renewals feature (reports! and checklist!, will inherit from Admin data), Client Center navigation improvements (i.e. keyword searching for databases, system performance optimization), new license fields (images, public performance rights, training materials rights) & a few more, Counter updates, SUSHI updates (making customizations to deal with vendors who aren’t strictly following the standard), gathering stats for Springer (YTD won’t be available after Nov 30 — up to Sept avail now), and online DRS form enhancements.
In the future: license API (could allow libraries to create a different user interface), contact tools improvements, interoperability documentation, new BI tools and reporting functionality, and improving the Client Center.
Also, building a new KB (2014 release) and a web-scale management solution (Intota, also coming 2014). They are looking to have more internal efficiencies by rebuilding the KB, and it will include information from Ulrich’s, new content types metadata (e.g. A/V), metadata standardization, industry data, etc.
Summon Updates (Andrew Nagy):
I know very little about Summon functionality, so just listened to this one and didn’t take notes. Take-away: if you haven’t looked at Summon in a while, it would be worth giving it another go.
Goal #1: Allow users to easily link to full-text resources. Solution: Go beyond the out-of-the box 360 Link display.
Goal #2: Allow users to report problems or contact library staff at the point of failure. Solution: eresources problem report form
They created the eresources problem report form using Drupal. The fields include contact information, description of the resource, description of the problem, and the ability to attach a screenshot.
Some enhancements included: making the full-text links (article & journal) into buttons, hiding additional help information and surfacing some of it on hover, parsing the citation into the problem report page, and moving the citation below the full-text links. For journal citations with no full-text, they made the links to the catalog search large buttons with more text detail in them.
One of the challenges of implementing these changes was the lack of a test environment, given the limited preview capabilities in 360 Link. Any changes actually made required an overnight refresh and then would be live, opening the risk of 24-hour windows of broken resource links. So, they created their own test environment by converting test scenarios into static HTML files and wrapping them in their own custom PHP to mimic the live pages without having to work with the live pages.
[At this point, it got really techy and lost me. Contact the presenters for details if you’re interested. They’re looking to go live with this as soon as they figure out a low-use time that will have minimal impact on their users.]
Customizing 360 Link menu with jQuery (Laura Wrubel, George Washington University)
They wanted to give better visual cues for users, emphasize the full-text, have more local control over links, and visually integrate with other library tools so the experience is more seamless for users.
They started with Reidsma’s code, then forked off from it. They added a problem link to a Google form, fixed ebook chapter links and citation formatting, created conditional links to the catalog, and linked to their other library’s link resolver.
They hope to continue to tweak the language on the page, particularly for ILL suggestion. The coverage date is currently hidden behind the details link, which is fine most of the time, but sometimes that needs to be displayed. They also plan to load the print holdings coverage dates to eliminate confusion about what the library actually has.
In the future, they would rather use the API and blend the link resolver functionality with catalog tools.
Custom document delivery services using 360 Link API (Kathy Kilduff, WRLC)
License information for course reserves for faculty (Shanyun Zhang, Catholic University)
They included course reserve terms in the license information, but then it became an issue to convey that information to faculty, who were used to negotiating with publishers directly. Most faculty prefer to use Blackboard for course readings and handle it themselves, so the library needs to figure out how to incorporate itself into that workflow. She was looking for suggestions from the group.
Advanced Usage Tracking in Summon with Google Analytics (Kun Lin, Catholic University)
Use of ERM/KB for collection analysis (Mitzi Cole, NASA Goddard Library)
Used the overlap analysis to compare print holdings with electronic and downloaded the report. The partial overlap can actually be a full overlap if the coverage dates aren’t formatted the same, but otherwise it’s a decent report. She incorporated license data from Resource Manager and print collection usage pulled from her ILS. This allowed her to create a decision tool (spreadsheet), and denoted the print usage in 5 year increments, eliminating previous 5 years use with each increment (this showed a drop in use over time for titles of concern).
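The coverage-date formatting problem she ran into (a "partial" overlap that is really a full overlap) comes down to normalizing date ranges before comparing them. A minimal sketch in Python, assuming simple `YYYY-YYYY` or `YYYY-present` coverage strings (the actual ERM export format may differ):

```python
from datetime import date

def parse_coverage(s):
    """Normalize a coverage string like '1995-2010' or '1995 - present'
    to a (start_year, end_year) tuple; 'present' maps to the current year."""
    start, _, end = s.replace(" ", "").partition("-")
    end_year = date.today().year if end in ("", "present") else int(end)
    return int(start), end_year

def overlap(print_cov, e_cov):
    """Classify how electronic coverage overlaps the print coverage."""
    p0, p1 = parse_coverage(print_cov)
    e0, e1 = parse_coverage(e_cov)
    if e0 <= p0 and e1 >= p1:
        return "full"
    if e1 < p0 or e0 > p1:
        return "none"
    return "partial"

# '1985-present' fully covers '1990-2005' once both are normalized,
# even though the raw strings look different.
print(overlap("1990-2005", "1985-present"))  # full
print(overlap("1990-2005", "2000-2010"))     # partial
```

With this kind of normalization applied first, the downloaded overlap report could be re-checked in a spreadsheet or script to reclassify false partials.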
Discussion of KnowledgeWorks Management/Metadata (Ben Johnson, Lead Metadata Librarian, SerialsSolutions)
After they get the data from the provider or it is made available to them, they have a system to automatically process the data so it fits their specifications, and then it is integrated into the KB.
They deal with a lot of bad data. 90% of databases change every month. Publishers have their own editorial policies that display the data in certain ways (e.g., title lists) and deliver inconsistent, and often erroneous, metadata. The KB team tries to catch everything, but some things still slip through. Throughout the data ingestion process, they apply rules based on past experience with the data source. After that, the data is normalized so that various title/ISSN/ISBN combinations can be associated with the authority record. Finally, the data is incorporated into the KB.
Authority rules are used to correct errors and inconsistencies. Rules automatically and consistently correct holdings, and they are often used to correct vendor reporting problems. Rules are codified per provider and database, with 76,000+ applied to thousands of databases, and 200+ new rules added each month.
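Serials Solutions didn’t describe the rule format itself, but the idea of per-provider, per-database correction rules can be sketched. A toy Python version, assuming each rule is just a function that rewrites a field in an incoming holdings record (the provider, database, and field names here are all hypothetical):

```python
def normalize_issn(issn):
    """Normalize an ISSN to the standard NNNN-NNNN form."""
    digits = issn.replace("-", "").strip().upper()
    return f"{digits[:4]}-{digits[4:]}"

# Hypothetical rule table keyed by (provider, database): each rule
# rewrites one field of an incoming holdings record.
RULES = {
    ("ProviderA", "MegaDB"): [
        lambda rec: rec.update(issn=normalize_issn(rec["issn"])) or rec,
        lambda rec: rec.update(end_date=rec["end_date"].replace("current", "present")) or rec,
    ],
}

def apply_rules(provider, database, record):
    """Apply every codified rule for this provider/database in order."""
    for rule in RULES.get((provider, database), []):
        record = rule(record)
    return record

rec = {"title": "Journal of Examples", "issn": "1234567x", "end_date": "current"}
print(apply_rules("ProviderA", "MegaDB", rec))
```

The appeal of this shape is that a recurring vendor quirk gets fixed once, as a rule, and is then applied consistently on every monthly reload instead of being hand-corrected each time.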
Why does it take two months for KB data to be corrected when I report it? Usually it’s because they are working with the data providers, and some respond more quickly than others. They are hoping that being involved with various initiatives like KBART will help fix data from the provider so they don’t have to worry about correcting it for us, but also making it easier to make those corrections by using standards.
Client Center ISSN/ISBN doesn’t always work in 360 Links, which may have something to do with the authority record, but it’s unclear. It’s possible that there are some data in the Client Center that haven’t been normalized, and could cause this disconnect. And sometimes the provider doesn’t send both print and electronic ISSN/ISBN.
What is the source for authority records for ISSN/ISBN? LC, Bowker, ISSN.org, but he’s not clear. Clarification: Which field in the MARC record is the source for the ISBN? It could be the source of the normalization problem, according to the questioner. Johnson isn’t clear on where it comes from.
Speakers: Ladd Brown, Andi Ogier, and Annette Bailey, Virginia Tech
Libraries are not about the collections anymore, they’re about space. The library is a place to connect to the university community. We are aggressively de-selecting, buying digital backfiles in the humanities to clear out the print collections.
Guess what? We still have our legacy workflows. They were built for processing physical items. Then eresources came along, and there were two parallel processes. Ebooks have the potential of becoming a third process.
Along with the legacy workflows, they have a new Dean, who is forward thinking. The Dean says it’s time to rip off the bandaid. (Titanic = old workflow; iceberg = eresources; people in life boats = technical resources team) Strategic plans are living documents kept on top of the desk and not in the drawer.
With all of this in mind, acquisitions leaders began meeting daily in a group called Eresources Workflow Weekly Work, planning the changes they needed to make. They did process mapping with Sharpies and Post-its, incorporating everyone in the library who had anything to do with eresources. After lots of meetings, position descriptions began to emerge.
Electronic Resource Supervisor is the title of the former book and serials acquisitions heads. The rest — wasn’t clear from the description.
They had a MARC record service for ejournals, but after this reorganization process, they realized they needed the same for ebooks, and could be handled by the same folks.
Two person teams were formed based on who did what in the former parallel processes, and they reconfigured their workspace to make this more functional. The team cubes are together, and they have open collaboration spaces for other groupings.
They shifted focus from maintaining MARC records in their ILS to maintaining accurate title lists and data in their ERMS. They’re letting the data from the ERMS populate the ILS with appropriate MARC records.
They use some Python scripts to help move data from system to system, and more staff are being trained to support it. They’re also using the Google Apps portal for collaborative projects.
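They didn’t share the scripts, but the ERMS-feeds-the-ILS direction they describe can be sketched in Python. A minimal, hypothetical example that turns a title-list export (invented CSV columns: title, issn, url) into brief MARC records in mnemonic text form:

```python
import csv
import io

def titlelist_to_marc(csv_text):
    """Turn an ERMS title-list export (title, issn, url columns) into
    brief MARC records in mnemonic/text form, one record per e-journal."""
    records = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        records.append("\n".join([
            "=LDR  00000nas a2200000 a 4500",
            f"=022  \\\\$a{row['issn']}",          # ISSN
            f"=245  00$a{row['title']}",           # title
            f"=856  40$u{row['url']}",             # link to resource
        ]))
    return records

sample = "title,issn,url\nJournal of Examples,1234-5678,https://example.org/joe\n"
for rec in titlelist_to_marc(sample):
    print(rec)
```

A real pipeline would use a MARC library (e.g. pymarc) and far richer fields, but the shape is the same: the ERMS title list is the source of truth, and the ILS records are regenerated from it rather than maintained by hand.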
They wanted to take risks, make mistakes, fail quickly, but also see successes come quickly. They needed someplace to start, and to avoid reinventing the wheel, so they borrowed heavily from the work done by colleagues at James Madison University. They also hired Carl Grant as a consultant to ask questions and facilitate cross-departmental work.
Big thing to keep in mind: administration needs to be prepared to let staff spend time learning new processes rather than keeping up with everything they used to do at the same time. And as staff let go of the work they used to do, tell them it was important, or they won’t embrace the new work.