Don’t default to a spreadsheet when creating model licenses. Think creatively: checklists, ERM records, HTML pages, etc. Does it need to be shared? Will you be copying from it to send to licensors during negotiation? Also, find out whether your institution has standard language for IT. Review model licenses from the field.
Things are changing, though, and we’re licensing new things that we don’t yet know how to handle: data, images, streaming collections, etc. When exceptions become the rule, what do we do?
If you have all of this figured out, put it out there in a discoverable way so the rest of us don’t spin our wheels reinventing your brilliance. Community! Communication! Collaboration!
Do we need new standard licensing language for….? Autorenewal — replace it with language about mutual written agreement. Alumni might have access three months post-graduation because of the way IT is set up, which might be a license violation. New vendors might not be familiar with libraries and who our authorized users might be. New uses/rights: repository, text mining, use on websites/promotional materials, rip & stream on a secure server, cloud hosting/distribution of CD-ROMs.
Where do we go from here? How do we as a community keep our resources up to date? Should we have more of a shared collection of exceptions? What can we do to help each other?
Updates from Serials Solutions – mostly Resource Manager (Ashley Bass):
Keep up to date with ongoing enhancements for management tools (quarterly releases) by following answer #422 in the Support Center, and via training/overview webinars.
Populating and maintaining the ERM can be challenging, so they focused a lot of work this year on that process: license template library, license upload tool, data population service, SUSHI, offline date and status editor enhancements (new data elements for sort & filter, new logic, new selection elements, notes), and expanded and additional fields.
Workflow, communication, and decision support enhancements: in-context help linking, contact tool filters, navigation, new COUNTER reports, more information about vendors, a COUNTER summary page, etc. Her favorite new feature is the “deep linking” functionality (aka persistent links to records in SerSol). [I didn’t realize that wasn’t there before — been doing this for my own purposes for a while.]
Next up (in two weeks, in the 4th quarter release): new alerts, a resource renewals feature (reports! and a checklist! — it will inherit from Admin data), Client Center navigation improvements (i.e. keyword searching for databases, system performance optimization), new license fields (images, public performance rights, training materials rights) & a few more, COUNTER updates, SUSHI updates (customizations to deal with vendors who aren’t strictly following the standard), gathering stats for Springer (YTD won’t be available after Nov 30 — up to Sept is available now), and online DRS form enhancements.
In the future: license API (could allow libraries to create a different user interface), contact tools improvements, interoperability documentation, new BI tools and reporting functionality, and improving the Client Center.
Also, building a new KB (2014 release) and a web-scale management solution (Intota, also coming 2014). They are looking to gain more internal efficiencies by rebuilding the KB, and it will include information from Ulrich’s, metadata for new content types (e.g. A/V), metadata standardization, industry data, etc.
Summon Updates (Andrew Nagy):
I know very little about Summon functionality, so I just listened to this one and didn’t take notes. Take-away: if you haven’t looked at Summon in a while, it would be worth giving it another go.
Goal #1: Allow users to easily link to full-text resources. Solution: go beyond the out-of-the-box 360 Link display.
Goal #2: Allow users to report problems or contact library staff at the point of failure. Solution: eresources problem report form
They created the eresources problem report form using Drupal. The fields include contact information, description of the resource, description of the problem, and the ability to attach a screenshot.
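For the curious, here’s roughly what a form like that might look like in Drupal 7’s Form API. This is my own sketch, not the presenters’ code; the function name, field names, and upload path are all hypothetical.

```php
<?php
// A minimal sketch of an e-resources problem report form in Drupal 7's
// Form API. Function and field names are hypothetical.
function eresources_problem_report_form($form, &$form_state) {
  $form['contact'] = array(
    '#type' => 'textfield',
    '#title' => t('Your email address'),
    '#required' => TRUE,
  );
  $form['resource'] = array(
    '#type' => 'textfield',
    '#title' => t('Which resource were you using?'),
  );
  $form['problem'] = array(
    '#type' => 'textarea',
    '#title' => t('Describe the problem'),
  );
  $form['screenshot'] = array(
    '#type' => 'managed_file',
    '#title' => t('Attach a screenshot (optional)'),
    '#upload_location' => 'public://problem-reports/',
  );
  $form['submit'] = array('#type' => 'submit', '#value' => t('Send report'));
  return $form;
}
```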
Some enhancements included: turning the full-text links (article & journal) into buttons, hiding additional help information and adding some hover-over information, parsing the citation into the problem report page, and moving the citation below the full-text links. For journal citations with no full-text, they made the links to the catalog search large buttons with more text detail in them.
One challenge in implementing these changes was the lack of a test environment, given the limited preview capabilities in 360 Link. Any change actually made required an overnight refresh before going live, opening up 24-hour windows of potentially broken resource links. So, they created their own test environment by saving test scenarios as static HTML files and wrapping them in their own custom PHP to mimic the live pages without having to work with the live pages.
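As I understand the approach, the wrapper could be as simple as something like this (my sketch of the idea, not their code; all paths and file names are hypothetical):

```php
<?php
// Serve a saved 360 Link results page as a static test scenario, injecting
// the same custom CSS/JS the live pages load so changes can be previewed
// without touching production. Paths and names here are hypothetical.
$scenario = isset($_GET['scenario']) ? basename($_GET['scenario']) : 'default';
$file = __DIR__ . '/scenarios/' . $scenario . '.html';

if (!is_file($file)) {
  http_response_code(404);
  exit('Unknown test scenario');
}

$html = file_get_contents($file);

// Inject the custom assets into the canned page's <head>.
$assets = '<link rel="stylesheet" href="custom-360link.css">'
        . '<script src="jquery.js"></script>'
        . '<script src="custom-360link.js"></script>';
echo str_replace('</head>', $assets . '</head>', $html);
```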
[At this point, it got really techy and lost me. Contact the presenters for details if you’re interested. They’re looking to go live with this as soon as they figure out a low-use time that will have minimal impact on their users.]
Customizing 360 Link menu with jQuery (Laura Wrubel, George Washington University)
They wanted to give users better visual cues, emphasize the full-text, have more local control over links, and visually integrate with other library tools so the experience is more seamless for users.
They started with Reidsma’s code, then forked off from it. They added a problem-report link to a Google form, fixed ebook chapter links and citation formatting, created conditional links to the catalog, and linked to their other library’s link resolver.
They hope to continue to tweak the language on the page, particularly the ILL suggestion. The coverage date is currently hidden behind the details link, which is fine most of the time, but sometimes it needs to be displayed. They also plan to load the print holdings coverage dates to eliminate confusion about what the library actually has.
In the future, they would rather use the API and blend the link resolver functionality with catalog tools.
Custom document delivery services using 360 Link API (Kathy Kilduff, WRLC)
License information for course reserves for faculty (Shanyun Zhang, Catholic University)
They included course reserves in the license information, but it then became an issue to convey that information to faculty who were used to negotiating it with publishers directly. Most faculty prefer to use Blackboard for course readings and handle it themselves, but they need to figure out how to incorporate the library into the workflow. She’s looking for suggestions from the group.
Advanced Usage Tracking in Summon with Google Analytics (Kun Lin, Catholic University)
Use of ERM/KB for collection analysis (Mitzi Cole, NASA Goddard Library)
She used the overlap analysis to compare print holdings with electronic and downloaded the report. A partial overlap can actually be a full overlap if the coverage dates aren’t formatted the same, but otherwise it’s a decent report. She incorporated license data from Resource Manager and print collection usage pulled from her ILS. This allowed her to create a decision tool (a spreadsheet) that denoted the print usage in five-year increments, dropping the previous five years’ use with each increment (this showed a drop in use over time for titles of concern).
Discussion of KnowledgeWorks Management/Metadata (Ben Johnson, Lead Metadata Librarian, Serials Solutions)
After they get the data from the provider or it is made available to them, they have a system to automatically process the data so it fits their specifications, and then it is integrated into the KB.
They deal with a lot of bad data. 90% of databases change every month. Publishers have their own editorial policies for how the data is displayed (e.g., title lists) and deliver inconsistent, and often erroneous, metadata. The KB team tries to catch everything, but some things still slip through. Throughout the data ingestion process, they apply rules based on past experience with the data source. After that, the data is normalized so that various title/ISSN/ISBN combinations can be associated with the authority record. Finally, the data is incorporated into the KB.
Authority rules are used to correct errors and inconsistencies. Rules automatically and consistently correct holdings, and they are often used to correct vendor reporting problems. Rules are codified per provider and database, with 76,000+ applied across thousands of databases, and 200+ new rules are added each month.
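To make the idea concrete, here’s a toy illustration of rule-driven correction (emphatically not Serials Solutions’ actual code, just the shape of the approach as described):

```php
<?php
// Toy illustration: correction rules keyed by provider and database, each
// rewriting a known-bad field in an incoming title-list row.
$rules = array(
  'ExampleProvider' => array(
    'ExampleDatabase' => array(
      // This provider repeats the print ISSN in the e-ISSN column.
      function (array $row) {
        if ($row['eissn'] === $row['issn']) {
          $row['eissn'] = null;
        }
        return $row;
      },
    ),
  ),
);

function apply_rules(array $row, $provider, $database, array $rules) {
  if (isset($rules[$provider][$database])) {
    foreach ($rules[$provider][$database] as $rule) {
      $row = $rule($row);  // apply each codified correction in turn
    }
  }
  return $row;
}
```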
Why does it take two months for KB data to be corrected when I report it? Usually it’s because they are working with the data providers, and some respond more quickly than others. They are hoping that being involved with initiatives like KBART will help fix data at the provider so they don’t have to correct it for us, and will also make those corrections easier by using standards.
Client Center ISSN/ISBNs don’t always work in 360 Link, which may have something to do with the authority record, but it’s unclear. It’s possible that some data in the Client Center haven’t been normalized, which could cause this disconnect. And sometimes the provider doesn’t send both print and electronic ISSN/ISBN.
What is the source for authority records for ISSN/ISBN? LC, Bowker, ISSN.org, but he’s not clear. Clarification: Which field in the MARC record is the source for the ISBN? It could be the source of the normalization problem, according to the questioner. Johnson isn’t clear on where it comes from.
Why use WordPress as a CMS for a small website? It’s flexible enough to build all sorts of sites. It’s free as in beer and there is a huge support community. It has a beautiful admin interface (particularly compared to other CMSes like Drupal) that clients like to use, which means the site is more likely to succeed and make them happy repeat clients.
First things first. Set up a local development server (MAMP or XAMPP) or use a web host. This allows you to develop on a desktop machine as if it were a web server.
Next, add dummy content like posts and comments. There are plugins for this (WP Dummy Content, Demo Data Creator), or you can import it in XML form.
Start with a blank theme. You could start from scratch, but nobody needs to reinvent the wheel. Really good ones: Starkers (semantic, thorough, and functional), Naked (created for adding your own XHTML), Blank (now with HTML5), and more.
A blank theme will come with several PHP files for pages/components and a CSS file. To create a theme, you really only need index.php, screenshot.png, and style.css. Lanier begs you to name your theme (i.e. sign your work).
Now that you have a theme name, start with the header and navigation. Next, take advantage of WP’s dynamic tags. Don’t use an absolute path to your style sheet, home page, or anywhere else on your site if you can avoid it.
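A minimal header.php sketch using standard WP template tags (my example, not Lanier’s; it assumes a ‘primary’ menu location has been registered):

```php
<!-- header.php: dynamic tags instead of hard-coded URLs, so the theme
     survives a change of domain or directory. -->
<!DOCTYPE html>
<html <?php language_attributes(); ?>>
<head>
  <title><?php bloginfo('name'); ?></title>
  <link rel="stylesheet" href="<?php bloginfo('stylesheet_url'); ?>">
  <?php wp_head(); ?>
</head>
<body>
  <div id="header">
    <a href="<?php echo home_url('/'); ?>"><?php bloginfo('name'); ?></a>
    <?php wp_nav_menu(array('theme_location' => 'primary')); ?>
  </div>
```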
Make things even more awesome with some if/then statements. You can do that in PHP. [I should probably dig out my PHP for Dummies-type reference books and read up on this.] This allows you to code elements differently depending on what type of page is being displayed.
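For example, WP’s conditional tags make the if/then approach straightforward (my sketch; the markup is made up):

```php
<?php
// Conditional tags let one template vary by page type.
if (is_front_page()) {
  echo '<div id="hero">' . get_bloginfo('description') . '</div>';
} elseif (is_single()) {
  echo '<p class="crumb">You are reading a single post.</p>';
} elseif (is_page('about')) {
  echo '<p class="crumb">About this site</p>';
}
```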
Once you have your header file, build your footer file, making sure to close any tags you have in your header. Code the copyright year to be dynamic.
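Something like this (again, my sketch):

```php
<!-- footer.php: close the tags opened in header.php, and let PHP keep
     the copyright year current. -->
  <div id="footer">
    <p>&copy; <?php echo date('Y'); ?> <?php bloginfo('name'); ?></p>
  </div>
  <?php wp_footer(); ?>
</body>
</html>
```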
It doesn’t have to be a blog!
If you’re going to create a static homepage, make sure you name the custom template. If you don’t do this, the WP admin can’t see it. Go into Reading Settings to select the page you created using the homepage template.
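The naming happens in a comment at the top of the template file; the Template Name line is what makes it show up in the admin. A sketch (the file name and template name are up to you):

```php
<?php
/*
Template Name: Homepage
*/
// Saved as e.g. template-home.php in the theme directory (hypothetical
// name). The comment above is what the WP admin actually reads.
get_header();
?>
<div id="home-content">
  <!-- THE LOOP goes here; see the next sketch -->
</div>
<?php get_footer(); ?>
```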
Now that you have all that, what goes into the custom template? Well, you have the header and footer already, so now you put THE LOOP inside a div wrapper. The loop is where the WP magic happens. It will display the content depending on the template of the page type. It can limit the number of posts shown on a page, include/exclude categories, list posts by author/category/tag, offset posts, order posts, etc.
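Here’s a bare-bones version of the loop with a few of those knobs turned, using query_posts() as themes of this vintage typically did (my sketch; the arguments are illustrative):

```php
<div id="content">
<?php
// Tweak what the main loop returns: post count, category, ordering.
query_posts(array(
  'posts_per_page' => 5,
  'category_name'  => 'news',  // hypothetical category slug
  'orderby'        => 'date',
  'order'          => 'DESC',
));
if (have_posts()) : while (have_posts()) : the_post(); ?>
  <h2><a href="<?php the_permalink(); ?>"><?php the_title(); ?></a></h2>
  <?php the_excerpt(); ?>
<?php endwhile; else : ?>
  <p>Nothing found.</p>
<?php endif; wp_reset_query(); ?>
</div>
```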
Once you have your home page, you’ll want to build the interior pages. There are several strategies. You could let page.php power them, but if you have different interior page designs, you’ll want to create custom page templates for each. That can become inefficient, though, so Lanier recommends using if/then statements for things like custom sidebars. A technique of awesomeness is using dynamic body IDs, which allows you to target content to specific pages using the body_class tag depending on any number of variables. Or, once again, you can use an if/then statement. There are other options for body classes, too.
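The body_class version is a one-liner in header.php; pairing it with a conditional gives you the dynamic ID (my sketch):

```php
<!-- body_class() emits classes like "home", "single", or "page-id-12"
     that CSS can target; the id here is set with a conditional. -->
<body id="<?php echo is_front_page() ? 'home' : 'interior'; ?>" <?php body_class(); ?>>
```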
Finish off your theme with the power of plugins. Basics: Akismet, All-In-One SEO, Google XML Sitemaps, Fast Secure Contact Form (or another contact form plugin), WPtouch iPhone theme. For blogs, you’ll want plugins like Author Highlight, Comment Timeout, SEO Slugs (shortens the URL to be SEO-friendly), Thank Me Later (first-time commenters get an email thanking them, with links to other content), and WordPress Related Posts. For a CMS, these are good: Custom Excerpts, Search Permalink, Search Unleashed (or Better Search, since the default search is a bit lacking), WP-PageNavi (instead of older/newer, it creates page numbering), and WP Super Cache (caches content pages as static HTML and reduces server load).
What about multi-user installations? She used Darren Hoyt’s Mimbo theme because it was primarily a magazine site.
At what point do you have too many conditional statements in a template? It’s a balancing act between which is more efficient: conditional statements or lots of PHP files.
How do you keep track of new plugins and the reliability of their developers? Darren Hoyt & Elliot Jay Stocks are two designers she follows, and she will check out their recommendations.
What is your opinion of premium themes? For most people, that’s all they need. She would rather spend her time developing niche things that can’t be handled by standard themes.
How do you know when plugins don’t mesh well with each other? It’s hard to keep up with this as patches and WP core updates are released.
Where can you find out how to do what you want to do? The codex can be confusing. It’s often easier to find a theme that does what you’re trying to do, and then figure out how they designed the loop to handle it.
Are parent templates still necessary? Lanier hasn’t really used them.
Leave WP auto-P on or off? She turns it off. Essentially, WP automatically wraps paragraphs in a p tag, which can mess with your theme.
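Turning it off is two lines in functions.php, since wpautop is just a filter on content and excerpts:

```php
<?php
// Stop WP from auto-wrapping content and excerpts in <p> tags.
remove_filter('the_content', 'wpautop');
remove_filter('the_excerpt', 'wpautop');
```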
Medicine 0.1: in dealing with the influenza outbreak of 1837, a physician administered leeches to the chest, James’s powder, and mucilaginous drinks, and it worked (much like “take two aspirin and call me in the morning”). All of this was written up in a medical journal as a way to share information with peers. Journals have been the primary means of communicating scholarship, but what a journal is has become more abstract with the addition of non-text content and metadata. Add in indexes and other portals for accessing the information, and readers have changed the way they access and share information in journals. “Non-linear” access of information is increasing exponentially.
Even as technology made publishing easier and more widespread, it was still producers delivering content to consumers. But, with the advent of Web 2.0 tools, consumers now have tools that in many cases are more nimble and accessible than the communication tools that producers are using.
Web 1.0 was a destination. Documents simply moved to a new home, and “going online” was a process separate from anything else you did. However, as broadband access increases, the web becomes more pervasive and less a destination. The web becomes a platform that brings people, not documents, online to share information, consume information, and use it like any other tool.
Heterarchy: a system of organization replete with overlap, multiplicity, mixed ascendancy, and/or divergent but coexistent patterns of relation.
Apomediation: mediation by agents who are not interposed between users and resources, but stand by to guide a consumer to high-quality information without playing a role in the acquisition of the resources (e.g., Amazon product reviewers).
NEJM uses users’ search terms to add related searches to article search results. They also bump popular articles up in search results as more people click on them. These tools improved their search results and reputation, all by using the people power of experts. In addition, they created a series of “results in” publications that highlight the popular articles.
It took a little over a year to reach a million Twitter authors, and about 600 years to reach the same number of book authors. And these are literate, savvy users. Twitter & Facebook account for 1.45 million views of the New York Times (and this is a number from several years ago) — imagine what they could do for your scholarly publication. Oh, and the NYT has a social media editor now.
Blogs are growing four times as fast as traditional media. The top ten media sites include blogs, and traditional media sources use blogs now as well. Blogs can be diverse or narrow, their coverage varies (and does not have to be immediate), they are verifiably accurate, and they are interactive. Blogs level the media playing field, in part by watching the watchdogs. Blogs tend to investigate more than the mainstream media.
It took AOL five times as long to reach twenty million users as it took the iPhone. Consumers are increasingly adding “toys” to their collection of ways to get to digital/online content. When the NEJM went on the Kindle, more than just physicians subscribed. Getting content into easy-to-access places and onto the “toys” that consumers use will increase your reach.
Print digests are struggling because they teeter on the brink of the daily divide. Why wait for the news to get stale, collected, and delivered a week/month/quarter/year later? People are transforming. Our audiences don’t think of information as analogue, delayed, isolated, tethered, etc. It has to evolve into something digital, immediate, integrated, and mobile.
From the Q&A session:
The article container will be here for a long time. Academics use the HTML version of the article, but the PDF (static) version is their security blanket and archival copy.
Where does the library fit as a source of funds when the focus is more on end users? Publishers are looking for other sources of income as library budgets decrease (i.e. Kindle, product differentiation, etc.). They are looking to other purchasing centers at institutions.
How do publishers establish the cost of these 2.0 products? It’s essentially what the market will bear, with some adjustments. Sustainability is a grim perspective. Flourishing is much more positive, and not necessarily any less realistic. Equity is not a concept that comes into pricing.
The people who bring the tremendous flow of information under control (i.e. offer filters) will be successful. One of our tasks is to make filters to help our users manage the flow of information.
Three options: do it yourself, gather and format to upload to a vendor’s collection database, or have the vendor gather the data and send a report (Harrassowitz e-Stats). Surprisingly, the second solution was actually more time-consuming than the first because the library’s data didn’t always match the vendor’s data. The third is the easiest because it’s coming from their subscription agent.
Evaluation: review cost data; set a cut-off point ($50, $75, $100, ILL/DocDel costs, whatever); generate a list of all resources that fall beyond that point; use that list to determine cancellations. For citation databases, they want to see upward trends in use, not necessarily cyclical spikes that average out year-to-year.
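In code, the cut-off step is just a filter on cost per use. A sketch with made-up data, assuming you’ve already merged cost and use into one list:

```php
<?php
// Flag resources whose cost per use exceeds the chosen cut-off.
$cutoff = 50.00; // dollars per use; pick your own threshold

$resources = array(
  array('title' => 'Journal A', 'cost' => 1200.00, 'uses' => 300),
  array('title' => 'Journal B', 'cost' => 900.00,  'uses' => 6),
);

$candidates = array();
foreach ($resources as $r) {
  // Zero use means effectively infinite cost per use.
  $cpu = $r['uses'] > 0 ? $r['cost'] / $r['uses'] : INF;
  if ($cpu > $cutoff) {
    $candidates[] = sprintf('%s ($%.2f/use)', $r['title'], $cpu);
  }
}
print_r($candidates); // review the list before deciding on cancellations
```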
Future: Need more turnaway reports from publishers, specifically journal publishers. COUNTER JR5 will give more detail about article requests by year of publication. COUNTER JR1 & BR1 combined report – don’t care about format, just want download data. Need to have download information for full-text subscriptions, not just searches/sessions.
Speaker: Benjamin Heet, librarian
He spoke about the University of Notre Dame’s statistics philosophy. They collect JR1 full-text downloads — they’re not into database statistics, mostly because federated search messes them up. Impact factors and Eigenfactors are hard to evaluate. He asks, “can you make questionable numbers meaningful by adding even more questionable numbers?”
At first, he was downloading the spreadsheets monthly and making them available on the library website. He started looking for a better way, whether that was to pay someone else to build a tool or do it himself. He went with the DIY route because he wanted to make the numbers more meaningful.
Avoid junk in, junk out: HTML vs. PDF download counts depend on the platform setup. Pay attention to outliers to watch for spikes that might indicate unusual use by an individual. The reports often have bad data or duplicate data on the same report.
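One simple way to flag those spikes automatically (a hypothetical rule of thumb, not anything from the talk): call a month an outlier if it exceeds three times the median month.

```php
<?php
// Flag months whose downloads exceed 3x the median month.
function flag_spikes(array $monthly) {
  $sorted = array_values($monthly);
  sort($sorted);
  $median = $sorted[(int) (count($sorted) / 2)];
  return array_keys(array_filter($monthly, function ($n) use ($median) {
    return $median > 0 && $n > 3 * $median;
  }));
}

print_r(flag_spikes(array('Jan' => 40, 'Feb' => 35, 'Mar' => 600, 'Apr' => 42)));
// -> Mar: possible crawler, class assignment, or bulk downloader
```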
CORAL Usage Statistics – a local program that gives them a central location to store usernames & passwords. He downloads reports quarterly now, and the public interface allows other librarians to view the stats as readable reports.
Speaker: Justin Clarke, vendor
Harvesting reports takes a lot of time and carries some administrative cost. SUSHI is a vehicle for automating the transfer of statistics from one source to another. However, you still need to look at the data. Your subscription agent has a lot more data about the resources than just use, and can combine the two to create a broader picture of resource use.
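Mechanically, a SUSHI harvest is a single SOAP call. A rough sketch with PHP’s SoapClient follows; the endpoint is made up, and the exact element names depend on the provider’s WSDL, so treat the request shape as an assumption:

```php
<?php
// Fetch a COUNTER JR1 report over SUSHI (SOAP). The endpoint, IDs, and the
// request array's shape are assumptions; check the provider's WSDL.
$client = new SoapClient('https://stats.example.com/sushi?wsdl');

$response = $client->GetReport(array(
  'Requestor'         => array('ID' => 'your-requestor-id'),
  'CustomerReference' => array('ID' => 'your-customer-id'),
  'ReportDefinition'  => array(
    'Name'    => 'JR1',
    'Release' => '4',
    'Filters' => array('UsageDateRange' => array(
      'Begin' => '2012-01-01',
      'End'   => '2012-12-31',
    )),
  ),
));
// $response now holds the report payload; you still need to look at the data.
```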
Harrassowitz starts with acquisitions data and matches the use statistics to that. They also capture things like publisher changes and title changes. Cost per use is not as easy as simple division – packages confuse the matter.
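To see why packages confuse the division, here’s one hypothetical workaround (my sketch, not theirs): allocate the package price across titles by their share of list price, then divide each allocation by that title’s use.

```php
<?php
// Hypothetical allocation: split a package invoice across titles in
// proportion to list price, then compute per-title cost per use.
$package_cost = 50000.00;
$titles = array(
  'Title A' => array('list' => 3000.00, 'uses' => 4000),
  'Title B' => array('list' => 1500.00, 'uses' => 900),
  'Title C' => array('list' => 500.00,  'uses' => 100),
);

$list_total = 0.0;
foreach ($titles as $t) {
  $list_total += $t['list'];
}

foreach ($titles as $name => $t) {
  $allocated = $package_cost * $t['list'] / $list_total;
  printf("%s: $%.2f allocated, $%.2f per use\n", $name, $allocated, $allocated / $t['uses']);
}
```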
High use could be the result of class assignments or hackers/hoarders. Low use might reflect political purchases or new department support. You need cost as a reference point. Pricing from publishers seems to have no rhyme or reason, and your price is not necessarily the list price. Multi-year analysis and subject-based analysis reveal local trends.
Rather than usage statistics, we need useful statistics.