WordCamp Richmond: Starting From Scratch – Introduction to Building Custom Themes

presenter: Wren Lanier

Why use WordPress as a CMS for a small website? It’s flexible enough to build all sorts of sites. It’s free as in beer, and there is a huge support community. It has a beautiful admin (particularly compared to other CMSes like Drupal) that clients like to use, which means the project is more likely to succeed and make them happy repeat clients.

First things first. Set up a local development server (MAMP or XAMPP) or use a web host. This allows you to develop on a desktop machine as if it were a web server.

Next, load in dummy content like posts and comments. There are plugins for this (WP Dummy Content, Demo Data Creator), or you can import it in XML form.

Start with a blank theme. You could start from scratch, but nobody needs to reinvent the wheel. Really good ones: Starkers (semantic, thorough, and functional), Naked (created for adding your own XHTML), Blank (now with HTML5), and more.

A blank theme will come with several PHP files for pages/components and a CSS file. To create a theme, you really only need index.php, screenshot.png, and style.css. Lanier begs you to name your theme (i.e. sign your work).

Now that you have a theme name, start with the header and navigation. Next, take advantage of WP’s dynamic tags. Don’t use an absolute path to your style sheet, home page, or anywhere else on your site if you can avoid it.
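
For example (my sketch, not her slide), the header can use the standard bloginfo() tags instead of hard-coded URLs:

    <!-- header.php: bloginfo() pulls values from Settings, so nothing is hard-coded -->
    <link rel="stylesheet" href="<?php bloginfo('stylesheet_url'); ?>" type="text/css" />
    <a href="<?php bloginfo('url'); ?>"><?php bloginfo('name'); ?></a>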

Make things even more awesome with some if/then statements. You can do that in PHP. [I should probably dig out my PHP for Dummies reference-type books and read up on this.] This lets you code elements differently depending on what type of page is being displayed.
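
A quick sketch of what she means, using WP’s built-in conditional tags (the surrounding markup is just an illustration):

    <?php if ( is_front_page() ) : ?>
        <h1><?php bloginfo('name'); ?></h1>
    <?php else : ?>
        <p class="site-title"><a href="<?php bloginfo('url'); ?>"><?php bloginfo('name'); ?></a></p>
    <?php endif; ?>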

Once you have your header file, build your footer file, making sure to close any tags you have in your header. Code the copyright year to be dynamic.
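
The dynamic year is plain PHP; a footer sketch (illustrative markup, assuming the divs were opened in header.php):

    <!-- footer.php -->
        </div><!-- closes the #content div opened in header.php -->
        <p>&copy; <?php echo date('Y'); ?> <?php bloginfo('name'); ?></p>
    </div><!-- closes #wrapper -->
    <?php wp_footer(); ?>
    </body>
    </html>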

It doesn’t have to be a blog!

If you’re going to create a static homepage, make sure you name the custom template. If you don’t do this, the WP admin can’t see it. Go into Reading Settings to select the page you created using the homepage template.
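
The template name is just a comment block at the top of the file; a minimal sketch (the file name and template name are arbitrary):

    <?php
    /*
    Template Name: Homepage
    */
    get_header(); ?>
    <!-- homepage markup and the loop go here -->
    <?php get_footer(); ?>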

Now that you have all that, what goes into the custom template? Well, you have the header and footer already, so now you put THE LOOP inside a div wrapper. The loop is where the WP magic happens. It displays the content depending on the template of the page type. It can limit the number of posts shown on a page, include/exclude categories, list posts by author/category/tag, offset posts, order posts, etc.
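
A bare-bones loop inside that wrapper might look like this (my sketch; the query_posts() arguments just illustrate limiting posts and excluding a category):

    <div id="content">
    <?php query_posts('posts_per_page=5&cat=-7'); // e.g. show 5 posts, exclude category 7 (IDs are illustrative) ?>
    <?php if ( have_posts() ) : while ( have_posts() ) : the_post(); ?>
        <h2><a href="<?php the_permalink(); ?>"><?php the_title(); ?></a></h2>
        <?php the_content(); ?>
    <?php endwhile; endif; ?>
    </div>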

Once you have your home page, you’ll want to build the interior pages. There are several strategies. You could let page.php power them all, but if you have different interior page designs, then you’ll want to create custom page templates for each. That can become inefficient, so Lanier recommends using if/then statements for things like custom sidebars. A technique of awesomeness is using dynamic body IDs, which lets you target content to specific pages with the body_class tag depending on any number of variables. Or, once again, you can use an if/then statement. There are other options for generating body classes as well.
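
A sketch of both techniques (body_class() is the real template tag; the ‘about’ slug and sidebar name are hypothetical):

    <body <?php body_class(); ?>> <!-- outputs e.g. class="page page-id-12" for targeting in CSS -->

    <?php if ( is_page('about') ) {   // 'about' is a hypothetical page slug
        get_sidebar('about');         // loads sidebar-about.php
    } else {
        get_sidebar();                // loads the default sidebar.php
    } ?>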

Finish off your theme with the power of plugins. Basics: Akismet, All-In-One SEO, Google XML Sitemaps, Fast Secure Contact Form (or another contact form plugin), and WPtouch iPhone theme. For blogs, you’ll want plugins like Author Highlight, Comment Timeout, SEO Slugs (shortens URL slugs to be SEO-friendly), Thank Me Later (first-time commenters get an email thanking them, with links to other content), and WordPress Related Posts. For a CMS, these are good: Custom Excerpts, Search Permalink, Search Unleashed (or Better Search, since the default search is a bit lacking), WP-PageNavi (creates page numbering instead of older/newer links), and WP Super Cache (caches content pages as static HTML and reduces server load).

Questions:

What about multi-user installations? She used Darren Hoyt’s Mimbo theme because it was primarily a magazine site.

At what point do you have too many conditional statements in a template? It’s a balancing act between which is more efficient: conditional statements or lots of PHP files.

How do you keep track of new plugins and the reliability of programmers? Darren Hoyt & Elliot Jay Stocks are two designers she follows, and she will check out their recommendations.

What is your opinion of premium themes? For most people, that’s all they need. She would rather spend her time developing niche things that can’t be handled by standard themes.

How do you know when plugins don’t mesh well with each other? It’s hard to keep up with this as patches and updates to the WP code are released.

Where can you find out how to do what you want to do? The codex can be confusing. It’s often easier to find a theme that does the thing you want to do, and then figure out how its loop was designed to handle it.

Are parent templates still necessary? Lanier hasn’t really used them.

Leave WP auto-P on or off? She turns them off. Essentially, WP automatically wraps paragraphs with a p tag, which can mess with your theme.
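
Turning it off is a couple of remove_filter() calls, typically dropped into the theme’s functions.php:

    <?php
    // functions.php: stop WordPress from auto-wrapping content and excerpts in <p> tags
    remove_filter( 'the_content', 'wpautop' );
    remove_filter( 'the_excerpt', 'wpautop' );
    ?>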

NASIG 2009: ERMS Integration Strategies – Opportunity, Challenge, or Promise?

Speakers: Bob McQuillan (moderator), Karl Maria Fattig, Christine Stamison, and Rebecca Kemp

Many people have an ERMS, some are implementing one, but few (in the room) consider themselves finished. ERMS present new opportunities and challenges with workflow and staffing, and the presenters intend to provide some insight for those in attendance.

At Fattig’s library, the budget for electronic resources is increasing as print decreases, and they are also running out of space for their physical collections. Their institution’s administration is not supportive of increasing space for materials, so they need to start thinking about how to stall or shrink their physical collection. In addition, they have had reductions in technical services staffing. Sound familiar?

At Kemp’s library, she notes that about 40% of her time is spent on access setup and troubleshooting, which is an indication of how much of their resources are allocated to electronic resources. Is it worth it? They know that many of their online resources are heavily used. Consortial “buying clubs” make big deals possible, opening up access to more resources than they could afford on their own. Electronic is a good alternative to adding more volumes to already over-loaded shelves.

Stamison (SWETS) notes that they have seen a dramatic shift from print to electronic. At least two-thirds of the subscriptions they handle have an electronic component, and most libraries are going e-only when possible. Libraries tell them that they want their shelf space. Also, many libraries are going direct to publishers for the big deals, with agents getting involved only for EDI invoicing (cutting into the agent’s income). Agents are now investing in new technologies to assist libraries in managing e-collections, including implementing access.

Kemp’s library had a team of three to implement Innovative’s ERM. It took a change in workflow and incorporating additional tasks with existing positions, but everyone pulled through. Like libraries, Stamison notes that agents have had to change their workflow to handle electronic media, including extensive training. And, as libraries have more people working with all formats of serials, agents now have many different contacts within both libraries and publishers.

Fattig’s library also reorganized some positions. The systems librarian, acquisitions librarian, and serials & electronic resources coordinator all work with the ERMS, pulling from the Serials Solutions knowledge base. They have also contracted with someone in Oregon to manage their EZproxy database and WebBridge coverage load. Fattig notes that it takes a village to maintain an ERMS.

Agents with electronic gateway systems are working to become COUNTER compliant, and are heavily involved with developing SUSHI. Some are also providing services to gather those statistics for libraries.

Fattig comments that usage statistics are serials in themselves. At his library, they maintained a homegrown system for collecting usage statistics from 2000-07, then tried Serials Solutions’ 360 Counter for a year, but now are using an ERM/homegrown hybrid. They created their own script to clean up the files, because, as we all know, COUNTER compliance means something different to each publisher. Fattig thinks that database searches are their most important statistics for evaluating platforms. They use their federated search statistics to weight the statistics from those resources (these will be broken out under COUNTER release 3).
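
He didn’t share the script itself; purely as an illustration of the kind of cleanup involved, a PHP sketch (the file names, column positions, and output layout are all assumptions):

    <?php
    // Hypothetical cleanup: read one publisher's COUNTER-style CSV, skip header and
    // blank rows, and keep just the journal title and the full-year total column.
    $in  = fopen('publisher_jr1.csv', 'r');
    $out = fopen('cleaned_jr1.csv', 'w');
    while ( ($row = fgetcsv($in)) !== false ) {
        if ( count($row) < 2 || trim($row[0]) == '' || $row[0] == 'Journal' ) {
            continue; // header rows and blank lines vary by publisher
        }
        fputcsv($out, array( trim($row[0]), (int) end($row) )); // assume the total is the last column
    }
    fclose($in);
    fclose($out);
    ?>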

Kemp has not been able to import their use stats into ERM. One of their staff members goes in every month to download stats, and the rest come from ScholarlyStats. They are learning to make XML files out of their Excel files and hope to use the cost per use functionality in the future.

Fattig: “We haven’t gotten SUSHI to work in some of the places it’s supposed to.” Todd Carpenter from NISO notes that SUSHI compliance is a requirement of COUNTER 3.

For the next 12-18 months, Fattig expects that they will complete the creation of license and contact records, import all usage data, and implement SUSHI where they can. They will continue to work with their consortial tool, implement a discovery layer, and document everything. He also plans to create a “cancellation ray gun and singalong blog”: a tool that takes criteria and generates suggested cancellation reports.

Like Fattig, Kemp plans to finish loading all of the data about licenses and contacts, as well as the coverage data. She is looking forward to eliminating a legacy spreadsheet. Then, they hope to import COUNTER stats and run cost/use reports.

Agents are working with ONIX-PL to assist libraries in populating their ERMS with license terms. They are also working with CORE to assist libraries with populating acquisitions data. Stamison notes that agents are working to continue to be liaisons between publishers, libraries, and system vendors.

Dan Tonkery notes that he’s been listening to these conversations for years. No one is serving libraries very well. Libraries are working harder to get these things implemented, while also maintaining legacy systems and workarounds. “It’s too much work for something that should be simple.” Char Simser notes that we need to convince our administrations to move more staff into managing eresources as our budgets are shifting more towards them.

Another audience member notes that his main frustration is the lack of cooperation between vendors/products. We need a shared knowledge base like we have a shared repository for our catalog records. This gets tricky with different package holdings and license terms.

Audience question: When will the ERM become integrated into the ILS? Response: System vendors are listening, and the development cycle is dependent on customer input. Every library approaches their record keeping in different ways.

NASIG 2009: Moving Mountains of Cost Data

Standards for ILS to ERMS to Vendors and Back

Presenter: Dani Roach

Acronyms you need to know for this presentation: National Information Standards Organization (NISO), Cost of Resource Exchange (CORE), and Draft Standard For Trial Use (DSFTU).

CORE was started by Ed Riding from SirsiDynix, Jeff Aipperspach from Serials Solutions, and Ted Koppel from Ex Libris (and now Auto-Graphics). They saw a need to be able to transfer acquisitions data between systems, so they began working on it. After talking with various related parties, they approached NISO in 2008. Once they realized the scope, it went from being just an ILS-to-ERMS transfer to also including data from vendors, agents, consortia, etc., but without duplicating existing standards.

Library input is critical in defining the use cases and the data exchange scenarios. There was also a need for a data dictionary and XML schema in order to make sure everyone involved understood each other. The end result is the NISO CORE DSFTU Z39.95-200x.

CORE could be awesome, but in the meantime, we need a solution. Roach has a few suggestions for what we can do.

Your ILS has a pile of data fields. Your ERMS has a pile of data fields. They don’t exactly overlap. Roach focused on only eight of the elements: title, match point (code), record order number, vendor, fund, what was paid for, amount paid, and something else she can’t remember right now.

She developed Access tables with output from her ILS and templates from her ERMS. She then ran a query to match them up and then upload the acquisitions data to her ERMS.

For the database record match, she chose the Serials Solutions three letter database code, which was then put into an unused variable MARC field. For the journals, she used the SSID from the MARC records Serials Solutions supplies to them.
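
She did the matching with Access tables and queries; purely as a rough illustration of the same match-point idea in code (file names and column positions are invented), something like:

    <?php
    // Hypothetical illustration: key the ERMS template rows by the three-letter
    // database code, then attach payment amounts exported from the ILS on that code.
    $erms = array();
    foreach ( file('erms_template.csv') as $line ) {
        $cols = str_getcsv($line);
        $erms[ $cols[0] ] = $cols;            // assume the code is in the first column
    }
    foreach ( file('ils_payments.csv') as $line ) {
        $cols = str_getcsv($line);
        if ( isset( $erms[ $cols[1] ] ) ) {   // assume the code is in the second column
            $erms[ $cols[1] ][] = $cols[2];   // append the amount paid, ready for upload
        }
    }
    ?>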

Things that you need to decide in advance: How do you handle multiple payments in a single fiscal year (what are you doing currently, and do you need to continue doing it)? What about resources that share costs? How will you handle one-time vs. ongoing purchases? How will you maintain the integrity of the match point you’ve chosen?

The main thing to keep in mind is that you need to document your decisions and processes, particularly for when systems change and CORE or some other standard becomes a reality.

LITA 2008: Five-minute Madness

Call for presentations went out a few weeks ago, with the idea of gathering fresh content. Presenters have five minutes each.

Incorporating ICT into a New Vision for Caribbean Libraries
Presenter: Gracelyn Cassell

She delivers distance education for the University of the West Indies and looked at the library situation in 15 countries. The libraries have inadequate budgets, limited facilities, small and dated collections, poor technology, under-trained staff, and inadequate services. However, the libraries are eager for dialogue and willing to listen to suggestions, there is strong interest in training, and the librarians are craving refresher courses.

The university has capacity for training, as well as tele- and video-conferences. Need to use the resources of the university to deliver training and services for the regional libraries.

How can LITA help? Provide on-site technical support (in the winter, of course).

 

Using Delicious to Select Teaching Materials Collaboratively
Presenter: Emily Molanphy

Sakai is their CMS (open source). Like it, but needed more multi-media and less PowerPoint. Asked library for help.

They wanted the links to the resources to be easy to share, and to be able to annotate the links. They faceted using tag bundles, but the most important aspect is that the recipient can choose their access point.

Known issues: Need to share password for a single account. For:username is too limited because the tags and the description are stripped. Faceting is flawed because everything is listed alphabetically.

Good way to supplement personal meetings.

 

Help Systems Based on Solr
Presenter: Krista Wilde

Solr is open-source search software that serves as a front-end access point to a database and returns query results as XML. They created a Solr instance specifically for help content, and then created web forms for adding and modifying help pages, with details about which pages or topics each help document relates to.

They wanted to make the help searchable and dynamic, to allow non-technical staff members to update and modify the pages, and to use their existing tools to support their tools (they already use Solr quite a bit).
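
As a rough sketch of the pattern (the core name, URL, and field names are assumptions; Solr’s select handler does return XML by default):

    <?php
    // Hypothetical query against a help-specific Solr core; results come back as XML.
    $url = 'http://localhost:8983/solr/help/select?q=' . urlencode('renew a book') . '&rows=10';
    $xml = simplexml_load_file($url);
    foreach ( $xml->result->doc as $doc ) {
        foreach ( $doc->str as $field ) {
            if ( (string) $field['name'] == 'title' ) { // "title" as a field name is an assumption
                echo $field . "\n";                     // print each matching help page title
            }
        }
    }
    ?>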

 

RFID Self Checkout User Interface Redesign
Presenter: Robert Keith

They were using a self-check machine before, but felt that six steps were too many and frustrating. The interface was too busy, with small text (and lots of it) and distracting animations, and the public & staff did not like it.

The redesign has larger (and briefer) text, uses audio prompts to guide the user, and automatically prints the receipt (and thus doesn’t result in hung patron records). The result is that self-check use has increased by 10% for adults and 30% for children.

 

The Endeca Project at Triangle Research Libraries Network
Presenter: Derek Rodriguez

In March 2008 they launched Search TRLN, a union catalog for the network. In August, they launched local interfaces at three universities. They licensed Syndetics data and are indexing tables of contents. They plan to tune the search and relevance ranking, add new indexes and a shopping cart, and ingest non-MARC data.

 

Handheld Project Scope at Penn State
Presenter: Emily Rimland

Impetus for the group: iPhone lust. Librarians thought that mobile devices could support roving reference. Necessary for a library made up of three buildings mashed together. Would also be useful in faculty liaison activity, and to test the accessibility of their web-based resources.

The team of librarians and IT staff mapped the uses to the requirements and the requirements to the mobile devices. As it turns out, none of the four that fit were the iPhone: Nokia N-810, Sony Vaio UX-490, Fujitsu Lifebook, and OQO. (Some they were able to borrow from IT staff.)

The testing showed that there was a learning curve to using each device. The best was the Fujitsu Lifebook.

 

Unmanned Technology Projects
Presenters: Mike McGuire & Suzi Cole

They had big plans & user expectations, and consortial pressure to be an equal partner, but their limited staff did not have time to do or learn more. And, ultimately, there was a lack of coordination, which led to frustration, stress, and potential burnout. Solution: a Library Technology Working Group that includes key players (library and IT), monthly meetings, and a wiki that tracks projects, meeting minutes, timelines, and what’s new.

Communication has been great. They have clear priorities and resource needs and a place to organize and share documentation. The results have been unexpectedly positive.

 

Texting at the Reference Desk
Presenter: Keith Weimer

Single service desk for phone, email, and chat, as well as walk-up reference. Wanted to reach users at new points of need, so investigated SMS. Upside Wireless is a Canadian company that provides SMS-to-email and a local phone number. But, it’s expensive to develop and maintain.

Did a soft rollout, with a link on the web page and table tents. A few months later, did a hard rollout with larger promotion around campus, including posters. After the hard rollout, the use has spiked. Has been used mostly for short queries like circulation info and hours, but about a quarter of the use was for reference type questions.

May move to AIM Hack, which is cheaper.

 

Digital Past: Ten Years and Growing
Presenter: Katy Schlumpf

Local history digitization project focusing on Illinois records, but the scope may need to be widened to encompass other collections housed in the system. Struggling with what to do for the future, particularly with tight budgets.

NASIG 2008: Next Generation Library Automation – Its Impact on the Serials Community

Speaker: Marshall Breeding

Check & update your library’s record on lib-web-cats — Breeding uses this data to track the ILS and ERMS systems used by libraries world-wide.

The automation industry is consolidating, with several library products dropped or no longer supported. External financial investors are increasingly controlling the direction of the industry. And, the OPAC sucks. Libraries and users are continually frustrated with the products they are forced to use and are turning to open source solutions.

The innovation presented by automation companies falls below the expectations of libraries (not so sure about users). Conventional ILS need to be updated to incorporate the modern blend of digital and print collections.

We need to be more thoughtful in our incorporation of social tools into traditional library systems and infrastructures. Integrate those Web 2.0 tools into existing delivery options. The next generation of automation tools should have collaborative features built into them.

Open source software isn’t free — it’s just a different model (pay for maintenance and setup v. pay for software). We need more robust open source software for libraries. Alternatively, systems need to open up so that data can be moved in and out easily. Systems need APIs that allow local coders to enhance systems to meet the needs of local users. Open source ERMS knowledge bases haven’t been seriously developed, although there is a need.

The drive towards open source solutions has often been motivated by disillusionment with current vendors. However, we need to be cautious, since open source isn’t necessarily the golden key that will unlock the door to paradise. (For example, Koha still needs to add serials and acquisitions modules, as well as EDI capabilities.)

The open source movement motivates the vendors to make their systems more open for us. This is a good thing. In the end, we’ll have a better set of options.

Open Source ILS options: Koha (commercial support from LibLime) used mostly by small to medium libraries, Evergreen (commercial support from Equinox Software) tested and proven for small to medium libraries in a consortia setting, and OPALS (commercial support from Media Flex) used mostly by k-12 schools.

In making the case for open source ILS, you need to compare the total cost of ownership, the features and functionality, and the technology platform and conceptual models. Are they next-generation systems or open source versions of legacy models?

Evaluate your RFPs for new systems. Are you asking for the things you really need or are you stuck in a rut of requiring technology that was developed in the 70s and may no longer be relevant?

Current open source ILS products lack serials and acquisitions modules. The initial wave of open source ILS commitments happened in the public library arena, but the recent activity has been in academic libraries (WALDO consortia going from Voyager to Koha, University of Prince Edward Island going from Unicorn to Evergreen in about a month). Do the current open source ILS products provide a new model of automation, or an open source version of what we already have?

Looking forward to the day when there is a standard XML for all ILSes that will allow libraries to manipulate their data in any way they need to.

We are working towards a new model of library automation where monolithic legacy architectures are replaced by the fabric of service oriented architecture applications with comprehensive management.

The traditional ILS is diminishing in importance in libraries. Electronic content management is being done outside of core ILS functions. Library systems are becoming less integrated because the traditional ILS isn’t keeping up with our needs, so we find work-around products. Non-integrated automation is not sustainable.

ERMS — isn’t this what the acquisitions module is supposed to do? Instead of enhancing that to incorporate the needs of electronic resources, we had to get another module or work-around that may or may not be integrated with the rest of the ILS.

We are moving beyond metadata searching to searching the actual items themselves. Users want to be able to search across all products and packages. NextGen federated searching will harvest and index subscribed content so that it can be searched and retrieved more quickly and seamlessly.

Opportunities for serials specialists:

  • Be aware of the current trends
  • Be prepared for accelerated change cycles
  • Help build systems based on modern business process automation principles. What is your ideal serials system?
  • Provide input
  • Ensure that new systems provide better support than legacy systems
  • Help drive current vendors towards open systems

How will we deliver serials content through discovery layers?

Reference:

  • “It’s Time to Break the Mold of the Original ILS,” Computers in Libraries, Nov/Dec 2007.

I got skillz and I know how to use them

What I wouldn’t give for a pre-conference workshop on XML or SQL or some programming language that I could apply to my daily work!

Recently, Dorothea Salo was bemoaning the lack of technology skills among librarians. I hear her, and I agree, but I don’t think that the library science programs have as much blame as she wants to assign to them.

Librarianship has created an immense Somebody Else’s Problem field around computers. Unlike reference work, unlike cataloguing, unlike management, systems is all too often not considered a librarian specialization. It is therefore not taught at a basic level in some library schools, not offered as a clear specialization track, and not recruited for as it needs to be. And it is not often addressed in a systematic fashion by continuing-education programs in librarianship.

I guess my program, eight years ago, was not one of those library schools that doesn’t teach basic computer technology. Considering that my program was not a highly ranked program, nor one known for being techie, I’m surprised to learn that we had a one-up on some other library science programs. Not only were there several library tech (and basic tech) courses available, but everyone was required to take at least one computer course to learn hardware and software basics, as well as rudimentary HTML.

That being said, I suspect that the root of Salo’s ire is based in what librarians have done with the tech knowledge they were taught. In many cases, they have done nothing, letting those who are interested or have greater aptitude take over the role of tech guru in their libraries. Those of us who are interested in tech in general, and library tech in specific, have gone on to make use of what we were taught, and have added to our arsenal of skills.

My complaint, and one shared by Salo, is that we are not given very many options for learning more through professional continuing education venues that cover areas considered to be traditional librarian skills. What I wouldn’t give for a pre-conference workshop on XML or SQL or some programming language that I could apply to my daily work!

redirects

I set up my feed on FeedBurner to pick up the index.xml file, and then I stupidly created an .htaccess redirect for that file that sent agents to the FeedBurner feed. Thus, the FeedBurner feed was stuck in a loop and wasn’t being updated. I fixed that just now by creating a copy of the index.xml feed and directing FeedBurner to pick that up instead. Those of you reading this blog via RSS will suddenly have several new posts.

D’oh.

rss for opacs

Anna gets semi-techie about RSS and OPACs.

Yesterday, I was thinking some more about uses for RSS with library OPACs. The idea of having an RSS feed for new books continues to nag me, but without more technical knowledge, I know this is something that I couldn’t make work. Then something clicked, and I called up our library systems administrator to ask him a few questions. As I suspected, our new books list in the OPAC is a text file that is generated by a script that searches the catalog database once a week. I began to ponder what it would take to convert that flat file into XML, and if would it be possible to automate that process.

I grabbed a copy of the flat file from the server and took a look at it, just to see what was there. First off, I realized that there was quite a bit of extraneous information that will need to be stripped out. That could be done easily by hand with a few search & replace commands and some spreadsheet manipulation. So, the easy way out would be to do it all by hand every week. Here* is what I was able to do after some trial and error, working with books added in the previous week.

A harder route would be to put together a program that would take the cleaned up but still raw text file and convert each line into <item> entries, with appropriate fields for <title> (book title), <description> (publication information & location), <category> (collection), etc. This new XML file would replace the old one every week. If I knew any Perl or ColdFusion, I’m certain that I could whip something up fairly quickly.
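
For instance, in PHP (which I’d have to learn just as much as Perl or ColdFusion), the conversion might look something like this, assuming the cleaned-up file is tab-delimited with title, publication info, and collection:

    <?php
    // Hypothetical converter: turn a cleaned-up, tab-delimited new-books list
    // (title, publication info & location, collection) into RSS <item> entries.
    $items = '';
    foreach ( file('newbooks.txt') as $line ) {
        $cols = explode("\t", trim($line));
        if ( count($cols) < 3 ) { continue; } // skip any leftover junk lines
        $items .= "<item>\n"
                . '  <title>' . htmlspecialchars($cols[0]) . "</title>\n"
                . '  <description>' . htmlspecialchars($cols[1]) . "</description>\n"
                . '  <category>' . htmlspecialchars($cols[2]) . "</category>\n"
                . "</item>\n";
    }
    file_put_contents('newbooks.xml',
        "<rss version=\"2.0\"><channel><title>New Books</title>\n" . $items . "</channel></rss>\n");
    ?>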

The ideal option would be to write a program that goes into the catalog daily and pulls out information about new books added and generates the XML file from that. I suspect that it would work similarly to Michael Doran's New Books List program, but would go that extra step of converting the information into RSS-friendly XML.

If anyone knows of some helper programs or if someone out there in library land is developing a program like this, please let me know.

* File is now missing. I think I may have deleted it by accident. 1/13/05

library jobs

Job ads from the Chronicle are available through RSS feeds by category.

I have a keyword search set up that crawls Feedster every day, looking for blog entries using the keywords. Then, those entries get fed into my RSS feed reader for me to browse and view. This morning, I discovered that the job ads for The Chronicle of Higher Education are available through career-specific RSS feeds. If you browse to a particular category, you will find the XML button for that category at the top of the list next to the email notification option. Here is the librarian feed.
