my twitter infographic

It’s a mashup of two of my favorite things — data visualization and social media. Of course I’m going to make one.

The interesting thing is that for some reason I come across as a gamer according to the algorithms. Unless you count solitaire, sudoku, and Words with Friends, I’m not really a gamer at all. The PS2, games, and accessories I bought from my sister last November, which are still sitting in a corner unassembled, are also a testament to how little I game.

Anyway, click on the image to get the full-sized view, and if you make your own, be sure to share the link in the comments.

NASIG 2010: Linked Data and Libraries

Presenter: Eric Miller, Zepheira, LLC

Nowadays, we understand what the web is and the impact it has had on information sharing, but before it was developed, it was in a “vague but exciting” stage and few understood it. When we got started with the web, we really didn’t know what we were doing, but more importantly, the web was being developed so that it was flexible enough for smarter and more creative people to do amazing things.

“What did your website look like when you were in the fourth grade?” Kids are growing up with the web and it’s hard for them to comprehend life without it. [Dang, I’m old.]

This talk will be about linked data, its legacy, and how libraries can lead linked data. We have a huge opportunity to weave libraries into the fabric of the web, and vice versa.

About five years ago, the BBC started making their content available in a service that allowed others to use and remix the delivery of the content in new ways. Rather than developing alternative platforms and creating new spaces, they focus on generating good content and letting someone else frame it. Other sources like NPR, the World Bank, and Data.gov are doing the same sorts of things. Within the library community, these things are happening, as well. OCLC’s APIs are getting easier to use, and several national libraries are putting their OPACs on the web with APIs.

Obama’s open government initiative is another one of those “vague but exciting” things, and it charged agencies to come up with their own methods of making their content available via the web. Agencies are now struggling with the same issues and desires that libraries have been tackling for years. We need to recognize our potential role in moving this forward.

Linked data is a set of best practices for sharing and connecting data on the web using semantic web technologies. Rather than leaving the data in their current formats, let’s put them together in ways they can be used on the wider web. It’s not the databases that make the web possible; it’s the web that makes the databases usable.

Human computation can be put to use in ways that assist computers to make information more usable. Captcha systems are great for blocking automated programs when needed, and by using human computation to decipher scanned text that is undecipherable by computers, ReCaptcha has been able to turn unusable data into a fantastic digital repository of old documents.

LEGOs have been around for decades, and their simple design ensures that new blocks work with old blocks. Most kids end up dumping all of their sets into one bucket, so no matter where the individual building blocks come from, they can be put together and rebuilt in any way you can imagine. We could do the same with our blocks of data, if they are designed well enough to fit together universally.

Our current applications, for the most part, are not designed to allow for the portability of data. We need to rethink application design so that the data becomes more portable. Web applications have, by necessity, had to have some amount of portability. Users are becoming more empowered to use the data provided to them in their own way, and if they don’t get that from your service/product, then they go elsewhere.

Digital preservation repositories are discussing ways to open up their data so that users can remix and mashup data to meet their needs. This requires new ways of archiving, cataloging, and supplying the content. Allow users to select the facets of the data that they are interested in. Provide options for visualizing the raw data in a systematic way.

Linked data platforms create identifiers for every aspect of the data they contain, and these are the primary keys that join data together. Other content that is created can be combined to enhance the data generated by agencies and libraries, but we don’t share the identifiers well enough to allow others to properly link their content.
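
The joining only works if sources actually share identifiers. As a minimal sketch (all URIs here are invented for illustration), two descriptions of the same thing can be merged mechanically once both use the same URI as the key:

```python
# Hypothetical data: two sources describing the same book.
# The shared URI (invented for this sketch) is the primary key that joins them.
library_data = {
    "http://example.org/id/book/42": {"title": "Dracula", "author": "Bram Stoker"},
}
community_data = {
    "http://example.org/id/book/42": {"avg_rating": 4.1, "tags": ["gothic", "vampires"]},
}

def join_on_identifier(*sources):
    """Merge descriptions that share the same identifier."""
    merged = {}
    for source in sources:
        for uri, fields in source.items():
            merged.setdefault(uri, {}).update(fields)
    return merged

combined = join_on_identifier(library_data, community_data)
```

If the community source had minted its own local ID instead of reusing the shared URI, the merge would silently produce two separate records, which is exactly the sharing failure described above.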

Web architecture starts with web identifiers. We can use URLs to identify things other than just documents, but we need to be consistent and we can’t change the URL structures if we want it to be persistent. A lack of trust in identifiers is slowing down linked data. Libraries have the opportunity to leverage our trust and data to provide control points and best practices for identifier curation.

A lot of work is happening in W3C. Libraries should be more involved in the conversation.

Enable human computation by providing the necessary identifiers back to data. Empower your users to use your data, and build a community around it. Don’t worry about creating the best system — wrap and expose your data using the web as a platform.

ER&L 2010: Where are we headed? Tools & Technologies for the future

Speakers: Ross Singer & Andrew Nagy

Software as a service saves the institution time and money because the infrastructure is hosted and maintained by someone else. Computing has gone from centralized, mainframe processing to an even mix of personal computers on a networked enterprise to once again a very centralized environment with cloud applications and thin clients.

Library resource discovery is, to a certain extent, already in the cloud. We use online databases, open web search, WorldCat, and next gen catalog interfaces. The next gen catalog places the focus on the institution’s resources, but it’s not the complete solution. (People see a search box and they want to run queries on it – doesn’t matter where it is or what it is.) The next gen catalog only provides access to local resources, and while its interface looks modern, the back end is still old-school library indexing that doesn’t work well with keyword searching.

Web-scale discovery is a one-stop shop that provides increased access, enhances research, and provides an increased ROI for the library. Our users don’t use Google because it’s Google; they use it because it’s simple, easy, and fast.

How do we make our data relevant when administration doesn’t think what we do is as important anymore? Linked data might be one solution. Unfortunately, we don’t do that very well. We are really good at identifying things but bad at linking them.

If every component of a record is given identifiers, it’s possible to generate all sorts of combinations and displays and search results via linking the identifiers together. RDF provides a framework for this.
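
A minimal sketch of that idea (the URIs and predicate names below are invented, loosely in RDF style): once each component of a record is a (subject, predicate, object) triple, a new display is just a different traversal of the same graph:

```python
# Invented identifiers in RDF triple style: (subject, predicate, object).
triples = [
    ("ex:record/1", "dc:title", "Linked Data and Libraries"),
    ("ex:record/1", "dc:creator", "ex:person/miller"),
    ("ex:person/miller", "foaf:name", "Eric Miller"),
]

def objects(subject, predicate):
    """Return every object matching a subject/predicate pair."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# One possible "display": follow the creator link out to a name.
creator_id = objects("ex:record/1", "dc:creator")[0]
creator_name = objects(creator_id, "foaf:name")[0]
```

A real system would use an RDF store and SPARQL rather than list comprehensions, but the linking mechanism is the same.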

Also, once we start using common identifiers, then we can pull in data from other sources to increase the richness of our metadata. Mashups FTW!

IL2009: Mashups for Library Data

Speaker: Nicole Engard

Mashups are easy ways to provide better services for our patrons. They add value to our websites and catalogs. They promote our services in the places our patrons frequent. And, it’s a learning experience.

We need to ask our vendors for APIs. We’re putting data into our systems, so we should be able to get it out. Take that data and mash it up with popular web services using RSS feeds.
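
As a rough sketch of the glue involved (the feed XML below is a stand-in, not any real vendor’s output), pulling titles and links out of an RSS feed takes only the standard library:

```python
import xml.etree.ElementTree as ET

# Stand-in RSS 2.0 feed; a vendor API with RSS support would return
# something shaped like this.
feed_xml = """<rss version="2.0"><channel>
  <item><title>New arrivals: March</title><link>http://example.org/new/march</link></item>
  <item><title>Staff picks</title><link>http://example.org/picks</link></item>
</channel></rss>"""

def items_from_rss(xml_text):
    """Extract (title, link) pairs from an RSS 2.0 document."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

items = items_from_rss(feed_xml)
```

From there, mashing up means combining these (title, link) pairs with whatever other web service you like.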

Yahoo Pipes allows you to pull in many sources of data and mix them up to create something new, with a clean, flow-chart-like interface. Don’t give up after your first try. Jody Fagan wrote an article in Computers in Libraries that inspired Engard to go back and try again.

Reading Radar takes the NYT bestseller lists and merges them with data from Amazon to display more than just sales information (ratings, summaries, etc.). You could do that, but instead of having users go buy the book, link it to your library catalog. The New York Times has opened up a tremendous amount of content via APIs.

Bike Tours in CA is a mashup of Google Maps and ride data. Trulia, Zillow, and HousingMaps use a variety of sources to map real estate information. This We Know pulls in all sorts of government data about a location. Find more mashups at ProgrammableWeb.

What mashups should libraries be doing? First off, if you have multiple branches, create a Google Maps mashup of library locations. Share images of your collection on Flickr and pull that into your website (see Access Ceramics), letting Flickr do the heavy lifting of resizing the images and pulling content out via machine tags. Delicious provides many options for creating dynamically updating lists with code snippets to embed them in your website.
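
To sketch the machine-tag part (the API key and the tag namespace below are placeholders; check Flickr’s current documentation for `flickr.photos.search` before relying on the parameter names), a search for machine-tagged collection photos is just a query string:

```python
from urllib.parse import urlencode

def flickr_machine_tag_search_url(api_key, machine_tag):
    """Build a flickr.photos.search request URL filtered by a machine tag.

    api_key is a placeholder; the machine-tag namespace is whatever your
    library chose when tagging its collection photos."""
    params = {
        "method": "flickr.photos.search",
        "api_key": api_key,
        "machine_tags": machine_tag,
        "format": "json",
        "nojsoncallback": 1,
    }
    return "https://api.flickr.com/services/rest/?" + urlencode(params)

url = flickr_machine_tag_search_url("YOUR_API_KEY", "ceramics:artist=smith")
```

Fetching that URL (and paging through the JSON results) is left to whatever HTTP client you prefer.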

OPAC mashups require APIs, preferably those that can generate JavaScript, and finally you’ll need a programmer if you can’t get the information out in a way you can easily use it. LexisNexis Academic, WorldCat, and LibraryThing all have APIs you can use.

Ideas from Librarians: Mashup travel data from circulation data and various travel sources to provide better patron services. Grab MARC location data to plot information on a map. Pull data about media collection and combine it with IMDB and other resources. Subject RSS feeds from all resources for current articles (could do that already with a collection of journals with RSS feeds and Yahoo Pipes).

Links and more at her book website.

Learning 2009: Keynote

Speaker: Bryan Alexander

He is interested in how social media is used to disseminate information. Shortly after the CDC set up a Twitter account, many folks started following it for updates. Many people and organizations created Google Maps mashups of incidents of H1N1. Alexander gathered examples of the variety of responses, and he doesn’t think that any institution in higher education is prepared to discuss or teach this use of social media and how to critically respond to it.

Web 2.0 Bullshit Generator:
1. Click the button.
2. Watch the bullshit appear in the box.

Twitter has taken off among an unusual demographic for social media: adults with jobs. The news of the plane that landed in the Hudson was scooped by a Twitter user. It’s now one out of many news sources, and soon there will be better ways of aggregating news information that includes it. The number of individuals arrested for blogging (or microblogging like Twitter) has gone up dramatically in recent years. These tools are important.

LinkedIn: least sexy social media site on the net. However, they are making a profit! Regardless of how spiffy it could be, people are still using it.

Scott Sigler shout-out! Future Dark Overlord gets a mention for being the first podcast novelist to break the NYT bestseller list.

Recommended reading — The Wealth of Networks: How Social Production Transforms Markets and Freedom by Yochai Benkler

photo of Bryan Alexander by Tom Woodward

Before Web 2.0, you had to know HTML, have FTP access, and server space somewhere. The learning curve was high. With Web 2.0, it’s easy to create, publish, and share microcontent from a variety of free or open sources. The learning curve is much lower — barriers to access are torn down in favor of collaboration and information dissemination.

2.0 conversations are networked across many sites, not just in one or two locations like 1.0 or print. The implications for how we teach students are huge!

Mashups are great ways to take data or textual information and create visual representations of them that enhance the learning process. For example, Lewis & Clark University created a Google Maps mashup of the locations of the potters in their contemporary American pottery collection. This map shows groupings that the text or images of the pottery does not easily convey.

Alexander used the blog format to publish a version of Stoker’s Dracula, which was easily adaptable to the format. It took little time, since he had the text in a document file already (he was preparing an annotated version for print). This brought interested readers and scholars out of the woodwork, including many experts in the field of Dracula research, who left comments with additional information on the entries.

If you’re not using technology in teaching, you’re not Luddite — you’re Amish.

According to Google Labs’ Trends tool, “Web 2.0” is going down as a search term. That doesn’t mean it’s going away. Rather, it means that it’s becoming “normal” and no longer a new technology.

The icon for computing used to be the desktop, then it became the laptop. Now it has exploded. There are many devices all over the map, from pocket size to much larger. Wireless means nothing anymore — it’s defining something by what it is not, and there are a heck of a lot of things that are not “wired.”

Mobile computing is not a panacea — there are problems. The devices are too small to do serious editing of video or audio. The interfaces are difficult for many users to do much more than basic things with them.

Information on demand at one’s fingertips is challenging for pedagogy. Students can be looking up information during lectures and possibly challenging their teachers with what they have found. Backchannel conversations can either enhance or derail classroom conversations, depending on how they are managed by the presenters, but one main advantage is that it increases participation from those who are too shy to speak.

The pedagogical aspects of video games are finally making their way into higher education scholarship and practice. The gaming industry is currently more profitable than the movie or music industries. We need to be paying attention to how and what games are teaching our students.

thing 5 & 6: flickr

So, we’ve been doing this thing at work following the Learning 2.0 model that Helene Blowers developed for the Public Library of Charlotte and Mecklenburg County a few years ago. We’re on week three and thing five & six, which involve posting something on Flickr (been doing that since March 2005), adding it to the pool, and then blogging about the experience. Alternatively, participants can find an interesting photo on Flickr and include it in a blog entry (with proper attribution, of course).

For me, doing the basics was nothing new, so that part was… well… boring. However, it leads into thing six, which is to explore Flickr mashups. I made this using the Spell with Flickr tool:
[A string of Spell with Flickr letter images spelling out “eclectic librarian”]

I have played with Tag Galaxy before, and I have an old Librarian Trading Card. I decided to make a new card, and I’ve bookmarked the color pickr for later use.

For my fellow TechLearners and anyone else out there who cares, I suggest you don’t use Flickr’s Uploadr if you need to upload a bunch of images (as in, more than 10). Something broke with version 3.0 and more often than not I get upload errors when I try to use it. I have not tried it on the Mac, so it’s possible that my problems are Windows-specific.

CiL 2008: Going Local in the Library

Speaker: Charles Lyons

[Speakers for track C will all be introduced in haiku form.]

Local community information has been slower to move online than more global information provided by sources such as search engines and directories, but that is changing. Google can provide directory information, but they can’t tell you which of the barbers listed near you, for example, are good ones. They’re trying, but it’s much harder to gather and index that information. “In ur community, informin ur localz.” The local web can be something deeper, hyper, semantic.

Local information can sound boring if it doesn’t affect you directly. For example, information about street repairs can be important if it is happening along the routes you drive regularly. The local web connects the real world and the virtual world. The local web is bringing a sense of place to the Internet.

Libraries provide access to local information such as genealogy, local history, local government info, etc.

Local search engines started off as digital phone books, but now they are becoming more integrated with additional information such as maps and user reviews. Ask.com provides walking directions as well as driving directions, which I did not know but plan to make use of in the future. By using tools like Google Custom Search, libraries are creating local search engines, rather than just having a page of local links. MyCommunityInfo.ca is a popular search engine for residents of London, Ontario.

Local blogs also provide information about communities, so creating a local blog directory might be useful. Be sure to add them to your local search engine. Local news sites blend user-generated information with traditionally published sources. Useful news sites will allow users to personalize them and add content. Libraries are involved in creating local online communities (see Hamilton Public Library in Ontario).

Local data is being aggregated by sites like EveryBlock, which pulls information from the deep web. It’s currently available in three cities (Chicago, New York, & San Francisco) with plans for expansion, and once the grant ends, the code will be opened to anyone.

Wikipedia is a start for providing local information, and a few libraries are using wiki technology to gather more detailed local information.

Metadata such as geotagging allows more automation for gathering locally relevant information. Flickr provides geographic feeds, which can be streamed on local information sites.

Libraries are using Google Maps mashups to indicate their locations, but could be doing more with it. Libraries could create maps of historical locations and link them to relevant information sites.

No successful revenue model has been formulated for local information sites. Most local sites are created by passionate individuals. Libraries, which are not revenue-generating sources anyway, are better poised to take on the responsibility of aggregating and generating local information, particularly since we’re already in the role of information provision.

Libraries can be the lens into local information.

acrl northwest 2006 – day one


“The Emerging Youth Literacy Landscape of Joy” -Dr. Anthony Bernier (San Jose State University)

New Youth Literacies

  • state of current research
    • research shifted from what young people knew to how they knew it
    • young people learn bibliographic skills differently from adults
    • as a result, pedagogy itself must become more flexible
    • ethnographic research can help us
  • gaps in research
    • students are reduced to one-dimensional themes
    • information seeking is individual
    • games structure and play can inform us about youth information seeking
    • young people are viewed only as information consumers
  • libraries need to be asking why questions about young people’s information-seeking choices
  • new paths for research
    • consider the daily life of young people
    • email is now just a quaint way to communicate with old people
    • New Youth Literacy – young people as literacy producers
      • fugitive literacy produced in small lots, non-sequential, and non-serial; using all forms of media – ephemera
      • Berkeley High School Slang Dictionary, 2002
    • Information futures and young people
      • emerging technologies for education – The Horizon Report 2006 Edition – collaboration and social computing needs to be embraced by university libraries – IM reference, Flickr, Skype, pod/webcasting, etc.
      • future challenges
        • intellectual property
        • continuing information literacy skills
        • technical support

“A Sensible Approach to New Technologies in Libraries: How do you work Library 2.0 into your 1.5 library with your 1.23 staff and your .98 patrons?” – Jessamyn West
http://librarian.net/talks/acrl-or

  • It isn’t about being expert on the latest and greatest, it’s about being flexible enough to learn the technologies you and your patrons use.
  • Smart people read the manual – knowing how to use tools to solve your problems is almost the same as solving them on your own.
  • In the end, it’s what you want out of your computer.
  • Web 2.0: “Your cats have profiles on Catster.”
  • Library 2.0 is a service philosophy: being willing to try new things and constantly evaluating your services – look outside the library world to find solutions to internal problems – the Read/Write Web
  • Librarian 2.0: not being the bottleneck between patrons and the information they want
  • Email is for talking to your colleagues.
  • Technocracy lives in chat.
  • “Internet interprets censorship as damage and routes around it.” — so do our users
  • “Blogs are like courseware, only easy to use.”
  • “Pew reports are like crack to librarians.”
  • It doesn’t matter if you think Wikipedia is good or bad. The reality is that’s where the eyeballs are.
  • Open APIs allow people to do nerdy type of stuff – mashups turn nifty things into tools you use for work.
  • People who have broadband connections are the ones interacting with the internet, and web-based tools are being created for them, not for dialup people.

I really liked this talk. Jessamyn is an engaging speaker.


“Web 2.0 Is the Web” or “We’re All Millennials Now” – Rachel Bridgewater
del.icio.us tag “menucha06”

  • “born digital people”
  • Match the tool to the job – you can learn how to use them, so the question is do you need it?
  • How does Web 2.0 affect scholarship? It is sort of the original vision of what the web would be – everyone is a publisher and information is shared freely.
  • What is 2.0 for librarians?
    • web as platform
    • radical openness: open source, open standards (API, etc.)
    • flattened hierarchy
    • user focused
    • micro-content: blog post as unit of content; atomization of content
  • Web 1.0 is a framework based on the print world – the NetGens don’t need it

Web 2.0 that enhances library stuff

  • Social bookmarks can be constantly evolving bibliographies.
  • Blogs are a platform for sharing scholarly ideas that are not developed as a part of complex papers or monographs, and they allow for more immediate discourse.
  • Networked books (Library Journal article about the social book) – how do they affect our ideas of authorship when they can be created and contributed to by anonymous writers via wikis and other similar tools? See Lawrence Lessig’s book Code. Does canon mean anything anymore?
  • Peer review – can it be replaced by real-time peer review through comments and/or wiki edits? “open peer review”
  • Open data – using distributed computing networks to crunch numbers – more than just searching for aliens. Link to the raw data from the online journal article. Libraries could/should be the server repositories.

Maybe we should be listening to our patrons to find out where information is going. Maybe Wikipedia is the future. Instead of saying that our databases are like the Reader’s Guide, we should be saying they’re like Wikipedia, only created by known scholars and proven to be authoritative.

updated to fix the tweaky code — didn’t have time to do it until now — sorry!

la la


A few weeks ago, I heard about a new online service that’s sort of like a mashup of Netflix and a swap meet. Members can list CDs they’re willing to swap on la la, along with their wishlist. The system alerts members when one of their available CDs is wanted by another member, at which point they have the choice of sending it to the member in the pre-paid envelope provided by la la, or declining. You can receive one CD for free, but if you want more it’ll cost $1 + $0.49 for shipping. Not bad considering that it costs $0.99+tax for just one song on iTunes. As long as you are sending as much as you’re receiving, you get new music for cheap. So far, I’ve sent out two CDs with no problems and I’m waiting on my first request to arrive. The only downside is that it’s all dependent on what other members are willing to swap. It may cost more to buy a new or used CD elsewhere (or download a bunch of iTunes songs), but you get them as soon as you buy them, and sometimes that’s worth paying for.
