thing 9: finding RSS feeds

Part of why I have so many RSS feeds in my reader (234 at the moment; I picked up three more this week) is that it is so easy to subscribe to things I run across in my day-to-day online activity. I’m currently using the Better GReader plugin for Firefox, which compiles some of the best Greasemonkey scripts for Google Reader. One thing I really like about it is the “Auto Add to Reader (Bypass iGoogle Choice)” feature, which saves me a few clicks.

This particular assignment asks us to make use of directories like Technorati and Feedster to locate feeds we want to subscribe to. I’m not going to do that, since I already have more to read than I have time for. In any case, those tools have not been particularly useful to me in the past. I tend to find new feeds through links from the ones I’m currently reading.

thing 8: RSS

The first part of the assignment is to set up a feed reader. I’ve used a variety of feed readers, from desktop readers to online readers, and by far I prefer the online readers. The mobility alone makes them a winner, since I read feeds using several different computers. Here’s my current OPML file, which has been slightly edited and reorganized for public consumption (i.e. you don’t need to know about my ego feeds).
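For anyone who has never peeked inside one, an OPML file is just a small XML document: each subscription is an outline element carrying the feed’s URL, and folders are outlines wrapped around other outlines. Here’s a minimal sketch of reading one in Python (the file name is made up for illustration, not the file linked above):

    # Rough sketch: list the subscriptions in an OPML export.
    # "subscriptions.opml" is a hypothetical file name.
    import xml.etree.ElementTree as ET

    tree = ET.parse("subscriptions.opml")

    # Every <outline> with an xmlUrl attribute is a feed subscription;
    # outlines without one are just the folders that group feeds together.
    for outline in tree.iter("outline"):
        feed_url = outline.get("xmlUrl")
        if feed_url:
            print(outline.get("title") or outline.get("text"), feed_url)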

Over the years, I have had to cull my feeds periodically. There are several news sites and blogs that I would love to be able to keep up with, but I don’t have the time to process the volume of content they generate on a daily basis. Currently, I have about 231 subscriptions, several of which point to dead feeds that I haven’t cleaned out yet.

I am perpetually behind on reading all of my subscriptions. There are a few that I hit regularly, but the rest are saved for times when I need to take my mind off of whatever problem I am working on at the moment. With this many feeds, RSS is a time-shifting or bookmarking tool, and I’m okay with that. Twitter has become my source for the latest OMG news.

nasig 2008

I am getting ready to fly out to Phoenix early (too early) tomorrow morning for the NASIG annual conference (and executive board meeting). The conference begins on Thursday, but my session blogging probably won’t start until Friday. Posts will be erratic and coming in several at once, most likely, because I won’t be able to upload them until I’m back in my room. We’d like to have free wifi in the conference area, but the Hilton charges more than it costs to fill your gas tank and then some, which is well beyond what this intimate conference can afford to provide.

If you’d like to see what others have to say about NASIG 2008, be sure to check out our nifty little Netvibes page. Kudos to Steve Lawson, who inspired me to put that together this year. If you are attending the conference and plan to blog or post photos on Flickr, be sure to use the nasig2008 tag!

blogs are old skool?

I read a post in The Chronicle of Higher Education blog that declared that the end of blogs is near. Perhaps, but I think we have a few more months at least.

One of the tools that the writer points to is Shyftr, which looks like it could be as cool an RSS reader as Google Reader, and as handy a comment aggregator as coComment, but all in one place. Unfortunately, they don’t (yet) have a way to import an OPML file, so I’ll be leaving my nearly 250 feeds in Google Reader for now.

Eric Berlin, the Online Media Cultist, has some interesting things to say about Shyftr and its ilk.

CiL 2008: What’s Hot in RSS & Social Software

Speaker: Steven M. “I’m just sayin'” Cohen

[More links to cool stuff that I did not include can be found at the presentation wiki linked above.]

Google Reader is now more popular than Bloglines, which Cohen thinks has to do with the amount of money that Google can sink into it. Both have tools that tell you how many people are subscribed to or reading a given feed, which can be helpful in convincing administrators to support the use of RSS feeds from various sources. Offline feed readers don’t make much sense, since so often the things you are reading will direct you to other sources online.

If you’re not using Google Reader, do it now.

No, really. Steven says to do it.

Google + Feedburner = advertisements on your feeds, which means that they are now revenue generating, like the ads on your website. RSS is no longer sucking away your revenue source, so get over it and add feeds for your content! Plus, anyone using Page2RSS can scrape your content and turn it into a feed, so really, you should give them something that benefits you, too.

LibWorm is a site that indexes library-related blogs and news sources, and it provides RSS feeds, so use it for keeping current if you’re not already doing so.

Follow what’s being twittered on your topic of choice using TweetScan. Follow all of your friends’ online activities at FriendFeed (notifications once a day, which seems just infrequent enough that I might actually use it).

Go check out his top ten eleven twelve favorite tools. They’re all really cool and worth playing with.

rss aggregator

I have been using Feed on Feeds as my RSS aggregator for the past month, but I have decided to go back to Bloglines. I liked the clean lines of Feed on Feeds, as well as the ability to host my feeds on my own website. However, it uses Magpie RSS to parse the feeds, and it can be quite persnickety if a feed does not completely validate. That limited the feeds I could track and caused headaches every time I tried to update them. I also couldn’t get the silent update feature to work; I tweaked my crontab file until I was blue in the face, but nothing worked. Overall, Bloglines requires less maintenance and causes fewer headaches on my part. Feed on Feeds has great potential, but for now, I will give it some time to mature.
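For anyone fighting the same battle, this is roughly what I was trying to set up: a cron job that quietly triggers the Feed on Feeds update page on a schedule. Here is a sketch of the idea as a small Python script; the URL, script path, and schedule are my assumptions about a typical install, not a known-good recipe:

    # Sketch of the silent-update idea: cron runs this script on a schedule,
    # and the script asks Feed on Feeds to refresh its subscriptions.
    # The URL below is a guess at a typical install path; point it at
    # wherever your copy actually lives.
    #
    # Example crontab entry (every 30 minutes):
    #   */30 * * * * /usr/bin/python3 /home/me/bin/update_feeds.py
    import urllib.request

    UPDATE_URL = "https://example.com/feed-on-feeds/update.php"  # hypothetical

    try:
        with urllib.request.urlopen(UPDATE_URL, timeout=120) as response:
            # Print whatever the update page reports so cron's mail shows
            # which feeds refreshed and which ones failed to parse.
            print(response.read().decode("utf-8", errors="replace"))
    except Exception as exc:
        print("update failed:", exc)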

bloglines irony

You’d think that a feed aggregator would have a feed for its own newsletter.

Bloglines has announced that they have a new newsletter to “help inform you and provide a glimpse into the different ways people are using the service.” I found great irony in the following paragraph from the announcement:

You can choose to receive the newsletter via email or simply stay subscribed to Bloglines News, and we’ll let you know when each issue is posted.

I guess having an RSS feed for the newsletter and its contents would make too much sense.

journal feeds

Woo-hoo! More RSS feeds for online journals!

I was excited to read Karen Coombs’ entry about IngentaConnect-supplied RSS feeds for journal tables of contents. I’ve already created a new Bloglines folder for Journals, moved some journal TOC feeds over from the Library folder, and added a few new ones from the IngentaConnect list. Unfortunately, it appears that the Elsevier journals (no surprise here) do not have TOC RSS feeds. I guess I’ll have to keep up with Serials Review the old-fashioned way: by checking the TOC online every three months.

marketplace feed

Public radio is catching on to the RSS feed craze.

Yesterday I discovered that one of my favorite news programs on public radio, Marketplace, has an RSS feed for their daily programs. Each story gets its own entry with a brief description. It’s not as slick as their daily newsletter email, but it’s much more functional for feed readers/aggregators.

National Public Radio (NPR) already has a variety of RSS feeds to choose from, including individual non-news-based programs, as well as local feeds from a handful of member stations. It’s been a while since I checked out their offerings, and I was surprised to see all of the new feeds. I ended up subscribing to several.

overloading the ‘net

Will RSS feeds overload the ‘net?

Wired News has a short article about RSS feed readers and the potential they have for increasing web traffic. I knew about this article because it was listed in the RSS feed that I get from Wired. Go figure. Anyway, the author and others are concerned that because aggregators are becoming more and more popular among those who like to read regularly published electronic content, eventually a large chunk of web traffic will consist of desktop aggregators regularly downloading that data throughout the day.

The trouble is, aggregators are greedy. They constantly check websites that use RSS, always searching for new content. Whereas a human reader may scan headlines on The New York Times website once a day, aggregators check the site hourly or even more frequently.

If all RSS fans used a central server to gather their feeds (such as Bloglines or Shrook), there wouldn’t be as much traffic, because these services check each feed once per hour at most, regardless of the number of subscribers. So, if you have 100 people subscribed to your feed, rather than getting 100 hits every hour (or at some other frequency), you would get only one. The article notes two difficulties with this scenario. First, a lot of RSS fans prefer their desktop aggregators to a web-based aggregator such as Bloglines. Second, the Shrook aggregator is not free, and that will probably be the model its competitors take.

I don’t completely agree with the premise that having a central server distribute content to feed subscribers will do much to reduce the flow of traffic on the ‘net. Whether my aggregator checks my feeds once an hour or Bloglines does it for me, I still use up bandwidth when I log in and read the content on the Bloglines site. For some feeds, if I want to read the whole entry or article, I still have to click through to the site. Frankly, I think the problem has more to do with aggregators that “are not complying with specifications that reduce how often large files are requested.”

Readers are supposed to check if the RSS file has been updated since the last visit. If there has been no update, the website returns a very small “no” message to the reader.

But Murphy says the programs often don’t remember when they last checked, or use the local computer’s clock instead of the website’s clock, causing the reader to download entries over and over.

Perhaps the best thing for us to do is to educate ourselves about the RSS aggregator we use and how it may affect the bandwidth of the sites whose feeds we download through it.
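To put the behavior described above in concrete terms: a well-behaved reader remembers the Last-Modified value from its previous fetch and sends it back as an If-Modified-Since header, and when nothing has changed the server answers with a tiny 304 response instead of the whole file. Here is a rough sketch of that exchange; the feed URL is just a placeholder, and a real aggregator would also honor ETags and back off on errors:

    # Minimal sketch of a polite feed fetch using a conditional GET.
    import urllib.error
    import urllib.request

    FEED_URL = "https://example.com/feed.xml"  # placeholder

    def fetch_feed(last_modified=None):
        """Return (body, last_modified); body is None when nothing changed."""
        request = urllib.request.Request(FEED_URL)
        if last_modified:
            # Ask for the feed only if it changed since the last fetch.
            request.add_header("If-Modified-Since", last_modified)
        try:
            with urllib.request.urlopen(request, timeout=30) as response:
                return response.read(), response.headers.get("Last-Modified")
        except urllib.error.HTTPError as err:
            if err.code == 304:
                # The small "no" answer: nothing new, nothing re-downloaded.
                return None, last_modified
            raise

    # The first poll downloads the feed; later polls reuse the stored timestamp.
    body, stamp = fetch_feed()
    body, stamp = fetch_feed(stamp)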
