The first part of the assignment is to set up a feed reader. I’ve used a variety of feed readers, from desktop readers to online readers, and by far I prefer the online readers. The mobility alone makes them a winner, since I read feeds using several different computers. Here’s my current OPML file, which has been slightly edited and reorganized for public consumption (i.e. you don’t need to know about my ego feeds).
Over the years, I have had to cull my feeds periodically. There are several news sites and blogs that I would love to keep up with, but I don’t have the time to process the volume of content they generate on a daily basis. Currently, I have about 231 subscriptions, several of which are dead feeds that I haven’t cleaned out yet.
I am perpetually behind on reading all of my subscriptions. There are a few that I hit regularly, but the rest are saved for times when I need to take my mind off of whatever problem I am working on at the moment. With this many feeds, RSS is a time shifting or bookmarking tool, and I’m okay with that. Twitter has become my source for the latest OMG news.
Will RSS feeds overload the ‘net?
Wired News has a short article about RSS feed readers and their potential for increasing web traffic. I knew about this article because it was listed in the RSS feed that I get from Wired. Go figure. Anyway, the author and others are concerned that because aggregators are becoming more and more popular among those who like to read regularly published electronic content, eventually a large chunk of web traffic will consist of desktop aggregators repeatedly downloading that data throughout the day.
The trouble is, aggregators are greedy. They constantly check websites that use RSS, always searching for new content. Whereas a human reader may scan headlines on The New York Times website once a day, aggregators check the site hourly or even more frequently.
If all RSS fans used a central server to gather their feeds (such as Bloglines or Shrook), then there wouldn’t be as much traffic, because these services check feeds once per hour at most, regardless of the number of subscribers. So, if you have 100 people subscribed to your feed, rather than getting 100 hits every hour (or at some other frequency), you would only get one. The article notes two difficulties with this scenario. First, a lot of RSS fans prefer their desktop aggregators to a web-based aggregator such as Bloglines. Second, the Shrook aggregator is not free, and that will probably be the model its competitors adopt.
I don’t completely agree with the premise that having a central server distribute content to feed subscribers will reduce the flow of traffic on the ‘net. Whether my aggregator checks my feeds once an hour or Bloglines does it for me, I still use up bandwidth when I log in and read the content on the Bloglines site. For some feeds, if I want to read the whole entry or article, I still have to click through to the site. Frankly, I think the problem has more to do with aggregators that “are not complying with specifications that reduce how often large files are requested.”
Readers are supposed to check if the RSS file has been updated since the last visit. If there has been no update, the website returns a very small “no” message to the reader.
But Murphy says the programs often don’t remember when they last checked, or use the local computer’s clock instead of the website’s clock, causing the reader to download entries over and over.
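To make the mechanism concrete: what the article describes is HTTP’s conditional GET. A well-behaved reader stores the Last-Modified value the server sent on its previous poll and echoes it back as If-Modified-Since; the server can then answer with a tiny 304 Not Modified instead of the full feed. Here’s a minimal sketch in Python of that bookkeeping (the function names and User-Agent string are my own invention for illustration):

```python
def build_poll_headers(cached_last_modified):
    """Build request headers for a conditional GET.

    cached_last_modified is the Last-Modified value the server sent on
    the previous poll, or None if we have never fetched this feed.
    """
    headers = {"User-Agent": "polite-reader/1.0"}
    if cached_last_modified is not None:
        # This is the part misbehaving aggregators skip: they forget
        # when they last checked, so the server re-sends the whole feed.
        headers["If-Modified-Since"] = cached_last_modified
    return headers


def handle_response(status, response_last_modified, cached_last_modified):
    """Decide whether to download the feed body.

    Returns (should_download, value_to_cache_for_next_poll).
    A 304 status is the server's small "no" message; note we trust the
    server's clock (its Last-Modified header), never the local clock.
    """
    if status == 304:
        return False, cached_last_modified
    return True, response_last_modified or cached_last_modified
```

The key point matches Murphy’s complaint: the reader must remember the server’s own timestamp verbatim rather than comparing against the local computer’s clock, otherwise every poll downloads the same entries over and over.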
Perhaps the best thing we can do is educate ourselves about the RSS aggregator we use and how it affects the bandwidth of the sites whose feeds we download through it.