overloading the ‘net

Will RSS feeds overload the ‘net?

Wired News has a short article about RSS feed readers and the potential they have for increasing web traffic. I knew about this article because it was listed in the RSS feed that I get from Wired. Go figure. Anyway, the author and others are concerned that because aggregators are becoming more and more popular among those who like to read regularly published electronic content, a large chunk of web traffic will eventually consist of desktop aggregators downloading that data over and over throughout the day.

The trouble is, aggregators are greedy. They constantly check websites that use RSS, always searching for new content. Whereas a human reader may scan headlines on The New York Times website once a day, aggregators check the site hourly or even more frequently.

If all RSS fans used a central server to gather their feeds (such as Bloglines or Shrook), there wouldn’t be as much traffic, because these services check each feed once per hour at most, regardless of the number of subscribers. So if you have 100 people subscribed to your feed, rather than getting 100 hits every hour (or at whatever rate their aggregators poll), you would only get one. The article notes two difficulties with this scenario. First, a lot of RSS fans prefer their desktop aggregators to a web-based aggregator such as Bloglines. Second, the Shrook aggregator is not free, and its competitors will probably follow the same model.

I don’t completely agree with the premise that having a central server distribute content to feed subscribers will reduce traffic on the ‘net much below its current level. Whether my aggregator checks my feeds once an hour or Bloglines does it for me, I still use up bandwidth when I log in and read the content on the Bloglines site. For some feeds, if I want to read the whole entry or article, I still have to click through to the site. Frankly, I think the problem has more to do with aggregators that “are not complying with specifications that reduce how often large files are requested.”

Readers are supposed to check if the RSS file has been updated since the last visit. If there has been no update, the website returns a very small “no” message to the reader.

But Murphy says the programs often don’t remember when they last checked, or use the local computer’s clock instead of the website’s clock, causing the reader to download entries over and over.
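What’s presumably being described here is HTTP’s conditional GET. As a rough illustration (my own sketch, not anything from the article), here is what a well-behaved fetcher looks like in Python: it echoes the server’s own ETag and Last-Modified values back on the next poll, instead of trusting the local clock, so an unchanged feed costs only a tiny 304 reply. The feed URL and the cache dictionary are just placeholders.

    # Sketch of a polite feed fetcher using HTTP conditional GET.
    # The client stores the server's validators (ETag / Last-Modified)
    # and sends them back, so the server can answer "not modified" (304)
    # without resending the whole file.
    import urllib.request
    import urllib.error

    def fetch_feed(url, cache):
        """Fetch url, reusing cached validators so unchanged feeds return 304."""
        request = urllib.request.Request(url)
        if url in cache:
            etag, last_modified = cache[url]
            if etag:
                request.add_header("If-None-Match", etag)
            if last_modified:
                # Echo the server's own timestamp, not the local clock.
                request.add_header("If-Modified-Since", last_modified)
        try:
            with urllib.request.urlopen(request) as response:
                # Remember the server's validators for the next poll.
                cache[url] = (response.headers.get("ETag"),
                              response.headers.get("Last-Modified"))
                return response.read()   # full feed, only when it changed
        except urllib.error.HTTPError as err:
            if err.code == 304:
                return None              # the small "no update" answer
            raise

    # Hypothetical usage: poll hourly; most polls cost only a 304 status line.
    cache = {}
    new_content = fetch_feed("https://example.com/feed.rss", cache)

If every desktop aggregator behaved roughly like this, hourly polling would be far cheaper for the sites being polled, whether or not a central server sits in the middle.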

Perhaps the best thing we can do is educate ourselves about the RSS aggregator we use and how it may affect the bandwidth of the sites whose feeds we download through it.
