rss aggregator

I have been using Feed on Feeds as my RSS aggregator for the past month, but I have decided to go back to using Bloglines. I liked the clean lines of Feed on Feeds, as well as the ability to host my feeds on my own website. However, it uses Magpie RSS to parse the feeds, and Magpie can be quite persnickety if a feed does not completely validate. This limited the feeds I could track and caused headaches every time I tried to update them. Also, I couldn’t get the silent update feature to work; I tweaked my crontab file until I was blue in the face, but nothing worked. Overall, Bloglines requires less maintenance and causes fewer headaches on my part. Feed on Feeds has great potential, but for now, I will give it some time to mature.

bloglines irony

You’d think that a feed aggregator would have a feed for its own newsletter.

Bloglines has announced a new newsletter to “help inform you and provide a glimpse into the different ways people are using the service.” I found great irony in the following paragraph from the announcement:

You can choose to receive the newsletter via email or simply stay subscribed to Bloglines News, and we’ll let you know when each issue is posted.

I guess having an RSS feed for the newsletter and its contents would make too much sense.

journal feeds

Woo-hoo! More RSS feeds for online journals!

I was excited to read Karen Coombs’ entry about IngentaConnect-supplied RSS feeds for journal tables of contents. I’ve already created a new Bloglines folder for Journals, moved some journal TOC feeds over from the Library folder, and added a few new ones from the IngentaConnect list. Unfortunately, it appears that the Elsevier journals (no surprise here) do not have TOC RSS feeds. I guess I’ll have to keep up with Serials Review the old-fashioned way: by checking the TOC online every three months.

overloading the ‘net

Will RSS feeds overload the ‘net?

Wired News has a short article about RSS feed readers and their potential for increasing web traffic. I knew about this article because it was listed in the RSS feed that I get from Wired. Go figure. Anyway, the author and others are concerned that as aggregators become more and more popular among those who like to read regularly published electronic content, a large chunk of web traffic will eventually consist of desktop aggregators downloading that data over and over throughout the day.

The trouble is, aggregators are greedy. They constantly check websites that use RSS, always searching for new content. Whereas a human reader may scan headlines on The New York Times website once a day, aggregators check the site hourly or even more frequently.

If all RSS fans used a central server to gather their feeds (such as Bloglines or Shrook), there wouldn’t be as much traffic, because these services check each feed at most once per hour, regardless of the number of subscribers. So, if 100 people subscribe to your feed, rather than getting 100 hits every hour (or at some other frequency), you would get only one. The article notes two difficulties with this scenario. First, many RSS fans prefer their desktop aggregators to a web-based aggregator such as Bloglines. Second, the Shrook aggregator is not free, and that will probably be the model its competitors take.

I don’t completely agree with the premise that having a central server distribute content to feed subscribers will reduce traffic on the ‘net. Whether my aggregator checks my feeds once an hour or Bloglines does it for me, I still use up bandwidth when I log in and read the content on the Bloglines site. For some feeds, if I want to read the whole entry or article, I still have to click through to the site. Frankly, I think the problem has more to do with aggregators that “are not complying with specifications that reduce how often large files are requested.”

Readers are supposed to check if the RSS file has been updated since the last visit. If there has been no update, the website returns a very small “no” message to the reader.

But Murphy says the programs often don’t remember when they last checked, or use the local computer’s clock instead of the website’s clock, causing the reader to download entries over and over.
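What the article is describing is HTTP’s conditional GET: the reader remembers the Last-Modified value the server sent with the previous copy of the feed and echoes it back on the next request, and the server answers with a tiny 304 “Not Modified” response when nothing has changed. Here is a minimal sketch of a well-behaved fetch in Python (the feed URL is a placeholder, and a real aggregator would also honor ETag headers):

import urllib.error
import urllib.request

FEED_URL = "http://example.com/feed.xml"  # hypothetical feed address
last_modified = None  # Last-Modified header from the previous fetch

def fetch_feed():
    """Download the feed only if it has changed since the last check."""
    global last_modified
    request = urllib.request.Request(FEED_URL)
    if last_modified:
        # Echo the server's own timestamp back, rather than trusting
        # the local computer's clock.
        request.add_header("If-Modified-Since", last_modified)
    try:
        with urllib.request.urlopen(request) as response:
            last_modified = response.headers.get("Last-Modified")
            return response.read()  # new content to parse
    except urllib.error.HTTPError as error:
        if error.code == 304:
            return None  # the small "no" message: nothing new to download
        raise

A reader that does this downloads the full file only when it actually changes; the broken readers Murphy describes pay for the full download on every poll.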

Perhaps the best thing we can do is educate ourselves about the RSS aggregator we use and how it affects the bandwidth of the sites whose feeds we download through it.
