Each morning before I leave home for the office, I take some time to see what is happening in the world. A few years ago I used newspapers, radio, or television for that, but nowadays we have the internet. While eating a sandwich and drinking my coffee, I check my e-mail and scan the headlines of about 200 web sites. If I see something interesting, I read it or bookmark it to read later.
This whole process takes me 20 minutes, thanks to my newsreader, which fetches all the information from those 200 sites and presents it to me in an easily digestible way.
However, the list of sites I want to keep up with is growing steadily; each week I discover new interesting sites and add them to my list. When I finished browsing through my list today, I had to run to the office. I guess I'll have to set my alarm 10 minutes earlier to give me time to read my news.
Just before starting to write all this down, I read that Robert Scoble has exactly the same problem, only worse. He currently checks 915 feeds, and in his entry Dealing with the information flow he wonders how scalable feeds are as more and more resources use them. What happens if the number of feeds grows to 10,000?
I guess what is going to happen is the same thing that happened to the web. Ten years ago I scanned the NCSA What's New list. At first I looked at it every week, but soon new sites appeared by the thousands, and I abandoned that method of keeping up with new sites. Nowadays I use search engines. They are not the same, and they don't give me the same feeling, but they scale better. The same thing is happening with feeds: at the moment I follow everything published on some sites, but soon I will have to fall back on PubSub to find what I want. I propose they build some randomness into their queries, so I can still discover some unknown sites and don't get the feeling that I am reading the same things as everybody else.
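To make that last proposal a bit more concrete, here is a minimal sketch of the idea: take the entries that match a query and append a few randomly chosen entries from the rest of the pool, so every reader's result list looks a little different. The Python below is purely illustrative; the function and the data are hypothetical and do not correspond to anything PubSub actually offers.

    import random

    def mix_in_random(matching_entries, all_entries, extra=3):
        # Entries that match the query, plus a few random "surprise"
        # entries drawn from the rest of the pool, to encourage discovery.
        pool = [e for e in all_entries if e not in matching_entries]
        surprises = random.sample(pool, min(extra, len(pool)))
        return matching_entries + surprises

    # Hypothetical usage: everybody gets the same matches, but a
    # different handful of surprises, so no two lists are identical.
    all_entries = [{"title": "entry %d" % i} for i in range(100)]
    matches = all_entries[:5]
    print([e["title"] for e in mix_in_random(matches, all_entries)])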