Scoble asked me to write this post, so here goes: I think RSS aggregators are a killer app, but not in the usual sense. I don't mean the kind of killer app that sells a billion computers and creates new markets (though there is that possibility). I mean the app that does so much that it consumes all available CPU, memory, network, and disk. Perhaps I really mean that they're the "killing my computer" app.
If I set aside software development, the application that is pushing the limits of my hardware is my RSS aggregator. This is not in any way a slam on NetNewsWire, which is a very, very fine application. It's a reflection of how my relationship with the web has changed. I hardly use a standalone browser anymore -- mostly just for searching or printing. I don't have time to visit all the web sites that have information useful to me; fortunately, the aggregator takes care of that. Once the aggregator has the information, I want it to fold, spindle, and mutilate it. I'm at over 1000 feeds, and on an average day it's not uncommon to have 4000 new items flow through the aggregator. It takes 25 minutes (spread out over two sessions) just to pull the data down and process it -- and I have a very fast connection. NetNewsWire uses WebKit to render HTML inline -- a feature that makes it easy to cut through piles of entries, but one that is demanding of CPU and memory.
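To give a feel for what that fetch-and-process pass involves, here's a minimal sketch of pulling a batch of feeds concurrently in Python with feedparser; the feed URLs and the process_entry hook are placeholders for illustration, not anything NetNewsWire actually does under the hood.

    # Minimal sketch: fetch a batch of feeds in parallel and hand each
    # entry to a processing hook. FEEDS and process_entry are placeholders.
    from concurrent.futures import ThreadPoolExecutor

    import feedparser  # pip install feedparser

    FEEDS = [
        "http://example.com/feed1.xml",
        "http://example.com/feed2.xml",
        # ... roughly a thousand of these in my case
    ]

    def process_entry(entry):
        # Stand-in for whatever the aggregator does with each new item.
        print(entry.get("title", "(untitled)"))

    def fetch(url):
        # feedparser tolerates malformed XML; check feed.bozo if you care.
        return feedparser.parse(url)

    with ThreadPoolExecutor(max_workers=20) as pool:
        for feed in pool.map(fetch, FEEDS):
            for entry in feed.entries:
                process_entry(entry)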
But that's just the basics. What happens when we start doing Bayesian stuff on 4000 items a day? Latent Semantic Indexing? Clustering? Reinforcement Learning? Oh, and I want to do all of those things on all the stuff that I ever pulled down, not just the new stuff. What happens if I want to build a "real-time" trend analyzer using RSS feed data as the input? The processor vendors should be licking their chops...
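To make the "Bayesian stuff" concrete, here's a minimal sketch of the sort of naive Bayes scoring you could run over incoming items to flag the ones worth reading first. The tiny training set and the "interesting"/"boring" labels are made up for illustration; a real version would train on thousands of items you've already rated.

    # Minimal naive Bayes sketch: score incoming items as "interesting" or
    # "boring" based on words in items already rated. Training data is made up.
    import math
    from collections import Counter, defaultdict

    def tokenize(text):
        return text.lower().split()

    class NaiveBayes:
        def __init__(self):
            self.word_counts = defaultdict(Counter)  # label -> word counts
            self.label_counts = Counter()            # label -> item count

        def train(self, text, label):
            self.label_counts[label] += 1
            self.word_counts[label].update(tokenize(text))

        def score(self, text, label):
            # Log-probability of the label, with add-one smoothing.
            total = sum(self.label_counts.values())
            logp = math.log(self.label_counts[label] / total)
            words = self.word_counts[label]
            vocab = {w for c in self.word_counts.values() for w in c}
            denom = sum(words.values()) + len(vocab)
            for w in tokenize(text):
                logp += math.log((words[w] + 1) / denom)
            return logp

        def classify(self, text):
            return max(self.label_counts, key=lambda lbl: self.score(text, lbl))

    clf = NaiveBayes()
    clf.train("python decorators metaclass tutorial", "interesting")
    clf.train("open source java community process", "interesting")
    clf.train("celebrity gossip photos", "boring")
    print(clf.classify("new python release notes"))  # -> "interesting"

Even this toy version hints at the cost: every new item means tokenizing and scoring against every label, and doing latent semantic indexing or clustering over the full archive is far heavier still.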
If you live in or around Seattle, and want to hook up with other people, you may want to subscribe to Chris Pirillo's Seattlist.
If there are other useful Seattle lists, please feel free to mention them in the comments.