Paul just blogged about it: The Echo Nest is demonstrating some of the stuff they have been working on. The one I like best is "automatic song," which they say is a composition created by automatically combining about 50 songs.
I'm curious what impact their API for extracting features from audio will have on MIR research. It seems they are also targeting artists who use Processing to visualize music content. I'd like to see videos of their music visualizations.