ISMIR is only halfway through and I can’t believe how many interesting things I’ve already missed. I guess it’s unavoidable given parallel sessions and so many poster presentations in limited time. Nevertheless, my brain is already overflowing and I feel burnt out. In fact, there were so many interesting presentations that I don’t have enough time to write them all down (although it would be a great way to remember them).
One thing that might have a very high impact is that IMIRSEL is just about to launch its online evaluation system. Researchers will be able to submit their newest algorithms and find out how well they do compared to others. I think having an evaluation system like that is actually worth a lot to the whole community, and I can well imagine that (once all issues are solved) research labs will have something like a paid subscription which allows them to use a certain amount of CPU time on IMIRSEL’s clusters. However, to be truly successful they’d need to be 100% neutral and transparent. (Which I think means they shouldn’t have IMIRSEL show up in their rankings, and they should clarify how One Llama is linked to IMIRSEL.)
I also liked the poster Skowronek, McKinney, and Van de Par presented (“A Demonstrator for Automatic Music Mood Estimation”). They let me test their system with one of my own MP3s from my USB drive (I used a song by Roberta Sá) and it did really well. Another demo I liked a lot was the system Peter Knees presented (“Search & Select – Intuitively Retrieving Music from Large Collections”). Unfortunately I was asked to move on after playing around with the demo for a bit too long, I guess. Ohishi, Goto, Itou, and Takeda (“A Stochastic Representation of the Dynamics of Sung Melody”) showed me some videos which I thought were simply amazing. Apparently they aren’t hard to compute (once you know how to extract the F0 curve), but I’ve never seen the characteristics of a singing voice visualized that way. The demo by Eck, Bertin-Mahieux, and Lamere (“Autotagging Music Using Supervised Machine Learning”) was really impressive too… and it was interesting to learn that Ellis (“Classifying Music Audio with Timbral and Chroma Features”) found ways to use chroma information to increase artist identification performance. (And his Matlab source code is available!!) I once worked on a similar problem, but never got that far. Btw, it seems that chroma is everywhere now :-)
I was also happy to see that Flexer’s poster (“A Closer Look on Artist Filters for Musical Genre Classification”) received a lot of attention. I liked his conclusions. There were also lots of interesting papers in the last two days. For example, I liked the paper presented by Cunningham, Bainbridge, and McKay (“Finding New Music: A Diary Study of Everyday Encounters with Novel Songs”). I particularly liked their discussion of how nice it would be to have a “scrobble everywhere” device that keeps track of everything I ever hear (including ring tones).
Tuesday, 25 September 2007
3 comments:
Elias,
Regarding OneLlama:
When I told you I worked at OneLlama, I also clearly stated that IMIRSEL did not launch OneLlama and is not affiliated with it. You asked me about this specifically, and I gave you the same answer I give now. We have nothing to hide; clearly you can’t say the same, or you would not feel the need to conveniently forget important bits of information that I made sure you had.
Please do not post further lies or misinformation on this topic. If you have concerns, you know how to get hold of both us and Stephen Downie, and your continuing failure to do so clearly shows your intentions are not honorable and your concerns not valid.
OneLlama is owned in part by its employees and the remainder by Illinois Ventures.
Kris,
I replied to this comment here.
Btw, there's no need to post every comment twice.
Hi. This passage
"Ohishi, Goto, Itou, and Takeda (“A Stochastic Representation of the Dynamics of Sung Melody”) showed me some videos which I thought were simply amazing. Apparently it isn’t hard to compute them (once you know how to extract the F0 curve), but I’ve never seen the characteristics of a singing voice visualized that way."
leaves me wanting to see these videos of sung melody representations. Do you have pointers to these works, by chance?
Thanks much.