Friday, 13 June 2008

Myths about Last.fm tags

Today I was pointed to the following: "Last.fm has thousands of tags, unfortunately they are all pretty bad." (A statement made around minute 51 of this video, a very interesting talk about autotagging and the applications that can be built using tags.)

I think this needs some clarification: Last.fm has a lot more than just a few thousand tags. The 1,000,000th unique tag applied by a Last.fm user was 'earthbeat', about half a year ago.

Related links: fun stats on Last.fm tags, Last.fm's multi-tag search.

2 comments:

Unknown said...

I wish I could watch the video, but unfortunately it requires Microsoft Internet Explorer with Windows Media Player, so I can't. Who was the talk by? What was the abstract?

Elias said...

Paul,

The talk is by Doug Turnbull. He also gave a similar talk at QMUL's C4DM, which records all of its talks. The direct link is here. However, it's hard to read the slides and the audio quality is not as good as in the Microsoft version (and it doesn't seem to include the reference to Last.fm tags).

The title is "Designing a content-based music search engine". The abstract of his talk is:

----
If you go to Amazon.com or the Apple iTunes store, your ability to search for new music will largely be limited by the 'query-by-metadata' paradigm: search by song, artist or album name. However, when we talk or write about music, we use a rich vocabulary of semantic concepts to convey our listening experience. If we can model a relationship between these concepts and the audio content, then we can produce a more flexible music search engine based on a 'query-by-semantic-description' paradigm.

In this talk, I will present a computer audition system that can both annotate novel audio tracks with semantically meaningful words and retrieve relevant tracks from a database of unlabeled audio content given a text-based query. I consider the related tasks of content-based audio annotation and retrieval as one supervised multi-class, multi-label problem in which we model the joint probability of acoustic features and words. For each word in a vocabulary, we use an annotated corpus of songs to train a Gaussian mixture model (GMM) over an audio feature space. We estimate the parameters of the model using the weighted mixture hierarchies Expectation Maximization algorithm. This algorithm is more scalable to large data sets and produces better density estimates than standard parameter estimation techniques. The quality of the music annotations produced by our system is comparable with the performance of humans on the same task. Our 'query-by-semantic-description' system can retrieve appropriate songs for a large number of musically relevant words. I also show that our audition system is general by learning a model that can annotate and retrieve sound effects.

Lastly, I will discuss three techniques for collecting the semantic annotations of music that are needed to train such a computer audition system. They include text-mining web documents, conducting surveys, and deploying human computation games.
----
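
To make the modeling approach in the abstract a bit more concrete, here is a minimal Python sketch of the per-word GMM idea. It is not Turnbull's actual system: it uses scikit-learn's standard EM-based GaussianMixture instead of the weighted mixture hierarchies EM algorithm the talk describes, and the function names and data layouts are hypothetical, just for illustration.

# Minimal sketch of per-word GMM annotation/retrieval, assuming scikit-learn.
# Plain EM (sklearn's GaussianMixture) stands in for the weighted mixture
# hierarchies EM algorithm described in the abstract.
from sklearn.mixture import GaussianMixture

def train_word_gmms(features_by_word, n_components=4, seed=0):
    # features_by_word: dict mapping word -> (n_frames, n_dims) array of
    # audio feature vectors pooled from all songs annotated with that word.
    models = {}
    for word, X in features_by_word.items():
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type='diag',
                              random_state=seed)
        models[word] = gmm.fit(X)
    return models

def annotate(models, track_features, top_k=5):
    # Annotation direction: score a new track's feature vectors under every
    # word's GMM and keep the words with the highest mean log-likelihood.
    scores = {word: gmm.score(track_features) for word, gmm in models.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

def retrieve(models, query_word, tracks, top_k=5):
    # Retrieval direction ('query-by-semantic-description'): rank tracks
    # (dict: id -> feature array) by likelihood under the query word's GMM.
    gmm = models[query_word]
    ranked = sorted(tracks, key=lambda t: gmm.score(tracks[t]), reverse=True)
    return ranked[:top_k]

Note that annotation and retrieval are just the two directions of the same per-word likelihood scores, which is what lets the abstract treat them as one supervised problem.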