I just compiled my short list of papers I don’t want to miss at ISMIR 2007 which starts in about 4 weeks. Of course I’m interested in all papers, but if I run out of time while exploring posters, or need to choose between different sessions, I’ll prefer the ones listed here.
Fuzzy Song Sets for Music Warehouses
To be honest, this is just on the list because given the title I don’t have the slightest clue what this paper is about. I know what fuzzy sets are thanks to Klaas. I’m guessing that a music warehouse is a synonym for a digital library of music. I wonder if the second part of the title got lost?
Music Clustering with Constraints
Another title that puzzles me. Seems like titles have been cut off a lot. They forgot to mention what they are clustering the music by. The number of musical notes in a piece? AFAIK, most clustering algorithms have some form of constraint. For example, in standard k-means the number of clusters is constrained, and when using GMMs it is very common to constrain the minimum variance of an individual Gaussian. Anyway, I’m into clustering algorithms, so this could be an interesting presentation.
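Just to illustrate what I mean by the number of clusters being a constraint (this is my own toy sketch, nothing from the paper): in plain k-means, k is fixed up front and the algorithm has no say in it.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on 1-D data. The only 'constraint' here is the
    fixed number of clusters k, chosen by the user, not the algorithm."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # assignment step: each point goes to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: (p - centers[c]) ** 2)
            clusters[i].append(p)
        # update step: move each center to the mean of its cluster
        # (keep the old center if a cluster happens to be empty)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# two well-separated groups; with k=2 the centers land near 1 and 10
data = [0.9, 1.0, 1.1, 9.9, 10.0, 10.1]
print(kmeans(data, k=2))
```

The point being: even before anyone adds must-link or cannot-link style constraints, the vanilla algorithm is already constrained by k.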
Sequence Representation of Music Structure Using Higher-Order Similarity Matrix and Maximum-Likelihood Approach
The author of this one has done lots of interesting stuff in the past. I’m curious what he’s up to this time. Music structure analysis is definitely something very interesting that could be very useful in many ways.
Algorithms for Determining and Labelling Approximate Hierarchical Self-Similarity
Again, at least one of the authors has done very interesting stuff in the past, and I’m really interested in music structure analysis.
Transposition-Invariant Self-Similarity Matrices
I’m only guessing but this one could be about self-similarity with respect to melody. (I’m guessing that the previous 2 are focusing on self-similarity with respect to timbre or chroma.) Melodic similarity is a lot harder than timbre similarity. I’m curious how they did it.
A Supervised Approach for Detecting Boundaries in Music Using Difference Features and Boosting
If I miss this presentation I might upset my coauthors ;-)
Automatic Derivation of Musical Structure: A Tool for Research on Schenkerian Analysis
I had to Google Schenkerian. It sounds interesting.
Improving Genre Classification by Combination of Audio and Symbolic Descriptors Using a Transcription System
I’m very curious what kind of symbolic descriptors the authors used. Note density? I’ve seen lots of work on audio-based genre classification, and some work on using MIDI (which is usually referred to as symbolic information, but the authors could also mean something very different with symbolic). I’m pretty sure I’ve read at least one article on the combination of audio and MIDI information, but I don’t think I’ve ever seen anyone actually succeed. I’m curious what results the authors got, and I hope they used an artist filter.
Exploring Mood Metadata: Relationships with Genre, Artist and Usage Metadata
Let me guess: pop is usually happy and upbeat, and death metal is rather aggressive? :-) I wonder though what usage metadata is (if people listen to it while driving their cars, working, jogging etc?).
How Many Beans Make Five? The Consensus Problem in Music-Genre Classification and a New Evaluation Method for Single-Genre Categorisation Systems
Single-category classification? I think I’m good at that ;-) (Yes, I know that by “single” they mean binary classification.) Anyway, I’m curious what the authors say about genre classification and consensus. The authors probably have a very different perspective than I do.
Bayesian Aggregation for Hierarchical Genre Classification
I hope they either compare it to existing techniques, or use evaluation DBs that have been used previously. And I hope they used an artist filter. I’m very curious though what they aggregated.
Finding New Music: A Diary Study of Everyday Encounters with Novel Songs
If I had a very, very short list of papers I wouldn’t want to miss, then this would be on it :-)
Improving Efficiency and Scalability of Model-Based Music Recommender System Based on Incremental Training
Made in Japan, what else is there left to say? ;-)
This would also be on the very, very short list of presentations I wouldn’t want to miss.
Virtual Communities for Creating Shared Music Channels
I’m guessing that this could be really interesting, but I wish the title was more specific. Under the same title one could present, for example, how Last.fm groups and their group radio stations work, or how people get together on Last.fm to tag music to create their own radio stations.
MusicSun: A New Approach to Artist Recommendation
Another title that’s missing lots of information; nevertheless, I won’t skip this one.
Evaluation of Distance Measures Between Gaussian Mixture Models of MFCCs
I’m curious which approaches they tested and how and what their conclusions are.
An Analysis of the Mongeau-Sankoff Algorithm for Music Information Retrieval
Another title that sent me to Google. This time there were only 15 results, none of which did a good job of explaining it to me. Anyway, it has MIR in the title, so I think I should have a look.
Assessment of Perceptual Music Similarity
Sounds like a follow-up of the work they presented last year. I’m very curious. I hope they got more than 2 pages in the proceedings. I’d love to read more on this topic.
jWebMiner: A Web-Based Feature Extractor
Sounds like there’s more great software from McGill for everyone to use.
Meaningfully Browsing Music Services
I’ve seen a demo that included Last.fm, so I really can’t miss this one.
Web-Based Detection of Music Band Members and Line-Up
Personally I would be tempted to just use MusicBrainz DB for that. I wonder how much more data the authors could find by crawling the web in general.
Tool Play Live: Dealing with Ambiguity in Artist Similarity Mining from the Web
Artist name ambiguity is an interesting problem, I wonder what solution they are presenting.
Keyword Generation for Lyrics
I’m guessing these are keywords that summarize the lyrics? I wonder if they use some abstraction as well to classify, for example, a song as a love song.
MIR in Matlab (II): A Toolbox for Musical Feature Extraction from Audio
I use Matlab every day and I don’t think I’ve heard of this toolbox before; sounds interesting.
A Demonstration of the SyncPlayer System
I think I saw a demo of this at the MIREX meeting in Vienna. If I remember correctly the synchronization refers mainly to synchronizing lyrics with the audio but it can do lots of other cool stuff, too.
Performance of Philips Audio Fingerprinting under Desynchronisation
I have no clue what desynchronisation is, but I know that fingerprinting is relevant to what I work on.
Robust Music Identification, Detection, and Analysis
This could be another paper on fingerprinting?
Audio Identification Using Sinusoidal Modeling and Application to Jingle Detection
More fingerprinting fun.
Audio Fingerprint Identification by Approximate String Matching
Seems like fingerprinting has established itself as a research direction again :-)
Musical Memory of the World – Data Infrastructure in Ethnomusicological Archives
It’s not directly related to my own work, but sounds very interesting.
Globe of Music - Music Library Visualization Using Geosom
A visualization of a music library using a metaphor of geographic maps? I’m curious how using a globe improves the experience.
Strike-A-Tune: Fuzzy Music Navigation Using a Drum Interface
I hope they’ll let me have a try :-)
Using 3D Visualizations to Explore and Discover Music
I believe I’ve seen this demo already, but I never got to try it out myself. I hope the waiting line won’t be too long.
Music Browsing Using a Tabletop Display
If the demo is interesting I’ll forgive them their not very informative title ;-)
Search&Select – Intuitively Retrieving Music from Large Collections
I like the author’s work. I’m very curious what he built this time.
Ensemble Learning for Hybrid Music Recommendation
It has the words music recommendation in the title, and the authors have done some interesting work in the past.
Music Recommendation Mapping and Interface Based on Structural Network Entropy
Another music recommendation paper, I’m guessing this one is about a certain MyStrand visualization. I’m particularly interested in the “structural network entropy” part.
Influence of Tempo and Subjective Rating of Music in Step Frequency of Running
My guess is that tempo has an impact and that this impact is even higher for music I like? But I wouldn’t expect the subjective rating to have a very high impact. I often notice how I start walking to the beats of music I hear even if I don’t like the music.
Sociology and Music Recommendation Systems
Another paper I’d put on the very, very short list :-)
Visualizing Music: Tonal Progressions and Distributions
Sounds great! I should check if they already have some videos online.
Localized Key Finding from Audio Using Nonnegative Matrix Factorization for Segmentation
I’m curious how the author used a nonnegative matrix factorization for this task. I’ve never used one, but I thought they are usually used for mixtures. However, segments (like chorus and instrument solos) are usually not best described as mixtures?
Sounds like I’ll learn interesting things about copyright, creative commons, and other intellectual property issues involved in music information retrieval.
Audio-Based Cover Song Retrieval Using Approximate Chord Sequences: Testing Shifts, Gaps, Swaps and Beats
I mainly want to know what the author has been up to, but I’m also interested in cover song detection.
Polyphonic Instrument Recognition Using Spectral Clustering
I want to see this one too, but it’s at the same time as the previous paper. The papers use rather similar techniques and deal with rather similar problems, so I don’t understand why they were scheduled to compete with each other. Something non-audio related would have been a much better counterpart.
Supervised and Unsupervised Sequence Modelling for Drum Transcription
I wonder how good their drum transcription works. I hope they have lots of demos.
A Unified System for Chord Transcription and Key Extraction Using Hidden Markov Models
Again a paper I really don’t want to miss, but it’s at the same time as the one above. There are so many papers that don’t deal with extracting interesting information from audio signals that I absolutely don’t understand why they arranged this parallel session the way they did.
Combining Temporal and Spectral Features in HMM-Based Drum Transcription
I’m not sure if I’ll check out this one or the one below. Both are really interesting.
A Cross-Validated Study of Modelling Strategies for Automatic Chord Recognition in Audio
Sounds like they might have some interesting results.
Improving the Classification of Percussive Sounds with Analytical Features: A Case Study
I must see this one because I recently did some work on drum sounds. I’m curious if the authors include all sorts of percussive instruments (such as a piano) or if it’s drums mainly.
Discovering Chord Idioms Through Beatles and Real Book Songs
I’d love to see this one, too :-(
Don’t get me wrong: I fully support parallel sessions (there isn’t really an alternative given this many oral presentations), but unfortunately the sessions weren’t split in a way that would allow me to see everything I would like to see. Why not put the chord and alignment sessions parallel to each other? To demonstrate my point I won’t list any papers from the alignment session.
Automatic Instrument Recognition in a Polyphonic Mixture Using Sparse Representations
Another strange thing about how the sessions were split is that one parallel session always ends 15 minutes earlier than the other. Do the organizers expect everyone from the earlier session to run over to the one still going? I’d prefer all sessions to end at the same time, which would make it easier to find a group to join for lunch. Anyway, sounds like an interesting paper.
ATTA: Implementing GTTM on a Computer
It’s been a while since I first heard a presentation on GTTM. I guess it’s about time to refresh my knowledge.
An Experiment on the Role of Pitch Intervals in Melodic Segmentation
I have no clue… but segments often have different “local keys”. The chords within keys are usually clearly defined. Each chord has specific pitch intervals… I wonder what experiment they did.
Vivo - Visualizing Harmonic Progressions and Voice-Leading in PWGL
Visualizing Music on the Metrical Circle
Another visualization :-)
Applying Rhythmic Similarity Based on Inner Metric Analysis to Folksong Research
I’m curious how they compute rhythmic similarity. I have seen a lot of work on extracting rhythm information, but haven’t seen much on computing similarities using it.
Music Retrieval by Rhythmic Similarity Applied on Greek and African Traditional Music
Another rhythmic similarity paper :-)
A Dynamic Programming Approach to the Extraction of Phrase Boundaries from Tempo Variations in Expressive Performances
A long time ago I did some work on segmenting tempo variations… I’m curious how they represent tempo (do they apply temporal smoothing?) and how well detecting phrase boundaries works given only tempo. (Why not use loudness as well?)
Creating a Simplified Music Mood Classification Ground-Truth Set
Sounds like this might also be related to the MIREX mood classification task.
Assessment of State-of-the-Art Meter Analysis Systems with an Extended Meter Description Model
I wonder how well state-of-the-art methods work for meter detection.
Evaluating a Chord-Labelling Algorithm
Chord detection is great.
A Qualitative Assessment of Measures for the Evaluation of a Cover Song Identification System
Cover song detection is great, too.
The Music Information Retrieval Evaluation Exchange “Do-It-Yourself” Web Service
Wow! I wonder if they will have a demo ready?
Preliminary Analyses of Information Features Provided by Users for Identifying Music
I have no clue what this one is about, but it’s probably MIREX related.
Finding Music in Scholarly Sets and Series: The Index to Printed Music (IPM)
One of the many things I know nothing about, but it sounds interesting.
Humming on Audio Databases
I wonder if they provide a demo, and if they can motivate people to use it. (It will probably be more fun listening to people sing than seeing if their system works.)
A Query by Humming System that Learns from Experience
Would be nice to have this one right next to the previous one.
Classifying Music Audio with Timbral and Chroma Features
Another one for the very, very short list. I’m curious how the author combined the features, and if he measured improvements, and if he did artist identification or genre classification (and if he used an artist filter if so).
A Closer Look on Artist Filters for Musical Genre Classification
Sounds like something everyone should be using :-)
A Demonstrator for Automatic Music Mood Estimation
I definitely want to see this demonstration.
Mood-ex-Machina: Towards Automation of Moody Tunes
I wonder what this sounds like.
Pedagogical Transcription for Multimodal Sitar Performance
I wonder if it’s so pedagogical that I can understand it?
Drum Transcription in Polyphonic Music Using Non-Negative Matrix Factorisation
Not sure what’s new here, but I’ll be there to find out.
Tuning Frequency Estimation Using Circular Statistics
No clue what this is about. My best guess would be that it’s related to the pitch corrections I’ve seen in chord transcription systems.
TagATune: A Game for Music and Sound Annotation
Wow, another music game! I haven’t heard of this one yet and Google hasn’t either. I’m very curious how it differs from the Listen Game and the MajorMiner game.
A Web-Based Game for Collecting Music Metadata
Would be great if they publish some usage statistics.
Autotagging Music Using Supervised Machine Learning
I’m very curious what results they got.
A Stochastic Representation of the Dynamics of Sung Melody
Another Japanese production :-)
Singing Melody Extraction in Polyphonic Music by Harmonic Tracking
I wonder how high the improvements were by tracking the harmony.
Singer Identification in Polyphonic Music Using Vocal Separation and Pattern Recognition Methods
I wonder how they evaluated this. Did all singers have the same background instruments and sing in the same musical style?
Transcription and Multipitch Estimation Session
I know nothing about multipitch estimation. But I hope to hear some nice demonstrations in the session.
Identifying Words that are Musically Meaningful
I wonder what the most musically meaningful word is. At Last.fm I think it’s “rock”. Another word very high up in the Last.fm ranks is “chillout” :-)
A Semantic Space for Music Derived from Social Tags
I’m curious what their tag space looks like.
The Music Ontology
I don’t know much about ontologies, but it sounds like this is the one and only one for music, so I better not miss it.
Signal + Context = Better Classification
I love this title. I hope the first author will be presenting it.
A Music Information Retrieval System Based on Singing Voice Timbre
I’ll probably be totally exhausted from having seen so many presentations and posters by this time, but I’ll try to reserve some energy to be able to concentrate on this talk.
Poster session 3 (MIREX)
Usually one of the highlights at ISMIR. I hope the MIREX team manages to have the results ready in time. Only about 4 weeks left to get everything done.
Methodological Considerations in Studies of Musical Similarity
I wish this paper had been published before I wrote my thesis. But I guess it’s never too late to learn :-)
Similarity Based on Rating Data
Sounds like something Last.fm has been doing for years: ratings are derived from how often people listen to a song, and then standard collaborative filtering techniques are applied. The results are not too bad. I’m guessing that the authors used very sparse data compared to the data Last.fm has. Another paper I’d put on my very, very short list.
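To show what I mean by treating play counts as implicit ratings and then applying standard collaborative filtering (my own toy sketch with made-up users and songs, not anything from the paper or from Last.fm’s actual system): two songs count as similar when the same users play both of them a lot.

```python
import math

# hypothetical play counts: user -> {song: number of plays},
# used as implicit ratings
plays = {
    "alice": {"song_a": 30, "song_b": 25, "song_c": 2},
    "bob":   {"song_a": 28, "song_b": 20},
    "carol": {"song_c": 40, "song_d": 35},
}

def song_vector(song):
    """A song seen as a vector over users: who played it, and how often."""
    return {u: counts[song] for u, counts in plays.items() if song in counts}

def cosine(v, w):
    """Cosine similarity between two sparse user-count vectors."""
    common = set(v) & set(w)
    dot = sum(v[u] * w[u] for u in common)
    norm = math.sqrt(sum(x * x for x in v.values())) * \
           math.sqrt(sum(x * x for x in w.values()))
    return dot / norm if norm else 0.0

# song_a and song_b share the same heavy listeners -> high similarity;
# song_a and song_d share no listeners at all -> similarity 0
print(cosine(song_vector("song_a"), song_vector("song_b")))
print(cosine(song_vector("song_a"), song_vector("song_d")))
```

With real data the hard part is exactly what I suspect the authors faced: the play-count matrix is extremely sparse, so most song pairs share no listeners at all.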
A Study on Attribute-Based Taxonomy for Music Information Retrieval
I wonder if this is similar to Pandora’s music genome project?
Variable-Size Gaussian Mixture Models for Music Similarity Measures
I wonder if and how the author was able to measure significant improvements.
Towards Integration of MIR and Folk Song Research
I like folk music, and I like MIR.
From Rhythm Patterns to Perceived Tempo
I’m curious how they approach this. A rhythm pattern (as defined in music books) does (AFAIK) not have any tempo information and can be played at different tempi. But I’m sure this is an interesting paper :-)
The Quest for Ground Truth in Musical Artist Tagging in the Social Web Era
The title reminds me of one of the more important papers in the short history of ISMIR. Tags are something very subjective, there is no right or wrong. You’ll always find people complaining about how other people mistagged the genre of a song. It will be interesting to see if this paper has the potential to join the ranks of the original ISMIR paper with a similar title.
Annotating Music Collections: How Content-Based Similarity Helps to Propagate Labels
Sounds like something very useful.
A Game-Based Approach for Collecting Semantic Annotations of Music
I hope they’ll present some usage statistics.
Human Similarity Judgments: Implications for the Design of Formal Evaluations
I wonder why this paper isn’t presented before the MIREX panel. Seems like it might contain a lot of information that would be useful for the discussion.