Integrated probabilistic music representations for versatile music content processing
VERSAMUS is a joint project between the METISS Project-Team of INRIA Rennes - Bretagne Atlantique (France) and Laboratory #1 of the Department of Information Physics and Computing at the University of Tokyo (Japan), partly funded by INRIA's Associate Team program.
Music plays a major role in the everyday use of digital media content. Companies and users alike expect smart content creation and distribution functionalities, such as music classification, search by similarity, summarization, chord transcription, remixing and automatic accompaniment.
So far, research efforts have focused on developing specific algorithms and corpora for each functionality, based on low-level features that characterize the sound as a whole. Yet music generally results from the superposition of heterogeneous sound components (e.g. voices, pitched musical instruments, drums, sound samples) carrying interdependent features at several levels (e.g. music genre, singer identity, melody, lyrics, voice signal). Integrated music representations combining all feature levels would make it possible to address all of the above functionalities with increased accuracy, as well as to visualize and interact with the content in a musically relevant manner.
The aim of this project is to investigate, design and validate such representations within the framework of Bayesian data analysis, which provides a rigorous way of combining separate feature models in a modular fashion. The tasks to be addressed include the design of a versatile model structure, of a library of feature models, and of efficient algorithms for parameter inference and model selection. Efforts will also be dedicated to the development of a shared modular software platform and a shared corpus of multi-feature annotated music, which will be reusable by both partners in the future and eventually disseminated.
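To make the modular-combination idea concrete, the sketch below shows how two independently built feature models can be fused under Bayes' rule, assuming the features are conditionally independent given the latent variable. This is a toy illustration, not project code: the chord labels, the two feature models and all probability values are invented for the example.

```python
# Minimal sketch of modular Bayesian model combination (toy example,
# not project code). Hypothetical latent variable: the chord playing
# in a short audio frame.
chords = ["C", "F", "G"]

# Prior over chords (assumed uniform here).
prior = {c: 1.0 / len(chords) for c in chords}

# Two separate feature models, each supplying a likelihood
# p(feature | chord). In practice these would be learned models
# (e.g. one for the melody, one for the bass line); here they are
# fixed toy numbers.
melody_likelihood = {"C": 0.6, "F": 0.3, "G": 0.1}
bass_likelihood = {"C": 0.5, "F": 0.2, "G": 0.3}

def posterior(prior, *likelihoods):
    """Fuse feature models assuming conditional independence:
    p(chord | features) ∝ p(chord) * Π_k p(feature_k | chord)."""
    unnorm = {}
    for c, p in prior.items():
        for lik in likelihoods:
            p *= lik[c]
        unnorm[c] = p
    z = sum(unnorm.values())  # normalizing constant
    return {c: p / z for c, p in unnorm.items()}

post = posterior(prior, melody_likelihood, bass_likelihood)
print(max(post, key=post.get))  # → C (in this toy example)
```

Because each feature model only enters through its own likelihood factor, models can be added, removed or replaced without touching the others, which is the modularity the Bayesian framework provides.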
Full proposal (intranet)