Conference Papers IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) Year: 2024

Structure-informed Positional Encoding for Music Generation

Abstract

Music generated by deep learning methods often suffers from a lack of coherence and long-term organization. Yet, multi-scale hierarchical structure is a distinctive feature of music signals. To leverage this information, we propose a structure-informed positional encoding framework for music generation with Transformers. We design three variants that encode absolute, relative, and non-stationary positional information. We comprehensively test them on two symbolic music generation tasks: next-timestep prediction and accompaniment generation. For comparison, we select multiple baselines from the literature and demonstrate the merits of our methods using several musically motivated evaluation metrics. In particular, our methods improve the melodic and structural consistency of the generated pieces.
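The abstract does not detail how structural information enters the positional encoding, so the snippet below is only a minimal, hypothetical sketch of the general idea behind an absolute (sinusoidal) variant: each timestep receives both a token-level position and a coarser structural index (here, an assumed bar index), and the two encodings are combined additively. The function names, the bar-level segmentation, and the additive combination are illustrative assumptions, not the formulation from the paper.

```python
# Illustrative sketch only: one possible way to inject structure labels into an
# absolute (sinusoidal) positional encoding. The segment labels, dimensions, and
# the additive combination are assumptions, not the paper's method.
import torch


def sinusoidal_encoding(positions: torch.Tensor, d_model: int) -> torch.Tensor:
    """Standard sinusoidal encoding for a 1-D tensor of integer positions."""
    pe = torch.zeros(positions.shape[0], d_model)
    div = torch.exp(
        torch.arange(0, d_model, 2, dtype=torch.float)
        * (-torch.log(torch.tensor(10000.0)) / d_model)
    )
    angles = positions.float().unsqueeze(1) * div
    pe[:, 0::2] = torch.sin(angles)
    pe[:, 1::2] = torch.cos(angles)
    return pe


def structure_informed_encoding(
    token_positions: torch.Tensor,  # absolute index of each timestep
    segment_ids: torch.Tensor,      # structural index per timestep, e.g. bar number
    d_model: int,
) -> torch.Tensor:
    """Sum a token-level and a segment-level sinusoidal encoding (assumed combination)."""
    token_pe = sinusoidal_encoding(token_positions, d_model)
    segment_pe = sinusoidal_encoding(segment_ids, d_model)
    return token_pe + segment_pe


if __name__ == "__main__":
    seq_len, d_model = 16, 32
    positions = torch.arange(seq_len)
    bars = positions // 4  # hypothetical structure labels: four timesteps per bar
    pe = structure_informed_encoding(positions, bars, d_model)
    print(pe.shape)  # torch.Size([16, 32])
```

Relative and non-stationary variants would typically act inside the attention computation rather than on the input embeddings, which this sketch does not cover.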
Main file: ICASSP2024_preprint.pdf (1.19 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04432659, version 1 (15-02-2024)
hal-04432659, version 2 (20-02-2024)
hal-04432659, version 3 (28-02-2024)

Identifiers

  • HAL Id: hal-04432659, version 1

Cite

Manvi Agarwal, Changhong Wang, Gaël Richard. Structure-informed Positional Encoding for Music Generation. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2024. ⟨hal-04432659v1⟩
230 views, 96 downloads
