Towards a Characterization of Background Music Audibility in Broadcasted TV

Int J Environ Res Public Health. 2022 Dec 22;20(1):123. doi: 10.3390/ijerph20010123.

Abstract

In audiovisual contexts, different conventions determine the level at which background music is mixed into the final program, and sometimes the mix renders the music practically or totally inaudible. From a perceptual point of view, the audibility of music is subject to auditory masking by other aural stimuli such as voice or additional sounds (e.g., applause, laughter, horns), and is also influenced by the visual content that accompanies the soundtrack and by attentional and motivational factors. This situation is relevant to the music industry because, according to some copyright regulations, non-audible background music must not generate any distribution rights, and marginally audible background music must generate only half the standard value of audible music. In this study, we conducted two psychoacoustic experiments to identify several factors that influence the perception of background music and their contribution to its variable audibility. The experiments are based on auditory detection and chronometric tasks involving keyboard interactions with original TV content. From the collected data, we estimated a sound-to-music ratio range that defines the audibility threshold limits of the barely audible class. In addition, the results show that perception is affected by loudness level, listening condition, music sensitivity, and type of television content.
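
The abstract refers to a sound-to-music ratio, i.e., the level difference between the non-music elements of the soundtrack (voice, effects) and the background music. The exact metric used in the study is not specified here; the following is a minimal sketch, assuming separate music and non-music stems and a simple RMS-based level estimate in dB. The function name sound_to_music_ratio_db and the example signals are hypothetical.

    import numpy as np

    def rms_db(signal: np.ndarray, eps: float = 1e-12) -> float:
        """RMS level of a mono signal in dB (relative to full scale)."""
        rms = np.sqrt(np.mean(np.square(signal)) + eps)
        return 20.0 * np.log10(rms)

    def sound_to_music_ratio_db(non_music: np.ndarray, music: np.ndarray) -> float:
        """Level difference (dB) between the non-music mix and the background music.
        Higher values indicate stronger masking of the music."""
        return rms_db(non_music) - rms_db(music)

    if __name__ == "__main__":
        sr = 48_000
        t = np.arange(sr) / sr
        # Hypothetical one-second stems: a louder "voice-like" tone and a quiet music tone.
        voice = 0.3 * np.sin(2 * np.pi * 220 * t)
        music = 0.03 * np.sin(2 * np.pi * 440 * t)
        print(f"sound-to-music ratio: {sound_to_music_ratio_db(voice, music):.1f} dB")

In practice, the study would likely rely on calibrated loudness measurements rather than raw RMS; the sketch only illustrates the direction of the measure (a higher ratio means the music is more easily masked).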

Keywords: background music; behaviour and cognition; broadcasted TV; complex auditory scene; everyday life environments; listening conditions; loudness perception; psychoacoustic experiments.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Acoustic Stimulation / methods
  • Auditory Perception
  • Music*
  • Psychoacoustics
  • Sound

Grants and funding

This research was completed under task 1.3 of the “AI system for automatic audibility estimation of background music in audiovisual productions” project, known as the LoudSense project, funded by ACCIÓ INNOTEC-2020, with grant number ACE014/20/000051. This work is partially supported by Musical AI—PID2019-111403GB-I00/AEI/10.13039/501100011033 funded by the Spanish Ministerio de Ciencia, Innovación y Universidades (MCIU) and the Agencia Estatal de Investigación (AEI).