Immersive Audio and AI: Transforming Classical Concerts with REPERTORIUM

Repertorium Editorial Team
- 3 months ago

REPERTORIUM uses AI to digitise ancient and classical manuscripts, preserve European musical heritage, and create state-of-the-art sound processing technologies, including metaverse-ready immersive audio. These technologies are the foundation of a general musical artificial intelligence that fully unleashes the powers of machine learning upon the domain of European classical heritage, advancing us towards a human-centred digital world.

Some streaming services offer spatial audio recordings that let listeners experience the music as if they were in the concert hall, and several performing arts organisations have begun experimenting with live VR performances. With current technology, however, such productions remain unaffordable for most organisations. Artificial Intelligence (AI) can make a fully immersive, virtual-reality, user-controllable sonic experience of classical music performances affordable, through audio separation models trained on a large corpus of European classical music.

At the heart of the REPERTORIUM project lies the development of spatial audio systems that faithfully recreate the acoustic environment of a live orchestra concert. Leveraging advanced signal processing techniques and AI-driven models, these systems capture the nuances of sound propagation within a concert venue, allowing listeners to immerse themselves virtually in the concert hall from the comfort of their own homes.

One of the cornerstones of the REPERTORIUM project is the development of live streaming technology using ambisonics microphones. These specialised microphones capture sound from all directions, providing a comprehensive audio recording of the concert venue. By streaming performances in real time, audiences worldwide can enjoy the magic of live music without being physically present.

To enhance the listening experience, the system incorporates binaural rendering techniques. Binaural audio simulates the way humans perceive sound in three-dimensional space, creating a lifelike sensation of being present at the concert venue. By wearing headphones or earphones, listeners can immerse themselves in a spatially accurate audio environment, with sounds emanating from all directions.
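As a rough illustration of how binaural rendering from an ambisonics capture can work, the sketch below decodes a first-order B-format signal to a ring of virtual loudspeakers and convolves each feed with head-related impulse responses (HRIRs). The function names, the basic sampling decoder and the HRIR arrays are illustrative assumptions, not REPERTORIUM's production pipeline.

```python
# Minimal sketch: first-order ambisonics (B-format) to binaural over headphones.
import numpy as np


def decode_foa_to_virtual_speakers(bformat, azimuths_deg):
    """Decode first-order ambisonics (W, X, Y, Z) to horizontal virtual loudspeakers.

    bformat: array of shape (4, n_samples), channels ordered W, X, Y, Z.
    azimuths_deg: virtual loudspeaker azimuths in degrees (0 = front, 90 = left).
    """
    w, x, y, _z = bformat
    az = np.deg2rad(np.asarray(azimuths_deg))
    # Basic (sampling) decoder with simplified normalisation:
    # project the sound field onto each loudspeaker direction.
    gains = np.stack([0.5 * np.ones_like(az), 0.5 * np.cos(az), 0.5 * np.sin(az)], axis=1)
    return gains @ np.stack([w, x, y])  # shape (n_speakers, n_samples)


def render_binaural(speaker_signals, hrirs_left, hrirs_right):
    """Convolve each virtual loudspeaker feed with its HRIR and sum per ear."""
    left = sum(np.convolve(sig, h) for sig, h in zip(speaker_signals, hrirs_left))
    right = sum(np.convolve(sig, h) for sig, h in zip(speaker_signals, hrirs_right))
    return np.stack([left, right])  # (2, n_samples + hrir_len - 1)
```

Higher-order ambisonics, measured HRIR sets and room modelling would be needed for the fidelity the project targets; the sketch only shows the signal flow.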

Another of the project's key innovations is navigable audio, which lets users dynamically explore the virtual concert space. By moving freely through the environment, users can choose their preferred listening perspective. This level of interactivity enhances engagement and immersion, offering a truly personalised concert experience.
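A minimal sketch of the idea, assuming the individual instrument stems and their stage positions are already available: the renderer re-weights and pans each stem according to a listener position the user can move at will. The inverse-distance gain and the simple panning law are toy assumptions for illustration, not the project's renderer.

```python
# Hedged sketch of navigable audio: re-mix separated stems for a movable listener.
import numpy as np


def render_for_listener(stems, positions, listener_xy, min_dist=1.0):
    """stems: dict name -> mono signal; positions: dict name -> (x, y) in metres."""
    out = np.zeros((2, max(len(s) for s in stems.values())))
    for name, signal in stems.items():
        dx, dy = np.subtract(positions[name], listener_xy)
        dist = max(np.hypot(dx, dy), min_dist)
        gain = 1.0 / dist                      # simple inverse-distance attenuation
        azimuth = np.arctan2(dy, dx)           # direction of the stem from the listener
        pan = 0.5 * (1.0 + np.sin(azimuth))    # toy law: 0 = hard right, 1 = hard left
        out[0, :len(signal)] += gain * pan * signal          # left channel
        out[1, :len(signal)] += gain * (1.0 - pan) * signal  # right channel
    return out
```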

To achieve realistic sound scene recreation, the team at Politecnico di Milano is investigating physics-informed and generative neural networks. These cutting-edge AI models are trained on ambisonics audio recordings and leverage the wave equation to synthesise spatially accurate soundscapes. Conventional neural networks are unaware of the physics of the problem they address, which, in the case of spatial audio, can lead to unwanted distortion and unrealistic sound scenes. The goal of the proposed research is instead a faithful reproduction of the concert venue's acoustic characteristics, ensuring a high-fidelity listening experience for audiences worldwide.
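The sketch below shows, under simplifying assumptions, how a physics-informed loss can be built: a small network predicts the sound pressure p(x, y, z, t), and automatic differentiation penalises deviations from the acoustic wave equation p_tt = c² ∇²p alongside the fit to measured, ambisonics-derived data. The architecture, sampling and weighting are illustrative, not the Politecnico di Milano models.

```python
# Illustrative physics-informed loss for sound-field reconstruction (PyTorch).
import torch

C = 343.0  # speed of sound in air (m/s)

net = torch.nn.Sequential(
    torch.nn.Linear(4, 128), torch.nn.Tanh(),
    torch.nn.Linear(128, 128), torch.nn.Tanh(),
    torch.nn.Linear(128, 1),
)


def wave_equation_residual(coords):
    """coords: (N, 4) tensor of (x, y, z, t) collocation points."""
    coords = coords.requires_grad_(True)
    p = net(coords)
    grads = torch.autograd.grad(p.sum(), coords, create_graph=True)[0]
    second = []
    for i in range(4):  # diagonal second derivatives w.r.t. x, y, z, t
        g2 = torch.autograd.grad(grads[:, i].sum(), coords, create_graph=True)[0][:, i]
        second.append(g2)
    laplacian = second[0] + second[1] + second[2]
    p_tt = second[3]
    return p_tt - C ** 2 * laplacian  # zero wherever the wave equation holds


def loss_fn(data_coords, data_pressure, colloc_coords, lam=1e-3):
    """Fit the measured pressure data and penalise violations of the physics."""
    data_loss = torch.mean((net(data_coords) - data_pressure) ** 2)
    phys_loss = torch.mean(wave_equation_residual(colloc_coords) ** 2)
    return data_loss + lam * phys_loss
```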

At REPERTORIUM, we aim to develop immersive audio technology to record live and studio concerts and deliver enriched streaming services. However, for pre-recorded material, or recordings made with conventional microphone setups, immersive audio applications require decomposing the mix into its individual parts (i.e., instruments) before remastering or spatially rendering the scene. This decomposition is performed using music sound source separation techniques.
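Conceptually, one common approach is mask-based separation: a model estimates a time-frequency mask per instrument and applies it to the mixture spectrogram, as in the hedged sketch below. The `mask_model` placeholder stands in for a trained separation network; it and the STFT settings are assumptions for illustration, not the project's actual separator.

```python
# Conceptual sketch of mask-based source separation for remixing/spatialisation.
import numpy as np
from scipy.signal import stft, istft


def separate(mix, sample_rate, mask_model, n_fft=4096, hop=1024):
    """Return a dict of estimated instrument signals from a mono mixture."""
    _, _, spec = stft(mix, fs=sample_rate, nperseg=n_fft, noverlap=n_fft - hop)
    magnitude, phase = np.abs(spec), np.angle(spec)

    stems = {}
    for instrument, mask in mask_model(magnitude).items():  # masks in [0, 1], same shape as spec
        masked = mask * magnitude * np.exp(1j * phase)      # re-use the mixture phase
        _, estimate = istft(masked, fs=sample_rate, nperseg=n_fft, noverlap=n_fft - hop)
        stems[instrument] = estimate[: len(mix)]
    return stems
```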

Technologies based on artificial intelligence (AI) have been successfully applied to isolate individual sounds from a mix when large amounts of training material are available, as with speech mixtures or popular music. In the case of orchestral music, however, training material is limited. Obtaining individual stems for orchestral music is very challenging because of the larger number of instruments compared to other musical styles. Additionally, rehearsal and concert conditions are always synchronous and group-oriented, making the isolated recording of individual instrument sections very unnatural for the musicians.

At REPERTORIUM, we have recorded the largest real-world music training set of its kind to encourage the research and development of novel AI-based source separation methods for classical music. A collaboration between Tampere University and the University of Jaen adds another dimension to the project: sound source separation based on supervised and adversarial networks that combine multichannel and score information to deliver instrumental separation beyond the state of the art.

This spatial rendering will be available for live and post-concert experiences. In the latter case, musicians can engage in “minus-one” rehearsals, practicing alongside a virtual ensemble while focusing on their own performance. This innovative approach enhances learning, collaboration, and creativity in music education and rehearsal settings.
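Once stems are available, producing a minus-one mix is straightforward: the musician's own part is omitted and the remaining stems are summed (and, in practice, spatially rendered). A toy sketch, with illustrative stem names:

```python
# Minimal sketch of a "minus-one" mix from separated (or individually recorded) stems.
import numpy as np


def minus_one_mix(stems, omit="violin_1"):
    """Sum all separated stems except the one the musician will play live."""
    kept = [signal for name, signal in stems.items() if name != omit]
    length = max(len(s) for s in kept)
    mix = np.zeros(length)
    for signal in kept:
        mix[: len(signal)] += signal
    return mix
```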

All these technologies will be ready by the end of the year, and a series of demonstrative concerts will be streamed starting in fall 2024. Stay tuned to learn more about REPERTORIUM’s immersive sound experiences.

[Image: Repertorium AI will revolutionise music scholarship, enhance streaming revenues, and empower musicians]

Sign up to our Newsletter to stay informed!

Get in touch if you have any questions!
