This article is about the application of the VoiceFollower in theater. In the experimental context of In Vivo, a laboratory devoted to new sonic technologies in theater, the VoiceFollower enabled the young Vieilleur Theater Company to automate the sequencing of their piece “Nous les Vagues” during live performance. After a brief introduction to the In Vivo context, we explain this implementation.
In Vivo: An Experimental Laboratory on Sound in Theater
“In Vivo” is a collective of researchers, computer musicians, and theater companies working together in a laboratory for research and experimentation on new sonic technologies in theater. The first edition of In Vivo was based on a partnership between IRCAM and the Reims Theater Hall, under the supervision of the theater director Ludovic Lagarde. Five theater directors soon joined the adventure: Emilie Rousset, Matthieu Roy, Cyril Teste, Guillaume Vincent, and Ludovic Lagarde, accompanied by their teams of actors and sound designers, to elaborate innovative projects involving the IRCAM computer music designers Greg Beller, Thomas Goepfer, and Olivier Pasquet. The subjects explored were tightly linked both to problems already under consideration by IRCAM's R&D teams and to artistic needs ranging from voice transformation and music generation to sound diffusion and sound design in general. After several weeks of collaborative work, the five teams presented sketches of theater works in June 2012 at the Bouffes du Nord and CENTQUATRE halls in Paris, during IRCAM's MANIFESTE festival.
“Nous les Vagues”, the piece
“We took possession, we created fear and irruption as expected, a breaker at the heart of things and the space of decisions.”
“Nous les Vagues” is a theater piece lasting around 20 minutes, written by Mariette Navarro (published by Quartett Edition in 2011) and adapted for the stage by Matthieu Roy, with Philippe Canales and Johanna Silberstein as actors, Baptiste Poulain as stage manager, and Gregory Beller as computer music designer. The particularity of this text is that it is carried by two voices, the “us” (“nous” in French), which can represent a group of people, a crowd, or even a single voice.
Adapting the text for the stage went through three acts, each with its own viewpoint on sound:
The First Act
The first act corresponds to the constitution of a group: in a dark room, the two actors are behind the curtains; their voices are transformed with SuperVP for Max to create multiple personalities, and spatialized with Spat. In each phase the actors have specific vocal and spatial identities. Stage changes (136 in total over 6 minutes) had to be tightly synchronized with the actors' actions, a nightmare for stage managers! Moreover, towards the end, the voices are multiplied with the aid of CataRT; at this point, the stage manager has no means of distinguishing between the actors' voices and the diffused ones. We managed to automate this synchrony to a great extent with the help of the VoiceFollower.
The Second Act
The second act: curtains up, the two actors become political leaders addressing a crowd that is in revolt and everywhere at once, represented metaphorically by the movement of the crowd sound in space. When the leaders speak, the (artificial) crowd falls silent, and vice versa. This interaction was automated by a Max for Live device that works like a side-chain compressor: the live crowd audio generated by CataRT is ducked according to the loudness of the actors' live feed.
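The behavior of such a side-chain ducker can be sketched offline. The following is a minimal illustration of the idea, not the actual Max for Live device used in the piece: an envelope follower tracks the actors' loudness, and the crowd signal is attenuated whenever that envelope exceeds a threshold. The function name, parameter values, and the simple one-pole envelope are all assumptions made for the example.

```python
import numpy as np

def sidechain_duck(crowd, actors, sr=44100, threshold=0.05,
                   ratio=8.0, attack_ms=10.0, release_ms=300.0):
    """Attenuate `crowd` when `actors` is loud, like a side-chain
    compressor (illustrative sketch; parameters are hypothetical)."""
    # One-pole envelope follower on the side-chain (actors) signal.
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(actors)
    level = 0.0
    for i, x in enumerate(np.abs(actors)):
        coeff = atk if x > level else rel
        level = coeff * level + (1.0 - coeff) * x
        env[i] = level
    # Above the threshold, reduce gain according to the ratio.
    gain = np.ones_like(env)
    over = env > threshold
    gain[over] = (threshold / env[over]) ** (1.0 - 1.0 / ratio)
    return crowd * gain
```

While the actors are silent the crowd passes through unchanged; as soon as they speak, the gain drops smoothly, which is exactly the "leaders speak, crowd falls silent" interaction described above.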
The Third Act
The third act: an intimate space deep on the stage. The crowd has disappeared and we witness the preparation of a terrorist act by a couple. The two actors speak the same text together while a SuperVP cross-synthesis module works in real time to create a hybrid male-female voice, strange and immaterial: a reference to the anonymity of terrorism.
The VoiceFollower module in Max allowed us to trigger automatically, in real time, up to a hundred cues within short intervals during the performance, with phoneme-level precision. This was an encouraging result for a young technology used during live theater performance. The principle is as follows: the VoiceFollower compares the incoming live speech (during the performance) to a pre-recorded reference. The reference carries markers corresponding to live processing actions, which are triggered as the matching positions are recognized. The VoiceFollower derives from the larger framework of gesture and continuous media following, previously applied to gesture data. In practice, the audio reference given to the VoiceFollower module can come from any source. This allows tight synchrony between the live performance and the electronic actions.
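The principle of mapping markers from a reference recording to a live take can be illustrated with a classical alignment technique. The sketch below uses offline dynamic time warping on 1-D feature sequences; this is a simplified stand-in for the VoiceFollower, which follows the live signal incrementally in real time, and the function names, feature representation, and hop size are all assumptions made for the example.

```python
import numpy as np

def dtw_path(ref, live):
    """Align two 1-D feature sequences with dynamic time warping.
    Returns, for each reference frame, a matched live frame
    (offline illustration of the following principle)."""
    n, m = len(ref), len(live)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(ref[i - 1] - live[j - 1])
            cost[i, j] = d + min(cost[i - 1, j - 1],
                                 cost[i - 1, j],
                                 cost[i, j - 1])
    # Backtrack from the end to recover the alignment path.
    i, j = n, m
    match = np.zeros(n, dtype=int)
    while i > 0 and j > 0:
        match[i - 1] = j - 1
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return match

def trigger_times(markers, match, hop_s=0.01):
    """Map marker frames in the reference to times in the live take."""
    return [match[m] * hop_s for m in markers]
```

A marker placed on a reference frame is thus translated into a time in the live performance, at which the corresponding electronic action can be fired; the real module does this continuously, so cues fall on the phoneme they were attached to even when the actor's timing drifts.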