Florian Hecker was an artist in residence at IRCAM, laureate of the 2016 Artistic Research Residency program. His research domain was the exploration of the compositional use of sound synthesis from the statistical descriptors that underpin the current view of sound texture perception and texture synthesis.
In collaboration with the Analysis/Synthesis team.
In this interview, we asked the artist Florian Hecker and the researcher Axel Roebel to speak about this experience.
Florian, could you describe Axel’s work in one sentence?
Axel works at the seamless intersection between concept, formula, algorithm and sound-generating systems – directly in this vital zone in which sound is synthesized, or analyzed and decomposed on a quasi molecular, atomistic scale.
Axel, could you describe Florian’s work in one sentence?
Florian is very open-minded and has a strong affinity for synthetic and noise-like sounds. He likes to investigate sound transformations, from the most subtle to the most dramatic, and to exploit these in his compositions.
How did you meet?
Florian: I’ve been aware of the work of the Analysis/Synthesis team ever since I was first shown around IRCAM by Atau Tanaka, in 1999 or 2000. In the fall of 2014 I was in Paris for a project at the newly built Fondation Louis Vuitton and, during a day off, had lunch with Markus Noisternig; at some point, Markus introduced us.
Did you write the residency application together?
What was the initial object of your residency?
‘We propose the exploration of the compositional use of sound synthesis from the statistical descriptors (auto-correlation function, cross-correlation function, skewness and kurtosis of the STFT magnitude bin signals) that are in the current view of sound texture perception and texture synthesis. These concepts will be ultimately dramatized in the form of a sound piece, featuring this novel approach of synthesis.’
In the end, did you achieve your goal, deviate from it, or go beyond it?
Axel: I think we pretty much achieved the goal. The main deviation is that we changed from an STFT representation to a perceptually more relevant one using perceptual filters. Beyond that, we established a new command-line utility that enables the analysis and resynthesis of sound textures, and includes the possibility of merging statistics from different sounds, or of scaling the statistics of one sound while imposing them on another. We ran quite a few tests with these statistics to investigate how changing them individually affects the synthesized sound, but the sound representation is very high-dimensional, so we did not really come close to any final understanding.
Axel, can you tell us more about what statistical sound synthesis is?
Statistical sound synthesis represents sounds using statistics of the energy evolution in the critical bands and modulation bands of the auditory system. During analysis, the signal is filtered into the critical bands, from which envelopes are calculated; these are subsequently subdivided into so-called modulation bands. For the resulting envelopes in all these bands and sub-bands we calculate statistics (mean, variance, skewness, kurtosis) as well as various correlations. It has been shown by Josh McDermott at MIT that when these statistics are measured from natural sound textures (like wind and rain), it is sufficient to impose them onto a white noise signal to produce a new sound that is perceptually similar – that is, one evoking the same environmental scene as the original. The interesting point here is that the parameters extracted from the sound constitute a parametric, perceptually relevant representation of noise signals, and the question was how we could make use of this representation for compositional purposes.
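The analysis step Axel describes can be sketched in a few lines of Python. This is a simplified illustration, not the actual IRCAM tool: a single band-pass filter stands in for one auditory critical band, and the four marginal statistics are computed on its amplitude envelope; the band edges and filter order are arbitrary choices for the example.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def band_envelope_stats(x, sr, band=(400.0, 800.0)):
    """Filter a signal into one band, extract its amplitude envelope,
    and compute the four marginal statistics used in texture models
    (mean, variance, skewness, kurtosis). Illustrative sketch only."""
    # Band-pass filter standing in for one auditory (critical) band
    sos = butter(4, band, btype="bandpass", fs=sr, output="sos")
    sub = sosfilt(sos, x)
    # Amplitude envelope via the analytic signal
    env = np.abs(hilbert(sub))
    mu = env.mean()
    var = env.var()
    skew = ((env - mu) ** 3).mean() / var ** 1.5
    kurt = ((env - mu) ** 4).mean() / var ** 2
    return mu, var, skew, kurt

# Two contrasting one-second test signals at 44.1 kHz: steady noise
# versus a sparse click train, whose envelope is far more skewed.
rng = np.random.default_rng(0)
noise = rng.standard_normal(44100)
clicks = np.zeros(44100)
clicks[::4410] = 1.0
```

A texture model measures these numbers in every critical band and modulation band; the sketch makes the basic intuition visible, namely that sparse, impulsive material yields much higher skewness and kurtosis than stationary noise.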
Florian, you have been using Axel’s code to synthesize musical sequences, could you give some examples?
From the start, I was curious to probe what such an algorithm might detect in sound material that does not classify as a texture in the conventional understanding. So I was preferably looking at sounds that were already synthetic – here meaning computer-generated, synthesized through various particle synthesis methods – a trek I have been following continuously in my works over many years. My other interest concerns the notion of texture statistics and the exchange of qualities amongst sounds – the imposition of a particular set of statistics from one sound onto another.
What timbral features might be added in the process, do new sounds emerge in doing this?
With this logic and the related deviation in mind, one rationale was to use the algorithm to produce an entire resynthesis of an existing sound piece. As source material – input, so to say – I chose the piece Formulation (2015), which is structured as a three-channel composition of 25 minutes' duration. Taking this three-channel arrangement as one guiding parameter, we worked on two texture imposition concepts – vertical and horizontal. Vertical means that the texture statistics of a specific timestamp in channel one are imposed on the same timestamp in channel two, and so on. Horizontal means that impositions take place within the same sound file but among different time points. For this, we divided one channel into a set of segments of short duration – around 20 seconds – analyzed these, and then proceeded with the vertical imposition. The structural progress of the imposition was sequential – from one segment to the next, so the statistics of segment one would be imposed on segment two, and so on. The resulting resynthesized versions – horizontal and vertical – were then arranged in one nine-channel piece – or preferably 3 x three channels – Formulation As Texture. It was premiered at IRCAM Live, Centre Pompidou, on 18 March 2017. The diffusion was static; each channel was attributed to one loudspeaker in the auditorium. The results of the different resynthesized versions were highly synchronized, yet, due to their different spectral content, they subdivided the volume of the auditorium into zones, each accentuating one particular resynthesized version.
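The two imposition schemes can be sketched as follows. Here `analyze` and `impose` are hypothetical stand-ins for the actual texture tool – they only transfer an RMS level, so that the segmentation logic runs end to end – whereas the real utility imposes the full set of texture statistics.

```python
import numpy as np

def analyze(seg):
    # Stand-in for texture analysis: just measure an RMS level
    return {"rms": float(np.sqrt(np.mean(seg ** 2)))}

def impose(stats, seg):
    # Stand-in for statistics imposition: rescale to the source RMS
    cur = float(np.sqrt(np.mean(seg ** 2))) or 1.0
    return seg * (stats["rms"] / cur)

def segment(channel, seg_len):
    return [channel[i:i + seg_len] for i in range(0, len(channel), seg_len)]

def vertical(ch_a, ch_b, seg_len):
    # Statistics of each segment of channel A imposed on the
    # segment at the same timestamp in channel B
    return np.concatenate([impose(analyze(a), b)
                           for a, b in zip(segment(ch_a, seg_len),
                                           segment(ch_b, seg_len))])

def horizontal(ch, seg_len):
    # Statistics of segment n imposed on segment n+1 of the same channel
    segs = segment(ch, seg_len)
    out = [segs[0]] + [impose(analyze(prev), cur)
                       for prev, cur in zip(segs, segs[1:])]
    return np.concatenate(out)
```

The sketch keeps the structure described above – fixed-length segments, same-timestamp transfer across channels, and sequential transfer within one channel – while leaving the actual statistics machinery abstract.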
Axel, how far did Florian influence the programming of your statistical sound synthesizer?
Florian had a strong impact on the facilities that allow one to manipulate the sound statistics when imposing them on new sounds. He also always pushed every parameter we imagined to the extreme to explore its effects; this requires, on the other hand, that the algorithm behave gracefully when pushed to its limits. Finally, we developed a visualization of the sound statistics for Florian, on the one hand to study the sounds, and on the other to support the documentation of his work. You can see some visualizations of the statistics of the sounds we used as part of his exhibition Synopsis (Tramway, Glasgow, UK, 26/5 – 30/7 2017) here: https://tinyurl.com/y8vepyoo.
In your collaboration, who’s the lead, the science or the art?
I don’t think leading is the correct term for this collaboration: as it is a collaboration, both sides contribute ideas and desires, but on different levels – science more on the side of how to do things, and art more on the side of proposing directions of exploration.
After the residency, are you still willing to collaborate?
We collaborated on the central piece of my recent exhibition Halluzination, Perspektive, Synthese at Kunsthalle Wien (17/11 2017 – 14/1 2018). Positioned in the centre of the exhibition space, Resynthese FAVN is an extensive elaboration of FAVN, which was presented at the Alte Oper in Frankfurt in 2016. FAVN is an abstract work that evokes issues surrounding late-19th-century psychophysics as well as Debussy’s Prélude à l’après-midi d’un faune, itself a musical adaptation of Stéphane Mallarmé’s L’après-midi d’un faune. Taking these coordinates as its starting point, Resynthese FAVN is the product of an in-depth analysis/resynthesis process that uses the entire sound material of the original work FAVN as input. Here Axel employed a different feature of the texture synthesis algorithm, namely imposing specific statistics of this input on a noise source. The result is eight different versions of FAVN – named Resynthese FAVN. These versions were presented at every full hour, in a sequence that gradually improved in refinement: the version that played at 11:00 h was significantly farther removed from the original than the version that played at 17:00 h, which was already much more polished in detail and closer to the original FAVN piece. The piece also featured a synthetic voice, which Christophe Veaux at the CSTR, The University of Edinburgh, developed for this piece; Axel resynthesized this in eight different variations using his PaN algorithm and concept. All this manifested in an entire resynthesis of FAVN on every scale.
What is for you the future of composing statistical music?
Axel: I don’t think the term statistical music is appropriate here. The effects that are employed work on the level of a statistical description, but the arrangement of these effects in Florian’s work is much more systematic than statistical. As for the underlying algorithms and their compositional use, for me personally it would be very interesting to continue investigating the morphing of the properties of different sounds, and to provide parameters that allow continuous changes – that is, continuously changing the sound that is created (whether for a musical composition or for the design of a sound scene for gaming or film environments).
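In its simplest form, the continuous morphing Axel mentions could be prototyped as a linear interpolation between two sets of texture statistics before imposition. The dict keys and numeric values below are made-up placeholders for illustration, not output of the real analysis tool.

```python
def morph_stats(stats_a, stats_b, t):
    """Interpolate two statistics dicts: t=0 gives A, t=1 gives B.
    Sweeping t over time would continuously change the target sound."""
    return {k: (1.0 - t) * stats_a[k] + t * stats_b[k] for k in stats_a}

# Hypothetical envelope statistics for two textures (placeholder values)
rain = {"mean": 0.2, "var": 0.01, "skew": 1.1, "kurt": 4.0}
fire = {"mean": 0.5, "var": 0.08, "skew": 2.3, "kurt": 9.5}
halfway = morph_stats(rain, fire, 0.5)
```

A real morphing scheme would also have to interpolate the correlation statistics consistently, but the principle of a single continuous control parameter is the same.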
Florian: The amalgamation of features of different sounds, the resistance of methodical and logic-driven routes in combination with the statistical description on a micro-structural level, coupled with dynamic parameter alterations – these are some of the treks that afford further investigation. To regulate and steer particular statistical moments, to halt their statistical development, in tandem with letting loose and dynamizing others, might facilitate a new formal/intuitive constellation.