May 14, 2018 - May 16, 2018
From Monday, May 14 through Wednesday, May 16 at IRCAM, the final training program of the 2017-18 season is dedicated to motion and physiological sensors. Frédéric Bevilacqua (head of the Sound Music Movement Interaction team), Emmanuel Flety (head of the Prototype and Engineering Pole), and Marco Liuni (computer music designer/professor) offer the opportunity to learn about sonification by programming interfaces connected to these two types of sensors. The composer Emanuele Palumbo will also share his experience on a subject he knows well!
Marco Liuni and Emanuele Palumbo give us a few more details about this program intended for composers, musicians, performers, teachers, and sound designers ready to learn more about motion capture.
Marco, can you tell us a little bit about the types of sensors you’ll be looking at during the training?
M: The class will concentrate on two types of sensors: motion sensors (in particular accelerometers, gyroscopes, and magnetometers), which make it possible to estimate orientation and movement, and physiological sensors, which provide data connected to heart rate, breathing, and electrodermal response (variation in the electrical properties of the skin).
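As a rough illustration of how orientation can be estimated from such data, here is a minimal sketch (our own example, not part of the course material) that computes pitch and roll angles from a single static 3-axis accelerometer reading, assuming gravity dominates the signal; the sensor values and sign conventions are assumptions.

```python
import math

def tilt_from_accel(ax, ay, az):
    """Estimate pitch and roll (in degrees) from one 3-axis accelerometer
    reading (values in g), assuming the sensor is static so that gravity
    is the only acceleration measured."""
    pitch = math.degrees(math.atan2(ax, math.sqrt(ay**2 + az**2)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Sensor lying flat: gravity entirely on the z axis.
print(tilt_from_accel(0.0, 0.0, 1.0))
```

In practice, accelerometer, gyroscope, and magnetometer data are combined by a sensor-fusion filter to get a stable orientation estimate; this static-tilt formula is only the simplest special case.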
This program also offers the opportunity to study the design and realization (in Max) of a system for the sonification of data acquired through programmable interfaces. What exactly is a “system for sonification”?
M: It is a process of associating a sound with an object or phenomenon that does not itself produce that sound. In our case, the sound is the result of using data captured by sensors to control Max patches for audio synthesis or transformation.
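The core of such a mapping can be sketched outside Max as well. The following minimal Python example (our illustration, not the course's actual patches, with arbitrary ranges) linearly maps a sensor reading onto a synthesis parameter such as oscillator frequency, much like chaining Max's scaling and clipping objects:

```python
def map_range(value, in_min, in_max, out_min, out_max):
    """Linearly map a sensor reading into a synthesis parameter range,
    clamping the result to the output bounds."""
    if in_max == in_min:
        return out_min
    t = (value - in_min) / (in_max - in_min)
    t = max(0.0, min(1.0, t))  # clamp to [0, 1]
    return out_min + t * (out_max - out_min)

# Map an accelerometer magnitude (0..2 g) to an oscillator
# frequency (100..1000 Hz).
freq = map_range(1.0, 0.0, 2.0, 100.0, 1000.0)
print(freq)  # -> 550.0
```

Real sonification systems add smoothing, non-linear mapping curves, and feature extraction on top of this basic step, but the principle of data-to-parameter mapping stays the same.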
What interfaces will be studied during the training?
M: Arduino and R-IoT.
Emanuele, what types of sensors have you already used in your work as a composer?
E: For my works Artaud Overdrive (festival ManiFeste-2016) and Voicing the Listening (Forum Workshops 2018), I used PPG sensors and a respiration sensor. I handled the design of the system and also worked on the Arduino code and on the Max patch used in Ableton Live for the synthesis.
What will you present to the participants during your session Friday afternoon?
E: Physiological sensors (PPG sensors for heart rate monitoring, a respiration sensor, and an electrodermal response sensor), all hooked up to the LISTEN system, built with open-source software and hardware on Arduino boards.
I want to discuss with the participants the musical significance of physiological data and their intrinsic musicality (breathing, heartbeats), and to share my experience with them on how to successfully integrate data capture into a piece of music.
We will start by addressing the musical use of data and everyone will be able to create their own sounds on their own computer. If the participants are interested, we can go on to another technical level (overview of Arduino code and the LISTEN system).
From a more pragmatic point of view, what basic knowledge do you need to have in signal processing and in Max to take this class?
M: Participants will use advanced systems for the acquisition and digitization of data that communicate with Max patches for their sonification. The focus will be on the interfaces and on techniques for signal processing and mapping of the data.
The necessary concepts belong to digital audio signal processing, and therefore to sampling and basic manipulation of digitized data. The sonification part requires experience in programming interactive audio-processing environments in Max.
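To make concrete what sampling and basic manipulation of digitized data involve, here is a minimal sketch (ours, with arbitrary parameters, unrelated to the course's own exercises) that samples a sine wave at a given rate and applies a gain:

```python
import math

def sample_sine(freq_hz, sr_hz, n_samples, gain=1.0):
    """Sample a sine wave of frequency freq_hz at sample rate sr_hz,
    scaling each sample by gain (a basic digital manipulation)."""
    return [gain * math.sin(2 * math.pi * freq_hz * n / sr_hz)
            for n in range(n_samples)]

# Four samples of a 440 Hz tone at 44.1 kHz, at half amplitude.
samples = sample_sine(440.0, 44100.0, 4, gain=0.5)
```

Understanding this relationship between sample rate, frequency, and amplitude is the kind of background the class assumes before moving on to sonification in Max.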
After this training program, what will the participants be able to do in the arts?
E: They will be able to realize their artistic ideas in a piece or an installation using motion or physiological sensors.
M: I’d add that participants are also eligible for discounts if they want to buy the systems used during the class, notably 25% off R-IoT and Bitalino products.
Interview by Cyrielle Fiolet