IRCAM Forum Workshops Program – Brazil

At UNESP (São Paulo)

From 4th to 6th November 2015


Not yet registered? REGISTER NOW!

Highlights 2015:
– Hands-on workshops on IRCAM technologies, including Smart Instruments,
– Composition and sound processing master classes with composers Jérôme Combier and Flo Menezes,
– Special workshop on improvisation using interactive technology and including showcases by participants,
– Concerts featuring young IRCAM and Brazilian composers with live electronics,
– Presentations by scientists, designers and artists from IRCAM and from Latin American art/science actors.

And more…


Wednesday, November 4

Auditorium

Time Title Speaker(s)
8:00-9:00 Registrations
9:00-9:45 Welcome Arshia Cont, Paola Palumbo, Rogério Costa, Stéphan Schaub and Flo Menezes
9:45-10:30 From artistic discovery to artistic creativity and vice versa Arshia Cont
10:30-11:00 Break
11:00-11:45 Instrumental Acoustics: physical modeling and Smart Instruments Adrien Mamou-Mani
11:45-12:30 Improvisation Tools Mikhaïl Malt
12:30-2:00 Buffet
2:00-2:45 Real-Time Interaction Questions and Max Emmanuel Jourdan
2:45-3:30 Antescofo presentation Arshia Cont
3:30-4:00 Break
4:00-4:30
Computer-assisted music composition and analysis depend largely on the formalization of mathematical and algorithmic models, either to generate relevant musical parameters or to create computer programs whose outputs describe, as precisely as possible, the behavior of musical parameters of a given piece. In those cases, however, the composer, sound artist, or musicologist usually knows how the given procedure works before attempting to model it as an algorithm or computational process. We present a new OpenMusic library (currently in development) that automatically generates algorithms in the form of symbolic expressions describing, as accurately as possible, lists of data such as pitches, frequencies, onsets, and durations. The library uses genetic programming to implement symbolic regression and generates functions that can use not only Lisp operators but also other OpenMusic functions and extensions (library functions, for example).
José Padovani
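To give a rough sense of the symbolic-regression-by-genetic-programming process described above, here is a minimal sketch in Python (the actual library is written for OpenMusic/Lisp; the target list, operator set and evolution settings below are illustrative, not the library's):

    # Toy sketch (not the OpenMusic library itself): symbolic regression by
    # genetic programming, fitting an expression f(i) to a list of MIDI pitches.
    import random, operator

    OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}
    TARGET = [60, 62, 64, 66, 68, 70, 72, 74]   # hypothetical pitch list (whole-tone scale)

    def rand_tree(depth=3):
        """Random expression tree: nested tuples (op, left, right), 'i', or a constant."""
        if depth == 0 or random.random() < 0.3:
            return 'i' if random.random() < 0.5 else random.randint(0, 12)
        return (random.choice(list(OPS)), rand_tree(depth - 1), rand_tree(depth - 1))

    def evaluate(tree, i):
        if tree == 'i':
            return i
        if isinstance(tree, int):
            return tree
        op, left, right = tree
        return OPS[op](evaluate(left, i), evaluate(right, i))

    def fitness(tree):
        return sum(abs(evaluate(tree, i) - v) for i, v in enumerate(TARGET))

    def mutate(tree):
        """Replace a random subtree with a fresh random one."""
        if isinstance(tree, tuple) and random.random() < 0.7:
            op, left, right = tree
            if random.random() < 0.5:
                return (op, mutate(left), right)
            return (op, left, mutate(right))
        return rand_tree(2)

    population = [rand_tree() for _ in range(200)]
    for generation in range(100):
        population.sort(key=fitness)
        if fitness(population[0]) == 0:
            break
        survivors = population[:50]
        population = survivors + [mutate(random.choice(survivors)) for _ in range(150)]

    print(fitness(population[0]), population[0])   # e.g. ('+', ('*', 'i', 2), 60)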
4:30-5:00
The presentation discusses three real-time DSP tools adapted and/or implemented in Max and used in more than one of my musical works over the last fourteen years. These tools are therefore permanently embedded in the set of technical-expressive elements gathered for the planning and realization of a musical composition for acoustic instruments and electronic means. The three tools are dedicated to the expansion of instrumental sounds and may be described as: artificial prolongation of sounds, sound processing by means of FM synthesis parameters, and 1/4-octave spectral filtering. This presentation will focus on the basic technical description of each tool, illustrated by several musical examples. Some of these tools and works were also discussed in conference papers published in 2003, 2005, 2006, 2008, 2012 and 2015.
In 2001 I decided to explore long notes in a piece for cavaquinho and live electronics entitled “cvq”. After experimenting with different techniques in the time and frequency domains, I finally became satisfied with the sonorities provided by an adaptation of a granular algorithm based on quasi-synchronous overlapping of large grains (500 to 800 ms long). The sounds to be prolonged are recorded in real time into a buffer after the pressing of a pedal. To achieve the long tones, this pressing must happen just after the attack; on the other hand, alternative triggering moments can provide rougher sonorities.
A few years later (2005), I implemented a basic version of FM synthesis using variable delay lines. This opened the way to substituting real audio for fixed oscillators. With adequate filtering, estimation of fundamental frequencies and avoidance of transients, it is possible to apply the well-known parameters of FM synthesis (harmonicity and modulation index) to instrumental and vocal sonorities. Only after the first steps in the development of this algorithm did I become acquainted with the phase modulation technique, which represents a more adequate concept for what I was doing. The basic implementation is very simple: a delay line with a variable sinusoidal rate (equal to the modulating frequency) and a depth given by the formula I/(2*pi*fc), where I is the modulation index and fc the carrier frequency (the fundamental frequency of the audio input). Heuristic approaches are necessary when dealing with more complex (recursive) algorithms.
The last DSP tool rearranges the results of a 2048-point FFT analysis into 1/4-octave ranges. The resolution of this filtering process does not reach the two lowest octaves (ca. 23-92 Hz), but the remaining eight octaves can be adequately split. In this way, the entire spectral range is divided into 36 channels. Depending on the tessitura, it becomes possible to isolate the lower harmonics of an instrument (up to the 7th harmonic) or a restricted set of higher harmonics.
Besides “cvq”, other pieces with live electronics will be presented, some of them exploring more than one of these DSP tools. “Anamorfoses” (2007, for vibes and gongs) makes use of long tones and inharmonic FM-like sonorities. “Ciclofrênicas” (2007, for flute, by S. Rodrigo) presents a canonic section built with woodwind-like sounds based on Chowning’s 1973 instruments. The initial section of “.lá..” (2012, for flute, clarinet, violin and cello) is composed of diverse phrases based on harmonicity factors expressed by successively increasing prime numbers. In the central section of this quartet, each instrument (flute, clarinet, violin, cello) plays over a texture created by its own sounds, before entering a section where a choir of up to 10 voices is built. “desfiar” (2011, for cello) begins with a long recitativo, where six delay lines – with feedback – build a texture of several harmonics filtered from different sounds played in the last phrases. In “5 elementos” (2013, for variable ensemble, by L. Souza), the frequency modulation of a recorder and a voice is also explored. “Kandinsky Sonoro” (2015, a collective creation by the ensemble klang) has sonorities based on both FM processing and 1/4-octave filtering.
Sergio Freire (UFMG)
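For readers who want to experiment with the delay-line phase modulation described above, here is a minimal numpy sketch of the same idea; a synthetic tone stands in for the live instrument input, and all parameter values are illustrative rather than taken from the pieces:

    # A delay line whose delay varies sinusoidally at the modulating frequency,
    # with depth I/(2*pi*fc) as in the abstract above.
    import numpy as np

    sr = 44100
    t = np.arange(int(sr * 2.0)) / sr           # 2 seconds

    fc = 220.0                                  # estimated fundamental (carrier), Hz
    harmonicity = 1.5                           # fm = harmonicity * fc
    index = 3.0                                 # modulation index I
    fm = harmonicity * fc
    depth = index / (2 * np.pi * fc)            # delay modulation depth, in seconds

    x = np.sin(2 * np.pi * fc * t)              # stand-in for the recorded instrument

    d = depth * (1.0 + np.sin(2 * np.pi * fm * t))      # time-varying delay >= 0
    max_delay = int(np.ceil(2 * depth * sr)) + 2
    buf = np.concatenate([np.zeros(max_delay), x, np.zeros(2)])

    read = np.arange(len(x)) + max_delay - d * sr        # fractional read positions
    i0 = np.floor(read).astype(int)
    frac = read - i0
    y = (1 - frac) * buf[i0] + frac * buf[i0 + 1]        # linear interpolation

    # y now carries FM/PM-like sidebands around fc, spaced by fm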
9:00-11:00 Concert: IRCAM / Camerata Aberta – SESC

Thursday, November 5

Parallel Sessions 

Time Title Speaker(s)
9:00-10:30 Composition Master-Class Jérôme Combier
10:30-11:00 Break
11:00-11:30 Hybrid environments of collective creation: Composition, Improvisation and Live Electronics Rogerio Costa, Alessandra Bochio, Felipe Merker Castellani
11:30-12:00
Our purpose is to describe the instrument used in the composition of the orchestra that performs “Elective Affinities,” a work for computer. The orchestra is made up of instruments written in Max 7 (version 7.0.1) that are based on the principle of additive synthesis. However, instead of the classical procedure, which directly adds sound waves produced by different oscillators, we first set a limited frequency band and then extract, from the limits of this band, the group of frequencies or partials composing the sound.
The instrument designed for this procedure has a frequency band delimited by two “function” objects. The band changes over time according to the graphs inserted in the two “function” objects: one determines the evolution of the low-frequency limit, the other the evolution of the high-frequency limit. The data generated by the two “function” objects are sent to the “vs.explist” object, developed by Maurizio Giri and available with Electronic Music and Sound Design (CIPRIANI, A. and GIRI, M., Electronic Music and Sound Design: Theory and Practice with Max and MSP, Vol. 1, Rome: ConTempoNet, 2014). The “vs.explist” object receives the number of elements (in this case, partials), the lower limit (low-frequency limit), the upper limit (high-frequency limit) and the exponent, which determines how the frequency band established by the two limits is divided. According to the given exponent, the division of the band can be linear (1), exponential (> 1) or logarithmic (< 1).
Since the frequency limits evolve according to the graphs inserted into the two “function” objects, producing large changes in the distance between them, an object was added that determines the number of partials concurrently, as a function of the evolution of the band limits. The data generated by the two “function” objects are fed into a subtraction operation by the “- 1” object. The result of the subtraction is then divided by the “/ 55” object. This result is used to: 1. determine the number of elements, or partials, for the “vs.explist” object; 2. activate the algorithm that assigns an intensity to every partial; 3. determine, through the “!/ 1” object, the amplitude of the sound texture; 4. set, with the “zl group” object, the size of the group that gathers the intensities of all partials, which is then interleaved, through the “zl lace” object, with the group that gathers the partials in equal number. The data assembled by the “zl lace” object are sent to the “ioscbank~” object, then to the algorithm that determines the envelope, and finally to the algorithm that determines the spatial position of the sound texture, taking the stereo distribution as reference. After this last algorithm, the resulting sound texture is routed to the loudspeakers, where it interacts with other textures, unfolding the process that constitutes the musical form. The information for the various objects of the instrument is transmitted via message boxes, each set of boxes composing a specific texture.
We call attention to the fundamental characteristic of the instrument: the number of partials is determined as a function of the distance between the low- and high-frequency limits given by the two “function” objects. In this process, the choice of the divider “55” to determine the number of partials, independently of the frequencies that set the low and high limits, is intended to insert into the different sound textures a “ghost” of the 55 Hz fundamental frequency.
“Elective Affinities”, inspired by the homonymous work by Goethe, joins the efforts of many composers of electronic music, including digital music, to constitute an aesthetic based on new technological resources. This aesthetic has little or no relation to the aesthetic legacy of tradition, even of recent productions, apart from addressing the same perceptive organ, the ear. Perhaps the products grouped under this aesthetic are not musical works. But they are, certainly, sound art.
Flávio Pereira (Universidade de Brasilia – UnB)
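The core arithmetic of the instrument (partial count derived from the band width divided by 55, and exponential division of the band between the two evolving limits) can be sketched outside Max; the Python/numpy fragment below is only an illustration of that logic, with placeholder curves standing in for the graphs drawn in the two “function” objects:

    import numpy as np

    sr = 44100
    dur = 4.0
    n_frames = 100                                   # control-rate frames
    frame_len = int(sr * dur / n_frames)

    def explist(n, low, high, expo):
        """n values from low to high with exponential (expo > 1) spacing."""
        u = np.linspace(0.0, 1.0, n)
        return low + (high - low) * u ** expo

    # evolving band limits (stand-ins for the two "function" objects)
    low_curve = np.linspace(110.0, 220.0, n_frames)
    high_curve = np.linspace(880.0, 3520.0, n_frames)

    out = np.zeros(int(sr * dur))
    phase_t = np.arange(frame_len) / sr              # block-wise: phases restart each frame
    for k in range(n_frames):
        low, high = low_curve[k], high_curve[k]
        n_partials = max(1, int((high - low) / 55.0))   # the "/ 55" division
        freqs = explist(n_partials, low, high, expo=2.0)
        amps = 1.0 / np.arange(1, n_partials + 1)       # simple decreasing intensities
        frame = np.zeros(frame_len)
        for f, a in zip(freqs, amps):
            frame += a * np.sin(2 * np.pi * f * phase_t)
        out[k * frame_len:(k + 1) * frame_len] = frame / n_partials

    # "out" holds the evolving additive texture; the "/ 55" choice ties the partial
    # count to the bandwidth in 55 Hz steps (the abstract's 55 Hz "ghost")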
12:00-12:30
The demo proposal is for a Pd patch that combines a sampler, harmonizer, phase vocoder (time stretch/pitch shift), autotuner and granulator. The sampler records or plays mono files and runs two phase vocoders with a single tempo transformation but with independent pitch shifters, giving two independent voices: a “root” and a “harmony”. The patch also allows autotuning to any loaded musical scale and has a built-in scale generator based on equal divisions. Moreover, the playing capabilities allow for automatic looping, bouncing, freezing and selection of audio slices, alongside a granulator and random playing settings. The patch is very user-friendly: you can load and store 40 presets and switch between them to change the parameters on the fly.
Alexandre Torres Porres
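As a small aside, the “scale generator based on equal divisions” mentioned in the abstract boils down to dividing an interval ratio into n equal steps; a toy sketch of that idea, with illustrative values (not the patch's actual code), might look like this:

    # Divide an interval (here an octave, ratio 2) into n equal steps and list
    # the resulting frequencies, then snap an input frequency to the scale.
    def equal_division_scale(base_freq=261.63, divisions=12, interval_ratio=2.0, octaves=2):
        step = interval_ratio ** (1.0 / divisions)
        return [base_freq * step ** k for k in range(divisions * octaves + 1)]

    scale = equal_division_scale(divisions=19)     # e.g. 19 equal divisions of the octave

    def autotune(freq, scale):
        """Snap an estimated input frequency to the nearest scale degree."""
        return min(scale, key=lambda s: abs(s - freq))

    print(autotune(300.0, scale))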
12:30-1:30 Buffet
1:30-2:00
Skin Voice is an instrument-system-performance. It is a system that uses the conductivity of the skin to engage sound identities through a machine-learning process. The body's electrical conductivity is the substrate for generating different voltage readings, which are assigned to different transformations of the voice. As the performer touches specific parts of the body painted with conductive ink, voice effects are triggered.
The performance tries to bring the dissociated voice back to the body. It also tries to promote a sense of grasping the intangible sound. Skin embodies the voice. It engages the effort of touching and pressing in interaction with the physical effort of voice production. The system promotes the machine status of the voice, blending speech and musical instrument.
The system is made possible by the work of smaller subordinate systems, which include: an analogue voltage-reading circuit, conductive ink, an Arduino board, Arduino code, the Wekinator machine-learning system, a Max/MSP patch and a microphone input signal.
Alessa Camarinha
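The signal path listed above (skin voltage → Arduino → machine-learning mapper → voice processing) can be illustrated with a short sketch; the version below uses Python with pyserial and python-osc purely as stand-ins, and the serial port, OSC port and address are assumptions, not the actual Skin Voice configuration:

    # Read a skin-conductivity voltage from an Arduino over serial and forward it
    # as an OSC feature vector to a machine-learning mapper (Wekinator in the
    # actual system). Port and address values here are assumed for illustration.
    import serial                                   # pyserial
    from pythonosc.udp_client import SimpleUDPClient

    arduino = serial.Serial('/dev/ttyACM0', 9600, timeout=1)   # hypothetical serial port
    osc = SimpleUDPClient('127.0.0.1', 6448)                   # assumed mapper input port

    while True:
        line = arduino.readline().decode(errors='ignore').strip()
        if not line:
            continue
        try:
            reading = int(line)                     # 0-1023 from analogRead on the Arduino
        except ValueError:
            continue
        conductivity = reading / 1023.0             # normalize before sending
        osc.send_message('/wek/inputs', [conductivity])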
2:00-2:30
SuperCopair is a new way to integrate cloud computing into a collaborative live coding scenario with minimal setup effort. The package, created in CoffeeScript for Atom.io, is developed to interact with SuperCollider and give the crowd of online live coders opportunities to collaborate remotely on distributed performances. Additionally, the package provides the advantages of the cloud services offered by Pusher. Users can share code and evaluate lines or selected portions of code on computers connected to the same session, whether in the same place or remote. The package can be used for remote performances or rehearsals with just an Internet connection to share code and sounds. In addition, users can take advantage of code sharing to teach SuperCollider online or to fix bugs in an algorithm.
Antonio de Carvalho (Universidade de São Paulo)
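The following sketch is not the Pusher/CoffeeScript implementation of SuperCopair; it only illustrates the underlying idea of a session in which code typed on one machine is rebroadcast to every connected peer for local evaluation (e.g. by forwarding to SuperCollider). Host, port and protocol are hypothetical:

    # Minimal relay: each code snippet received from one peer is sent to all others.
    import socket
    import threading

    clients = []
    lock = threading.Lock()

    def handle(conn):
        with conn:
            for chunk in iter(lambda: conn.recv(4096), b''):
                with lock:                           # rebroadcast to every connected peer
                    for other in clients:
                        if other is not conn:
                            other.sendall(chunk)
        with lock:
            clients.remove(conn)

    def serve(host='0.0.0.0', port=57200):           # hypothetical session relay
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, _ = srv.accept()
            with lock:
                clients.append(conn)
            threading.Thread(target=handle, args=(conn,), daemon=True).start()

    if __name__ == '__main__':
        serve()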
2:30-3:00
In this presentation we intend to approach the perceptual discontinuities related to electronic sound treatments such as delay and microtemporal decorrelation (KENDALL, 1995; VAGGIONE, 2002; SÈDES, 2015). The theoretical basis of this work is René Thom's catastrophe theory (1976), which can be thought of as a perceptual theory based on morphological accidents (critical points) linked to continuous processes. Our objective is to indicate the critical points of these temporal electronic treatments applied to a sound source, addressing temporal values from 1 to 200 ms. These points indicate where there is a qualitative leap in our perception, which passes from timbre modulations (phaser, flanger) to periodic repetitions in space and time. For this purpose, we work in Max with objects from the HOA Library (High Order Ambisonics Library), developed by the CICM of Université Paris 8, and with AudioSculpt, to visualize the temporal and spectral result of the treated sound.
Danilo Rosseti (Universidade Estadual de Campinas/CICM Université Paris 8)
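A quick worked example of why these delay values behave so differently perceptually: summing a signal with a copy delayed by d seconds produces comb-filter notches spaced 1/d Hz apart, so the colouring is heard as timbre for millisecond delays and gives way to discrete repetition toward the upper end of the 1-200 ms range. The figures below are simple arithmetic, not measurements from the presentation:

    # Notch spacing of a feedforward comb filter (signal + delayed copy) for
    # representative delays between 1 and 200 ms.
    for d_ms in [1, 5, 10, 20, 50, 100, 200]:
        d = d_ms / 1000.0
        notch_spacing = 1.0 / d                        # Hz between comb-filter notches
        notches_below_5k = int(5000.0 / notch_spacing)
        print(f"{d_ms:>3} ms delay: notches every {notch_spacing:7.1f} Hz "
              f"({notches_below_5k} notches below 5 kHz)")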
3:00-3:30
MI-IM is a musical interface that interacts with a meditator's affective states. The core of the MI-IM project is to have the meditator influence the behavior of the musical processes while being influenced by them: the music generated by the affective state of the meditator produces a feedback loop that entrains that affective state. The interface requires the use of an electroencephalogram (EEG) device. The final aim is to realize a kind of musical experience in which the boundary between listener and sound is progressively blurred and, in the end, cancelled.
MI-IM is an algorithmic music generator designed to interact with a meditator through an EEG device. The structure of the musical algorithm can be divided into two main components: a pattern generator and a sound generator. Both components are derived from the simple principle of objects bouncing in an enclosed space. The musical textures derive their densities from the number of objects relative to the size of the space they move in. The dynamic qualities of the textures depend on the speeds at which the objects move relative to the size of the space (i.e. many objects moving in a small space result in denser, more active textures, and vice versa).
The spatial dimension is musically translated into units of time, so that each time unit is the temporal representation of an enclosed space. At each successive temporal representation of the space, the objects shift. Objects are musically represented by sound events. These events have a size (duration) and a speed (the variance of their relative position within the time unit).
The advantage of using a physical simulation to generate musical textures comes from the idea of linking increases or decreases in brain activity to increases or decreases in the dynamic qualities of the physical environment. In this way, the musical texture's generative process is directly connected to the EEG readings.
At the current state of the research, the way the patch reacts to the EEG data feed is based on the tentative idea that increasing levels of complexity in the sound texture are a sort of “reward” given to the listener for increased levels of meditation.
Michele Zaccagnini
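A minimal sketch of the bouncing-objects pattern generator described above; the EEG input is replaced by a fixed stand-in value, and all numbers are illustrative rather than taken from MI-IM:

    # Objects bounce inside an enclosed 1-D space; each time unit is one snapshot
    # of that space, and each object's position becomes a sound-event onset.
    import random

    def generate_pattern(n_objects=8, space=1.0, speed=0.15, time_units=4, eeg_level=0.5):
        # higher brain activity -> faster objects -> denser, more active texture
        speed = speed * (0.5 + eeg_level)
        positions = [random.random() * space for _ in range(n_objects)]
        velocities = [random.choice([-1, 1]) * speed for _ in range(n_objects)]
        pattern = []
        for unit in range(time_units):
            for i in range(n_objects):
                positions[i] += velocities[i]
                if positions[i] < 0 or positions[i] > space:     # bounce off the walls
                    velocities[i] = -velocities[i]
                    positions[i] = min(max(positions[i], 0.0), space)
            # positions within the space -> onsets within the time unit (0..1)
            onsets = sorted(p / space for p in positions)
            pattern.append([(unit + o, 0.1) for o in onsets])    # (onset time, duration)
        return pattern

    for unit_events in generate_pattern(eeg_level=0.8):
        print([round(t, 2) for t, _ in unit_events])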
3:30-4:00 Break
4:00-4:30
The Garden/Ο Κύπος/O Jardim is a composition for two amplified snare drums, amplified shekeré and live electronics. During the allotted presentation time, problems and solutions will be explained and expanded upon, along with a live demonstration of how the piece works and of how the interaction between performer and computer transpires in each movement. The Garden/Ο Κύπος/O Jardim was written in May 2015 for percussion great Patti Cudd. She premiered the piece at the Universidade Federal de Rio Grande do Norte, Natal, Brazil, in June 2015, during a visit funded by her university to spread interest and knowledge in the area of new music, especially in relation to the use of the computer in music. She performed a workshop and a concert, during which she premiered the piece in question.
The Garden/Ο Κύπος/O Jardim is a piece which investigates the use of text as rhythm. Out of philosophical interest, the composer chose the words of Epicurus, settling on two texts from his ancient writings. These two texts were translated into English, Modern Greek and Portuguese, and used as the rhythmic basis for the piece. The work is divided into three movements, each one focusing on a different interaction with the computer. The score does not use traditional music notation, but instead lets the performer understand the use of the words through recordings and phonetic spellings. The phonetic spellings are placed on music-staff-like lines, and the performer is given an approximate time-goal for the execution of each line. All three movements use a piezoelectric microphone on the second snare drum to interact with the computer. The first snare drum and the shekeré are each amplified with dynamic microphones and the signal taken directly to the main mixer. The program used for the live electronics was Pure Data.
During the first movement, whenever the score calls for the second snare drum to be hit, a multi-level automated sequence is triggered, and when the snare is hit again, the sequence is turned off. The music in this sequence is made via subtractive synthesis. It has many layers: the first is a repetition of 17 notes, the second a sequence of 6 durations, another layer is made of 7 different filter levels, yet another of 5 dynamics, and finally a sequence turning an echo and reverb effect on and off completes the final layer. Because each layer of the sequence has a different number of iterations, the repetition of the full sequence takes many days to finish. This guarantees that as the sequence is turned on and off by the snare drum, it will be in a different section each time; while there will be similarities, the result has a fractal, self-similar, yet non-repetitive quality.
The second movement of the piece deals with the second snare drum in a different way. In this movement, the actual audio signal is used as material for processing. An automated and randomized process evolves during each stroke of the second snare, creating swirlings and rumblings of sound. This is then juxtaposed with the unprocessed state of the first snare drum and shekeré.
Finally, in the last movement, the performer is instructed to fully improvise using the first snare and shekeré. At specific intervals, the second snare drum is again used as a trigger, but this time it steps through a randomized sequence of samples. The samples are recordings of the composer reading words and phrases from the three translations of Epicurus. However, most of the recognizable speech has been processed out through equalization, leaving mostly high frequencies in order to bring out the rhythmic aspect of the speech. This climaxes at a frenetic rate and is then slowed down until the piece finishes with one last quiet movement after the last sample has died away.
Heather Dea Jeannings (Universidade Federal de Rio Grande do Norte)
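The near-non-repetition of the first movement's layered sequence follows from the layers cycling independently, so the combined state only repeats after the least common multiple of the individual layer lengths. A tiny sketch, with placeholder layer contents rather than the piece's actual material (Python 3.9+ for math.lcm):

    from math import lcm

    layers = {
        'note':     list(range(17)),                  # 17-note pitch cycle
        'duration': [1, 2, 3, 4, 6, 8],               # 6 durations
        'filter':   list(range(7)),                   # 7 filter levels
        'dynamic':  ['pp', 'p', 'mp', 'mf', 'f'],     # 5 dynamics
        'effect':   ['off', 'on'],                    # echo/reverb toggle
    }

    period = lcm(*(len(v) for v in layers.values()))
    print('combined cycle repeats only after', period, 'steps')

    def state(step):
        """Combined layer state at a given step of the sequence."""
        return {name: vals[step % len(vals)] for name, vals in layers.items()}

    # each snare-drum trigger resumes the sequence somewhere else in its long cycle
    for step in (0, 100, 1000):
        print(step, state(step))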
4:30-5:00
A project based on digital visual music, soundscape improvisations and generative graphics.
Public expectations are deceived by the layered structure of sounds, which suddenly goes from moments of gestural texture to total silence. The audio portion is connected to a reactive visual environment with its own behavior, transformations and structured configurations of softly flickering skeletal and chaotic forms, forming a synaesthetic system that activates sounds.
Renzo Filinich (Comunidad Electroacústica de Chile)

Time Title Speaker(s)
11:00-12:30 Hands-on Computer Assisted Improvisation Mikhaïl Malt
12:30-1:30 Buffet
1:30-3:30 Hands-on Computer Assisted Improvisation Mikhaïl Malt
3:30-4:00 Break
4:00-5:00 “Jam” Session Adrien Mamou-Mani and Mikhaïl Malt

Time Title Speaker(s)
11:00-12:30 Hands-on Workshop Smart Instruments Adrien Mamou-Mani
12:30-1:30 Buffet
1:30-3:30 Hands-on Workshop Smart Instruments Adrien Mamou-Mani

9:00-11:00 Concert: IRCAM / Studio PANaroma – SESC

Friday, November 6

Parallel Sessions

Time Title Speaker(s)
9:00-10:30 Composition Master-Class  Flo Menezes
10:30-11:00 Break
11:00-12:00 Computer Music Interpretation in Practice Serge Lemouton
12:00-12:30 Question & Answer session Flo Menezes and Jérôme Combier
12:30-1:30 Buffet
1:30-5:00 Hands-on Workshop Max Emmanuel Jourdan
5:00-6:15 Coffee / Buffet

Time Title Speaker(s)
1:30-4:30 Hands-on Workshop Antescofo Arshia Cont
4:30-5:00 Hands-on Workshop Orchids Serge Lemouton
5:00-6:15 Coffee / Buffet

6:30-7:30 Presença, ausência e memórias: Concert of Liv(r)e Electronics – in the Auditorium

Last Update: October 19th, 2015