
Abstracts


Aaron Einbond

CaMu: CataRT and MuBu for Composition

Abstract :
Diemo Schwarz, Christopher Trapani, and Aaron Einbond present an update on their 2018 Unité Projet Innovation (UPI) “CataRT for Composition.” The goal of the project is to further develop the technique of corpus-based concatenative synthesis as a tool for new musical creation, by extending the functionality and documentation of the reference CataRT implementation in the context of the IRCAM Forum package MuBu for Max. The results include a series of new interoperable modules and step-by-step tutorials, adapted into a flexible system for musical composition and interaction using corpus-based synthesis on large audio databases. Taking advantage of techniques drawn from Music Information Retrieval (MIR), creative applications include some of the most prominent topics in recent composition: audio transcription, mosaicking, timbral composition, and granular spatialization.
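
As a concrete illustration of the technique, here is a minimal Python sketch of corpus-based concatenative synthesis (a simplification for this text, not the CataRT/MuBu code; the descriptor set and grain length are invented): corpus grains are described by audio features, and for each target frame the nearest grain is selected.

```python
# Minimal sketch of corpus-based concatenative synthesis (illustrative).
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical corpus: 500 grains, each described by three descriptors
# (e.g. pitch, loudness, spectral centroid), normally produced by MIR analysis.
corpus_features = rng.random((500, 3))
corpus_grains = [rng.standard_normal(2048) for _ in range(500)]  # audio grains

def select_grain(target_features):
    """Return the corpus grain whose descriptors best match the target."""
    dists = np.linalg.norm(corpus_features - target_features, axis=1)
    return corpus_grains[int(np.argmin(dists))]

# Resynthesize a target by concatenating the best-matching grain per frame.
target = rng.random((100, 3))     # e.g. frame-wise analysis of an input sound
output = np.concatenate([select_grain(f) for f in target])
```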

Schedule: Day 2- Thursday 28th - 9:30am to 10am  STRAVINSKY ROOM

Alain Bonardi

Les Songes de la nef

Les Songes de la nef is a sound installation for a set of loudspeakers presenting fixed electroacoustic music (88 min. duration). The synthesized sounds explore complex bell models developed from those proposed by Jean-Claude Risset. The installation is adapted to the venue that hosts it and reveals its architecture in a new way; the underlying idea of a “prise de site” pays tribute to the philosopher Jean-Louis Déotte. The technical aspects of the work are handled by Quentin Nivromont for the Ateliers des Lutheries Numériques.

http://www.alainbonardi.net/songes/

 

Schedule: Day 2- Thursday 28th - 5pm to 5:30pm  at STUDIO 5

Alexandros Spyrou

Computer-assisted liquidity

Abstract :
Material and form in contemporary music have reached a threshold of disintegration on which they can no longer be defined as solid. The decomposition of the fabric of identity suggests the need for new conceptual tools which can address its elusive condition. But how can an elusive identity exist and operate within a musical composition? With the assistance of bach family objects (bach, cage, dada) in Max, I generate material which is never solid, but rather in a constant state of becoming. Working with multivariate interpolation of several parameters of music material, I propose the concept of “liquid identity” as a “modus essendi”, and “morphallaxis” as a “modus operandi”.
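
A minimal sketch of the kind of multivariate parameter interpolation described above (my illustration in Python, not the author's bach patch; the parameter set is hypothetical):

```python
# Illustrative multivariate interpolation between two parameter states,
# producing material that is always "in between", never a fixed identity.
import numpy as np

# hypothetical musical parameters: (pitch, duration, dynamic, brightness)
state_a = np.array([60.0, 1.00, 0.2, 0.3])
state_b = np.array([67.0, 0.25, 0.9, 0.8])

def interpolate(a, b, t):
    """Linear interpolation of all parameters at position t in [0, 1]."""
    return (1 - t) * a + t * b

# a gesture drifting from one identity toward the other
for t in np.linspace(0.0, 1.0, 5):
    print(np.round(interpolate(state_a, state_b, t), 2))
```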

Schedule: Day 2- Thursday 28th - 10am to 10:30am  at STRAVINSKY ROOM 

Alireza Farhang

Chuchotements burlesques 

2017-18 Artistic Research Residency

Traces of Expressivity: high-level score of data-stream for interdisciplinary works
In collaboration with the Musical Representations Team and the Sound Music Movement Interaction Team.

This project aims to formalise a technique tailored to score-creation in the context of music-based interdisciplinary works. In multidisciplinary works, the importance of communication between artists from different artistic disciplines led me to consider the conception of a hybrid, universal, high-level score. This new paradigm should allow us to transmit the intentions and ideas of a composer to choreographers, set designers, and other artists involved in the dramatic, performing, visual, or digital arts. This hybrid score consists of a notation of gestures (graphic notation), as well as a data-stream score (the subject of this residency) that provides a real-time data stream as a source of formalized sound and gestural information. The data-stream score should be able to convert into data both the audio signal of the music being performed and the performers' physical movements (gestures). In this project, our attention will be focused on defining the relevant semiological parameters, a question at the heart of this research.

Anders Lind

MobilePhoneOrchestra.com

MobilePhoneOrchestra.com is an online web application developed as a platform for performances of fixed polyphonic contemporary art music by mobile phone orchestra. The ambition is to create a new performance paradigm for fixed electronic music in multiple parts, performed in concert halls as stand-alone electronic orchestral music or in combination with soloists or traditional orchestral settings. The participants in a mobile phone orchestra use their mobile phones or tablets to perform on specially developed musical instruments provided online at MobilePhoneOrchestra.com. Animated music notation, also provided on MobilePhoneOrchestra.com, gives performance instructions and conducts the performance. For best results, a mobile phone orchestra should consist of approximately 20 to 200 people or more, regardless of musical background and age. Sixty minutes of rehearsal are needed before a performance.

Schedule: Day 3- Friday 29th - 10:45am to 11:15am STRAVINSKY ROOM

Bruno Friedmann

Extended Total Serialism

Abstract :
A program for algorithmic music composition based on the concept of “extreme” serialism is presented and discussed; the algorithm is based entirely on a row of numbers.
The question behind this project is: what happens if serialism is radically enlarged, widened, and developed using today's software and electronics? Does it sound even more random, as was claimed of some serialised compositions in the fifties?
To find an answer, six FM generators were used to generate sound and music completely structured and controlled by a single series of numbers. The “classical” serial rules are applied to a large set of musical and structural parameters in the Max/MSP software: the FM parameters, tempi and their durations, note values, timbre, and more.
The six FM sounds are also shaped and spatialised (IRCAM spat~) according to the chosen number row.
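
A rough Python sketch of the approach (the actual work uses Max/MSP and spat~; the parameter scales and the six-element row below are invented for illustration):

```python
# Minimal sketch: deriving FM parameters from a single number row,
# in the spirit of total serialism.
import numpy as np

SR = 44100
ROW = [3, 0, 5, 1, 4, 2]                      # hypothetical six-element series

# Serial "scales" for each parameter, indexed by the row.
carriers  = [110, 165, 220, 330, 440, 660]    # carrier frequencies (Hz)
ratios    = [0.5, 1, 1.5, 2, 3, 5]            # modulator/carrier ratios
indices   = [0.5, 1, 2, 4, 6, 8]              # modulation indices
durations = [0.25, 0.4, 0.6, 0.9, 1.3, 2.0]   # durations (s)

def fm_tone(fc, ratio, index, dur):
    """Two-operator FM: sin(2*pi*fc*t + index*sin(2*pi*fm*t))."""
    t = np.arange(int(SR * dur)) / SR
    return np.sin(2 * np.pi * fc * t
                  + index * np.sin(2 * np.pi * fc * ratio * t))

# Rotate the row for each parameter, as classic serial practice might,
# so every dimension follows the same series at a different offset.
events = [fm_tone(carriers[r],
                  ratios[ROW[(i + 1) % len(ROW)]],
                  indices[ROW[(i + 2) % len(ROW)]],
                  durations[r])
          for i, r in enumerate(ROW)]
audio = np.concatenate(events)                # mono sequence of serial FM tones
```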

Schedule: Day 2- Thursday 28th - 10am to 10:30am  at SHANNON ROOM             

CEAMMC Team

New CEAMMC Puredata distribution/library for education and live performance purposes

CEAMMC Puredata distribution and library – presentation of the project

CEAMMC Puredata is a new distribution of the Puredata environment together with a library of external objects. The key idea of the project is to create a dedicated product for beginner students starting to work with live electronics. With this project we try to make learning easier in several respects:

– The distribution is bundled with all the components needed to start learning. All new objects have a fixed naming convention and are grouped by functionality. We provide consistent help patches for all our components.

– We focus more on higher-level objects, to spend less time on repetitive tasks.

– We try to bring more contemporary techniques from newer programming languages to Pure Data. The library includes a comprehensive set of objects for working with lists and incorporates some concepts from functional programming. We also include special data types and sets of objects to work with them (strings, sets, envelopes, matrices, etc.).

– The project is written in C++ with our own API layer over the Pd API. The code is covered by tests.

Authors: Serge Poltavsky, Alex Nadzharov

Schedule: Session 1: Day 2 - Thursday 28th - 2pm to 3:30pm STUDIO 3
Session 2: Day 2 - Thursday 28th - 4pm to 5:30pm STUDIO 3

Clément Guitard

The 4X in Répons by Boulez

Abstract: During the second half of the 20th century, the French conductor and composer Pierre Boulez (1925-2016) developed the musical technique of “generalized serialism,” which consists of using in each score a unique ensemble of base values for the five elemental parameters of sound: pitch (frequency in hertz), duration (milliseconds), dynamics (decibels), timbre (attack), and space (density).

He was then faced with the question of adding electronic sounds to mixed music composed using this system.

On October 18, 1981, during the Donaueschingen Music Days on the edge of the Black Forest in Germany, his work “Répons” was premiered, under his baton, by the Ensemble intercontemporain.

The title comes from Catholic liturgy (the responsory) and refers to the principle of virtuosic interaction between the musical direction and the numerous musical parts presented in the piece.

This concert-event inaugurated the state-of-the-art, real-time digital signal processor 4X (200 million operations per second), designed at IRCAM in Paris by the Italian engineer and physicist Giuseppe di Giugno.

Pierre Boulez founded IRCAM and directed it from its opening in 1977, following a call from the French president Georges Pompidou in October 1969.

– Why was the architecture of this computer—the 4X—used in the first version of “Répons” in 1981, able to perform electro-acoustic sounds in a manner consistent with the multi-serial process inherent in Pierre Boulez's music?

– To what extent were the sound transformations produced by the 4X for sections 1-4 of “Répons” the accomplishment of Pierre Boulez's work with music technology that remains current today?

Our analysis will begin with the score and the original patch.

Schedule: Day 3- Friday 29th - 10:30am to 11am STUDIO 3

Davíð Brynjar Franzson

Residency

Abstract:

An Urban Archive as an English Garden is an installation-based performance. It is presented as a grid of speakers covering as much of the available performance space as possible. From the speakers you hear field recordings taken at various times at a single location, spatialized using phase-based panning to realistically represent the relative location of sounds as they were captured. Occasionally, a performer steps in and performs against this topography, ‘naming’ points of interest and mapping connections across space and time, triggering dynamic resonances that hover in the space.
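
As a hedged illustration of distance-based spatialization of this kind (an assumption for clarity; the piece's actual panning algorithm is not shown here), each speaker's feed can be derived from the source-to-speaker distance as a relative delay plus attenuation:

```python
# Illustrative time-of-arrival panning over a speaker grid: each speaker
# receives the source signal delayed and attenuated according to its
# distance from the virtual sound location.
import numpy as np

SR = 48000
C = 343.0                           # speed of sound, m/s

speakers = np.array([[0, 0], [4, 0], [0, 4], [4, 4]], float)  # positions (m)

def render(signal, src_xy):
    d = np.linalg.norm(speakers - np.array(src_xy), axis=1)
    delays = ((d - d.min()) / C * SR).astype(int)  # relative delay (samples)
    gains = 1.0 / np.maximum(d, 0.1)               # 1/r distance attenuation
    gains /= gains.max()
    out = np.zeros((len(speakers), len(signal) + delays.max()))
    for ch, (dl, g) in enumerate(zip(delays, gains)):
        out[ch, dl:dl + len(signal)] = g * signal
    return out                                     # one row per speaker

# e.g. a recorded sound placed one metre along the bottom edge:
noise = np.random.randn(SR)          # stand-in for a field recording
multichannel = render(noise, (1.0, 0.0))
```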

The audience is free to come and go. They can shift their perspective, move around and in-between the instruments and speakers, and take control of their own experience.

They are free to listen.

To listen in motion. 

To listen through motion. 

Schedule: Day 3- Friday 29th - 10:30am to 2:30pm STUDIO 5

David Kim-Boyle

Immersive Scores with the HoloLens

Abstract :
Microsoft’s HoloLens offers a unique platform for the display of immersive 3D performance scores. This presentation will discuss some of the aesthetic affordances and possibilities of the platform in the context of real-time notation techniques. Practical and performance limitations will be discussed, including those related to the use of network sockets for interfacing with Max as well as the challenges of synchronising multiple devices. The constraints of Unity and C# as prototyping platforms for artistic creation will be briefly considered and numerous supporting examples developed for the HoloLens from the presenter’s creative work will be discussed.

Schedule: Day 3- Friday 29th - 2pm to 2:30pm  STRAVINSKY ROOM

David Zicarelli

SOON

Edo Fouilloux

Building and playing sound machines in a virtual space (with Jan-Marc)

Abstract :
What new challenges and opportunities arise when programming and playing sound with a node-based approach inside 3D space?

We will demonstrate the new interfaces, components, and philosophies inside MuX, the sandbox instrument where you can build and play sound machines in VR.

We will showcase examples from the beta community, along with different setups that connect to existing rigs, expanding the possibilities of creation outside MuX by exploring the power of modularity.

Schedule: Day 3- Friday 29th - 2:30pm to 3pm  STRAVINSKY ROOM

Play with MuX 

Abstract :
MuX is a SANDBOX INSTRUMENT, the first of its kind.

Inside MuX, you get a set of COMPONENTS that you can interconnect to build, play, and share fantastic SOUND MACHINES. It's modular, playful, and social, like Minecraft or LEGO, but MuX combines this with the philosophy of modular synthesizers and audio programming.

MuX is designed as a physical and spatial experience, built from the ground up with VR in mind, where your playground has unlimited space!

MuX is for the ENGINEER, the COMPOSER and the PERFORMER.

Schedule: Day 3 - Friday 29th - 11:30am to 12:30pm STUDIO 4

Fraction

Paradigms of sound performance in immersive audiovisual environments: from the experience of an atypical medium to the design of adapted tools

Drawing on examples of creation and the evolution of his practice, the artist Eric Raynaud (Fraction) will present his perception of sound composition in an immersive audiovisual context, which led to the implementation of specific tools and interfaces for real-time performance, and to ‘Symbiosis’: a residency project in artistic research, at the intersection of spatialization and generative visual synthesis, that he is currently conducting at IRCAM in partnership with the Montreal SAT and that he will outline.

Schedule: Day 2 - Thursday 28th - 5:30pm to 6pm STUDIO 5

Fredrik Mathias Josefson

Method and Toolkit for Spatializing 3D Audio

Abstract :
This research focuses on a method for spatializing sound in 3D audio space. Many tools exist today for spatializing sound in 3D space, but they concentrate on positioning and moving sounds without solving fundamental problems that can arise in the creative process, and without supporting composers during the spatialization process. Some questions that will be addressed are:

• Are there characteristics of sound-objects, for example envelopes and mass, that can be transposed into spatialization?
• Can a physical model be introduced to influence, for example, the “mass” of a sound-object?
• What happens when sound-objects “collide” in space, and could or should this be avoided?
• How can Artificial Intelligence (AI) be used to spatialize sound-objects?

The method and toolkit focus on the space between the spatialization tools and the compositional process.

Schedule: Day 2- Thursday 28th - 4:30pm to 5pm STRAVINSKY ROOM

Garth Paine

Future Perfect Residency report

During 2018, composer Garth Paine worked at IRCAM in collaboration with the Sound Music Movement Interaction and the Acoustic and Cognitive Spaces teams on the development of his work Future Perfect. A new smartphone framework was developed to allow layers of the score to be performed over the audience's smartphones. The framework includes dynamic sample loading and triggering, spatialization, and granulation, driven from tablet-based performance interfaces. The system builds on prior work from the CoSiMa project and Web Audio, and requires only a browser to access.

Future Perfect: Performance

Future Perfect is a 46-minute musical journey. It consists of an immersive 360° ambisonic score composed from field recordings made in urban parks and cities in Paris and Karlsruhe, and the baritone voice of Gordon Hawkins performing the James Joyce poem All Day I Hear the Noise of Waters. The score is augmented by interactive sound performed on the audience's smartphones and a full-length film. The work was premiered at the InSonic Festival on December 8, 2018, and will be released in 2019 as a virtual reality album experience.

Garth Paine http://www.activatedspace.com

Schedule : Day 3- Friday 29th - 10am to 11am STUDIO 1

Giancarlo STAFFETTI

Gesture, Sound, and Musical Media: Arduino Data Gloves

Musical practices register the evolution of digital technologies at an unprecedented speed. Gestures—present until the first half of the last century only as an accessory on scores—are now part of the compositional and writing process of a work. The inclusion of gestures and movements in musical creation is carried out by mapping within a system of sonification in which data are obtained from a broad range of sensors.

Data gloves, created with Arduino boards, use finger-mounted flex sensors. These sensors are electrical resistors whose resistance changes as they are bent; because the output voltage of the sensing circuit varies with the bend applied, they are also called flexible potentiometers. The sensors are connected to an Arduino NANO board that manages the data and sends it to the computer over a USB connection or wireless transmission.

All data generated by the flexible potentiometers are then retrieved by Max. Mapping makes it possible to use the data in synthesis, spatialization, and several treatments, such as granulation, different types of filtering, delay, and spectral decomposition, to name a few. The gloves are both a tool for controlling the different electronic treatments and a source of raw material that nourishes the system of sonification.
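
For illustration, a minimal Python sketch (the serial protocol, port name, and message format are assumptions, not the project's actual code) of reading flex-sensor values streamed by the Arduino and normalising them for mapping:

```python
# Read comma-separated flex-sensor readings from an Arduino over serial
# and scale them to a 0..1 control range for mapping to DSP parameters.
import serial  # pyserial

PORT = "/dev/ttyUSB0"        # hypothetical port; adjust per machine
ADC_MAX = 1023.0             # Arduino NANO 10-bit ADC range

with serial.Serial(PORT, 115200, timeout=1) as ser:
    while True:
        line = ser.readline().decode(errors="ignore").strip()
        if not line:
            continue
        # assume the Arduino sketch prints one reading per finger, e.g. "512,803,120,..."
        values = [int(v) for v in line.split(",") if v.isdigit()]
        controls = [v / ADC_MAX for v in values]   # normalised 0..1
        print(controls)        # here: forward via OSC/MIDI to the synthesis
```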

The device can be used by instrumentalists (pianists, singers, flutists, string players, etc.) and conductors. It can also be used in live performances such as theater and opera.

Schedule: Day 3- Friday 29th - 2:30pm to 3pm SHANNON ROOM

Giovanni Santini

LINEAR: improvements and new functionalities

Abstract :

Augmented Reality (AR) technology is opening up new ways of representing and interacting with real and virtual objects. LINEAR (Live-generated Interface and Notation Environment in Augmented Reality) is an environment created for musical performance that explores the possibilities provided by AR technology.

One performer using an iPhone or an HTC Vive can create virtual objects (rendered in real time and superimposed on the real environment) according to the movement of the device. Those objects are both virtual interfaces (sending OSC messages to Max/MSP) and forms of live-generated graphic notation: LINEAR allows, with some limitations, the representation of gestural movements with exact 3D placement in space. We can now have an analogical notation of gestures, rather than a symbolic one: the act of notation corresponds to the notated act. The resulting representation can also be approached as a form of graphic notation by other performers (the AR session is mirrored to a projector).
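
A small illustrative sketch (the OSC addresses and port below are assumptions, not LINEAR's actual protocol) of how a tracked device pose could be sent to Max/MSP as OSC, using the python-osc library:

```python
# Send a tracked device position and orientation to Max/MSP as OSC.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7400)   # Max listening via [udpreceive 7400]

def send_pose(x, y, z, qx, qy, qz, qw):
    client.send_message("/linear/position", [x, y, z])          # hypothetical address
    client.send_message("/linear/orientation", [qx, qy, qz, qw])  # hypothetical address

send_pose(0.1, 1.4, -0.3, 0.0, 0.0, 0.0, 1.0)
```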

Schedule: Day 3- Friday 29th - 3pm to 4:30pm  STRAVINSKY ROOM

Guillaume Loizillon

Sweet Algorithms:

Web Audio, the Reterritorialisation of Procedures and Objects

Abstract :

This presentation offers the opportunity to explore some sound synthesis processes in what is called “web audio”.

The development of sound techniques has contributed to the emergence of experimental forms; online sound synthesis is an example. Beyond the deployment of algorithms, the ability to operate online leads to territorialization in a new environment: that of a network where actor and audience commingle, reconfiguring the relationship between sounds and images.

Even if sound synthesis remains open to development in its modes of production, it now belongs to a diffuse system in which electronic sounds are neither exotic nor new. Its reality in current audio culture is that of a “naturalized” object, often used without any real appreciation of the requirements of its production. Generating sound directly online opens a reflection in which interdisciplinarity and interactivity are put into play in mobile forms.

Schedule: Day 3- Friday 29th - 10:15am to 10:45am STRAVINSKY ROOM

Haig Armen

Sonic Interactions Workshop

Abstract:

The goal of the workshop is to combine old and new craft by hacking existing instruments with sensors and an open-source sound kit. The workshop aims to explore ways of gestural expression, leveraging acoustic instruments with electronics, sound processing, and rapid prototyping. In the talk, Haig will share his documentation of some of the workshops: an initial phase of work to build knowledge at Emily Carr around the emerging area of sonic interactions.

The workshop strategy is to combine analog and digital approaches by hacking existing musical instruments to create new sonic interactions and experiences. For centuries we have carefully handcrafted the interactions of musical instruments, optimizing them for intuitive learning, playability, and comfort, yet many of the newest tools of musical expression are cold and intangible, using technologies in ways that ignore hundreds of years of accumulated knowledge. The goal of this workshop is to integrate old and new crafting methods by mashing up an instrument that you bring with a small embedded electronic toolkit consisting of a Raspberry Pi, a Sense HAT, and an open-source software stack. Participants will be encouraged to maximize musical expressivity, interactivity, and experimentation. Accessible prototyping tools allow us to experiment and create new interactions rapidly; the workshop will conclude with a musical performance. Bring your ukulele, hand-drum, toy piano, or xylophone and we'll hack it into a new digital sound instrument.

Schedule: Day 2- Thursday 28th - 2pm to 2:30pm  SHANNON ROOM

Harin Lee

Testing the ability to discriminate musical timbres

Abstract:

During my master's degree, my supervisor Daniel Mullensiefen and I developed psychoacoustic testing software using Max/MSP. The main principle of the test is to measure an individual's ability to discriminate fine differences in musical tones. Unlike previous psychoacoustic tests that rely on forced-choice questions, we adopted a novel method employing a slider to perform a reproduction task. In this test, a target sound is played for each item, and the participant's task is to move the slider while listening to the sound, positioning it at the point that best matches the target sound heard first.
The score is calculated from how close the slider position is to the target, and these distances are summed to produce a total score. We have currently recruited 70 participants and aim to reach 100. By using such an interactive tool in a psychological experimental setting, we hope to bring an innovative method to the field of music psychology.
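
A minimal sketch of the distance-based scoring described above (the exact scoring details are an assumption for illustration):

```python
# Each trial's error is the distance between the slider's final position
# and the target; errors are summed into a total score (lower = better).
def trial_score(slider_pos: float, target_pos: float) -> float:
    return abs(slider_pos - target_pos)

trials = [(0.42, 0.40), (0.71, 0.65), (0.15, 0.30)]  # (response, target) pairs
total = sum(trial_score(r, t) for r, t in trials)
print(f"total error: {total:.2f}")
```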

Schedule:
Day 1 - Wednesday 27th - noon to 4:00pm at STUDIO 5 (with students of the RCA)
Day 2 - Thursday 28th - noon to 4:00pm at STUDIO 5 (with students of the RCA)

Haydar Cengiz

Music of Speech

Abstract: Music of Speech aims to discuss the emotional and interactive aspects of oral exchange through musical terminology, based on the suggestion that speaking is the most frequently and fluently used improvisational instrument among humans, and that each dialogue is a formed piece in which an introduction, development, and conclusion are executed based on the selected topic(s).
How could this naturally flowing element of dialogue be adapted to music? Would it be possible to make a wordless song from spoken words, in which one could still understand the words, topic, or mood by listening to it? What kind of material would this process provide that musicians could adopt and develop as creative and performative concepts? Finally, from a musical perspective, how do speech dialogues relate to and interact with the other sounds present in the environment?

Schedule: Day 2- Thursday 28th - 10:30am to 11:00am  at STRAVINSKY ROOM  

J.J Burred

Machine learning for sound deconstruction: recent developments around Factorsynth

Abstract :
Factorsynth is a sound processing tool that uses a machine learning technique (matrix factorization) to decompose any input sound into a set of temporal and spectral elements. Once these elements have been extracted, they can be modified and recombined to perform powerful transformations, such as removing notes or motifs, creating new ones, randomizing melodies or timbres, changing rhythmic patterns, and creating complex sound textures. Its most recent implementation as a Max for Live device, released in 2018, allows real-time remixing and randomization. I will discuss recent developments around the software, several examples of its use by composers, and its context within the emerging research field of morphological sound synthesis.
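
As a hedged sketch of the core idea (not Factorsynth's implementation; frame sizes and component counts are arbitrary), a magnitude spectrogram can be factorized into spectral templates and temporal activations, edited, and resynthesized:

```python
# Non-negative matrix factorization of a magnitude spectrogram into
# spectral templates W and temporal activations H, which can be edited
# (e.g. muting one component) before resynthesis.
import numpy as np
from scipy.signal import stft, istft
from sklearn.decomposition import NMF

sr = 44100
x = np.random.randn(sr * 2)              # stand-in for an input sound

f, t, Z = stft(x, fs=sr, nperseg=2048)
mag, phase = np.abs(Z), np.angle(Z)

model = NMF(n_components=8, init="random", max_iter=300, random_state=0)
W = model.fit_transform(mag)             # spectral elements (freq x components)
H = model.components_                    # temporal elements (components x time)

H_edit = H.copy()
H_edit[3, :] = 0.0                       # e.g. remove one motif/component

mag_new = W @ H_edit                     # rebuild the edited spectrogram
_, y = istft(mag_new * np.exp(1j * phase), fs=sr, nperseg=2048)
```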

Schedule: Day 2- Thursday 28th - 11:30am to noon  at STRAVINSKY ROOM  

Stephan Kloss - Jakob Gruhl

Mazetools Soniface

Mazetools Soniface is a musical instrument combining sound and vision: it sets harmony, rhythm, and acoustic ambience in relation to geometric shapes and colors. Its process-oriented ways of composing include new structures of automation and input methods such as motion tracking, which makes this cross-platform multimedia app interesting for sonic experiments. In addition, Soniface provides different options for spatial audio arrangements and multiple visual outputs. In the workshop, co-founder Jakob Gruhl will guide participants through the process of working with Mazetools, and creator Stephan Kloss will present the latest developments.

Schedule: Day 3- Friday 29th - 2:30pm to 3:00pm  at SHANNON ROOM  

Jesper Nordin

Emerging from Currents and Waves

Abstract :
The piece “Emerging from Currents and Waves” was premiered in Stockholm on August 31st by Esa-Pekka Salonen, Martin Fröst, and the Swedish Radio Symphony Orchestra. It is a co-commission by the SRSO, Radio France, and IRCAM. It uses technology developed specifically for the piece by Jesper Nordin and Manuel Poletti, as well as Nordin's own technology, Gestrument. The technology extends the roles of the soloist and the conductor through motion sensors that control both the live electronics and the live visuals programmed by Thomas Goepfer.

Schedule: Day 3- Friday 29th - 4pm to 4:30pm  SHANNON ROOM

José Augusto Mannis

Development of a multi-channel sound recording and reproduction system with a 360° horizontal sound image, applied to music, sound arts, and bioacoustics

Abstract:

This presentation covers ongoing research, financed by the CNPq 2016 Universal Call, on the development of a multichannel audio device for the recording and reproduction of a 360° sound image, captured by a microphone array equal in number and placement to the loudspeaker array. Experiments were performed in recording with (a) four and (b) six omnidirectional microphones, and with (c) six cardioid microphones; and in reproduction with (a) four or (b, c) six speakers; in all arrangements, microphones and speakers were placed at equidistant points on the same circumference. Comments on each configuration are drawn from critical listening of the sound results. At each stage, the evaluation of the results formed the basis for planning the next stage; the evolution of the number of recording and reproduction points, as well as the modification of the polar characteristics of the microphones, followed this methodology. Comparative evaluations of the sound results obtained with the different configurations made it possible to form hypotheses on the perceptual effects involved in each of them. Research developed at LASom – Laboratory of Acoustics and Sound Arts (Department of Music, Institute of Arts, Unicamp) in collaboration with the Laboratory of Signals, Multimedia and Telecommunications (SMT) of COPPE/UFRJ, financed by the CNPq 2016 Universal Call, Proc. N. 432882/2016-2.

Schedule: Day 2- Thursday 28th - 5:30pm to 6pm STRAVINSKY ROOM

José Miguel Fernandez

AntesCollider: an Antescofo library for controlling SuperCollider

AntesCollider is a library written in the Antescofo programming language. It enables direct communication between Antescofo and the SuperCollider server. The objective of this integration is to dynamically create real-time audio processing chains with fine control of their parameters. The expressivity of the Antescofo language and its temporal control allow audio processes to be created and restructured on the fly, efficiently and concisely, while simplifying the control of synthesis.

The library is designed for use by musicians, composers, computer music designers, and sound designers. Examples of its use in electroacoustic and interactive/mixed music will be presented, together with a tutorial including, among other examples, physical models of control in an HOA space.

Schedule: Day 3 - Friday 29th - noon to 12:30pm STRAVINSKY ROOM

Llorenç Prats Bosca

‘On the fly’ interplay of harmony, timbre, and form: from spectral analysis, through bach and Max, to real-time improvisation/composition

Abstract:

Last year, as a composition student at the Liszt Academy in Budapest, I was granted a scholarship within the UNKP Hungarian state program. My research on “Composition and Instrument Expansion by means of DSP” relied on several pieces of IRCAM Forum software. I am now presenting an insight into a system built in Max using the bach project's notation facilities and real-time composition tools. The system provides real-time interaction between audio analysis data and a human improviser or composer using any kind of programmable input controller (OSC/MIDI). This set of tools gives the performer a flexible way to resynthesize and recompose parsed “musical language” (instead of raw audio descriptor data). The showcase features a semi-acoustic set-up using a grand piano (fitted with contact microphones and contact speakers) as source material for sound input and as an output means for sound projection, taking advantage of its resonant quality. Examples of pieces and improvisation with the system will be shown.

Schedule: Day 2- Thursday 28th - 4pm to 4:30pm  at STUDIO 5

Marco Bidin

Synthetic Ensemble 

Abstract:

Using OpenMusic for composing and synthesizing: creating chamber music for solo acoustic instruments and an ensemble of virtual instruments.

A short presentation of the OpenMusic workspace, using the OM-Chroma library, created for the composition of my recent work “Fantasia Rapsodica” for piano and 4-channel electronics (playback), focusing on the control of different synthesis techniques applied to the same musical material.

A short presentation of the OpenMusic workspace, using the OM-Chroma and OM-Chant libraries, created for the composition of “Ricercare II” for alto sax and 4-channel electronics (playback), focusing on the off-line interaction between composed improvisation and electronics as an extended basso continuo.

A special note on the use of AudioSculpt to create graphic scores, which are highly effective for “guided improvisation” and for learning processes based on aural experience rather than conventional score reading.

Schedule: Day 3- Friday 29th - 11:30am to noon  at STUDIO 3

Marco Antonio Suarez-Cifuentes

REVELO : L’AGNEAU MYSTIQUE

MARCO SUÁREZ-CIFUENTES, NIETO

SAINT-EUSTACHE CHURCH
PARIS, FRANCE

http://www.lebalcon.com/shows/agneau-mystique/

 

Matt Lewis

I am Speaking with the Future

Abstract: In a society dominated by the visual, the quality of our acoustic environment is often of minor importance when it comes to design. Yet sonic experience is crucial to health and well-being, and there is increasing recognition of the effect of noise pollution on our health and of the potential for noisy spaces to disorientate and confuse. Thinking through sound, ‘I am Speaking with the Future’ tests the potential of immersive 3D audio environments to let us imagine a healthier and more sustainable future for all our senses. The project combines leading-edge technology, including 3D acoustic modelling, Augmented Reality, Natural Language Processing, and VR, to produce a user-controlled, narrative-led experience in which speculative scenarios of an imagined future are presented through sound.
Through collaboration with acousticians, social scientists, local government, and residents, the work shows how we can make a better case for the role of thoughtful audio design.

Schedule: Day 3- Friday 29th - 3:30pm to 4pm  STRAVINSKY ROOM

Michelle Agnès Magalhaes

Soon

Nadine Schütz

GARDEN OF REFLEXIONS: COMPOSING WITH ECHOES

The work of Nadine Schütz explores sound as an intrinsic dimension of human environmental relatedness and pushes for the integration of auditory qualities as a spatial dimension in urban landscape planning. Her artistic research project Urban Land Sound Design – Composing in(to) the Existing pertains to this context. Designing and composing in an urban or landscape environment always implies working with the given identity of a site: its physical structure, its social conditions, and its corresponding acoustic characteristics and sonic constellations. In collaboration with the two research teams Acoustic and Cognitive Spaces and Perception and Sound Design, Nadine Schütz is developing a corresponding design methodology involving reflexions on composition, simulation, and project prefiguration.

Garden of Reflexions is part of an ongoing urban landscape project for the renovation of the Place de la Défense, and the central case study of Nadine Schütz' residency at IRCAM. In this installation, the exchange and processing of acoustic signatures between an existing urban space and new sounds to be integrated therein play a key compositional role and challenge the relationship between semantic and acoustic contents.

Schedule: Day 2 - Thursday 28th - 10am to 4pm STUDIO 1

Nicholas Moroz

Sentient Spaces:
Interactive Spatialisation with Spat5 in ‘Unfurl’ for Bass Guitar and Electronics

Abstract :
‘Unfurl’ is a work for microtonal bass guitar and live electronics using Max and Spat5, and part of my composition PhD at Oxford University. My poster will demonstrate the work's technology and aesthetics, presenting a live non-linear spatialisation and DSP system that the performer explores through free spatial interaction (Garcia, Carpentier, Bresson 2017), without time-based triggering or score following. An iPhone mounted on the bass sends compass data via OSC into Max, which tracks the performer's orientation and revolutions while moving within a loudspeaker array. These live movements are mapped to fixed and dynamic DSP and HOA spatialisation features in Spat, including spiral trajectories (based on the Antescofo Trajectory_Score_Library). Thus, immersive electronic sounds unfurl according to the bassist's movement. Furthermore, Unfurl develops Fell's (2008) notion of multistability to advance an aesthetic of ‘sentient spaces’, wherein an electronic sound-space is reimagined as a digital life-form.
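
A small sketch of the orientation-and-revolution tracking described above (assumed logic, not the piece's actual patch): successive compass headings are unwrapped into a continuous angle so that full turns can be counted.

```python
# Track orientation and count full revolutions from a stream of
# compass headings in degrees (0-360).
class RevolutionTracker:
    def __init__(self):
        self.start = None
        self.unwrapped = None   # continuous angle in degrees

    def update(self, heading_deg: float) -> int:
        if self.unwrapped is None:
            self.start = self.unwrapped = heading_deg
        else:
            # take the smallest angular step between successive headings
            delta = (heading_deg - self.unwrapped + 180) % 360 - 180
            self.unwrapped += delta
        # signed count of completed revolutions since the first reading
        return int((self.unwrapped - self.start) // 360)

tracker = RevolutionTracker()
for h in [350, 10, 80, 170, 260, 340, 30]:   # one full clockwise turn
    revs = tracker.update(h)
print(revs)  # -> 1
```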

Schedule: Day 3- Friday 29th - 3pm to 3:30pm SHANNON ROOM

Núria Giménez-Comas - Marlon Schumacher

2017-18 Artistic Research Residency

Sculpturing space: Re/Synthesis of Complex Spatial 3D Sound Structures
In collaboration with the Acoustic and Cognitive Spaces team and the ZKM.

This collaborative artistic research project aims to explore and develop the notion of a “synthetic soundscape”, in the sense of working with densities in various sound-synthesis dimensions (in frequency as well as in space). The idea is to create not only the sensation of a sound expanded in space, but a synthetic background around the listener that densifies and grows through different emerging sound layers. To this end, composer Núria Giménez-Comas envisions the use of drawing tools that intuitively connect graphical descriptions of mass densities (and their movements) with the sound synthesis and spatialisation tools in OpenMusic, more precisely the OMChroma/OMPrisma framework, combining synthesis/spatialization models, perceptual processing, and room effects. Marlon Schumacher's contribution will be dedicated to the R&D of new tools informed by cognitive mechanisms studied in spatial auditory perception and scene analysis, e.g. “intelligent” systems that adapt the decorrelation/modulation of signals according to frequency content.

Omar Costa Hamido

och.scorestream

Abstract :
och.scorestream is my take on, and technological implementation of, “Score Streams”, a concept coined by performer-composer Michael Dessen in 2008 to refer to “algorithmic, networked scores in which notations are displayed dynamically on computer screens”. Different iterations have included different kinds of components. However, in Dessen's work as in others (Georg Hajdu's “Quintet.net”, Grame's “INScore”), the increasing versatility and customizability of each platform comes with an ever-increasing complexity of the system's interface. This challenged me to develop a simplified implementation of the concept: a plugin-like device that integrates easily with an interface already familiar to many users, allowing them to take advantage of the sequencing capabilities of the DAW. och.scorestream also allows performers to submit their own graphic scores, which convey on yet another level the non-hierarchical context of collaborative and interactive music-making practice.

Schedule: Day 2- Thursday 28th - 11:30am to noon SHANNON ROOM

Per Anders Nilsson & Palle Dahlstedt

Systemic Improvisation

Systemic improvisation refers to a class of musical improvisation systems wherein virtual agents transform the musical interactions between players. It is a new kind of musical interaction/situation/work, and part of the authors' long-term research into technology-mediated musical creativity and performance. We define an improvisation system as a system designed by someone, with a specific configuration of human agents (musicians) and virtual agents (interactive processes), and with communication among these agents. Systemic improvisation is the activity of a number of musicians playing in such a system. These systems work with all kinds of instruments, and the sound of each instrument is heard acoustically. Our systems do not make any sounds of their own: they communicate with cues, i.e., non-musical signals such as light, graphical shapes, sound cues, or processed versions of what has been played earlier by someone. Systemic Improvisation is supported by the Swedish Research Council.

Schedule: Day 2- Thursday 28th - 2pm to 2:30pm STRAVINSKY ROOM

Perception & Sound Design team (IRCAM)

In the broader framework of the sciences of sound design, the Perception and Sound Design team is interested in the processes of conception/creation inherent to the discipline. In particular, the question of sound prototypes is studied through two environments currently under development: firstly, “Speak”, which addresses problems of listening and the semantic description of sounds; and secondly, “Mimes”, which looks at questions concerning the use of the voice as a tool for sketching sound. These two tools, created and developed during research projects, will be presented in detail.

Schedule: Day 1- Wednesday 27th - 3:30pm to 4pm STRAVINSKY ROOM

Philippe Ollivier

Logelloop 5

Abstract:

Logelloop 5, scheduled for release in March 2019, brings many new features that will change live looping habits by introducing the concept of Modular Live Looping. This new approach facilitates collective live looping as well as complex musical writing based on loops recorded in real time. Logelloop 5 is also equipped with a completely redesigned audio system for better performance, taking full advantage of advances in Max 8. It comes with many other new features and a long list of bug fixes, and will be accompanied by a new SDK allowing Max users to program their own plug-ins for Logelloop. Logelloop 5 will be available for macOS and Windows.

Schedule: Day 2- Thursday 28th - 4:30pm to 5pm SHANNON ROOM

 

Renaud Felix

The Virtual-Soundpainting Workshop

Abstract:

What is Virtual-Soundpainting?

It is a computer music program that lets the user simulate the presence of a soundpainting orchestra. In Virtual-Soundpainting, a soundpainter (a composer working in real time) uses a system of specific movements to compose musical phrases that are produced in real time by virtual performers. During Virtual-Soundpainting workshops, the participants are placed around the soundpainter and experience being under the baton of soundpainting.

What educational benefits could Virtual-Soundpainting provide, for example, for music schools? It complements and creatively broadens horizons for teaching music: musical language, composition, musical interpretation, and multidisciplinary artistic creation. It makes it possible to develop creativity located at the interface between composer and performer.

What are the technologies used? The program itself uses sampling, synthesis, and MIDI control. The interface is realized on a touch tablet via Mira.

Schedule: Day 3 - Friday 29th - 11:30am to 12:30pm STUDIO 4

Richard Albert Bretschneider

Using UX Design Sprints for innovation in music technology

Abstract :
The Design Sprint, developed at Google, is a workshop format for coming up with a new product idea from scratch, prototyping it, and testing the prototype with real users, all within a timespan of four days. While the original concept of the Design Sprint ran over five days, the latest version of the workshop is as follows:
Day 1: Gather information and sketch first ideas of the product
Day 2: Vote on the best ideas from day one / make a storyboard of the user journey
Day 3: Build the prototype
Day 4: Qualitative user testing of the prototype with real users

Design Sprints can be described as a peek into the future of the innovation you want to develop. Through user feedback, but also through the process the whole team goes through, you gather valuable insights. Because these insights arrive at the very beginning of the development process, they can help you avoid costly mistakes.

The classical Design Sprint usually runs over five days; the workshop presented here uses the condensed four-day format outlined above.

Schedule: Day 3- Friday 29th - 11:30am to noon STRAVINSKY ROOM

Robert B. Lisek

Meta-learning for art and music composition

Abstract: The proposed research uses advanced AI methods (meta-learning) and cutting-edge technologies, including immersive environments, to offer innovative methods of composing. Specifically, the project proposes a solution to the problem of continuous adaptation of artificial agents in complex dynamic environments: the difficulty of designing artificial agents that can respond dynamically and intelligently to evolving complex situations. The project responds to the challenges posed by such dynamic changes by designing a meta-learning framework. Meta-learning has recently evolved into an important topic: researchers are developing new techniques for fast reinforcement learning, neural network optimization, and finding appropriate network architectures. Significantly expanding existing research, the project designs a new multi-agent dynamic environment and creates sets of games useful for testing various aspects of continuous adaptation. The meta-learning framework will find wide-ranging applications characterized by dynamic interaction between AI agents and human users in dynamically evolving scenarios.

Schedule: Day 2- Thursday 28th - 4pm to 5:30pm SHANNON ROOM

Roland Cahen

Roland Cahen

Multichannel composition for Kinetic Design

Abstract: Since the early '80s, I have been searching for and experimenting with new music spatialisation paradigms.
I now believe it is possible to create music in which the distribution and motion of sounds have their own musical expressiveness. I call this approach Kinetic Music, and I have composed several pieces with this purpose. Recently, “Kinetic Design”, commissioned by the GRM, will be premiered at an Akousma concert in Paris on January 20th, 2019. Most elements are composed in octophony using Max with IRCAM tools such as MuBu and Spat5.

 

Schedule: Day 2- Thursday 28th - 4pm to 4:30pm STRAVINSKY ROOM

Rosalía Soria Luz

Composing with state-space models

Abstract :
In this presentation I would like to talk about the creation process of my piece “Time Paradox”, an 8-channel tape piece. It combines modelling and abstract sound synthesis techniques, and is based on several real-time state-space models I have implemented in Max and SuperCollider. The models represent mechanical mass-spring-damper systems and a balancing mechanism. I used these implementations to create multi-channel sound synthesisers with spatialisation, as well as diverse sound transformations. I created textures, timbres, and spaces by “connecting” the models' behaviors to the available objects in Max and SuperCollider and interacting with the models in real time.
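
For illustration, a minimal discrete-time state-space simulation of a mass-spring-damper system (a sketch in Python with arbitrary constants; the piece's models run in Max and SuperCollider):

```python
# Mass-spring-damper as a state-space model: state x = [position, velocity],
# dx/dt = A x + B u, integrated with forward Euler at a control rate. The
# position output could drive a pan position, filter cutoff, etc.
import numpy as np

m, k, c = 1.0, 40.0, 0.8        # mass, spring constant, damping
dt = 1.0 / 100                  # control rate: 100 updates per second

A = np.array([[0.0, 1.0],
              [-k / m, -c / m]])
B = np.array([0.0, 1.0 / m])

x = np.array([1.0, 0.0])        # released from a displaced position
trajectory = []
for _ in range(600):            # simulate 6 seconds
    u = 0.0                     # external force (could be a live input)
    x = x + dt * (A @ x + B * u)
    trajectory.append(x[0])     # decaying oscillation of the mass
```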

Schedule: Day 2 - Thursday 28th - noon to 12:30pm STRAVINSKY ROOM

Sahin Kureta

Zak: Towards an Artificially Intelligent Performer

Abstract:
Zak (short for Zachary) is a work-in-progress artificial neural network that learns a mapping between the symbolic representation of music and its audio realization, in order to perform a piece of music notation as organically as a human does. This presentation will cover what I have learned as a composer and independent researcher about the applications of AI to contemporary composition, and the current state of Zak.
Schedule: Day 2- Thursday 28th - 5pm to 5:30pm  SHANNON ROOM

Stephan Schaub & Mikhail Malt

Presentation of the Library “symb-desc” for OpenMusic

The Symb-desc library for OM bundles together an ensemble of functions dedicated to the analysis of musical works composed over the past century: works using traditional notation (possibly in a slightly extended form), but without necessarily following the conventions of tonality. It focuses on the study of individual examples—rather than corpora—in which the surfaces are complex enough to justify the use of extraction procedures and the representation of parameters that can, either alone or in coordinated forms, explain how the work unfolds. The library is specific in that it concentrates on processing “symbolic” data corresponding to what is written in the score.

After a quick look at the functions included in Symb-desc today, our presentation will focus on functions used to study data as temporal series, associated with operations taken from or inspired by signal processing (sliding windows, smoothing…). Throughout the presentation, these functions will be illustrated with examples from the contemporary music repertoire.
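
A minimal example of the kind of sliding-window operation in question (assuming a simple moving average over a pitch series; the library's actual functions are richer):

```python
# Sliding-window smoothing of a symbolic temporal series, e.g. a
# sequence of MIDI pitches extracted from a score.
import numpy as np

pitches = np.array([60, 62, 67, 65, 72, 71, 69, 74, 72, 76], float)

def smooth(series, window=3):
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="valid")

print(smooth(pitches))   # local mean pitch contour over a 3-note window
```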


Schedule: Day 2 - Thursday 28th - noon to 12:30pm SHANNON ROOM

Students from the Royal College of Art

Arman, Zohar, Alexia, Mariam

Chair Quintet

Abstract :
Chair Quintet repurposes the function of an object through sound. Five chairs are built, personalized, and sonified. Each chair contains sensors that trigger sound samples; how one moves around and within the chair gives rise to specific sonic experiences. The chairs are custom-designed according to their sonic function. The Chair Quintet operates on the interaction that occurs between the user, the chair, and the collective body of users and chairs.

Our creative process consists of applying ergonomic design to the everyday chair and reflecting this through sound design. The visual, physiological, and sonic traits of each chair are meant to challenge users to experience new ways of creating sound.
By promoting the chair as an instrument, users engage directly with the dynamics of collaborative improvisation through movement of the body.

Schedule: Day 1 - Wednesday 27th - 11:30am to 4pm SHANNON ROOM
Day 2 - Thursday 28th - 11:30am to 4pm STUDIO 5

David Glück

20Cubed

Abstract :
20Cubed explores generative music, reshaping the musical experience by using movement to improvise with sound in a 3D audio space.

Three icosahedra (20-faced geometric shapes) are transformed into interactive elements that control sound through movement. Each triangular surface creates a sound; combined, they create 8000 sounds. More precisely: 20 to the power of 3 = 20x20x20 = 8000 ways of sounding.

The experience evokes different feelings that make musical storytelling uniquely powerful for every listener. It takes listening to music to the next level.

In the digital context, we face a crisis of measurement because we face a crisis of structure; out of that arises an enormous multiplicity of choice. Music breaks out of its time-based aspect and becomes endless. Through a triangle, we can harmonise oppositions such as music, sound, and noise, and create something with structural balance.

 
Schedule: Day 3 - Friday 29th - noon to 12:30pm SHANNON ROOM

Jordan Edge

Acclimate

Abstract

To respond physiologically or behaviourally to a change in a single environmental factor.

Acclimate is a temperature-reactive sound installation exploring the physical and psychological effects of noise on the human body.

Four prepared oscillating fans are positioned in a cross formation, each fan coupled with a loudspeaker emitting pink noise to articulate its position in space. The fans act as sonic objects, generating their own sound as well as modulating the sounds from the loudspeakers through airflow regulation. Temperature sensors dictate the speed of each fan, which in turn creates fluctuations in air pressure. These subtle shifts in the air affect the way sound is propagated and spatially diffused, creating a complex listening experience that is different with every encounter. The piece acts as a metaphor for homeostasis: the human body's ability to constantly regulate itself in response to changes in external conditions.

Jordan Edge is a contemporary sound artist and experimental composer from the UK. His practice communicates the act of listening over time, and listening as a process for developing a greater understanding of our environmental and sonic surroundings. Edge experiments with industrial objects, raw materials, and architecture to manipulate the medium through which sound travels, creating sound environments that explore the physical and psychological effects of noise on the human experience.

www.jordanedge.co.uk

Schedule: Day 1 - Wednesday 27th - 11:30am to 4pm SHANNON ROOM
Day 2 - Thursday 28th - 11:30am to 4pm STUDIO 5

Dimitris Menexopoulos

CoMGoL

Abstract: CoMGoL, which stands for Conway's Musical Game of Life, is an audiovisual algorithmic composition device based on the famous Game of Life by the mathematician John Horton Conway. Built entirely in Max 8 and controlled through the Monome Grid, it allows the user to explore an infinite number of monophonic or polyphonic sequential possibilities that emerge from the simulated life cycle of 128 interactive cellular automata. There are two modes of operation: the user can either trigger a random initial state, or manually set and trigger an initial state using the Monome Grid as a controlling interface. This, combined with the fact that all the classic controls found on a subtractive synthesizer are present and MIDI-mappable, makes CoMGoL a performance tool as well. Additionally, it serves as an excellent demonstration of some of the brand-new capabilities of MC, the ground-breaking multi-channel feature introduced in Max 8.
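
A minimal sketch of the core mechanism (not CoMGoL itself; the scale mapping is an invented example): one Game of Life step on a 16x8 grid, matching the Monome 128 layout, with the live cells of each column mapped to scale degrees to form a step sequence.

```python
# One Game of Life generation on a 16x8 toroidal grid, read column by
# column as a 16-step sequence of note events.
import numpy as np

rng = np.random.default_rng(0)
grid = rng.integers(0, 2, size=(8, 16))        # random initial state

def life_step(g):
    # count the 8 neighbours of every cell (wrap-around edges)
    n = sum(np.roll(np.roll(g, i, 0), j, 1)
            for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    # birth on 3 neighbours, survival on 2 or 3
    return ((n == 3) | ((g == 1) & (n == 2))).astype(int)

SCALE = [0, 2, 3, 5, 7, 8, 10, 12]             # one scale degree per row
def column_notes(g, col, root=48):
    return [root + SCALE[row] for row in range(8) if g[row, col]]

grid = life_step(grid)
for step in range(16):                          # one pass over the grid
    print(step, column_notes(grid, step))       # MIDI notes to trigger
```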

Schedule: Day 1 - Wednesday 27th - 11:30am to 4pm SHANNON ROOM
Day 2 - Thursday 28th - 11:30am to 4pm STUDIO 5
 

Vinicius Giusti

Max/MSP live-electronics tools for creative/improvisation sessions

Abstract: In the last few years, my composition process has been divided into two phases: creative/improvisation sessions, and a formal composition/montage phase. In the first, the work is done in close collaboration with each musician to create a source of audiovisual material that will be used to compose the piece. In the second phase, the general form of the piece is composed, in which all the layers are structured in time: video, fixed electronics, instrumental performance, and live electronics. The main objective of the first phase is to develop musical content related to that specific musician and to the social and personal interactions experienced in that space of collaboration. In this presentation, I focus on the live-electronics tools developed in Max that I used in this first phase to transform and create the sound material for pieces directly related to the performers.

These first creative/improvisation sessions consist of interactive meetings between the musicians and me. In these sessions, we improvise using their instrumental contribution and my live audio-processing setup, developing sound and video materials that are recorded in different formats. Fundamentally, the activities in this phase aim to document and explore the possibilities of each musician's personal sound palette in interaction with a live audio-processing setup; the recordings and the experience of the meetings are the base material for the second phase of composition. An important part of my research work in recent years has been developing live-electronics tools in Max that analyze the sound sources and use the analysis results to control the parameters of the sound transformation. Alongside examples of the Max programming, using some IRCAM tools and other specific objects, I intend to show some of the compositional results of this process.

Schedule: Day 2- Thursday 28th - 2:30pm to 3pm STRAVINSKY ROOM

 
