IRCAM Forum Workshops Paris – Preliminary Program 2018


Please note: this is a preliminary program. It is subject to modifications and additions.


If you wish to participate in the Workshops, take a look at the registration page.

 


Wednesday 7th, March


NEWS from IRCAM

Time | Stravinsky Room – Conference room
9am-9:30am Registration
9:30am-10am Gregory BELLER and Paola PALUMBO (IRCAM)
Welcome Session
10am-10:30am Hugues VINET and Brigitte d'ANDREA-NOVEL (IRCAM)
IRCAM research and development news
10:30am-11am Axel ROEBEL and Charles PICASSO (IRCAM)
News from the Analysis Synthesis Team
11am-11:30am Break
11:30am-12pm Olivier WARUSFEL, Markus NOISTERNIG and Thibaut CARPENTIER (IRCAM)
News from EAC team
12pm-12:30pm Thomas HELIE, Robert PIECHAUD, Jean LOCHARD (IRCAM) and Hans Peter STUBBE 
News from S3AM Team
12:30pm-1pm Jérôme NIKA, Jean-Louis GIAVITTO, Philippe ESLING and Gérard ASSAYAG (IRCAM)
News from RepMus Team
1pm-2:30pm Lunch buffet
2:30pm-3pm Frédéric BEVILACQUA, Diemo SCHWARZ, Riccardo BORGHESI and Benjamin MATUSZEWSKI (IRCAM)
News from ISMM Team
3pm-4pm Jean-Julien AUCOUTURIER, Marco LIUNI, Pablo ARIAS (IRCAM)
News from the CREAM Project Team
4pm-4:30pm Break
4:30pm-5:30pm Marta GENTILUCCI with Jérôme NIKA, Axel ROEBEL and Marco LIUNI
End of artistic research residency with demo: Female singing voice's vibrato and tremolo: analysis, mapping and improvisation
5:30pm-6:00pm Rama GOTTFRIED and RepMus Team (IRCAM)
Introducing Symbolist, a graphic notation environment for music and multimedia, developed by Rama Gottfried and Jean Bresson (IRCAM – Musical Representations) as part of Rama's 2017-18 IRCAM-ZKM Musical Research Residency. Symbolist was designed to be flexible in purpose and function: capable of controlling computer rendering processes such as spatial movement, and an open workspace for developing symbolic representations for performance with new gestural interfaces. The system is based on an Open Sound Control (OSC) encoding of symbols representing multi-rate and multidimensional control data, which can be streamed as control messages to audio processing, or to any kind of media rendering system that speaks OSC. Symbols can be designed and composed graphically, and brought into relationship with other symbols. The environment provides tools for creating symbol groups and stave references, by which symbols may be timed and used to constitute a structured and executable multimedia score.
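As a rough illustration of this OSC-based approach, the sketch below streams one symbol's control data to an OSC-speaking renderer using the python-osc library; the address paths and fields are invented for illustration and are not Symbolist's actual schema.

```python
# Hypothetical sketch: a graphic symbol encoded as OSC control data and
# streamed to a renderer. Addresses and fields are invented examples;
# Symbolist's actual encoding may differ.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)  # any OSC-speaking renderer

# A symbol as a bundle of multidimensional control data: timing
# attributes plus one sampled point of a spatial trajectory.
symbol = {
    "/symbol/1/type": "trajectory",
    "/symbol/1/start": 0.0,           # onset time in seconds
    "/symbol/1/duration": 4.0,
    "/symbol/1/xyz": [0.0, 1.0, 0.5], # one sampled point of the path
}

for address, value in symbol.items():
    client.send_message(address, value)
```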

 

2:30pm-3:30pm Shannon Room – Classroom


Thibaut CARPENTIER and Eric DAUBRESSE (IRCAM)
Hands-on: Spat, Panoramix, ADMix

 

6:30pm-7:30pm ANNOUNCEMENT OF THE LAUREATES OF THE ARTISTIC RESEARCH RESIDENCY PROGRAM 2018-2019


Drinks under the glass roof

 

8:30pm-10:00pm IRCAM LIVE CONCERT


Centre Pompidou, Grande Salle


Thursday 8th, March


Conference room, demos and posters

Time | Stravinsky Room | Studio 5 – Demos and workshops
Session: Active musicology – sound library | Session: New interfaces / new instruments
9:30am-10am Maurilio CACCIATORE 
MMixte's name comes from the French term "musique mixte", which designates the tradition of live electronics (or even electronics without real-time interaction) combined with acoustic instruments on stage. It works like a middleware: the modules are ready to use, but all their parts are open, so users can modify them locally in their own patch as needed for their project. MMixte targets middle-skill to advanced Max users. Less experienced programmers can get training in how to organize what is generally called a "concert patcher" (a Max patcher used in concert to manage the electronics of a piece), improve their programming technique, or simply avoid the risk of a crash during a concert caused by badly programmed modules of their own. Advanced Max users can trust the extreme simplicity of this collection – only the basic Max library has been used – and build in a few minutes an environment for a piece, to develop further on their own. The preparation time for a concert patcher can be reduced dramatically; after a few steps the programmer can start working on the audio treatments, the spatialization, and the other creative parts of the Max patching.
This collection comes from a personal need. I started to develop for myself a hard core for a standard concert patcher to be used in my pieces, and I formalized these modules to avoid having to write them each time from scratch. MMixte is, in this sense, a collection made by a composer for other composers.
The presentation will show the architecture of the package, its modules, and their use.
MMixte is a Max package released in July 2017.
Hervé PORCEDDA
New Instrument
10am-10:30am Rémi ADJIMAN
Over the past decade, first the SATIS department, then the ASTRAM laboratory, and now the PRISM laboratory (Perception Représentation Image Son Musique) have developed a project to create a browsable, online library of ambient sounds called "Sons du Sud". This project is currently entering a new phase of development with the creation of a specially developed thesaurus and the realization of an interface that provides an ergonomic and didactic tool for indexing sounds (sounds intended for audio-visual professionals, more specifically sound editors). Rémi Adjiman will present his current work, research subjects, and possibilities. He may also present the current "Sons du Sud" website, keeping in mind that further development is scheduled for the end of 2018. This project is supported by the SATT PACA, PRIMI, and AFSI (Association Professionnel du Son à l'Image).
Peter CONNELLY, Eduardo FOUILLOUX, Jan-Marc HECKMANN
MuX: Create your own musical instruments and soundscapes in VR.
10:30am-11am Break
Session: Artificial intelligence & sound design
11am-11:30am Sahin KURETA 
The presentation is intended as an introduction to deep learning and its applications in music. It features the use of deep auto-encoders for generating novel sounds from a hidden representation of an audio corpus, audio style transfer as shown by Ulyanov (https://dmitryulyanov.github.io/audio-texture-synthesis-and-style-transfer/), and future directions based on CycleGAN, WaveNet, etc., as well as a high-level introduction to some basic concepts in machine learning.
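For readers new to the technique, here is a minimal sketch of the auto-encoder idea (illustrative only, not Kureta's code; the layer sizes and the use of magnitude-spectrum frames are assumptions): spectral frames are compressed into a hidden representation, and perturbing that representation decodes into novel frames.

```python
# Minimal audio auto-encoder sketch in PyTorch (illustrative sizes).
import torch
import torch.nn as nn

class AudioAutoEncoder(nn.Module):
    def __init__(self, n_bins=513, latent=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_bins, 128), nn.ReLU(),
            nn.Linear(128, latent))
        self.decoder = nn.Sequential(
            nn.Linear(latent, 128), nn.ReLU(),
            nn.Linear(128, n_bins))

    def forward(self, x):
        z = self.encoder(x)            # hidden representation of the corpus
        return self.decoder(z), z

model = AudioAutoEncoder()
frame = torch.rand(1, 513)             # stand-in for one magnitude-spectrum frame
recon, z = model(frame)                # reconstruction (after training)
novel = model.decoder(z + 0.1 * torch.randn_like(z))  # perturb latent -> novel frame
```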
Per BLOLAND
I wish to present my ongoing work with the Modalys Induction Connection, created as a result of my Musical Research Residency in 2013. This presentation (an updated version of the one I gave in Santiago at a previous Forum) will begin with a general overview, followed by a series of audio examples drawn from my recently created project documentation pages.
This will be followed by additional examples from a piece I am currently composing, for piano and electronics, commissioned by Keith Kirchoff as part of his EAPiano Project. These new examples feature two main components: my experiments with modeling a grand piano bichord struck by a hammer, and additional experiments with the use of the Induction Connection. For most of these examples I first created a ModaLisp text document to generate audio files, then added code to generate an mlys script for real time use in a Max patch.
11:30am-12pm Karolína KOTNOUR  
Within this project, I aim to define a mutual synthesis of sound and the position of borderlines in space. Space, like a word, has its own shape, meaning, and wording. How does it affect our perception of architecture and the acoustic experience of space? What is the relationship between sound and vision, and how does the brain interpret multidimensional space? How does the sound spectrum affect brain activity and spatial perception, for instance within the audible frequency range determined by the physiological structures of the auditory organ of a living organism?
The main purpose of this project is the implementation of a structure that, on the basis of this interaction, creates a spatial envelope (the veil) changing in time and space. This creates an elastic, fluid structure of matter and the sound reflecting itself. The situation is modeled on the basis of information obtained from sensors located in the space and on sensory evaluation of human auditory perception in that space. The realization of this structure will be preceded by testing and visualization of such structures in interactive virtual reality using 3D and 4D programs such as Rhino, Grasshopper, and Max 7/MSP for Ableton Live, which allow a creative representation of a sound as a shape in a common space.
Summary of fundamental objectives:
Visualization of human sensory perception of an environment in relation to architectural forms.
Specifics of psychoacoustic space and visualization of reality and the inner structure of reality.
Possibilities and contexts of subjective visualization of sound in art and subjective graphical scores for musical compositions. All objectives will be evaluated in the detailed design of the structure of "The Veil."
Richard Albert BRETSCHNEIDER 
Project GPSES (Gestural Parameter Space Explorer for Synthesizers) proposes an approach to support the divergent thinking processes of artists navigating through the enormously huge amount of sounds of a synthesizer. The project lives at the intersection of music and interaction design.
Using the Leap Motion hand-tracking device, the parameter space of a synthesizer is translated into a physical 3D space on the artist's desk, where the musician can navigate through the possible combinations with a set of different hand postures and hand movements.
At IRCAM Forum 2016, Richard Bretschneider presented the theoretical foundations of this project in a talk. Now he would love to present the enhancements as well as a live performance of a working prototype.
The main elements of the prototype consist of:
– Leap Motion device
– Wekinator (software used to recognize the hand posture via machine learning)
– Python script (to translate hand posture + hand position into synthesizer parameters that get sent to a synthesizer via OSC; see the sketch after this list)
– A software synthesizer
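A minimal sketch of that translation step, under assumed parameter names and OSC addresses (the actual GPSES mapping is not published here), using the python-osc library:

```python
# Assumed sketch of the posture/position -> synth-parameter translation.
# Parameter names, OSC addresses, and ranges are invented for illustration.
from pythonosc.udp_client import SimpleUDPClient

synth = SimpleUDPClient("127.0.0.1", 7400)   # software synth listening for OSC

def send_hand(posture: int, x: float, y: float, z: float) -> None:
    """Map normalized hand coordinates (0..1) to synthesizer parameters."""
    params = {
        "/synth/cutoff":    x * 8000.0,        # Hz
        "/synth/resonance": y,                 # 0..1
        "/synth/detune":    (z - 0.5) * 50.0,  # cents
    }
    synth.send_message("/synth/posture", posture)  # label from Wekinator
    for address, value in params.items():
        synth.send_message(address, value)

send_hand(posture=2, x=0.4, y=0.7, z=0.5)
```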
Bio:
Richard Albert Bretschneider is an internationally working artist and user experience consultant living in Germany. He believes that technology is not only a translator for ideas – it's a source for inspiration as well.
12pm-12:30pm Robert B. LISEK
We observe the success of artificial neural networks in simulating human performance on a number of tasks such as image recognition, natural language processing, etc. However, there are limits to state-of-the-art AI that separate it from human-like intelligence. Humans can learn a new skill without forgetting what they have already learned, and they can improve their activity and gradually become better learners. Today's AI algorithms are limited in how much previous knowledge they are able to keep through each new training phase and how much they can reuse. In practice, this means that you need to build a new algorithm for each new specific task. Artificial general intelligence (AGI) is the domain where solutions to this problem may be found; it describes research that aims to create machines capable of general intelligent action. "General" means that one AI program realizes a number of different tasks and the same code can be used in many applications. We must focus on self-improvement techniques, e.g. reinforcement learning, and integrate them with deep learning and recurrent networks.
René CAUSSÉ 
The multidisciplinary study of the trumpets—or cornua in Latin—of Pompeii, ancestors of today's brass instruments, involves the humanities, acoustics, materials science, instrument making, and sound and music synthesis, and is part of the project Paysages sonores et espaces urbains de la Méditerranée ancienne supported by the École Française de Rome. The presentation will focus primarily on one of the study's aspects: the creation of virtual copies of 5 trumpets discovered in 1852 and 1884. These models help us understand their performance as well as their sonic and musical possibilities. In the beginning, the study consisted of analyzing the current reconstitutions of these instruments, which have undergone several restorations, to offer a profile that is as exhaustive and as accurate as possible based on a number of indicators. This profile made it possible to calculate, using the software program Resonans created to assist in the design of wind instruments, the resonance of these trumpets, giving us information about the notes that could be played with them and about a range of characteristics such as accuracy, timbre, and ease and power of sound emission.
After this, thanks to the software program Modalys, a full, real-time model of these instruments was created in Max, letting us effectively test their sonic and musical possibilities.
12:30pm-1pm Knut KAULKE 
"Where do beats begin and where does tonality end? You can find the answer in a forest of steel trees composed of resonant bark during a spectral thunderstorm." The aim of my research project is the merger of melodic and percussive elements into one conglomeration. Drum sets become more tonal without losing their characteristic noise-like sound.
The ambition is to create entirely new sounds and complex forms driven by beat sequences that can be played via a keyboard.
During my scientific doctoral work I was driven by a passion to discover something new and to increase the understanding of complex molecular networks. In the life sciences, hypotheses often have to be modified due to the functional complexity of living things. Experimental results may disprove theoretical assumptions – a novel, surprising thought can appear! My curiosity and excitement about organic/natural complexity as a scientist also emerge as a musician when I perform sound studies.
My musical research focuses on merging melodic and percussive elements into one conglomeration. I approach my theme on two levels: 1. the sound design of percussive elements and 2. the tonal sequencing in and of these special sounds.
The applied Kyma instruments and effects are almost exclusively based on the SlipStick model. SlipStick operates as an engine of sound synthesis; simultaneously, it induces and influences other forms of synthesis such as frequency modulation, physical modelling, and resynthesis, including their combination and mutual influence. Subsequently, single sounds are multiplied by the Replicator, creating highly complex and lively sounds.
The intended tonality of percussive sounds is underpinned by the modification of Euclidean and non-Euclidean rhythms. These rhythms were chosen because the structure of these algorithms can generate very natural percussive sequences, which can be modified by shifting pitches and other variances, leading to a merging of tonality and a subsequent collapse. A specially developed Kyma sequencer, the core of my setup, can be played in half-tone steps and creates instrumental sounds that will surprise everyone.
The content of the demo:
The Kyma instruments and effects applied in my demo performance are almost exclusively based on the SlipStick model. SlipStick operates as an engine of sound synthesis; simultaneously, it induces and influences other forms of synthesis such as frequency modulation, physical modelling, and GrainCloud resynthesis, including their combination and mutual influence. Subsequently, single sounds are multiplied by the Replicator, which can be used in Kyma's programming environment for multiplying variables (e.g. voices, controller values, instruments, etc.).
In my demo I will show some techniques for exciting SlipStick and for using SlipStick as an exciter of sound synthesis. Furthermore, I will share some modification hints that lead to a more lively sound.
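As an aside on the Euclidean rhythms mentioned above, a minimal sketch of the generating idea (in Python rather than Kyma, purely for illustration) distributes a given number of onsets as evenly as possible over a number of steps:

```python
def euclidean_rhythm(pulses: int, steps: int) -> list:
    """Spread `pulses` onsets as evenly as possible over `steps`
    (the effect of Bjorklund's algorithm, via a rounding shortcut)."""
    pattern, prev = [], -1
    for i in range(steps):
        cur = (i * pulses) // steps
        pattern.append(1 if cur != prev else 0)  # onset where the quotient steps up
        prev = cur
    return pattern

print(euclidean_rhythm(3, 8))   # [1, 0, 0, 1, 0, 0, 1, 0] -- the "tresillo" pattern
```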
Raphael PANIS 
The percussion is a wooden box on legs. It can be played on its top surface with hands (you can hit or rub the surface). The box itself works as a sound box. Of course, the table makes sound on its own, but the addition of electronics makes it possible to hear a larger range of timbres. Sensors detect when the surface is touched, the information is sent to a microprocessor (a BELA card). After analysis, the card sends the audio signal that vibrates the sound box, transforming the sound created by the instrument.
This table is part of my final student project, a mixed composition for solo instrument and spatialized electronics using Max with Antescofo, Spat, and the language FAUST. It also includes a video projection on several screens. The work is presented here as an interactive installation and the audience can play on the table; their actions are echoed in the electronic sounds and the image.
1pm-2pm Lunch Buffet | Installation: MuX
Session: Career paths with secondary-school pupils | Session: From the lab to the stage
2pm-2:30pm
March 8, International Women's Day: Women in Sound Professions | The composer Violeta CRUZ
Since 2011, I have been working on a research and writing project based on the musical and staged dialogue between symphonic instruments and electroacoustic sonic objects. Following up on the creation of three objects (the electroacoustic fountain, the little man machine, and the light rattle) that led to 7 concert works and 4 installations and performances, the project has recently expanded to encompass the conception and construction of the sonic décor of my opera La Princesse Légère. This décor was designed in collaboration with the set designer Oria Puppo and the director Jos Houben. In the context of an opera, the theatrical dimension of the objects becomes more important, providing new leads for their musical exploitation and offering new challenges.
Franck VIGROUX and Antoine SCHMITT  
Chronostasis is a temporal illusion that affects the neurons responsible for predicting the immediate future and for listening to music: time seems to stand still. But time is elastic, and a stretched elastic always returns to its original form. The audio-visual performance Chronostasis pushes this logic to its limits by diluting a catastrophic moment with temporal stretches and inversions throughout the performance. The present is frozen and diffracts forever; the past and the future cease to exist. The music is interpreted live with electronic instruments; the video is generative.
2:30pm-3pm Suguru GOTO 
This work is based upon sensor technology, as well as the programming of mapping interfaces and robotics, in order to construct instruments virtually; at the same time it explores the relationship between interfaces and humans (Man and Machine). Suguru Goto has been Associate Professor at Tokyo University of the Arts since April 2017 and has been further developing this research and work production there. Based on an interactive environment, the work renders images and sound in a virtual reality space. As a feature of the reproduction of the virtual space within this research, a simulation of a zero-gravity space was conceived during development. The results and knowledge of this research may be applied to new devices, including new forms of artistic expression, and will create a new flow in the field of experiential expression using sounds and images.
3pm-3:30pm Julia BLONDEAU (IRCAM)
This presentation will focus on the creation of the work Namenlosen, premiered at the Philharmonie 2 de Paris in June 2017 by the Ensemble Intercontemporain. I will discuss the use of the language Antescofo and its connections with Panoramix (the new graphical interface for Spat) and with Csound (for the generation of synthesis in real time). I will provide a few examples of the use of multiple times and writing, as well as a library of spatialization with automatic source assignment.
3:30pm-4pm Pedro GARCIA-VELASQUEZ and Augustin MULLER (IRCAM)
Artistic research residency release (IRCAM/ZKM)
 
The aim of this project is to explore the possibilities of characterizing imaginary and virtual spaces. Rather than trying to create an acoustic simulation of a place, we will explore the expressive and musical possibilities of particular acoustics connected to the evocative power of sound and the expressivity of memory.
After numerous concerts and binaural sound experiences, we created a library of High Order Ambisonics (HOA) format acoustic imprints that can be used in a variety of situations for the acoustic and oneiric characteristics of existing venues, stressing the immersive possibilities of spatialized listening. This library focuses on the remarkable acoustics of certain places, but also on their poetic nature and their evocative power. These acoustic imprints and a few of the ambiances captured in situ are used here as etudes, or sketches, that make it possible to explore certain possibilities and offer an acoustic journey through these re-imagined spaces.
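As a simplified illustration of how such an acoustic imprint can be applied (mono convolution only; the actual library is HOA-encoded, so a real renderer convolves per ambisonic channel; the file names are invented):

```python
# Sketch: apply a captured acoustic imprint (impulse response) to a dry
# sound by convolution. Mono-only simplification; file names invented.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = sf.read("dry_voice.wav")              # mono source
imprint, sr_ir = sf.read("chapel_imprint.wav")  # one channel of an imprint
assert sr == sr_ir, "sample rates must match"

wet = fftconvolve(dry, imprint)                 # the room "plays" the sound
wet /= np.max(np.abs(wet))                      # normalize to avoid clipping
sf.write("voice_in_chapel.wav", wet, sr)
```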
4pm-4:30pm Break
Session: Collective interaction and geolocation | Session: Gesture – therapy – bioart
4:30pm-5pm Jan DIETRICH 
StreamCaching is a musical project that started in June 2017 in Hamburg. It was launched at the Blurred Edges festival of contemporary music: 10 compositions were commissioned, digitized, tagged with GPS data, and located across the city. The public could search for the tracks with a smartphone and listen to them directly on site.
The presentation will cover the history and idea of the StreamCaching project: explaining the concept of locating 10 works of art along the satellite's ground tracks, showing excerpts of the compositions and visual art, giving an overview of how the project will continue, and leaving room for discussion of technical and social questions.
Andreas BERGSLAND and Robert WECHSLER 
The MotionComposer is a therapy device that turns movement into music. The newest version uses passive stereo-vision motion tracking technology and offers a number of musical environments, each with a different mapping. (Previous versions used a hybrid CMOS/ToF technology.) In serving persons of all abilities, we face the challenge of providing the kinesic and musical conditions that afford sonic embodiment, in other words, that give users the impression of hearing their movements and shapes. A successful therapeutic device must a) have a low entry fee, offering an immediate and strong causal relationship, and b) offer an evocative dance/music experience, to assure motivation and interest over time. To satisfy both these priorities, the musical environment "Particles" uses a mapping in which small discrete movements trigger short, discrete sounds, and larger flowing movements make rich conglomerations of those same sounds, which are then further modified by the shape of the user's body.
5pm-5:30pm Oeyvind BRANDTSEGG 
The project explores cross-adaptive processing as a drastic intervention in the modes of communication between performing musicians. Digital audio analysis methods are used to let features of one sound modulate the electronic processing of another. This allows one performer’s musical expression on his/her instrument to influence radical changes to another performer’s sound. This action affects the performance conditions for both musicians. The project method is based on iterative practical experimentation sessions. Development of processing tools and composition of interaction mappings are refined on each iteration, and different performative strategies explored. All documentation and software is available online as open source and open access.
The project is run by the Norwegian University of Science and Technology, Music Technology, Trondheim. Collaboration partners include De Montfort University, Maynooth University, Queen Mary University of London, the Norwegian Academy of Music, the University of California San Diego, and a range of fine freelance music performers.
The presentation will look at key findings, artistic and technical issues, and future potential.
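A toy offline sketch of the cross-adaptive principle (illustrative only; the project's actual tools run in real time): a feature extracted from one performer's signal, here an RMS envelope, modulates the processing applied to the other's.

```python
# Cross-adaptive sketch: performer A's loudness envelope ducks performer B.
import numpy as np

def rms_envelope(x: np.ndarray, win: int = 1024) -> np.ndarray:
    """Block-wise RMS, repeated back to (truncated) signal length."""
    n = len(x) // win
    env = np.sqrt(np.mean(x[: n * win].reshape(n, win) ** 2, axis=1))
    return np.repeat(env, win)

sr = 44100
t = np.arange(sr * 2) / sr
a = np.sin(2 * np.pi * 3 * t) * np.sin(2 * np.pi * 220 * t)  # performer A (pulsing)
b = np.sin(2 * np.pi * 440 * t)                              # performer B (steady)

env = rms_envelope(a)
b_mod = b[: len(env)] * (1.0 - env / env.max())  # A's feature modulates B's gain
```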
Thomas DEUEL 
The Encephalophone is a hands-free musical instrument and musical prosthetic. It measures the EEG "brain-wave" signal to let users generate music in real time using only thought control, without movement. Unique Brain-Computer Interface (BCI) algorithms harness the user's electrical brain signals through mental imagery, so the instrument can serve as a musical prosthetic for paralyzed individuals. It has been experimentally proven to work with reasonable accuracy and is now being used in clinical trials with patients who are paralyzed from stroke, MS, ALS, or spinal cord injury. Patients who have lost their musical abilities due to neurological disease are empowered to create music in real time for the first time since their injury, without needing movement.
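One plausible, heavily simplified version of such a BCI mapping (an assumption for illustration; the Encephalophone's actual algorithms are not described here) quantizes the power of the mu/alpha band (8-12 Hz), which is modulated by motor imagery, onto a musical scale:

```python
# Assumed sketch: EEG alpha/mu-band power mapped to a scale degree.
import numpy as np
from scipy.signal import welch

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]       # MIDI note numbers

def eeg_to_note(buffer: np.ndarray, fs: int = 256) -> int:
    freqs, psd = welch(buffer, fs=fs, nperseg=fs)
    band = (freqs >= 8) & (freqs <= 12)          # mu/alpha band
    power = np.log10(psd[band].mean() + 1e-12)
    idx = int(np.interp(power, [-12.0, -8.0], [0, len(C_MAJOR) - 1]))
    return C_MAJOR[idx]                          # imagery shifts power, picks notes

note = eeg_to_note(np.random.randn(512))         # stand-in for a real EEG buffer
```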
5:30pm-6pm Garth PAINE

Garth Paine presents his artistic residency project in the context of the IRCAM and ZKM residencies.

Future Perfect will be a concert performance using smartphone virtual reality technologies and ambisonic/wave field sound diffusion.

Future Perfect explores the seam between virtual reality as a documentation format for environmental research and archiving nature, combining the thoughts that:

1) "nature" as we know it may, in the near future, only exist in virtual reality archives, and

2) the notion of the virtual, a hyper-real imaginative world contained by technological mediation, can be presented to individuals as a personal experience.

The Future Perfect performance will not have a fixed point of view. Interactive crowd mapping using smartphone beacons will generate personal journeys through the work and determine each audience member's own viewing and listening perspectives. The work will draw on the deep expertise at IRCAM in Wave Field Synthesis techniques, which, through the smartphone tracking, will allow sonic objects to be attached to and follow people within the concert space. HOA ambisonics will use Spat to create an immersive sound field. Smartphone tracking will locate people within the concert space, using flocking and spatial spread to drive interactive musical and animation parameters.

The work will be made from 360° VR footage shot by Paine in nature preserves in Paris and Karlsruhe, blended with procedural animations derived from plant images and HOA recordings made by the composer at the same locations. Participants will be able to walk freely through the space, with vector lines being drawn between people subject to proximity and vectors of movement. Other individuals will be indicated in the VR space as outlines to make movement safe and to help develop a collective consciousness.

6pm-7pm Emanuele PALUMBO 
I would like to present my research work as a performative installation that will address three areas of interest: the relationship between the musical gesture and the physiological response of a saxophonist, the relationship between listening and the physiological "resonance" of a physiological performer, and finally, the relationship between the two. These different types of interaction will be explored throughout the spaces and moments of the installation that, via a computer, automatically generates its form. The physiological parameters of the musician and the physiological interpreter are captured via the LISTEN system, processed by a computer, and used to create—in real time—both the electronic sounds and the score. The physiological interpreter is a dancer who will be in different positions in the space: standing, sitting, lying down. Here my work is combined with that of a colleague, the choreographer Zdenka Brungot Svitekovà. Zdenka works on the somatic power of certain techniques to manipulate the body: another dancer will be responsible for managing the "physiological interpreter", creating changes in the quality of the fabric of the body; the result will therefore be a change in the music generated. The people who enter the installation space are also invited to interact with the physiological interpreter. In a third part of the installation, we will explore the relationship between the saxophonist and the physiological interpreter.
This installation will also display the technology used and the data captured. Monitors with this information will be displayed to the participants.
At the end of the 30-minute performance, I will present the installation.

Classroom

Time | Nono Classroom | Shannon Meeting and Classroom
10am-10:30am Alexander MIHALIC 
Sampo is an extension for a performer playing an acoustic instrument. It was designed to play all types of electroacoustic music with an acoustic instrument.
Direct access to the settings:
Triggering sound files
Triggering control sequences
Sampo lets the performer play works already in the mixed music repertory, a repertory of several hundred works written since the 1960s. Works—ensembles of electroacoustic configurations and fixed contents—are accessible via a graphical interface on the Sampo's touch screen. Distribution of the electroacoustic settings and sound files is carried out on a server and is available using Sampo, equipped with a Wi-Fi connection and automatic access to the database.
10:30am-11am Alexander MIHALIC (continued)
11am-12pm Marco LIUNI and Emanuele PALUMBO
Hands-on
12pm-12:30pm

Meet-Up

7:30pm-10pm MEET-UP HACK DAYS
Audio Professions Community


Studio 5


Friday 9th, March


Conference room, demos and posters

Time | Stravinsky Conference Room | Studio 5 – Demos and Workshops
Session: Notation – perception – languages
9:30am-10am Daniel MANCERO BAQUERIZO 
In the electroacoustic music field, compositions based on soundscapes, or "soundscape compositions", can be characterized by the presence of sound elements from the environment throughout the repertoire. In contrast to sound art, this is a form of composition that uses musical language: rather than the result being a soundscape, it is a composition that uses a sound environment as source material for musical creation, implying a structural strategy for composition. In my PhD thesis, I start with the idea that these compositions, using a very specific logic of sound organization on a poetic-perceptive level, can be distinguished and classified in 3 groups according to morphological criteria applied to the sound mass: 1/ spectral and brilliance flattening, 2/ distribution, spectral symmetry, and brilliance, 3/ inharmonicity, amplitude, and brilliance. After characterizing and categorizing a corpus representative of the repertoire [Mancero, Bonardi, and Solomos 2017], I developed computer tools for the segmentation, description, instantiation, and harmonic analysis of the pertinent sound materials, with the goal of consolidating a few harmonic models for musical composition. These models respond to two logics of analysis [Bregman 1990]. The first follows the principle of sequential regrouping of formant frequencies. The second follows the principle of simultaneous grouping of formant peaks, made up of chords and non-octave modes. I developed a few patches for acoustic segmentation and description with the MuBu and PiPo [Norbert Schnell] libraries and ircamdescriptors [IRCAM's Analysis/Synthesis team]. I also developed a few tools for harmonic analysis that use primarily the FTM & Co and Gabor libraries [IRCAM's Real-Time Musical Interactions team] and bach [Andrea Agostini & Daniele Ghisi], in Max. As a complement, I used the EAnalysis software [Pierre Couprie] for the characterization of pertinent materials and the choice of acoustic descriptors. A few musical compositions were carried out during the research and refinement of the tools for harmonic analysis, making it possible to associate research with creation, notably "Chant Elliptique n°2" for Celtic harp and electronics, "la rugosité de la nuit" for accordion and electronics, "Turgescences" for mandolin, guitar, and flute, and "Estambre urdido" for an ensemble of 5 percussionists.
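As a simplified illustration of the descriptor step (the thesis tools use MuBu/PiPo and ircamdescriptors inside Max; librosa is substituted here purely for sketching, and the file name is invented), one can compute brilliance-related descriptors and derive crude segmentation cues:

```python
# Sketch: spectral centroid ("brilliance") and flatness as segmentation cues.
import numpy as np
import librosa

y, sr = librosa.load("soundscape.wav", sr=None)            # file name invented
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]
flatness = librosa.feature.spectral_flatness(y=y)[0]

# Crude cue: frames where brilliance jumps markedly between frames.
jumps = np.where(np.abs(np.diff(centroid)) > 2 * centroid.std())[0]
times = librosa.frames_to_time(jumps, sr=sr)               # candidate boundaries
```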
10am-10:30am Nadir BABOURI
A presentation of an Antescofo library converting mathematical parametric functions into curves controlling the trajectories of Spat sources. The outcome of these processes is X, Y, and Z or azimuth, elevation, and distance values that you can scale and convert to suitable data.
My aim is to present and propose a collaboration with a forum member – a programmer – to further develop this library and achieve simpler, human-readable scripting similar to Python's 'turtle' library: https://github.com/nadirB/Spat_Trajectory_Score_Library
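A sketch of the conversion such a library performs (illustrative Python, not the Antescofo code): sample a parametric curve, then convert Cartesian X, Y, Z into the azimuth/elevation/distance triplets Spat expects.

```python
# Parametric spiral -> azimuth/elevation/distance trajectory (illustration).
import numpy as np

t = np.linspace(0, 4 * np.pi, 200)       # curve parameter
x, y = np.cos(t), np.sin(t)              # circular path...
z = t / (4 * np.pi)                      # ...rising into a spiral

distance = np.sqrt(x**2 + y**2 + z**2)
azimuth = np.degrees(np.arctan2(y, x))            # angle in the horizontal plane
elevation = np.degrees(np.arcsin(z / distance))   # angle above the plane

# Each (azimuth, elevation, distance) triplet can be scaled and sent to a
# Spat source, e.g. one OSC message per control period.
```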
10:30am-11am Andrea AGOSTINI 
The project I present here is a simple textual programming language meant to ease the manipulation of the data structures of bach (an extension of Max for musical representation and computer-aided composition). Its goal is to facilitate the representation of non-trivial processes and algorithms, specifically in the context of musical formalization. This is, in general, something not easily achievable in Max without resorting to writing code in some established, general-purpose programming language (C, C++, Java, JavaScript and more) through dedicated APIs whose bindings with Max tend to be cumbersome and, in some cases, inefficient because of the deep underlying differences in programming paradigms and data representations. On the contrary, the language I propose is meant to be extremely simple and tightly integrated with the Max environment and the data structures of bach. The project is the outcome of the Musical Research Residency I carried out at IRCAM in 2017.
11am-11:30am David ZICARELLI and Emmanuel JOURDAN 
We will present the latest news from Cycling '74 regarding Max and Max for Live.
11:30am-12pm Break
12pm-12:30pm Denis BEURET
I developed a program that simulates a virtual quintet (drums, bass, piano, and another instrument) accompanying one or more jazz soloists. This ensemble grooves and has a wide range of controls for the rhythmic parameters of the groove. It is easy to make all the instruments play together following a soloist, give them more freedom, have them play polyrhythmically, follow different inputs individually (audio or MIDI file, microphones, OMax, etc.), control different parameters of the performance, and so on. This gives a more natural feeling to the instruments. The ensemble can also follow variations in tempo and variations in the intensities of any given inputs. Concerning harmony, chords with 6 notes are generated in real time depending on the notes played by the instruments. This virtual quintet is an innovative tool that can be set up to play different styles with a broad range of sounds.
David ZICARELLI and Emmanuel JOURDAN (Cycling '74)
Meeting Max experts
12:30pm-1pm Dr Juan Manuel ABRAS CONTEL 
'Diálogos franciscano(n)s', for flute and electronics, is linked to new technologies—such as voice and sonic modelling (provided by Ircam Trax v3), audio spatialization, and spectral morphing—as well as musical ekphrasis, numinosity, intertextuality, and extended performance techniques. The piece is an artistic transmedialization of a stained-glass window made by Frère Éric for the Église romane de Taizé (France) depicting St. Francis of Assisi surrounded by six birds. The listener seems to hear how 6 different sound sources (representing birds) appear at 4 equidistant spots (representing trees) and start emitting signals from the periphery to the center, where the flutist (representing St. Francis) is located, while rotating counterclockwise at constant speed and transforming themselves into children's voices (representing angels), just before the birds seem to take off rapidly. The flutist rotates almost constantly on his/her vertical axis to follow the apparition of the mentioned sound sources and answer each of the emitted bird songs with his/her flute by playing, in a somewhat canonical way (hence the title), their corresponding transcriptions, which are connected by passages characterized by the use of extended performance techniques.
1pm-2pm Lunch Buffet | Lunch Buffet
2pm-2:30pm Demo-poster 
Robert LISEK Performance
Norbert SCHNELL
Music Dices is one of the student projects created in the framework of the master's program in Music Design at the Faculty of Digital Media of Furtwangen University, started last autumn. The installation consists of a musical dice game that allows for creating music arrangements by chance. In the game, up to three players throw foam dice into the room. The motion and position of the three dice determine the concatenation and combination of three music tracks (i.e. drums, rhythm guitar, and lead guitar). The implementation of the game is entirely based on mobile web technologies and integrates the Soundworks framework developed at IRCAM.
Design: Tanita Deinhammer, Supervisor: Prof. Dr. Norbert Schnell

Aurelian BERTRAND: ZEF, the Electric Violin
2:30pm-3pm
3pm-3:30pm Forum: perspectives and new platform
3:30pm-4pm

Classroom

Time | Nono Classroom | Shannon Meeting and Classroom
9:30am-10am Hugo SILVA (PLUX) and ISMM Team (IRCAM)
Physiological data has had a transforming role in multiple aspects of society that goes beyond the health sciences domains with which it was traditionally associated. While biomedical engineering is a classical discipline where the topic is amply covered, today physiological data is a matter of interest for students, researchers, and hobbyists in areas ranging from the arts to programming and engineering, among others. Regardless of the context, the use of physiological data in experimental activities and practical projects is heavily bounded by cost and limited access to adequate support materials.
In this workshop we will focus on BITalino, a versatile toolkit composed of low-cost hardware and software, created to enable anyone to create cool projects and applications involving physiological data. The hardware consists of a modular wireless biosignal acquisition system that can be used to acquire data in real time, interface with other devices (e.g. Arduino or Raspberry Pi), or perform rapid prototyping of end-user applications. The software comprises a set of programming APIs, a biosignal processing toolbox, and a framework for real-time data acquisition and postprocessing.
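As a taste of the programming APIs, a minimal acquisition loop with BITalino's Python API might look like the sketch below; the MAC address and channel choices are placeholders.

```python
# Minimal BITalino acquisition sketch (MAC address and channels are placeholders).
from bitalino import BITalino

device = BITalino("20:16:02:01:23:45")                   # your board's MAC address
device.start(SamplingRate=1000, analogChannels=[0, 1])   # e.g. two analog inputs

try:
    for _ in range(10):
        frames = device.read(100)      # read 100 samples per block
        print(frames[:, -2:])          # last two columns: the analog channels
finally:
    device.stop()
    device.close()
```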
10:00am-11am Hands-on
Alain BONARDI (IRCAM), Philippe GALLERON, Eric MAESTRI, Jean MILLOT, Eliott PARIS, Anne SEDES (University Paris 8)
The ANR-funded MUSICOLL project (2016-2018) aims at redesigning the musical practice of real-time graphical computing in a collaborative manner. Hosted at the Maison des Sciences de l'Homme Paris Nord, it brings together the Centre de Recherche en Informatique et Création Musicale (CICM), belonging to the MUSIDANSE Lab at Paris 8 University, and the OhmForce company, specialized in collaborative digital audio. In this framework, we are developing Kiwi, an environment for real-time collaborative music creation enabling several creators to work simultaneously on the production of a sound process hosted online. We are concurrently designing a new course in patching for beginners that will be taught at Paris 8 University from February to May 2018. During this hands-on session, we will offer participants a first introduction to Kiwi on their laptop computers as well as a first collaborative approach to patching.
11am-11:30am

Installation

1:30pm-4pm Diemo SCHWARZ
DIRTI machines


Shannon Room

Concert

8pm-9:30pm La Princesse Légère

Théâtre national de l’Opéra comique


Installations

Time | Studio 1 | Under the glass roof
10am-4:30pm Pedro GARCIA-VELASQUEZ and Augustin MULLER (IRCAM)
Artistic research residency release (IRCAM/ZKM)
The aim of this project is to explore the possibilities of characterizing imaginary and virtual spaces. Rather than trying to create an acoustic simulation of a place, we will explore the expressive and musical possibilities of particular acoustics connected to the evocative power of sound and the expressivity of memory.
After numerous concerts and binaural sound experiences, we created a library of High Order Ambisonics (HOA) format acoustic imprints that can be used in a variety of situations for the acoustic and oneiric characteristics of existing venues, stressing the immersive possibilities of spatialized listening. This library focuses on the remarkable acoustics of certain places, but also on their poetic nature and their evocative power. These acoustic imprints and a few of the ambiances captured in situ are used here as etudes, or sketches, that make it possible to explore certain possibilities and offer an acoustic journey through these re-imagined spaces.
Tristan SOREAU
void -. ..- . .() is an interactive installation that places the visitor in a listening situation opposite a digital biotope. The listener finds herself in the presence of a simulator of biological entities, an invisible swarm with only a sonorous manifestation. The quieter the visitor, the more the swarm makes itself manifest. Conversely, the louder the visitor, the quieter the swarm. The starting point for this work is the desire to let the behavior of swarms (be they birds or insects) be heard, as swarms present unusual characteristics for spatialized sound. We can observe a cloud that is both a mass and a sum of individuals; the movement of a swarm can be seen both as an ensemble and as individual trajectories.
On one hand, the installation is made up of a program—belonging to the family of multi-agent systems—that simulates the movements of a digital cloud. These movements are decided based on the public's actions. The quieter the audience, the more the cloud manifests itself and the more individuals can be found in the cloud. Conversely, the louder the visitors, the sparser the cloud, which assumes an escape behavior. On the other hand, the sound manifestation of the cloud is carried out in an "environment" made up of nests designed algorithmically with forms borrowed from nature. The mass's movement is re-transcribed via a sound layer that moves from one nest to another depending on the virtual movement generated by the program. Individual trajectories manifest themselves through succinct sounds that are scattered through the space.
void -. ..- . .() is born of the association of writing a program that simulates the behaviors of swarms and the fabrication of an environment in which the swarm can manifest itself through sound in reaction to the public's behavior. The installation troubles the listener, simulating a swarm equipped with affects; it makes one believe there is a biological entity, nonetheless at the threshold of the digital, forcing a sort of strangeness to surface, a paradox: this entity is completely digital, yet its behavior and the nature of its manifestation seem organic.
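A compact sketch of the installation's logic under stated assumptions (the agent update and the loudness-to-population mapping below are invented for illustration): a multi-agent swarm whose population shrinks as the measured input loudness rises, each agent's position driving one spatialized sound.

```python
# Toy multi-agent swarm: audience loudness thins the flock.
import numpy as np

rng = np.random.default_rng(0)

def step_swarm(pos, vel, cohesion=0.01, noise=0.05):
    """One boids-like update: drift toward the flock center, plus jitter."""
    center = pos.mean(axis=0)
    vel = vel + cohesion * (center - pos) + noise * rng.standard_normal(pos.shape)
    return pos + vel, vel

def population(loudness: float, max_agents: int = 200) -> int:
    """The quieter the audience (loudness near 0), the fuller the swarm."""
    return max(1, int(max_agents * (1.0 - min(loudness, 1.0))))

n = population(loudness=0.2)                 # a quiet room -> a dense swarm
pos = rng.uniform(-1, 1, size=(n, 3))        # agent positions in the space
vel = np.zeros_like(pos)
pos, vel = step_swarm(pos, vel)              # each pos could drive one sound source
```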
10am-6pm

Last update: February 20, 2018