IRCAM Forum Workshops Paris – Preliminary Program 2017

IRCAM and Centre Pompidou

15 – 17 March 2017

Please note: this preliminary program may be subject to future modifications or additions.


Follow us on @ircamforum #ircamforumworkshops!
CONCERT IRCAM LIVE 2017: March 18, 20:30, Centre Pompidou, Grande Salle


Wednesday, March 15

Architecture and 3D Design

Parallel sessions – at IRCAM and Centre Pompidou

Time Title Speaker(s)
 9:00-9:30  Welcome Session / Presentation of the Program Greg Beller, Paola Palumbo (IRCAM)
 9:30-9:45  IRCAM Research and Development News Hugues Vinet (IRCAM, Director of R&D)
 9:45-10:15
We present panoramix, a versatile workstation for the diffusion, mixing, and post-production of spatial sound. Designed as a virtual console, the tool provides a comprehensive environment for combining channel-, scene-, and object-based audio. The incoming streams are mixed in a flexible bus architecture that tightly couples sound spatialization with reverberation effects. The application supports a broad range of rendering techniques (VBAP, HOA, binaural, etc.) and is remotely controllable via the OSC protocol.
Olivier Warusfel, Thibaut Carpentier (IRCAM, EAC Team)
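Because panoramix is remotely controllable over OSC, a control client can be written in a few lines. Below is a minimal sketch in Python using the python-osc package; the port number and the /track/... address patterns are illustrative assumptions, not the documented panoramix namespace.

```python
# Hedged sketch: remote control of a spatial mixing engine over OSC.
# Assumes python-osc is installed; port 4002 and the address patterns
# are placeholders, not the verified panoramix OSC namespace.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 4002)   # host/port of the workstation

client.send_message("/track/1/gain", -6.0)            # gain in dB (assumed)
client.send_message("/track/1/xyz", [1.0, 0.5, 0.0])  # source position in metres (assumed)
```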
 10:15-10:45
We will show new additions to Max available for download via the Max Package Manager, featuring the BLOCKS package that integrates support for ROLI hardware and Miraweb, a new version of the Mira controller that works in any web browser and is built on top of an open source protocol called Xebra that can be used to interact with Max patchers over a network.
David Zicarelli & Joshua Clayton (Cycling ’74)
 10:45-11:00  Break
 11:00-11:30
DAVID (Da Amazing Voice Inflection Device) is a free, real-time voice transformation tool able to “colour” any voice recording with an emotion that wasn’t intended by its speaker. DAVID was designed especially with the affective psychology and neuroscience community in mind, and aims to provide researchers with new ways to produce and control affective stimuli, both for offline listening and for real-time paradigms.
Jean-Julien Aucouturier, Marco Liuni (IRCAM, CREAM Team)
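DAVID itself runs in real time in Max, but the flavour of such a transformation can be sketched offline. The following Python sketch applies one of the cues DAVID manipulates, an upward pitch shift, by crude resampling; the +50-cent value and file names are assumptions, and the real tool combines pitch shift with inflection, vibrato, and filtering.

```python
# Hedged offline sketch of "colouring" a voice happier by raising its
# pitch ~50 cents via naive resampling (this also shortens the file
# slightly, unlike a true real-time pitch shifter). Assumes a mono
# 16-bit WAV; the file names are placeholders.
import numpy as np
from scipy.io import wavfile

CENTS = 50                                   # assumed shift, in cents
rate, voice = wavfile.read("voice.wav")
voice = voice.astype(np.float64)

ratio = 2 ** (CENTS / 1200)                  # frequency ratio of the shift
n_out = int(len(voice) / ratio)
idx = np.arange(n_out) * ratio               # read the input "too fast"
shifted = np.interp(idx, np.arange(len(voice)), voice)

wavfile.write("voice_happy.wav", rate, shifted.astype(np.int16))
```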
 11:30-12:00
We will present an overview of the projects we have realized over the past years in the framework of the CoSiMa project (installations, participative concerts, workshops, and web applications). All of these projects are based on the hypothesis that visitors to a concert or a gallery – or anybody, anywhere, anytime – would pull their smartphone out of their pocket to spontaneously join others in creating sound and music. The presentation ends with a demonstration in which we invite the audience to participate. http://cosima.ircam.fr/
Norbert Schnell, Benjamin Matuszewski (IRCAM, ISMM Team)
 12:00-12:30  Presentation of His Artistic Residency at IRCAM: “Proxemic Fields” Lorenzo Bianchi Hoesch (Artistic Research Residency), David Poirier-Quinot (IRCAM, ISMM Team)
 12:30-13:00  Distributed Interactive Machine Learning Frédéric Bevilacqua, Joseph Larralde (ISMM Team)
 13:00-15:00  Buffet / student posters / installations

Time Title Speaker(s)
 15:00-16:30  Hands-On Session: panoramix. Special session for sound engineers Thibaut Carpentier, Jérémie Henrot (IRCAM)
 16:30-18:00  Hands-On Session: 3D Sound on Mobile Phones With the Web Platform CoSiMa Xavier Boissarie (ORBE) and Norbert Schnell (IRCAM)

Time Title Speaker(s)
  11:00-12:00 Hands-On Session: Max Expert David Zicarelli & Joshua Clayton (Cycling ’74)
 15:00-16:30 Hands-On Session: Modalys for Beginners Jean Lochard, Robert Piéchaud (IRCAM)
 16:30-18:00 Hands-On Session: OpenMusic for Beginners Karim Haddad (IRCAM)

Time Title Speaker(s)
 14:00-14:30 Poster Presentation Patricia Alessandrini (Goldsmiths University of London)
 14:30-17:00

Lia Mice – SOUL DELAY

SOUL DELAY is a custom patch for Max/MSP that emulates William Gibson’s description of jet lag in his 2003 novel Pattern Recognition: “Her mortal soul is leagues behind her, being reeled in on some ghostly umbilical down the vanished wake of the plane that brought her here, hundreds of thousands of feet above the Atlantic. Souls can’t move that quickly, and are left behind, and must be awaited, upon arrival, like lost luggage.” Like the soul trying to catch up to the body after flight, SOUL DELAY can be used in live performance to create a ghostly delay trail that slows down and speeds up, trying to catch up with the initial input. Much like tape delay, it creates micro-pitch changes as the delay speeds up and slows down, emulating the sick feeling of jet lag. These micro-pitch changes can be used as a compositional device to explore discordance, and help determine the nature of the composition performed into SOUL DELAY.
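The original is a Max/MSP patch, but the underlying technique, a delay line whose delay time drifts, can be sketched in a few lines of Python. The parameter values here are illustrative; the pitch wobble comes from the read head speeding up and slowing down relative to the write head, exactly the tape-delay behaviour described above.

```python
# Hedged sketch of a tape-style modulated delay; parameter values are
# illustrative, the original SOUL DELAY is a Max/MSP patch.
import numpy as np

def soul_delay(x, sr, base=0.5, depth=0.2, rate=0.1, feedback=0.5, mix=0.5):
    """Variable delay with feedback. x: float mono signal; base/depth in
    seconds; rate in Hz (how fast the delay time drifts)."""
    y = np.zeros(len(x))
    buf = np.zeros(int(sr * (base + depth)) + 2)   # circular delay buffer
    write = 0
    for n in range(len(x)):
        # The delay time drifts sinusoidally: the "soul" lags the body,
        # and the changing lag produces micro-pitch shifts, as on tape.
        delay = base + depth * np.sin(2 * np.pi * rate * n / sr)
        read = (write - delay * sr) % len(buf)
        i = int(read)
        frac = read - i
        d = (1 - frac) * buf[i] + frac * buf[(i + 1) % len(buf)]  # interpolate
        buf[write] = x[n] + feedback * d
        y[n] = (1 - mix) * x[n] + mix * d
        write = (write + 1) % len(buf)
    return y

# Example: y = soul_delay(x, 44100) for a float64 mono signal x.
```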

Pawel Dziadur – Performance Tool: Video Optical Flow and Fluid Simulation Data Sent to Max/MSP for Sonification, Parametrised With Wekinator and Gestural Controllers

I would like to show part of a research project exploring modes of composition and improvisation using optical-flow particles and fluid-simulation data derived from movement in live and pre-recorded videos. This kind of data has obvious visual dynamics, but it is not straightforward to map it to musically relevant artistic effects and sonic experiences. I have built C++ code on top of the ofxOpticalFlow library for openFrameworks that sends video-tracking optical-flow vectors and particle data via OSC to Max/MSP, where multiple oscillators are dynamically generated in JavaScript. By routing a Leap Motion controller through Wekinator, a neural-network-based tool, and teaching it gestures for parametric presets, I create a higher parametrizing layer for the data derived from optical flow and fluid simulation. Using a multiplicity of oscillators and techniques involving feedback and SuperVP, the workflow can lead to interesting sonic territories.
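The author's pipeline is C++ (openFrameworks and ofxOpticalFlow) feeding Max/MSP over OSC. A rough stand-in for the first stage can be sketched with OpenCV and python-osc; the OSC port and /flow address are assumptions.

```python
# Hedged sketch: dense optical flow from OpenCV, mean flow vector sent to
# Max/MSP over OSC. Port 7400 and the /flow address are assumptions; the
# author's own pipeline is C++ (openFrameworks + ofxOpticalFlow).
import cv2
import numpy as np
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7400)
cap = cv2.VideoCapture(0)                 # live camera, or a video file path
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    dx, dy = float(flow[..., 0].mean()), float(flow[..., 1].mean())
    client.send_message("/flow", [dx, dy, float(np.hypot(dx, dy))])
    prev = gray
```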

Anna Terzaroli – For a Representation of the Sound

My research focuses on the design and construction of a software tool to plot sound, making the transition from sound heard to sound watched. The tool, called Csgraph, is also useful for teaching purposes. The output of Csgraph is not a spectrum or a sonogram but a graph made from a Csound score: a relevant aspect given the difficulties of the symbolic representation of music (music encoding).
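Csgraph's implementation is not shown here, but the idea of turning a Csound score into a graph can be sketched in Python: parse the score's "i" statements and draw each event as a bar from onset to onset plus duration. The file name and score-format assumptions are noted in the comments.

```python
# Hedged sketch of plotting a Csound score as a graph: every "i" statement
# becomes a bar from onset to onset + duration. Assumes simple numeric
# p-fields; "score.sco" is a placeholder file name.
import matplotlib.pyplot as plt

events = []
with open("score.sco") as f:
    for line in f:
        parts = line.split()
        if not parts or not parts[0].startswith("i"):
            continue                       # skip comments, f-tables, etc.
        if parts[0] == "i":                # "i 1 0 2 ..." form
            instr, onset, dur = parts[1], float(parts[2]), float(parts[3])
        else:                              # "i1 0 2 ..." form
            instr, onset, dur = parts[0][1:], float(parts[1]), float(parts[2])
        events.append((instr, onset, dur))

for instr, onset, dur in events:
    plt.barh(y="instr " + instr, width=dur, left=onset)
plt.xlabel("time (s)")
plt.title("A Csound score, watched rather than heard")
plt.show()
```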

Lia Mice, Pawel Dziadur (Goldsmiths University of London),
Anna Terzaroli (Conservatorio Santa Cecilia)
Joué is an expressive and modular MIDI controller that feels like a real instrument. It is an innovative and evolving instrument that simplifies digital music playing and offers beginners and professional artists a unique level of expressivity and spontaneity. Joué is made of wood and metal and is equipped with a pressure-sensitive sensor on which magic modules are placed. Modules like piano keys, guitar strings, drum pads, or 3D control objects offer an infinite playground for musicians. The modules are made of soft, elastic silicone, which transmits every single pressure variation to the sensor. Thanks to this, musicians can use natural gestures such as vibrato, bending, and hitting, normally reserved for traditional instruments.

More information: www.play-joue.com

Pascal Joguet

Time Title Speaker(s)
 13:00-18:00
In this scenario, we connect to a network and receive sounds on our smartphones, like a gift. With simple gestures we share sounds, discover them, and mix them. A very close voice-over, heard in headphones, guides us toward a very precise and private listening and to the discovery of sound in relation to the smartphone’s movements. We are, at this moment, in a very introverted and intimate phase, alone and concentrated. The journey suggested by the voice-over brings us to a place where a large loudspeaker system is set up. Still with simple gestures, we can send the sound from the headphones to an Ambisonic system. The sound is set in space and moves from our head to a social space in a linear way. The sound spatialization is handled by the smartphones, which become a kind of “gamepad”. Listening in this project happens at a communal, shared level, even if a small part of it remains intimate, since the voice-over stays in the headphones. A hybrid listening is thus set up between the headphones and the diffusion system, allowing the coexistence of sound spaces that are nowadays separate and distinct: the private and the social.
Lorenzo Bianchi Hoesch (Artistic Research Residency)

Time Title Speaker(s)
 10:00-18:00
For two years at the Fine Art School of Le Mans, I have been conducting sound research on bells and how to make sounds with a single shape, a single sculptural body. I have made metallic sculptures using 3D printing combined with metal casting. Today I am studying the way old foundries made bells, and searching for a new method to make “sound casting” prototypes very easily with new technologies and materials. The harmony and quality of a sound is also a question of the complexity of the material shape’s proportions. Today, 3D printing gives the opportunity to have a shape that is perfect in proportions and measures. I work essentially with the parametric 3D drawing plug-in Grasshopper to create complex shapes optimized for sound. In the future, it may be possible to combine 3D software with IRCAM software such as Modalys to predict what kind of sound I could print and cast.
Benoît Villemont

Time Title Speaker(s)
 10:45-17:00
This project aims at turning a silent video into a video with sound through acoustic objects placed directly in the projection space. These sound machines form the film’s soundtrack, just like an orchestra. Each frame of the video is given sound, as a sound-effects engineer would do, but the sound effects are automated: once activated, the installation no longer needs human intervention. In this way it sits between the worlds of film-making, sound, and sculpture. The projected images are more or less figurative shots filmed in industrial facilities: the Firmin Didot and Soregraph printing houses.
Tanguy Clerc 

Time Title Speaker(s)
 15:00-18:00  VERTIGO Panel Discussion: Introduction
Moderator: Frédéric Migayrou (Mnam/Cci-Centre Pompidou)
Gilles Retsin, Manuel Jimenez Garcia, Jenny Sabin, Philippe Morel, Joris Laarman
 18:30 Meeting: Music and architecture
Moderators: Frank Madlener (IRCAM Director), Frédéric Migayrou (Mnam/Cci-Centre Pompidou)
Olga Neuwirth, Greg Lynn

Centre Pompidou, Grande Salle

 20:30 Le sec et l’humide – Guy Cassiers

Thursday, March 16

Simulation and Virtual Reality


Parallel sessions – at IRCAM and Centre Pompidou

Time Title Speaker(s)
 9:30-10:00
Leap Motion is an affordable, easy-to-use tool for tracking the motions of the user’s hands. It is widely used to control musical software, mostly in live performance, e.g. to perform filter sweeps or to alter the volume of a certain audio track or instrument with a movement of the hand(s). While there is an increasing number of applications for music performance, there are only a few examples of how a motion-tracking device like Leap Motion can support composers in the process of creating new sounds. This poster presentation (accompanied by an early-stage prototype developed in ChucK) shows how Leap Motion can be mapped to 10 or more parameters of a synthesizer at the same time for users searching for inspiration. The project focuses especially on supporting divergent thinking processes, which generate new ideas by trying out many possible solutions and exploring a wide field of options. One of the simplest ways to support divergent thinking in sound exploration is the ‘random button’, which sets all parameters of a synthesizer to random values. This can be simple and effective, but it leaves a lot to chance. Instead of letting a button randomly decide on a configuration of all your synthesizer parameters: what if every possible sound of your synthesizer were located somewhere on an imaginary 1 × 1 × 1 m cube right in front of you on your desk? What if you could explore the entire soundscape by moving your hand through this cube?
Richard Albert Bretschneider
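The prototype itself is written in ChucK; as a language-neutral illustration of the cube idea, the following Python sketch maps a (mocked) 3D hand position in a unit cube to ten synthesizer parameters through a fixed random projection, so that every point in the cube corresponds to one reproducible sound. The projection is an assumption, not the author's mapping.

```python
# Hedged sketch: map a 3-D hand position inside a unit cube to N
# synthesizer parameters via a fixed random projection. The Leap Motion
# input is mocked; the projection matrix is an illustrative assumption.
import numpy as np

N_PARAMS = 10
rng = np.random.default_rng(seed=1)
W = rng.uniform(-1.0, 1.0, size=(N_PARAMS, 3))   # fixed projection
b = rng.uniform(0.0, 1.0, size=N_PARAMS)

def hand_to_params(x, y, z):
    """Map a hand position in [0,1]^3 to N_PARAMS values in [0,1]."""
    p = W @ np.array([x, y, z]) + b
    return (np.sin(np.pi * p) + 1) / 2           # fold smoothly into [0,1]

print(hand_to_params(0.5, 0.5, 0.5))   # centre of the cube
print(hand_to_params(0.9, 0.1, 0.4))   # nearby points give nearby sounds
```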
 10:00-10:30
‘Orchestra of Speech’ is an art project developed as part of an artistic research fellowship at the Norwegian University of Science and Technology. The project’s artistic aim is to explore the use of speech as musical material for improvisation. For this, an instrument-like system has been developed in Max using the FTM/Gabor and MuBu libraries from IRCAM. The system allows real-time ‘orchestration’ and manipulation of speech-derived musical structures such as rhythms, melodies, tempo, and vowel formants. Speech recordings are organized in analysed corpora, allowing selection based on musical properties. It also features an interactive mode using Markov models to generate new sequences of speech segments in response to queries from live sound input, as a sort of improvisational Dadaist speech-recognition system reacting only to prosody. The performance setup includes transducers mounted on acoustic instruments, and has recently been used in several solo performances.
Daniel Formo
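The real system is built in Max with MuBu/FTM; the Markov-model idea it describes can be sketched abstractly in Python. Assuming the corpus has already been segmented and labelled (here by toy prosodic labels), a first-order model generates new segment sequences in response to a seed:

```python
# Hedged sketch: first-order Markov generation over labelled speech
# segments. The corpus labels below are made-up stand-ins for the
# analysed prosodic segments the abstract describes.
import random
from collections import defaultdict

corpus = ["low", "low", "rise", "high", "fall", "low", "rise", "high", "high"]

transitions = defaultdict(list)           # build the transition table
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

def generate(seed, length=8):
    out = [seed]
    for _ in range(length - 1):
        nxt = transitions.get(out[-1])
        out.append(random.choice(nxt) if nxt else random.choice(corpus))
    return out

print(generate("low"))    # e.g. a new prosodic contour answering the input
```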
 10:30-11:00
I play live with a PC and software controlled by a MIDI keyboard and controller. The patch is built in Plogue Bidule 0.97, with VST plugins assigned to and controlled by the MIDI keyboard and controller. All parameters are controlled live during the concert. Demonstrations from television recordings are available online.
Gintas Kraptavicius
 11:00-11:30  Break
 11:30-12:00  IRCAM Live Projects
Tarek Atoui
 12:00-12:30
Tools for computer-aided composition such as OpenMusic and PWGL provide access to the Lisp layer they are built upon, and advanced operations have often proven easier to implement in Lisp than through a higher-level, graphical approach. Max provides several options for embedding textual code written in various programming languages in patches, but my perception is that these language bindings are either too revealing of the low-level mechanisms underlying the Max environment, or suffer from their abstractness with respect to the actual Max data types and programming paradigm. The project I will show, which I am carrying out within IRCAM’s Musical Research Residency program, consists of designing and implementing a simple, high-level, multi-paradigm textual programming language able to directly manipulate the data types of bach. At the time of the presentation, the language will still be in its development phase, but I will be able to show its basic principles and some working examples.
Andrea Agostini
 12:30-13:00
Halfway between sculptures and musical instruments, the sonic objects designed by Violeta Cruz present new issues for musical writing. Her sonic objects are a series of mechanical machines with partially random behavior; the sonic behavior is prolonged by an electronic system. The dialogue between these objects and symphonic instruments creates unusual musical and theatrical situations, inviting the audience to question several domains of composition such as musical notation, algorithmic composition, and instrumental gesture.
Violeta Cruz
 13:00-13:15
Prismes électriques – hommage à Sonia Delaunay is a tribute to the naturalized French artist Sonia Delaunay, co-author with her husband of the theory of simultaneous contrast. This piece is a personal transposition into music of one of Delaunay’s paintings from 1914, Prismes électriques. I used pictures of this painting – mainly enlargements of details – to obtain short sound elements through audio-synthesis software that creates sounds from images with a noise-shaping technique. The individual fragments of sound were then put together and elaborated, overlapped several times so as to create a dense sound texture in which the audio spectrum appears first decomposed and then reorganized, bringing out each element by contrasting it with the next: a study within the theory of simultaneous contrast.
Antonio d’Amato
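The composer's synthesis software is not named, but one common way to create sound from an image, reading the picture as a spectrogram and resynthesizing it with random phases, can serve as a hedged sketch of this kind of noise shaping. The file names, sample rate, and orientation conventions below are assumptions.

```python
# Hedged sketch: read an image as a spectrogram (rows = frequency bins,
# columns = time frames) and resynthesize it with random phases, one crude
# form of noise shaping. File names, sample rate and orientation are
# assumptions; the composer's actual software is not shown here.
import numpy as np
from imageio.v2 import imread
from scipy.io import wavfile
from scipy.signal import istft

img = imread("prismes_detail.png").astype(np.float64)   # placeholder file
if img.ndim == 3:
    img = img.mean(axis=2)            # collapse RGB to grayscale
img = img[::-1]                       # put low frequencies at the bottom

mag = img / img.max()                 # magnitude spectrogram in [0, 1]
rng = np.random.default_rng(0)
phase = rng.uniform(0, 2 * np.pi, size=mag.shape)
Z = mag * np.exp(1j * phase)

nperseg = 2 * (mag.shape[0] - 1)      # window length implied by bin count
_, y = istft(Z, fs=44100, nperseg=nperseg)
y = y / np.abs(y).max()
wavfile.write("prismes.wav", 44100, (y * 32767).astype(np.int16))
```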
 13:15-15:00   Buffet / student posters / installations

Time Title Speaker(s)
 15:00-17:00  Hands-On Session: BITalino Hugo Silva (PLUX),
Emmanuel Fléty (IRCAM)

Time Title Speaker(s)
 15:00-16:30

Spectral Processing in Max: pfft~ and SuperVP
Content: this hands-on session focuses on techniques for processing a sound through its time-frequency representation, both with the native approaches offered by Max’s pfft~ framework and with the IRCAM library SuperVP.
Marco Liuni, Jean Lochard (IRCAM)
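As a rough offline analogue of what this session does inside Max, the following Python sketch round-trips a sound through its time-frequency representation with SciPy and applies a trivial spectral operation (zeroing all bins above 2 kHz); the input file is a placeholder and assumed mono.

```python
# Hedged sketch: offline spectral processing as an analogue of pfft~.
# Assumes a mono 16-bit WAV; file names are placeholders.
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft, istft

rate, x = wavfile.read("input.wav")
f, t, X = stft(x.astype(np.float64), fs=rate, nperseg=1024)

X[f > 2000, :] = 0                    # brick-wall low-pass at 2 kHz

_, y = istft(X, fs=rate, nperseg=1024)
wavfile.write("output.wav", rate, y.astype(np.int16))
```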

Time Title Speaker(s)
 10:00-10:30

Chen Hongli is a sound artist who experiments with live improvised sound art. He has continued to develop this approach in collaboration with numerous pioneering artists both in China and abroad, and it has since become his calling card, with performances at the Guggenheim Museum in New York, the Kunstenfestivaldesarts in Brussels, the Centre Pompidou in Paris, and the Institute of Contemporary Arts in London. In Chen Hongli’s work, sound art is not only a practice that has gradually shaped his works conceptually, but also a form of social intervention, a sociological method, and an experimental direction.
The guqin is an essential medium of his “sound art projects”, and it is also his most substantial language tool. He has tried several unprecedented and rebellious standing positions for playing the guqin in search of multiple possibilities of vocal expression; this subversive act of sound is a dialogue with Chinese traditional culture and philosophy, even where it steps entirely outside traditional cultural consciousness. It is absolutely not a performance; it is a direct, serious, and critical act of sound for reflection on everything.
chen hongli
Chen Hongli
 10:30-11:00
This presentation explores a computer-aided composition (CAC) approach to structuring music by means of audio clustering and graph-search algorithms. Although parts of this idea have been studied, notably in corpus-based concatenative synthesis, musical genre recognition, and computer-aided orchestration, the challenge remains to integrate these techniques into the compositional process, not to generate sound material but rather to analyse, explore, and better understand it prior to scoring a musical piece. Unlike mainstream CAC tools, which mostly focus on generative methods, this project proposes an analytical approach to structuring music in order to inform and stimulate the creative process. More concretely, this short presentation reveals the overall algorithmic structure in order to examine the methodology, clarify some inherent problems, expose the limits of such an approach, and finally discuss ideas for other possible applications and developments. This will be supported first by a demonstration of an offline application and then by a demonstration of an online version of the same application.
Frédéric Le Bel
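The presentation's exact algorithms are not given here, but the named ingredients, audio clustering plus graph search, can be sketched with standard tools. In this hedged Python sketch the feature vectors are random stand-ins for analysed audio segments; k-means groups them, and a nearest-neighbour graph is searched for a path between two segments.

```python
# Hedged sketch of the named ingredients: k-means clustering of audio
# descriptors plus graph search. The feature vectors below are random
# stand-ins for 40 analysed audio segments (e.g. MFCC means).
import numpy as np
from sklearn.cluster import KMeans
import networkx as nx

rng = np.random.default_rng(0)
features = rng.normal(size=(40, 13))

labels = KMeans(n_clusters=5, n_init=10).fit_predict(features)

# Similarity graph: connect each segment to its 3 nearest neighbours.
G = nx.Graph()
for i, fi in enumerate(features):
    dists = np.linalg.norm(features - fi, axis=1)
    for j in np.argsort(dists)[1:4]:
        G.add_edge(i, int(j), weight=float(dists[j]))

print(labels[:10])                     # cluster membership of segments
if nx.has_path(G, 0, 39):              # a path = a gradual timbral trajectory
    print(nx.shortest_path(G, 0, 39, weight="weight"))
```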
 11:30-12:00  Antescofo 1.0 Jean-Louis Giavitto
 12:00-13:00  The Snail: Visualize Sound Thomas Hélie, Charles Picasso (IRCAM)
 13:00-18:00

Edward Lun, Lewis Kemmenoe, Jungwon Jung – Human Interaction Synthesizer

This project is based on the three main sensory channels, visual, auditory, and kinesthetic, and on how they interact and manifest themselves in relation to the movement of the human body.
The installation is built with multiple sensors: when a participant walks through different parts of the sensitive area, certain frequencies or a musical phrase are played through preset multi-directional speakers, creating a spatial sound experience informed by the movements of the participant.
The sets of sensors track these movements, allowing us to transfer movement into sound and visuals, bringing the two elements into equilibrium and illustrating them both in a reconstructed way.

Derck Littel – Sound Sniffer

As a result of a collaboration with the Royal College of Art, I propose an interactive musical performance or demo of a newly designed instrument, the Sound Sniffer. This tool audifies the high-frequency electromagnetic fields found in any electric source, such as neon lights, computers, phones, alarm systems, and so on.
By exploring the communicative possibilities of this medium, the poster/demo aims to demonstrate:
– The interactive possibilities of the tool in relation to lived experience.
– How new technologies can provoke awareness of potential health issues in relation to sound.
In November 2016 I premiered this instrument at the opening of the new Design Museum in London; below is a SoundCloud link to the live recording. https://soundcloud.com/derck-littel/design-museum-take-3

Junhyeok Shin – Bridges

For humankind living in this age, connection is one of the most significant forms of communication for stepping forward. In this regard, it cannot be denied that IRCAM is an important center of this platform. Through its creative interchange between artists and researchers, people naturally come here to explore sound and technology. In this context, I realized that these extraordinary explorations have made it possible to build densified bridges across the gaps: between artists and researchers, between us and the public. From this idea, my poster visualizes these ‘bridges’, densified through all the communication on this platform. The visualized bridges symbolize the creative and valuable exchanges of the artists and researchers who explore and work in contemporary times through this platform.
At the same time, looking toward IRCAM’s future, the poster represents the direction of bridges still to be explored.

Edward Lun, Lewis Kemmenoe, Jungwon Jung, Derck Littel, Junhyeok Shin (Royal College of Art)

Time Title Speaker(s)
 11:00-18:00
Installation: see the full description under Wednesday, March 15.
Lorenzo Bianchi Hoesch

Time Title Speaker(s)
  10:00-18:00
Installation: see the full description under Wednesday, March 15.
Benoît Villemont

Time Title Speaker(s)
 10:00-18:00
Installation: see the full description under Wednesday, March 15.
Tanguy Clerc 

Time Title Speaker(s)
 15:00-18:00  VERTIGO: Conferences Franck Varenne, Bruno Lévy, Gaël Seydoux, Frédéric Kaplan, Olivier Warusfel and Markus Noisternig (IRCAM), Thierry Coduys (IanniX) / Gaël Martinet (Flux::)

Centre Pompidou, Grande Salle

20:30 Le sec et l’humide – Guy Cassiers

Friday, March 17

Makers & Design


Parallel sessions – at IRCAM and Centre Pompidou

Time Title Speaker(s)
 10:00-11:00

The activity of sound design has been established at IRCAM since the 2000s and has progressively developed in order to associate this polyvalent artistic discipline with scientific knowledge, methodologies, and tools, especially from the field of auditory perception and cognition research. A new step of development is about to begin, which globally aims at considering sound design through the prism of design and the science of design as defined by Cross: “the study of the principles, practices and procedures of [sound] design” (Cross, 2001). This past, present, and future of sound design at IRCAM will be presented and illustrated in depth with two fields of application: i) in research, a presentation of the Skat-VG project, dealing with concepts and tools for sketching sounds with voice and gesture; ii) in pedagogy, a presentation of the Sound Design Master’s program at ESBA TALM (School of Fine Arts, Le Mans), and especially the workshop methodology applied annually with a mixed group of design and sound-design students and an industrial partner proposing a real use case.
Olivier Houix (Esba TALM site Le Mans, STMS Ircam-CNRS-UPMC), Nicolas Misdariis, Patrick Susini (IRCAM)
 11:00-11:45
Every sound has a social relation, but the development of new technological tools most often focuses on musical or sonic output. Software and hardware originally designed for music-making are constantly being adapted, hacked, and reconfigured for other design contexts; such recontextualisations point to new ways of thinking about how sound design can impact other disciplines.
With backgrounds in art, design, and music, and interests that cross over into the social, political, medical, computing, and physical sciences, the Social Sound Research group at the School of Communication, Royal College of Art, London, is rethinking the potential of sound as a key area for contemporary communication design. This presentation explores how sound can be used to articulate, enhance, and modulate our societies, examining how sound practice can affect and shape contemporary human experience and social interaction.
Drawing on examples ranging from identifying and protecting soundmarks to architectural practice, public announcements, and public art, we demonstrate the need for more holistic approaches to sound in the public realm, informed both by social design and by sophisticated, engaging sonic results.
Matt Lewis, Cecilia Wee, Will Renel (Royal College of Art)
 11:45-12:00  Break
 12:00-13:30  VERTIGO
Panel Discussion: Makers, Designers and Collaborative Economy
Moderator: Anne-Cécile Worms
Sébastien Broca, Amélie Capon, Marie-Christine Bureau,
Vincent Guimas
 13:30-15:00  Buffet / Student Posters / Installations
 15:00-16:30  VERTIGO
Conferences
Frédéric Bevilacqua, Emmanuel Fléty, Hugo Placido da Silva, Adrien Mamou-Mani, Pauline Eveno
 16:30-18:00  VERTIGO
Panel Discussion: Design
Moderator: Marie-Ange Brayer
Nicolas Henchoz, François Brument, Sylvain Lefevre, Paola Antonelli, Alain Prochiantz, Alain Connes

Time Title Speaker(s)
 15:00-15:30
William Burroughs used cut-ups to produce random concatenations of word images and to free prose writing from the ‘tyranny of linearity’. CataRT disassembles recorded audio and reassembles it in non-linear forms, concatenating grains of sound to produce new, unforeseen sonic combinations. My project intends to bridge the gap between the macro-, meso-, and micro-structures of sound gestures in my free jazz performance practice.
At the macro level, OMax captures my improvisations and re-contextualizes larger sound gestures. Working at the micro level, CataRT concatenates audio grains from my performance to produce more texture-like sound gestures. However, at the meso level, where recognizable phrases and motifs are recorded, analyzed, and displayed for navigation (i.e., shorter than OMax, longer than CataRT), there is a gap.
I want to configure CataRT so that, as a performer, I can record in real time, segment audio at phrase and motif lengths, and re-inject the concatenated results into my improvisations while continuing to play. This will give me three levels of concatenated sound gestures that can form new contexts with which my own improvising can interact.
Glen Hall
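The meso-level behaviour described above amounts to corpus-based selection at phrase length. A hedged Python sketch, with made-up descriptor values standing in for analysed phrases, shows the selection step: normalize the descriptors and re-inject the phrase nearest to a live target.

```python
# Hedged sketch of meso-level selection: recorded phrases summarised by
# descriptors (duration, mean pitch, loudness); the phrase nearest to a
# live target descriptor is chosen for re-injection. CataRT itself does
# this per grain in Max/MSP; only the level differs. Values are made up.
import numpy as np

phrases = np.array([
    [1.2, 220.0, -12.0],   # (duration s, mean pitch Hz, loudness dB)
    [0.8, 330.0, -9.0],
    [2.5, 196.0, -15.0],
    [1.6, 440.0, -6.0],
])

def nearest_phrase(target, corpus=phrases):
    """Index of the phrase whose descriptors are closest to the target."""
    scaled = (corpus - corpus.mean(0)) / corpus.std(0)   # normalise axes
    t = (np.asarray(target) - corpus.mean(0)) / corpus.std(0)
    return int(np.argmin(np.linalg.norm(scaled - t, axis=1)))

print(nearest_phrase([1.0, 300.0, -10.0]))   # -> phrase to re-inject
```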
 15:30-16:30  Hands-On Session: CataRT Diemo Schwarz (IRCAM, ISMM)

Time Title Speaker(s)
 10:00-12:00  Hands-On Session: Antescofo Grégoire Lorieux (IRCAM)

Time Title Speaker(s)
 14:00-18:00

Anna Ridler – Speaking in Tongues

Speaking in Tongues is part of a project that explores and tries to make visual the different types of information conveyed by the human voice, and how machine learning interacts with this. Using Max/MSP and the Google Cloud Speech API, the project analyses in real time what the user says and classifies it according to a range of possibilities (e.g. gender, education level, adherence to a linguistic standard norm), which then triggers a certain video. But as voices are not static, users may find themselves moving in and out of different videos the longer they speak into the microphone (the project, with hints of Joseph Weizenbaum’s ELIZA, repeats back to the user what they have just said as a question at intervals to encourage use), or find themselves, unable to switch voices, “locked” into one video.

Jen Haugan, Lara Poe – Sonifying Noise Pollution: The Silent Killer

“Sonifying Noise Pollution: The Silent Killer” is a project in which Max/MSP is used to sonify road-noise-pollution statistics.
Road traffic is one of the main sources of noise pollution in cities, and its impact on human health is well documented. This project aims to highlight the extent of noise pollution from road-traffic sources in the UK.
Detailed datasets were obtained from the European Environment Agency on the percentage of people exposed to dangerous levels of noise from road-traffic sources across different UK cities. Each city was then assigned a specific colour based on the number of people exposed to noise levels above 65 dB.
The colour-coded map of UK cities was then loaded into Max/MSP, where the colour values were unpacked and each colour channel was assigned a specific frequency.
The public were then able to click on the various colour-coded cities on the map to explore each city’s road-noise-pollution levels.
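The Max/MSP patch is not reproduced here, but the mapping it describes, unpacking a city's colour and assigning each channel a frequency, can be sketched in Python; the base frequencies and example colour are assumptions.

```python
# Hedged sketch: unpack a city's RGB colour and let each channel drive
# one sine oscillator, so more exposed (redder) cities sound higher in
# that band. Base frequencies and the example colour are assumptions.
import numpy as np
from scipy.io import wavfile

SR = 44100

def city_chord(rgb, seconds=2.0):
    """Mix three sines whose frequencies scale with the R, G, B channels."""
    t = np.arange(int(SR * seconds)) / SR
    base = (220.0, 440.0, 880.0)          # one register per colour channel
    out = sum(np.sin(2 * np.pi * base[i] * (1 + rgb[i] / 255) * t)
              for i in range(3))
    return (out / 3 * 32767).astype(np.int16)

wavfile.write("city.wav", SR, city_chord((200, 40, 40)))  # placeholder colour
```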

Anna Ridler, Jen Haugan, Lara Poe  (Royal College of Art)

Time Title Speaker(s)
  13:30-14:30

Chloé X Ircam v2 – The Dawn (interactive mix) restages Chloé’s music as an interactive experience using smartphones. The players collectively create their own mix of Chloé’s composition as a function of their proximity and motion, exploring musical and personal affinities.
Listening with headphones, each player embodies a single track of the mix, controlled through the device’s inclination. When players approach one another, their sounds blend into a common mix in their earphones.
Wandering about, each player’s mix evolves along a unique trajectory through encounters and interactions with others. The web audio technologies used in this performance were developed as part of the CoSiMa (Collaborative Situated Media) and WAVE (Web Audio Visualisation and Editing) research projects, supported by the Agence nationale de la recherche and coordinated by IRCAM.
Norbert Schnell (IRCAM)
  10:00-18:00
Installation: see the full description under Wednesday, March 15.
Benoît Villemont 

Time Title Speaker(s)
 10:00-18:00
Installation: see the full description under Wednesday, March 15.
Tanguy Clerc 

Time Title Speaker(s)
 17:30-18:00  Conclusions


Last update: March 10, 2017