Ircam Forum Workshops Paris – Program 2015

IRCAM and Gaîté lyrique

From 24th to 26th November 2015, in partnership with RBMA

Follow and Share on #ircamforum!
Highlights 2015:
– Latest Sound and Music Technologies from IRCAM R&D: Antescofo, AudioSculpt, Spat
– Social Events: IRCAM Live Concert, Music Industry Panels with Cycling ’74, Ableton, Flux::, and student posters.
– Workshops and presentations on embedded devices: Raspberry Pi, Max gen~ export, FAUST, and Web Audio.

Tuesday, November 24


Gaîté lyrique, Conference Room

Time Title Speaker(s)
9:30-10:00 Welcome Session Arshia Cont, Paola Palumbo (IRCAM)
10:00-10:30 IRCAM research and development news Hugues Vinet (Director R&D IRCAM)
10:30-11:00 News on Max Joshua Kit Clayton, David Zicarelli, Emmanuel Jourdan (Cycling ’74)
11:00-11:30 Break
11:30-12:00 Orchids New Version Philippe Esling, Daniele Ghisi (IRCAM)
12:00-12:30 Antescofo News José Echeveste (IRCAM, MuTant Team)
12:30-1:00 Spat News Olivier Warusfel, Thibaut Carpentier (IRCAM, EAC Team)
1:00-2:30 Lunch
2:30-3:00 News from the Analysis-Synthesis Team Axel Roebel (IRCAM, Analysis-Synthesis Team)
3:00-3:30 Computer assisted improvisation Jérôme Nika, Gérard Assayag (IRCAM, Musical Representations Team)
3:30-4:00 OpenMusic 6.10 Jean Bresson, Karim Haddad (IRCAM, Musical Representations Team)
4:00-4:30 Break
4:30-5:00 Web audio Norbert Schnell, Samuel Goldszmidt (IRCAM)
5:00-5:30
Faust is a high-level programming language created to describe, in a concise and effective manner, different processes for sound synthesis and processing. One of its specificities is that it is entirely compiled, making it possible to obtain applications and plugins for a broad range of platforms from a single Faust program. Following an introduction to Faust, the talk will look into the strategies used to simplify the use of a compiled language in a musical context, particularly the Faust compiler embedded in other applications, developed during the ANR INEDIT project (IRCAM, LABRI, GRAME), and the use of the Internet as a compilation platform for rapid prototyping, developed during the ANR FEEVER project (MINES, INRIA, UJM, and GRAME).
Yann Orlarey (GRAME)
5:30-6:00 Modalys Robert Piechaud, Carlo Laurenzi (IRCAM)

 

20:00-23:00 IRCAM Live Concert – Holly Herndon

Wednesday, 25 November


Parallel Sessions – At IRCAM

Time Title Speaker(s)
10:00-10:30
In this talk we present the rationale for the GiantSteps project, a collaborative effort by music information retrieval (MIR) researchers and creative music practitioners, funded by the European Union. The project aims to create musical tools and instruments that provide users with intuitive and meaningful interfaces to complex collections of samples and musical data through expert agents, music analysis algorithms, and new interfaces. We outline the overall concept of the project and the methodological framework for user involvement, and suggest a number of ways we might support musicians both in imagining future instruments for creative musical expression and in creatively exploiting vast collections of music samples.
Kristina Andersen (STEIM, NL), Sergi Jordà (MTG – UPF, ES)
10:30-11:00
“A laptop is as small as you can get for computer music. You need a screen to see why it doesn’t work, a keyboard, a mouse…”, stated Miller Puckette in his presentation at IRCAM’s Pico Computer Symposium earlier this year. That sounds like a challenge. What about the Raspberry Pi? It is fast enough to do advanced audio processing in real time and small enough to sit in a stompbox. With a miniature touch screen and robust audio connectors it can make an interesting hybrid between stage-proof music hardware and general-purpose computer, a format which suits the experimental performer. Katja Vetter took up this challenge and will present the current status of her project.
Katja Vetter
11:00-11:30 Break
11:30-12:00 Vignette Coala project
Nicolas Mondon (composer in residency), Adrien Mamou-Mani (IRCAM, Instrumental Acoustics Team), Simon Milon (violin)
12:00-12:30
CatOracle is the result of an Artistic Research Residency this year at IRCAM entitled “A Factor Oracle for Timbre.” Based on the Forum MuBu for Max library, it is inspired by two powerful existing Forum tools: CataRT for corpus-based concatenative synthesis and OMax for machine improvisation. Using a user-defined list of audio descriptors (based on ircamdescriptors~ or yin~), live or prerecorded audio is analyzed to construct an “Audio Oracle” as a basis for improvisation. CatOracle also extends the features of classic concatenative synthesis to include live interactive audio mosaicking and score-based transcription using the bach library for Max. The project proposes applications not only to live performance of written and improvised electroacoustic music, but also to computer-assisted composition and musical analysis.
Aaron Einbond (composer in residency), Séverine Ballon (cello)
12:30-2:00 Buffet
2:00-4:00 Round Table: Conference Detour with Holly Herndon Frédérick Rousseau, Holly Herndon & Mat Dryhurst, Assimilation Process, Marco Donnarumma, Gérald Kurdian, Alex Augier, Loïc Touzet, collectif Kompost, Jean-Yves Leloup
4:00-4:30 Break
4:30-5:00
We will present the spatialization writing techniques used for the sound creation Îles éphémères* by Cécile Le Prado, using IRCAM’s Spatialisateur piloted by Ableton Live via Max and Max for Live. Îles éphémères is a sound work installed during the summer of 2015 at the Prieuré de La Charité-sur-Loire, a monument located on the ancient Via Lemovicensis on the road to Compostela. The work takes the idea of a passage, of listening situations altered by travelling, by movement, by walking, and creates a sort of landscape using sound recordings, interviews, and writing connected to the land. The musical space changes depending on the listening space, its configuration, and its acoustic qualities; spatialization is one of the essential parameters of the writing. *Commissioned by the Prieuré – Cité du Mot – CCR de La Charité-sur-Loire.
Manuel Poletti, Julien Chirol (Music Unit), Cécile Le Prado (composer)
5:00-5:30
While there is a well-established workflow for stereo production in DAWs, options have been more limited when working with Ambisonics. The Ambisonic Toolkit (ATK) brings together a number of tools and transforms for working with first-order Ambisonic surround sound, and includes intriguing possibilities for spatial soundfield imaging. These tools have previously only been available for public release via the SuperCollider real-time processing environment. Cockos Reaper is a reasonably priced and flexible DAW, popular among many composers and sonic artists working with spatial sound. Reaper’s versatile design conveniently supports the ATK’s Ambisonic workflow model. Using the JSFX text-based scripting language, the ATK has now been ported to plugins for Reaper, complete with intuitive graphical user interfaces.
Trond Lossius
5:30-6:00 Hamelin project Bernard Szajner (composer)
6:00-6:30 Creative Process in Dels dos principes, septet with juggler
Henry Fourès (composer), Augustin Muller (Computer music designer)

Only participants with laptop (18 seats)

Time Title Speaker(s)
10:00-11:00 Meet-up OpenMusic Mikhail Malt, Jean Bresson (IRCAM)
11:00-11:30 Break
11:30-12:00
Holopad proposes an original way to play spatialization in real-time on pressure enabled multi-touch tablets. It unites the traditional “one sound source point” virtual acoustic model with a more “acousmatic” approach to sound in space, making it possible to spread one sound over multiple source points in the virtual space.
Progest is focused on how to define trajectories, and more precisely spatial behaviours, by gesture. We noticed that during the sound spatialization process, parametric trajectory tweaking is often a difficult task. This led to the idea of a system that could analyse an input gesture by means of its temporal and spatial characteristics, and then continue it “à la manière de”. Such gesture characteristics could be periodic but also stochastic, allowing the representation of a wide class of spatial behaviours. This work is carried out in collaboration with Valentin Emiya (LIF-Marseille) and Mathieu Lauriere (LJLL-Paris).
Charles Bascou (GMEM-CNCM-Marseille)
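Holopad's actual algorithm is not described in the abstract above; as a hedged illustration of the general idea of spreading one sound over multiple virtual source points, a common approach is an equal-power gain distribution, sketched below (the function name and weighting scheme are our own, not Holopad's):

```python
# Illustrative sketch (not Holopad's implementation): distribute one sound
# over several virtual source points so that total power stays constant.
import math

def spread_gains(weights):
    """Map per-point weights to gains whose squares sum to 1 (equal power)."""
    total = sum(weights)
    if total <= 0:
        raise ValueError("weights must contain at least one positive value")
    return [math.sqrt(w / total) for w in weights]

# One source spread over three points, with the middle point emphasized.
gains = spread_gains([1.0, 2.0, 1.0])
# The squared gains sum to 1, so perceived loudness is preserved
# however the sound is spread across the virtual space.
```

Keeping the sum of squared gains at 1 is the standard constant-power panning constraint, which avoids loudness jumps as the spread changes.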
12:00-12:30
Contribution CNCM – La Muse en Circuit. The MusineKit project originated from the meeting of Sensomusic and La Muse en Circuit, and from the realization that digital tools intended to facilitate the discovery of musical creation are few and often technically outdated: their use is limited by restrictive interfaces, or their complexity makes them inaccessible to the general public (children, teachers, teenagers). La Muse en Circuit – Centre national de création musicale – has considerable experience in digital audio creation and in the practices related to the transmission of contemporary music. Its educational projects have always used technologies developed for the general public and tools intended to be used live; technologies and tools that must be effective and fail-proof. The company Sensomusic has been developing the software Usine for 20 years with the same goals in mind, offering professional users a stable tool for performing music on stage and for interactive installations. Among Usine's many qualities, two characteristics are particularly attractive: it is quick to learn, and it can be piloted by, or interact with, the majority of interfaces on the market today (networks, MIDI interfaces, Leap Motion, joysticks, Wiimote, graphics tablets, video signals, etc.). MusineKit is the fruit of the combined skills of Sensomusic and La Muse en Circuit: a resolutely modern tool, in sync with our digital practices (and those of its users). Anyone can learn the software, from young children to adults, from a simple discovery of the world of sound to musical production.
Sébastien Béranger
12:30-2:00 Buffet
2:30-3:00
The TaCEM project (Technology and Creativity in Electroacoustic Music), funded by the United Kingdom’s Arts and Humanities Research Council (AHRC), investigates the relationship between creative process and technological innovation on the basis of eight key works from the electroacoustic repertoire. A significant part of the dissemination of this investigation takes the form of software that embeds, on the one hand, interactive tools for engaging aurally with both the structures of the work in question and the technologies used and, on the other hand, filmed materials with the composer and, where applicable, his or her collaborators. This presentation demonstrates the software developed for pieces by two major figures in the development of real-time electronics at IRCAM, Philippe Manoury’s Pluton (1988) and Cort Lippe’s Music for Tuba and Computer (2008), and compares the aesthetic and technological aspects of these works and of their real-time performance environments.
Michael Clarke, Frédéric Dufeu (CeReNeM, University of Huddersfield, UK), Peter Manning (Durham University, UK)
3:00-3:30
The IndescripTools were, in the beginning, a bouquet of educational applications designed to introduce musicologists to the use of sound descriptors (energy, fundamental frequency, attacks, brilliance) for musicological analyses. I used these tools when teaching the Master’s-level course “Analysis of the Musical Signal” at the Sorbonne. Currently, this tool is being developed into an environment for the exploration of audio descriptors for musical analysis and research. It includes the analysis of 50 audio descriptors, the calculation of the brightness standard deviation, and similarity matrices. Users can import analyses from other sources (zsa.descriptors, Ircamdescriptors~, Sonic Visualiser, etc.) and export their own analyses in text format. The IndescripTools are developed in the Max environment (© www.cycling74.com), using <Ircamdescriptors~> (© Ircam), the <Mubu&Co> library (© Ircam), and zsa.descriptors (© Jourdan & Malt).
Mikhail Malt
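As a toy illustration of one of the analyses mentioned above, a similarity matrix can be built from per-frame descriptor values. The sketch below is our own and is not part of the IndescripTools; the descriptor names and data are hypothetical:

```python
# Illustrative sketch (not IndescripTools code): a cosine self-similarity
# matrix computed from per-frame audio descriptor values.
import numpy as np

def self_similarity(frames: np.ndarray) -> np.ndarray:
    """Cosine self-similarity of a (n_frames, n_descriptors) array."""
    # Standardize each descriptor column so that, e.g., fundamental
    # frequency (hundreds of Hz) does not dominate energy (around 0-1).
    z = (frames - frames.mean(axis=0)) / (frames.std(axis=0) + 1e-12)
    norms = np.linalg.norm(z, axis=1, keepdims=True)
    unit = z / np.maximum(norms, 1e-12)  # guard against all-zero frames
    return unit @ unit.T  # entry (i, j): similarity of frames i and j

# Four frames of three hypothetical descriptors: energy, f0 (Hz), brightness.
frames = np.array([[0.20, 110.0, 0.50],
                   [0.21, 112.0, 0.52],
                   [0.90, 440.0, 0.80],
                   [0.88, 438.0, 0.79]])
S = self_similarity(frames)
# Frames 0 and 1 are alike (entry near 1); frames 0 and 2 are not.
```

Blocks of high values along the diagonal of such a matrix reveal sections of homogeneous timbre, which is what makes it useful for musicological analysis.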
3:30-4:00 Analysis and Repertory Session: Sampo – Gestural interface for acoustic instruments Alexander Mihalic (MUSINFO)
4:00-4:30 Break
4:30-6:00 Hands-on AudioSculpt Marco Liuni (Ircam)

Time Title Speaker(s)
10:00-11:00 Hands-on Modalys Lorenzo Pagliei (IRCAM)
11:00-11:30 Break
11:30-12:30 Hands-on Orchids Philippe Esling and Grégoire Lorieux (Ircam)
12:30-2:00 Buffet
2:30-3:30 Hands-on Spat Eric Daubresse (Ircam)
4:00-4:30 Break
4:30-6:00 Hands-on Max (beginner) Emmanuel Jourdan (Ircam)

Time Title Speaker(s)
10:00-11:00 Hands-on gen~ -> code export Emmanuel Jourdan, Joshua Kit Clayton (Cycling ’74)
11:00-11:30 Break
11:30-12:30 Hands-on Raspberry PI & PureData Katja Vetter
12:30-2:00 Buffet
2:00-4:00 Antescofo User Symposium MuTant Team & Invited Speakers (Javier Munoz, Christian Philipp Pfeiffer)
4:00-4:30 Break
4:30-6:00 Augmented Instrumentarium SmartInstruments project team
6:00-6:30 Demo Hamelin Bernard Szajner (composer)


Thursday, November 26


Parallel Sessions – At IRCAM

Time Title Speaker(s)
10:00-12:30 Round Table: New technological products, with Ableton, Flux::, JUCE, ROLI, and IanniX Moderated by Frédérick Rousseau (IRCAM); with Maria Gkotzampougiouk (Ableton), Jean Lochard (IRCAM), Gael Martinet (Flux::), Jean-Baptiste Thiebaut, Dave Manon (ROLI), Guillaume Jacquemin, Thierry Coduys (IanniX)
12:30-2:30 Buffet with Student Posters
2:30-3:00
The “Thermophones” project was initiated eight years ago by Jacques Rémus, who began by organizing several concerts and automatic installations using metal pipes that start singing due to the “pyrophone” sound phenomenon known as the Rijke effect.
Several other artists have also given performances based on this principle, using propane or hydrogen burners (Trimpin, Michel Moglia, …).
In collaboration with Professor Steve Garrett of Penn State University and several researchers in French laboratories, Jacques Rémus then started developing an instrumentarium with pipes locally heated by an electric resistance. The sound production relies on a thermoacoustic effect that has generally been studied and exploited for industrial applications (far from music!).
Thermoacoustics is a young science at the interface of several disciplines: fluid mechanics, acoustics, and heat transfer.
Sounds produced by thermophones are pure, deep, powerful, and beautiful, and they can inspire dedicated musical writing. However, from a technical point of view, there are many parameters to take into account in order to turn thermophones into musical instruments that can be controlled and used to play music.
Recently Jacques Rémus initiated a collaboration with Diana Baltean-Carlès and other researchers from LIMSI-CNRS. Together they applied for and obtained a grant from the 2015 Diagonale Art and Science program.
Jacques Rémus
3:00-3:30
“Balaenoptera”, for bass clarinet, electronic sounds, and live electronics, is a work premiered on 26 May 2015 at the ISSM “P. Mascagni” in Leghorn. It was inspired by a real episode: a whale beached south of Leghorn some years before. The sound material derives from whale and bass clarinet sounds. This presentation focuses mainly on the use of OpenMusic and some of its libraries (OM-Spat, OM-SuperVP, OM-Chant, OMChroma) to generate and spatialize sounds, which made it possible to unify processes and materials for both the instrumental and electronic parts. A concise description of some formal and compositional techniques in Balaenoptera is also given, at least for those aspects related to algorithmic composition.
Fabio De Sanctis De Benedictis
3:30-4:00
As your Max patches for the stage, music performances, or sound installations grow in size, it becomes increasingly important that they do not run wild. Jamoma for Max is a package of externals and patches that helps you stay in control of your patch. Each section of your patch is neatly turned into a model. You gain a new sense of freedom in how interfaces can be designed, and it is a relief to discover that the patch can be instantiated in a predictable and reliable way. At any moment you can ask your patch how it is currently configured, and use this to build presets and cues for future performances. How do you want interactions to operate? Mapping between parameters is a breeze, and invites playful experimentation. Advanced mappings with interpolations recreate long-missed sound effect processes from the discontinued Hipno plugin suite. Many models are available for spatial audio, and AudioGraph turns your patch cord into an audio snake cable carrying many channels.
Trond Lossius, Theo de la Hogue, Julien Rabin
4:00-4:30 Break
4:30-5:00
Since my last presentation of Logelloop, almost 10 years ago during the Forum workshops, the software has grown significantly and is now at version 4.4. One of the functions I would like to highlight is the possibility of using any Max patch in Logelloop as an insert. Logelloop natively manages audio inputs, MIDI connections, and OSC, and has a powerful script/macro system, which makes working with Max patches much easier. We can easily pilot patches and drive them with scripts whose language is now very advanced, making it possible to use variables, evaluate conditions, create loops, write complex functions, and more.
Philippe Ollivier
5:00-5:30
November is a commercial music font project started in 1999 which covers a wide range of musical characters, from the Renaissance to today’s music. Besides presenting the various categories of symbols available in the recently released version 2.0, I will talk about little-known aspects of music typography, such as metadata, SMuFL (Standard Music Font Layout), compatibility with notation software, and so on.
http://forumnet.ircam.fr/product/novemberfont/
Robert Piéchaud
6:00-7:00 Cocktail party

Time Title Speaker(s)
2:30-4:00 Hands-on SuperVP in Max
Marco Liuni (Ircam)

Time Title Speaker(s)
10:00-12:30 Hands-on Web Audio Norbert Schnell, Samuel Goldszmidt (IRCAM)
12:30-2:30 Buffet
2:30-4:00
Faust is a functional synchronous programming language designed especially for real-time sound signal processing and synthesis. The objective of this workshop is to present Faust and its possibilities, notably in relation to other environments (Android or iOS) and to musical languages such as Max and Puredata. We will also look at different tools, including FaustLive, an environment for multi-platform development. Participants should come with their own laptop with FaustLive installed.
Yann Orlarey (GRAME)
4:00-4:30 Break
4:30-5:00
Upon the simultaneous presentation of two specific pure tone frequencies, non-linear distortion products are produced on the basilar membrane in the cochlea and result in the perception of additional tones not present in the acoustic space. Cubic and quadratic difference tone synthesis allows the composer to predictably evoke auditory distortion products at frequencies 2f1-f2 and f2-f1. This demonstration will present a variety of difference tone synthesis techniques while discussing concepts of spatialization, the biomechanics of hearing, and special considerations for incorporating distortion products in music.
Alex Chechile (CCRMA, Stanford University)
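The distortion-product frequencies named above are simple arithmetic on the two stimulus frequencies, so they can be checked with a short sketch (illustrative only; the function name and structure are ours, not part of the demonstration):

```python
# Illustrative sketch: frequencies of the auditory distortion products
# described above, for two pure tones with f1 < f2.

def difference_tones(f1: float, f2: float) -> dict:
    """Return the quadratic (f2 - f1) and cubic (2*f1 - f2) difference tones."""
    if not 0 < f1 < f2:
        raise ValueError("expected 0 < f1 < f2")
    return {
        "quadratic": f2 - f1,    # f2 - f1
        "cubic": 2 * f1 - f2,    # 2*f1 - f2 (audible when 2*f1 > f2)
    }

# Two tones at 1000 Hz and 1200 Hz evoke a quadratic difference tone
# at 200 Hz and a cubic difference tone at 800 Hz.
tones = difference_tones(1000.0, 1200.0)
```

For closely spaced tones the cubic product 2f1 - f2 sits just below f1, while the quadratic product f2 - f1 can reach down into the bass register, which is what makes both predictably usable by a composer.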
5:00-5:30
i-score is an open source intermedia sequencer for writing interactive scenarios. It has been designed to control multiple applications (Max, PD, Live, Modul8, etc.) over the network using the OSC protocol. Developed during the OSSIA project (funded by the ANR), the 1.0-beta version of i-score offers a graphical formalism for writing the temporal relations and conditions between events that can be triggered interactively. The creation and execution of multiple strata of independent times offers a completely new way to manage parameter automation, assisted by interpolation tools between states or by drawing curves with precision. The result of exchanges between scientists, engineers, and users, this presentation of the software will focus on user problems identified during the development of controls for the management of live performances, installations, and other interactive systems.
Théo de la Hogue, Jean-Michaël Célérier, Pierre Cochard (GMEA Albi)
5:30-6:00
Our application for Mac and PC is a hands-free MIDI controller. Written in Max, it is designed to create sound atmospheres using Soundpainting, a gestural language for artistic creation. Walter Thompson, its designer, has taught us this discipline and is personally following the project’s progress. Our objective is to recreate the possibilities of sound response offered by this language. Our application communicates with a VST host; each participant is invited to create his or her own virtual orchestra, turning the experience into a game, notably in terms of style. We plan to use motion recognition of Soundpainting gestures as a replacement for keyboard/mouse commands. As this part of the project is still in development, we currently use voice recognition: the user can create signs and call out their names, triggering a corresponding sound response from the virtual orchestra.
Renaud Felix

Time Title Speaker(s)
2:30-4:00 Ambisonic Demo
Frédérick Rousseau (IRCAM)


Last updated: October 15, 2015