
Download the Forum Program

Time | Hall A1 | Conferencial 1
8:30am-9:00am Registration
9am-9:30am IRCAM Forum Staff, Centro Gabriela Mistral, Pontificia Universidad Catolica de Chile, Universidad de Chile and Center for Mathematical Modeling 
Opening Ceremony
Program Presentation
9:30am-10am Jean LOCHARD (IRCAM)
From artistic discovery to artistic creativity and vice versa: IRCAM news
10:00am-10:30am Carlos AGON (IRCAM)
Presentation
Transformational approaches have a long tradition in formalized music analysis, in the American as well as the European tradition. This paradigm has become an autonomous field of study making use of increasingly sophisticated mathematical tools, ranging from group theory to categorical methods. Within the transformational approach, Klumpenhouwer networks are prototypical examples of music-theoretical constructions that describe the inner structure of chords by focusing on the transformations between their elements. In this presentation, we summarize our proposal for a generalized framework of Klumpenhouwer networks based on category theory.
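To make the transformational vocabulary concrete, here is a minimal sketch (our illustration, not the talk's categorical formalism) of the pitch-class transpositions and inversions that label the arrows of a Klumpenhouwer network:

```python
# Minimal sketch: the pitch-class transformations underlying K-nets,
# modelled as functions on Z/12.

def T(n):
    """Transposition T_n: x -> x + n (mod 12)."""
    return lambda x: (x + n) % 12

def I(n):
    """Inversion I_n: x -> n - x (mod 12)."""
    return lambda x: (n - x) % 12

# A tiny K-net on the chord {0, 4, 7} (C major): nodes are pitch classes,
# arrows are labelled by the T/I transformations relating them.
edges = [(0, 4, T(4)), (4, 7, T(3)), (0, 7, I(7))]

for src, dst, f in edges:
    assert f(src) == dst  # each arrow maps its source onto its target
```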
10:30am-11am
11am-11:30am Break
11:30am-12pm Axel OSSES (CMM)
Listening to the body: sound and tomography
12pm-12:30pm
12:30pm-1pm Jean-Louis GIAVITTO (IRCAM)
Presentation
The notion of space has often been summoned in music theory to formalize the analysis of pieces, and the terms “pitch space”, “rhythmic space” or “compositional space” abound in the musical literature. While the spatial metaphor is a good support for musical intuition, and thus a useful pedagogical tool, it is also a heuristic that, from a technical point of view, has enabled the development of new computer tools to assist composers in their creative processes.

This presentation explores some spatial representations of musical notions arising from elementary concepts in algebraic topology. The initial idea is to represent simple musical objects (e.g. notes, chords or intervals) by elementary spatial domains, and their relations (e.g. co-occurrence or succession) by neighborhood relationships. The topological notions of incidence, path, boundary, obstruction, etc. are then used to explain and characterize the underlying musical structure. For example, a musical sequence can be represented as a path in a cellular complex materializing the structure of chords; the type of path reveals the musician’s compositional strategies, and applying geometric operations to trajectories leads to musical transformations of the initial piece.

We will attempt to explain the “unreasonable effectiveness” of spatial representations in music by means of natural links between the organization of percepts and topological relations. In particular, it is possible to bridge the gap between the topological approach outlined here and the formal concept analysis developed in symbolic learning.
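As a toy illustration of the path idea (ours, not the presenter's actual tools), one can encode chords as cells of a complex and check which steps of a progression move between neighboring cells:

```python
# Minimal sketch: chords as cells, a progression as a path whose steps
# are licensed by shared notes; an empty intersection is an obstruction
# (a "jump" rather than a path move).

chords = {
    "C":  frozenset({0, 4, 7}),
    "Am": frozenset({9, 0, 4}),
    "F":  frozenset({5, 9, 0}),
    "G":  frozenset({7, 11, 2}),
}

progression = ["C", "Am", "F", "G", "C"]
for a, b in zip(progression, progression[1:]):
    shared = sorted(chords[a] & chords[b])
    kind = "step" if shared else "jump (obstruction)"
    print(f"{a} -> {b}: {kind}, shared notes {shared}")
```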
1pm-2pm Lunch – Posters
2pm-2:30pm Hector PARRA
Composition Masterclass: La plasticidad del gesto (The Plasticity of Gesture)
Carlos AGON (IRCAM)
Presentation
Computer-aided composition systems enable composers to write programs that generate and transform musical scores. In this sense, constraint programming is appealing because of its declarative nature: the composer constrains the score and relies on a solver to automatically provide solutions. However, existing constraint solvers often lack interactivity. To enable the composer to alter the constraints and navigate the solution space, we propose spacetime programming, a paradigm based on lattices and synchronous process calculi.
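The declarative flavor can be suggested with a toy brute-force solver (a sketch of the idea only; spacetime programming itself rests on lattices and synchronous process calculi):

```python
# Minimal sketch: the composer states constraints on a melody and a
# search procedure enumerates the solutions.

from itertools import product

PITCHES = range(60, 72)          # one octave of MIDI notes
LENGTH = 4

def ok(melody):
    ascending = all(a < b for a, b in zip(melody, melody[1:]))
    small_steps = all(b - a <= 4 for a, b in zip(melody, melody[1:]))
    starts_on_c = melody[0] == 60
    return ascending and small_steps and starts_on_c

solutions = [m for m in product(PITCHES, repeat=LENGTH) if ok(m)]
print(len(solutions), solutions[:3])
# An interactive solver would let the composer navigate and refine this
# space instead of receiving a flat list of solutions.
```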
2:30pm-3pm
3pm-3:30pm Jean-Louis GIAVITTO (IRCAM)
Presentation

The Antescofo system couples machine listening and a specific programming language for compositional and performative purposes. It allows real-time synchronization of human musicians with computers during live performance, especially in the context of mixed music (the live association of acoustic instruments played by human musicians and electronic processes run on computers).

During live performance, musicians interpret the score with precise and personal timing, whereby score time (in beats) is evaluated into physical time (measurable in seconds). For the same score, different interpretations lead to different temporal deviations, and a musician’s actual tempo can vary drastically from the nominal tempo marks. This phenomenon depends on the individual performers and the interpretative context. To be executed in a musical way, electronic processes should follow the temporal deviations of the human performers.

Achieving this goal starts with score following, a task defined as the real-time automatic alignment of the performance (usually through its audio stream) on the music score. However, score following is only the first step toward musician-computer interaction; it enables such interactions but gives no insight into the nature of the accompaniment or the way it is synchronized. Antescofo is built on the strong coupling of two components:

– The listening module of the Antescofo software infers the variability of the performance, through score following and tempo detection algorithms.
– The Antescofo language provides generic, expressive support for the design of complex musical scenarios between human musicians and computer media in real-time interaction. It makes explicit the composer’s intentions on how computers and musicians are to perform together (for example, should they play in a “call and response” manner, or should the musician take the lead, etc.).

This way, the programmer/composer describes the interactive scenario in an augmented score, where musical objects stand next to computer programs, specifying temporal organizations for their live coordination. During each performance, human musicians “implement” the instrumental part of the score, while the system evaluates the electronic part, taking into account the information provided by the listening module.

Content
——-
The presentation will focus on the Antescofo real-time programming language. This language is built on the synchrony hypothesis, where atomic actions are instantaneous. Antescofo extends this approach with durative actions. This approach, and its benefits, will be compared to other approaches in the field of mixed music and audio processing.

In Antescofo, as in many modern languages, processes are first-class values. This makes it possible to program complex temporal behaviors in a simple way, by composing parameterized processes. Beyond processes, Antescofo actors are autonomous, parallel objects that respond to messages and are used to implement parallel electronic voices. Temporal patterns can be used to make these actors react to the occurrence of arbitrary logical and temporal conditions.

During this lecture, we will explain how Antescofo pushes the recognition/triggering paradigm currently preeminent in mixed music toward the more musically expressive paradigm of synchronization, where “time-lines” are aligned and synchronized following performative and temporal constraints. Synchronization strategies are used to create specific time-lines that are “aligned” with another time-line. Primitive time-lines include the performance of the musician on stage followed by the listening machine, but may also include any kind of external process using a dedicated API to inform the reactive engine of its specific passing of time.

This presentation will be “example-oriented”, relying on actual uses of Antescofo to implement several variations on Piano Phase (Steve Reich), the Polyrhythmic machine (Yann Maresz) and Totem (Marco Stroppa).
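To illustrate the core timing problem the system addresses, here is a minimal sketch (ours, in Python; not Antescofo code) of evaluating score time in beats into physical time in seconds under a tempo estimated by a listening module:

```python
# Minimal sketch: electronic actions scheduled in beats must follow the
# performer's shifting tempo to land at musically correct moments.

def beats_to_seconds(event_beats, bpm_at):
    """Map beat positions to seconds, given bpm_at(beat) from a tempo tracker."""
    t, prev, out = 0.0, 0.0, []
    for b in event_beats:
        t += (b - prev) * 60.0 / bpm_at(prev)  # seconds per beat at current tempo
        out.append(t)
        prev = b
    return out

# A performer who accelerates: nominal 90 bpm drifting up to 120 bpm.
bpm = lambda beat: 90 + min(beat, 8) * 3.75
print([round(x, 2) for x in beats_to_seconds([1, 2, 3, 4], bpm)])
# The same beats under a flat 90 bpm would fall at different physical times.
```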
3:30pm-4pm Renzo FILINICH
Paper Presentation

“What is this face, less clear and clearer / The pulse in the arm, less strong and stronger / Given or lent? more distant than stars and nearer than the eye…” T. S. Eliot, Marina.

The Cholo Feeling reproduces moments of sensory release captured in shots that can be stretched, ripped and fractured over extensions of time. With the performer’s face in the foreground, depicted at monumental size, the public is exposed to flashes of this physical/virtual interface. This approach allows for an instrument that pursues intimacy and narrative, two aspects of live computer performance that are often difficult to achieve.

This action manifests itself by putting in tension the relations that arise through the real-time transformation of image and sound, under the new-media paradigm of communicating with one another. In the work, the image is blurred and generates a non-time between image and representation; under that paradigm, the performer takes on a state of virtual reality.

Inspired by research on granular synthesis, the work is an exploration of audiovisual sampling in performance. Drawing on the technique of scanning frames from photographs, the work reproduces several points in time simultaneously.
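As a generic illustration of the granular technique that inspires the work (not the piece's actual patch), a grain cloud can be sketched as follows:

```python
# Minimal granular-synthesis sketch: scatter short windowed grains taken
# from a source buffer across a stretched output timeline.

import numpy as np

def granulate(source, sr=44100, grain_ms=50, stretch=2.0, density=100):
    glen = int(sr * grain_ms / 1000)
    window = np.hanning(glen)
    out = np.zeros(int(len(source) * stretch) + glen)
    n_grains = int(density * len(out) / sr)
    rng = np.random.default_rng(0)
    for _ in range(n_grains):
        read = rng.integers(0, len(source) - glen)  # where to read a grain
        write = int(read * stretch)                  # stretched write position
        out[write:write + glen] += source[read:read + glen] * window
    return out / (np.max(np.abs(out)) + 1e-9)

tone = np.sin(2 * np.pi * 220 * np.arange(44100) / 44100)
cloud = granulate(tone)  # a 2x time-stretched grain cloud of the tone
```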
The Cholo Feeling seeks to explore and analyze the effect of this technology on the physical body, and how the body influences the technology during the performance, especially in relation to the problem of (re)presenting the ‘unrepresentable’, i.e., the sublime of the digital body. The sublime is understood as a tension between joy and pain: the pleasure of having a feeling of totality, inseparable from the pain of not being able to present an object equal to this idea of totality. The spectators are not simply engaged in passive contemplation, but are called to do what, according to Kant, cannot be done: to “present the non-representable” (1978, 119), to overcome what is possible for something else, to strive continuously for a judgment that can never be guaranteed.

Cholo is a term used in some Latin American countries as a marker of national identity; it generally refers to the mestizo population of indigenous and white descent, as seen across the Americas, leaving out whites or creoles, blacks, mulattos, Asians and indigenous descendants. This mixture was very frequent in the states or provinces of Latin American countries where the native population ended up being more than a third of the population. Due to geographical differences in its use, misinterpretations can occur in speech, so it is important to know the context in which the term is used before attributing a meaning to it. In some countries it is used as a pejorative term.

4pm-4:15pm Break
4:15pm-4:45pm Jean LOCHARD (IRCAM)
Workshop
Najo Modular Interface offers easy access to many of the processing techniques developed at IRCAM without having to be an expert in Max. Its highly intuitive graphical interface makes it possible to assemble several audio modules and facilitates their control via external devices (e.g. MIDI, graphical tablet, joystick, etc.). The NMI interface makes it easy for users to put together synthesis modules, to read samples, and to carry out sound processing so that they can create all sorts of different sounds. NMI includes a mixing table that makes it possible to balance and spatialize the sound sources that are derived from the different modules.
During this workshop, we will introduce NMI by exploring some of the synthesis techniques and effects used in « Tentative de réalité » for cello and electronics by Hector Parra.
Participants are asked to install Max 7.3 (www.cycling74.com) and NMI 2.3 on their laptops before the workshop.
Jaime ORTEGA (CMM)
Listening to the mountains: waves and mining
4:45pm-5:15pm
5:15pm-5:30pm Philippe ESLING (IRCAM)
Paper Presentation
Until now, research in artificial intelligence has focused on a mathematico-logical approach: the ability of the computer to solve problems formalized as sets of objective, supervised goals. However, driven by recent revolutionary advances, artificial intelligence is entering a new era, and should keep being driven to constantly push back previously established limits.

In a newly created team at IRCAM, we rather aim to understand creative intelligence, the peculiarity that fundamentally distinguishes human beings from other branches of the tree of life. In this endeavor, music offers an ideal environment for better understanding the creative mechanisms of intelligence. Indeed, musical creativity combines challenging theoretical questions with cognitive problems that are hard to formalize. In particular, the notion of musical time is a primordial and inseparable component of music. Musical time naturally develops on multiple scales (ranging from the identity of a sound to the entire structure of a piece), requiring a finer analysis. On the other hand, music can be transmitted in a variety of forms, ranging from scores to recordings; an object with a unique semantic meaning can thus be represented by a multitude of means, whose interactions must be understood. Finally, musical creativity is mostly exercised without a formal goal, an ideal case for understanding the creative mechanisms of learning. Hence, addressing these questions could give rise to a whole new category of creatively intelligent machines.

These dazzling questions that live inherently in music are also found in a tremendous variety of research fields, ranging from heart disease detection to environmental monitoring. Thus, by studying these theoretical questions through the prism of music, we can reach new generic mechanisms of learning that can radiate to several scientific domains. In this conference, we will discuss the latest advances in new models of learning that directly target creative processes: unsupervised learning, multi-scale flexible time analysis, and the discovery of variational spaces mixing data ranging from scores to recordings while analyzing our perception of these. We will demonstrate several state-of-the-art systems developed at IRCAM for sound synthesis, score generation and musical orchestration.
5:30pm-6pm

Time | Hall A1 | Conferencial 1
9:30am-10am José Miguel FERNANDEZ and Jean-Louis GIAVITTO (IRCAM)
Workshop: Antescofo
Hector PARRA
Composition Masterclass: La emoción de la ciencia (The Emotion of Science)
10am-10:30am
10:30am-11am
11am-11:30am Break
11:30am-12pm Benjamin LEVY (IRCAM)
Workshop: Omax
Hector PARRA
Composition Masterclass: Palabra y música (Word and Music)
12pm-12:30pm
12:30pm-1pm
1pm-2:30pm Lunch – Posters
2:30pm-3pm José Miguel FERNANDEZ (IRCAM)
Workshop
In this workshop, José Miguel Fernandez will present the Antescofo programming language for compositions that use instruments and real-time electronics.
Some examples of works made with this program will be shown, along with interaction techniques such as score following, audio analysis of instruments (onset detection, envelope following, audio descriptors, etc.) and other types of interaction such as motion capture, using sensors to control sound synthesis and processing in real time.

Those interested can bring their laptop (Mac) with MaxMSP (version 6 or 7) installed.
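As a taste of the audio analysis listed above, here is a minimal sketch of onset detection by spectral flux (a generic method shown for illustration, not the workshop's Max patches):

```python
# Minimal sketch: frames whose spectral magnitude rises sharply relative
# to the previous frame are flagged as note onsets.

import numpy as np

def onsets(signal, sr=44100, n_fft=1024, hop=512, k=1.5):
    frames = [signal[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(signal) - n_fft, hop)]
    mags = np.abs(np.fft.rfft(frames, axis=1))
    flux = np.maximum(mags[1:] - mags[:-1], 0).sum(axis=1)  # rising energy only
    thresh = flux.mean() + k * flux.std()
    hits = np.where(flux > thresh)[0] + 1
    return hits * hop / sr  # onset times in seconds

# Two clicks in a second of near-silence produce flux peaks near 0.23 s and 0.68 s.
x = np.zeros(44100); x[10000] = 1.0; x[30000] = 1.0
print(onsets(x))
```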

3pm-3:30pm
3:30pm-4pm Break 
4pm-4:30pm Carlos AGON (IRCAM)
Workshop
OpenMusic (OM) is a visual programming language providing a set of classes and libraries that make it a very convenient environment for music composition. Various classes implementing musical data and behavior are provided. They are associated with graphical editors and may be extended by the user to meet specific needs. Different representations of a musical process are handled, among them common notation, MIDI piano-roll and sound signal. High-level in-time organization of the musical material is proposed through the concept of the “maquette”.
4:30pm-5pm
5pm-5:30pm Nicolas ESPINOZA and Bernardo GIRAUTA
Paper presentation
This lecture aims to propose a reflection on the intersections between the concept of musical indeterminacy, politics, knowledge, and the use of probabilistic algorithms in musical improvisation sessions. Indeterminacy, historically linked to so-called American experimental music (John Cage being its greatest representative), consists in leaving certain aspects of a musical composition open until the moment of performance. Free improvisation practices, by abandoning objective criteria of musical organization such as tonality and time signature, are related, in some ways, to the concept of indeterminacy. As John Cage states in many of his writings, indeterminacy’s greatest goal is to dismiss the author’s control over the final result of the artistic work. This aspect relates aesthetics and problems of formal language directly to theories of knowledge and human consciousness.

The authors of this artistic research use, in free improvisation sessions, a probabilistic algorithmic framework that maps movement measurements into real-time modifications of the sounds played by the musicians. Developed by one of the authors, the mapping code works through probability distributions, creating non-deterministic relations between the motion captured by the sensor and the sound. Formal descriptions of the algorithm and its implementation will be included.

By means of a non-human element intervening in the musical experience, this algorithm may function as a device for radicalizing indeterminacy. Philosophers Gilles Deleuze, Félix Guattari and Ludwig Wittgenstein provide tools for understanding the political consequences involved in both the unpredictability of the musical language’s mode of operation and the introduction of decisions made by non-human elements into artistic productions in general.

Observations:
– This research has two authors, Nicolas Espinoza and Bernardo Girauta.
– This lecture could include a performance.
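The flavor of such a probabilistic mapping can be sketched as follows (a hypothetical illustration, not the authors' code): motion features condition the distributions from which sound parameters are drawn, so the same gesture never maps twice alike.

```python
# Minimal sketch: a motion measurement shapes probability distributions
# whose samples drive the sound, making the mapping non-deterministic.

import random

def map_motion(accel_magnitude):
    """accel_magnitude in [0, 1] from a motion sensor."""
    # more movement -> wider pitch distribution and denser events
    pitch = random.gauss(60 + 12 * accel_magnitude, 2 + 6 * accel_magnitude)
    density = random.expovariate(1.0 / (1 + 9 * accel_magnitude))  # events/sec
    return {"midi_pitch": round(pitch), "events_per_sec": round(density, 2)}

for a in (0.1, 0.9):
    print(a, [map_motion(a) for _ in range(2)])  # identical input, varying output
```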
5:30pm-6pm Leopoldo SOTO
Paper presentation
In the Plasma Physics and Nuclear Fusion Laboratory of the Chilean Nuclear Energy Commission, several technologies, design skills and instrument-development capabilities have been implemented, including electromagnetism, pulsed power, lasers and holography, among others. Beyond our scientific research activities, we have interacted with artists and professionals from other areas, for example as consultants for the “Museo Interactivo Mirador”, an interactive science museum in Chile. Moreover, visual artists have been trained in optics and holography in our laboratory. This interaction has opened our minds to other fields and visions. In addition, we have worked together with actresses, actors, musicians and audiovisual artists on the development of scientific dissemination videos for the general public (YouTube channel CienciaEntretenida). Our experience also allows us to explore the use of Tesla coils for music reproduction and interpretation: in this case there is no mechanical membrane producing the sound; it is the plasma (sparks and arcs) produced in the air that generates the sound. This presentation will present these experiences and their possible projections.

Time | Hall A1 | Conferencial 1
10am-10:30am Jean LOCHARD (IRCAM)
Workshop: Spat
Spat (short for Spatialisateur in French) is a real-time spatial audio processor that allows composers, sound artists, performers and sound engineers to control the localization of sound sources in 3D auditory spaces. In addition, Spat provides a powerful reverberation engine that can be applied to real and virtual auditory spaces. The processor receives sounds from instrumental or synthetic sources, adds spatialization effects in real time, and outputs signals for reproduction on an electroacoustic system (loudspeakers or headphones). During this short session, we will use the different Spat modules, from source to headphones, by assembling Spat objects in Max.
Participants are asked to install Max 7.3 (www.cycling74.com) and the Ircam Spat package before the workshop.
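For intuition, the simplest spatialization operation, constant-power stereo panning, can be sketched as follows (Spat goes far beyond this, to full 3D rendering and reverberation):

```python
# Minimal sketch: place a mono source in the stereo field with gains
# chosen so that perceived loudness stays constant across positions.

import numpy as np

def pan(mono, azimuth):
    """azimuth in [-1, 1]: -1 = hard left, 0 = center, 1 = hard right."""
    theta = (azimuth + 1) * np.pi / 4           # map to [0, pi/2]
    left, right = np.cos(theta), np.sin(theta)  # gains with L^2 + R^2 = 1
    return np.stack([mono * left, mono * right], axis=1)

sig = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
stereo = pan(sig, azimuth=0.5)  # source placed halfway to the right
```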
Philippe ESLING (IRCAM)
Presentation 
Musical orchestration is the subtle art of writing musical pieces for orchestra by combining the spectral properties specific to each instrument in order to achieve a particular sonic goal. For centuries and up to this day, orchestration has been transmitted empirically, and no true scientific theory of orchestration has ever emerged, as the obstacles this analysis and formalization must surmount are tremendous. Indeed, this question puts forward one of the most complex, mysterious and dazzling aspects of music: the use of timbre to shape musical structures in order to impart emotional impact. Timbre is the complex set of auditory qualities (usually referred to as sound colour) that distinguish sounds emanating from different instruments. Intuitively, an auditory object is defined by several properties which evolve in time. Decades of simultaneous research in signal processing and timbre perception have provided evidence for a rational description of audio understanding. We address these questions by relying both on solid perceptual principles and on experiments with known empirical orchestration examples, and by developing novel learning and mining algorithms for multivariate time series that can cope with the various time scales inherent in musical perception. In this quest, we seek tools for the automatic creation of musical content, a better understanding of perceptual principles and higher-level cognitive functions of the auditory cortex, but also generic learning and analysis techniques for the data mining of multivariate time series, broadly applicable to other scientific research fields. In this context, the multivariate analysis of temporal processes is required to understand the inherent variability of timbre dimensions, and can be performed through multiobjective time series matching. This has led to the implementation and commercialization of the first automatic orchestration system, Orchids, which can transform any sound into an orchestral score.
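The matching idea can be caricatured with a greedy toy (Orchids itself performs multiobjective time series matching and is far richer): combine instrument spectra to approximate a target spectrum.

```python
# Toy sketch with invented 4-band spectral envelopes: greedily pick the
# instrument that most reduces the squared error against the target.

import numpy as np

instruments = {
    "flute":    np.array([0.1, 0.7, 0.9, 0.3]),
    "clarinet": np.array([0.8, 0.4, 0.2, 0.1]),
    "violin":   np.array([0.3, 0.5, 0.6, 0.8]),
}
target = np.array([0.9, 0.9, 0.8, 0.4])

mix, chosen = np.zeros(4), []
for _ in range(2):  # pick two instruments greedily
    name = min(instruments,
               key=lambda n: np.sum((target - mix - instruments[n]) ** 2))
    chosen.append(name)
    mix = mix + instruments[name]
print(chosen, np.round(mix, 2))  # a flute+clarinet blend approximates the target
```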
10:30am-11am
11am-11:30am Break
11:30am-12:00pm Andrés Ferrari (Universidad de Chile)
Presentation: El videojuego como marco para la creación de performances interdisciplinarias (The video game as a framework for creating interdisciplinary performances)
Per BLOLAND
Demo
Per Bloland will present the Induction Connection within Modalys, created during his Artistic Research Residency, and his ongoing use of it in his compositions. He would further like to demonstrate the Electromagnetically-Prepared Piano (a device that causes piano strings to resonate via electromagnets), and discuss the development of its physical model (the Induction Connection).
12:00pm-12:30pm
12:30pm-1pm Javier JAIMOVICH, Francisca MORAND
Presentation
The Emovere Project started in 2014, with interests in sensoriality, physiology, consciousness and experience as foundations for interdisciplinary artistic experimentation. This collaboration resulted in a first performance piece, “Emovere”, premiered at GAM in October 2015. We are currently working on “Self-Intersection”, a solo performance by a dancer, developed around themes of identity and self-image constructed through the subjective relationship of somatic processes, such as self-sensing while moving and vocalizing.

This presentation will focus on the techniques and methodologies that we have developed to measure, process and interpret the physiological and inertial signals acquired from the dancers. We will discuss how we have used these features to create different mapping schemes and sound objects that respond interactively in order to generate the sound environments of the performance pieces. (Website, 1st example, 2nd example)
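One typical processing step of the kind described, rectifying and smoothing a physiological signal into a control envelope, can be sketched as follows (an illustration only, not the project's actual pipeline):

```python
# Minimal sketch: turn a noisy EMG-like burst into a smooth envelope
# usable as a continuous control for a sound parameter.

import numpy as np

def envelope(emg, sr=1000, window_ms=100):
    rectified = np.abs(emg)                      # full-wave rectification
    kernel = np.ones(int(sr * window_ms / 1000))
    return np.convolve(rectified, kernel / kernel.size, mode="same")

rng = np.random.default_rng(1)
burst = rng.standard_normal(1000) * np.concatenate(
    [np.zeros(400), np.ones(300), np.zeros(300)])  # a muscle activation burst
env = envelope(burst)
amplitude = env / (env.max() + 1e-9)  # normalized control, e.g. for vocal gain
```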
 Rodrigo CASTELLANOS
Demo
This is a demo of the “Macro Patch”, an application in continuous development for processing acoustic signals, mainly wind instruments, voice and percussion. It was developed in Pure Data and contains original, general-purpose processes used for improvisation and the creation of works. The idea is to give, together with a guest musician, a conceptual and practical presentation that exposes the creative reach of this patch.
An example
1pm-2:30pm Lunch – Posters
2:30pm-3pm Jean LOCHARD (IRCAM)
Workshop
Modalys is IRCAM’s flagship physical-model-based sound synthesis environment, used to create virtual instruments from elementary physical objects such as strings, plates, tubes, membranes, plectra, bows or hammers. It is also possible to create objects with more complex shapes out of 3D meshes, or from measurements; Modalys does all the hard computational work for you, bringing them to life and making them sound. By combining these various physical objects, astonishing and unique virtual instruments can be shaped, and it is up to you to decide how to play them!

After a short introduction to physical modelling, we will explore Modalys through its two main interfaces: Modalisp, a textual interface that will help us understand the main principles involved in creating a virtual instrument in Modalys; and Max, where we will see how to use Modalys in real time through simple examples.

Participants are asked to install Max 7.3 (www.cycling74.com) and the Modalys package before the workshop.
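For a feel of physical modelling in general (not Modalys' own technique specifically), the classic Karplus-Strong plucked string shows the excitation-plus-resonator idea in a few lines:

```python
# Minimal sketch: a noise burst (the pluck) circulates in a delay line
# (the string) whose averaging filter models energy loss.

import numpy as np

def pluck(freq=220, sr=44100, dur=1.0, damping=0.996):
    n = int(sr / freq)                                   # delay line = one period
    line = np.random.default_rng(0).uniform(-1, 1, n)    # noise burst = pluck
    out = np.empty(int(sr * dur))
    for i in range(out.size):
        out[i] = line[i % n]
        # averaging adjacent samples acts as the string's loss filter
        line[i % n] = damping * 0.5 * (line[i % n] + line[(i + 1) % n])
    return out

string = pluck()  # one second of a decaying 220 Hz plucked string
```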
3pm-3:30pm
3:30pm-4pm
4pm-4:30pm Rodrigo F. CADIZ and Patricio DE LA CUADRA (Pontificia Universidad Catolica de Chile)
Presentation: Arcontinuo: the instrument of change
4:30pm-5pm
5pm-5:30pm Patricio DE LA CUADRA (Pontificia Universidad Catolica de Chile)
Presentation: The musical gesture: an interdisciplinary approach
5:30pm-6pm Rodrigo F. CADIZ (Pontificia Universidad Catolica de Chile)
Presentation: Data sonification: examples in medical imaging and auditory graphs

 

7:30pm CONCERT


Philippe MANOURY
Partita II for violin and electronics (17 minutes)

Héctor PARRA
I have come like a butterfly into the hall of human life for electronics (18 minutes)

Héctor PARRA
Cell 2 for flute, B-flat clarinet, percussion and piano (14 minutes)

José Miguel FERNANDEZ
Fond diffus, for electronics (17 minutes)

Rodrigo F. Cadiz
eQuena, for quena, arcontinuo and electronics (11 minutes)

Time | Hall A1
10am-10:30am Benjamin LEVY and José Miguel FERNANDEZ (IRCAM)
The making of the concert and computer music design at IRCAM
10:30am-11am
11am-11:30am Break
11:30am-12pm IRCAM’s researchers, composers, and partners 
Panel discussion and conclusion: Art/Science
12pm-12:30pm
12:30pm-1pm
1pm – 2:30pm Lunch – Posters

Last update: September 11th, 2017