||IRCAM Forum Staff, Centro Gabriela Mistral, Pontificia Universidad Catolica de Chile, Universidad de Chile and Center for Mathematical Modeling
||Jean LOCHARD (IRCAM)
From artistic discovery to artistic creativity and vice versa: IRCAM news
||Carlos AGON (IRCAM)
Transformational approaches have a long tradition in formalized music analysis, in the American as well as in the European tradition. This paradigm has become an autonomous field of study, making use of increasingly sophisticated mathematical tools, ranging from group theory to categorical methods. Within the transformational approach, Klumpenhouwer networks are prototypical examples of music-theoretical constructions that describe the inner structure of chords by focusing on the transformations between their elements. We summarize here our proposal of a generalized framework for Klumpenhouwer networks based on category theory.
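As a toy illustration (our own sketch, not the categorical generalization discussed in the talk), a Klumpenhouwer network can be modeled as nodes carrying pitch classes and arrows labeled by the usual transposition and inversion operations of atonal theory:

```python
# Minimal Klumpenhouwer-network sketch: nodes are pitch classes,
# arrows are labeled by transpositions T_n and inversions I_n (mod 12).

def T(n):
    """Transposition by n semitones (mod 12)."""
    return lambda x: (x + n) % 12

def I(n):
    """Inversion around index n: x -> n - x (mod 12)."""
    return lambda x: (n - x) % 12

def is_consistent(nodes, arrows):
    """Check that every labeled arrow maps its source onto its target."""
    return all(f(nodes[src]) == nodes[dst] for src, dst, f in arrows)

# Network describing the inner structure of the C major triad {C=0, E=4, G=7}.
nodes = {"a": 0, "b": 4, "c": 7}
arrows = [
    ("a", "b", T(4)),  # C --T4--> E
    ("b", "c", T(3)),  # E --T3--> G
    ("a", "c", T(7)),  # C --T7--> G (composite arrow: T3 after T4 is T7)
]
print(is_consistent(nodes, arrows))  # True
# An inversional arrow would also be consistent here: I(4) sends C=0 to E=4.
```

The categorical framework mentioned above generalizes exactly this picture: the network is a diagram, and the labels are morphisms whose compositions must commute.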
||Axel OSSES (CMM)
Listening to the body: sound and tomography
||Jean-Louis GIAVITTO (IRCAM)
The notion of space has often been summoned in music theory to formalize the analysis of pieces, and the terms “pitch space”, “rhythmic space” or “compositional space” abound in the musical literature. While the spatial metaphor is a good support for musical intuition, and thus a useful pedagogical tool, it is also a heuristic that, from a technical point of view, has enabled the development of new computer tools to assist composers in their creative processes.

This presentation explores some spatial representations of musical notions arising from elementary concepts in algebraic topology. The initial idea is to represent simple musical objects (e.g. notes, chords or intervals) by elementary spatial domains, and their relations (e.g. co-occurrence or succession) by neighborhood relationships. The topological notions of incidence, path, boundary, obstruction, etc. are then used to explain and characterize the underlying musical structure. For example, a musical sequence can be represented as a path in a cellular complex materializing the structure of chords; the type of path reveals the musician’s compositional strategies, and applying geometric operations to these trajectories leads to musical transformations of the initial piece.

We will attempt to explain the “unreasonable effectiveness” of spatial representations in music by means of natural links between the organization of percepts and topological relations. In particular, it is possible to bridge the gap between the topological approach outlined here and the formal concept analysis developed in symbolic learning.
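A much-simplified combinatorial shadow of this idea (our illustration, not the actual cellular complexes of the talk) treats chords as cells, declares two chords neighbors when they share a note, and reads a progression as a path in the resulting neighborhood structure:

```python
# Toy spatial representation of chords: co-occurrence of pitch classes
# induces a neighborhood relation; a chord progression is a path.

def neighbors(c1, c2):
    """Two chords are adjacent when they share at least one pitch class."""
    return bool(set(c1) & set(c2))

def is_path(progression):
    """A progression is a path if every step moves between neighboring chords."""
    return all(neighbors(a, b) for a, b in zip(progression, progression[1:]))

C  = (0, 4, 7)   # C major
Am = (9, 0, 4)   # A minor (shares C and E with C major)
F  = (5, 9, 0)   # F major (shares A and C with A minor)
G  = (7, 11, 2)  # G major

print(is_path([C, Am, F]))  # True: each step crosses a shared boundary
print(neighbors(F, G))      # False: F and G share no pitch class
```

In the full topological setting, the shared notes are lower-dimensional cells on the boundary of the chords, and notions such as obstruction refine this simple adjacency test.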
||Lunch – Posters
Composition Masterclass: La plasticidad del gesto
||Carlos AGON (IRCAM)
Computer-aided composition systems enable composers to write programs that generate and transform musical scores. In this context, constraint programming is appealing because of its declarative nature: the composer constrains the score and relies on a solver to automatically provide solutions. However, existing constraint solvers often lack interactivity. To enable the composer to alter and navigate the solution space, we propose spacetime programming, a paradigm based on lattices and synchronous process calculi.
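The declarative idea can be sketched in a few lines (a naive enumeration with made-up constraints, not the spacetime programming solver itself): the composer only states properties the melody must satisfy, and a search procedure produces the solution space.

```python
# Declarative melody generation: constraints in, solutions out.
from itertools import product

NOTES = range(12)  # pitch classes 0..11

def solutions(length, constraints):
    """Enumerate all melodies of the given length satisfying every constraint."""
    for melody in product(NOTES, repeat=length):
        if all(c(melody) for c in constraints):
            yield melody

constraints = [
    lambda m: m[0] == 0,                                       # start on C
    lambda m: all(a != b for a, b in zip(m, m[1:])),           # no repeated notes
    lambda m: all(abs(a - b) <= 2 for a, b in zip(m, m[1:])),  # stepwise motion
]

result = list(solutions(3, constraints))
print(len(result))   # 7 three-note melodies satisfy the constraints
print(result[0])     # first solution in lexicographic order: (0, 1, 0)
```

Interactivity, the missing piece this brute-force sketch makes obvious, is precisely what spacetime programming addresses: steering and revising the exploration of this solution space rather than receiving it wholesale.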
||Jean-Louis GIAVITTO (IRCAM)
The Antescofo system couples machine listening with a specific programming language for compositional and performative purposes. It allows real-time synchronization of human musicians with computers during live performance, especially in the context of mixed music (the live association of acoustic instruments played by human musicians and electronic processes run on computers).

During live performance, musicians interpret the score with precise and personal timing, whereby score time (in beats) is evaluated into physical time (measurable in seconds). For the same score, different interpretations lead to different temporal deviations, and the musician’s actual tempo can vary drastically from the nominal tempo marks. This phenomenon depends on the individual performers and the interpretative context. To be executed in a musical way, electronic processes should follow the temporal deviations of the human performers.

Achieving this goal starts with score following, a task defined as the real-time automatic alignment of the performance (usually through its audio stream) with the music score. However, score following is only the first step toward musician-computer interaction; it enables such interactions but gives no insight into the nature of the accompaniment or the way it is synchronized.

Antescofo is built on the strong coupling of machine listening and a specific programming language for compositional and performative purposes:
- The listening module of the Antescofo software infers the variability of the performance through score following and tempo detection algorithms.
- The Antescofo language provides a generic, expressive support for the design of complex musical scenarios between human musicians and computer media in real-time interaction. It makes explicit the composer’s intentions on how computers and musicians are to perform together (for example, should they play in a “call and response” manner, or should the musician take the lead, etc.).

In this way, the programmer/composer describes the interactive scenario with an augmented score, where musical objects stand next to computer programs, specifying temporal organizations for their live coordination. During each performance, human musicians “implement” the instrumental part of the score, while the system evaluates the electronic part, taking into account the information provided by the listening module.

Content
The presentation will focus on the Antescofo real-time programming language. This language is built on the synchrony hypothesis, where atomic actions are instantaneous; Antescofo extends this approach with durative actions. This approach, and its benefits, will be compared to other approaches in the field of mixed music and audio processing.

In Antescofo, as in many modern languages, processes are first-class values. This makes it possible to program complex temporal behaviors in a simple way, by composing parameterized processes. Beyond processes, Antescofo actors are autonomous and parallel objects that respond to messages and are used to implement parallel electronic voices. Temporal patterns can be used to enhance these actors so that they react to the occurrence of arbitrary logical and temporal conditions.

During this lecture, we will explain how Antescofo pushes the recognition/triggering paradigm, currently predominant in mixed music, toward the more musically expressive paradigm of synchronization, where “time-lines” are aligned and synchronized following performative and temporal constraints. Synchronization strategies are used to create specific time-lines that are “aligned” with another time-line. Primitive time-lines include the performance of the musician on stage followed by the listening machine, but may also include any kind of external process using a dedicated API to inform the reactive engine of its specific passing of time.

This presentation will be “example-oriented”, relying on actual uses of Antescofo to implement several variations on Piano Phase (Steve Reich), the Polyrhythmic machine (Yann Maresz) and the Totem (Marco Stroppa).
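The core of the synchronization problem can be illustrated schematically (a deliberately naive sketch, not Antescofo’s actual listening or anticipation algorithms): infer the performer’s tempo from detected score positions, then map an electronic action written in beats onto physical time.

```python
# Schematic tempo following: align score time (beats) with physical time (s).

def estimate_tempo(t_prev, beat_prev, t_now, beat_now):
    """Tempo in BPM inferred from two successive score positions
    detected in real time by a listening module."""
    return 60.0 * (beat_now - beat_prev) / (t_now - t_prev)

def schedule(action_beat, t_now, beat_now, tempo_bpm):
    """Physical time (seconds) at which an action placed at `action_beat`
    in the augmented score should fire, extrapolating the current tempo."""
    return t_now + (action_beat - beat_now) * 60.0 / tempo_bpm

# Suppose the listener detected beat 4 at t = 2.0 s and beat 5 at t = 2.4 s:
tempo = estimate_tempo(2.0, 4, 2.4, 5)
print(tempo)                       # ~150 BPM: faster than a nominal 120
print(schedule(6, 2.4, 5, tempo))  # electronic event at beat 6 fires at ~2.8 s
```

Antescofo’s synchronization strategies generalize this last step: rather than a single extrapolation, each electronic time-line declares how tightly (and at which points) it realigns with the performer’s time-line.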
“What is this face, less clear and clearer / The pulse in the arm, less strong and stronger / Given or lent? more distant than stars and nearer than the eye …” T. S. Eliot, Marina.
The Cholo Feeling reproduces moments of sensory release captured in shots that can be stretched, ripped and fractured over extensions of time. With the performer’s face in the foreground, projected at monumental size, the public is exposed to flashes of this physical/virtual interface. This approach allows an instrument that pursues intimacy and narrative, two aspects of live computer performance that are often difficult to achieve.
This action manifests itself by placing under tension the relations that arise through the real-time transformation of image and sound, under the new-media paradigm of communication between one and the other. In the work, the image is blurred, generating a non-time between image and representation; under that paradigm, the performer takes on a state of virtual reality.
Inspired by research on granular synthesis, the work is an exploration of audiovisual sampling in performance. Borrowing the drawing technique of scanning frames from a photograph, the work reproduces several points in time simultaneously.
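For readers unfamiliar with the technique, here is a bare-bones granular time-stretch in pure Python (hypothetical parameters, audio only; the piece applies the analogous idea to video frames as well): overlapping windowed grains are read from the source more slowly than they are written, so several points of the original time sound at once and unfold over a longer span.

```python
# Minimal granular time-stretch: overlap-add of Hann-windowed grains.
import math

SR = 8000                                                             # sample rate (Hz)
src = [math.sin(2 * math.pi * 440 * n / SR) for n in range(SR // 2)]  # 0.5 s tone

def granulate(signal, grain=400, hop=200, stretch=2.0):
    """Read the source `stretch` times slower than the output is written,
    summing Hann-windowed grains, which yields a time-stretched signal."""
    out = [0.0] * int(len(signal) * stretch)
    win = [0.5 - 0.5 * math.cos(2 * math.pi * i / (grain - 1)) for i in range(grain)]
    n_grains = (len(out) - grain) // hop
    for g in range(n_grains):
        write = g * hop                   # where this grain lands in the output
        read = int(write / stretch)       # same musical moment, earlier in the source
        for i in range(grain):
            if read + i < len(signal):
                out[write + i] += signal[read + i] * win[i]
    return out

stretched = granulate(src)
print(len(src), len(stretched))  # the output is twice as long as the source
```

Varying grain size, hop and read position independently is what lets a performance “reproduce several points in time simultaneously”.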
The Cholo Feeling seeks to explore and analyze the effect of this technology on the physical body, and how the body influences the technology during the performance, especially in relation to the problem of (re)presenting the ‘unrepresentable’, i.e., the sublime of the digital body. The sublime is understood here as a tension between joy and pain: the pleasure of having a feeling of totality is inseparable from the pain of not being able to present an object equal to this idea of totality. The spectators are not simply engaged in passive contemplation, but are called to do what, according to Kant, cannot be done: to “present the non-representable” (1978, 119), to overcome what is possible for something else, to strive continuously toward a judgment that can never be guaranteed.
Cholo is a term used in some Latin American countries as a marker of national identity; it generally refers to the mestizo population of indigenous and European descent and, as generally understood in the Americas, leaves out whites or creoles, blacks, mulattos, Asians and people of purely indigenous descent. This mixture was very frequent in the states or provinces of Latin American countries where the native population came to be more than a third of the population. Because of geographical differences in its use, misinterpretations can occur in speech, so it is important to know the context in which the term is used in order to attribute a meaning to it. In some countries it is used as a pejorative term.
||Jean LOCHARD (IRCAM)
Najo Modular Interface offers easy access to many of the processing techniques developed at IRCAM without having to be an expert in Max. Its highly intuitive graphical interface makes it possible to assemble several audio modules and facilitates their control via external devices (e.g. MIDI, graphical tablet, joystick, etc.). The NMI interface makes it easy for users to put together synthesis modules, to read samples, and to carry out sound processing so that they can create all sorts of different sounds. NMI includes a mixing table that makes it possible to balance and spatialize the sound sources that are derived from the different modules.
During this workshop, we will introduce NMI by exploring some of the synthesis techniques and effects used in « Tentative de réalité » for cello and electronics by Hector Parra.
Students are asked to install Max 7.3 (www.cycling74.com) and NMI 2.3 on their laptops before the workshop.
||Jaime ORTEGA (CMM)
Listening to the mountains: waves and mining
|4:45pm – 5:15pm
||Philippe ESLING (IRCAM)
Until now, research in artificial intelligence has focused on a mathematico-logical approach: the ability of the computer to solve problems formalized as sets of objective and supervised goals. However, driven by recent revolutionary advances, artificial intelligence is entering a new era, and should keep pushing back previously established limits.

In a newly created team at IRCAM, we rather aim to understand creative intelligence, the peculiarity that fundamentally distinguishes human beings from the other branches of the tree of life. In this endeavor, music offers an ideal environment for better understanding the creative mechanisms of intelligence. Indeed, musical creativity combines challenging theoretical questions with cognitive problems that are hard to formalize. In particular, the notion of musical time is a primordial and inseparable component of music. Musical time naturally develops on multiple scales (ranging from the identity of a sound to the entire structure of a piece), requiring a finer analysis. On the other hand, music can be transmitted in a variety of forms, ranging from scores to recordings: an object with a unique semantic meaning can be represented by a multitude of means, whose interactions must be understood. Finally, musical creativity mostly unfolds without a formal goal, an ideal case for understanding the creative mechanisms of learning. Hence, addressing these questions could give rise to a whole new category of creatively intelligent machines.

These dazzling questions that live inherently in music are also found in a tremendous variety of research fields, ranging from heart-disease detection to environmental monitoring. Thus, by studying these theoretical questions through the prism of music, we can reach new generic mechanisms of learning that can radiate to several scientific domains.

In this talk, we will discuss the latest advances in new models of learning that directly target creative processes: unsupervised learning, multi-scale flexible time analysis, and the discovery of variational spaces mixing data ranging from scores to recordings while analyzing our perception of these. We will demonstrate several state-of-the-art systems developed at IRCAM for sound synthesis, score generation and musical orchestration.