||Session: Active musicology – sound library
||Session: New interfaces / new instruments
|| Maurilio CACCIATORE
MMixte is a Max package released in July 2017. The name comes from the French term "musique mixte," which designates the tradition of combining acoustic instruments on stage with live electronics (or even electronics without real-time interaction). The package works like middleware: its modules are ready to use, but all of their parts are open, so users can modify them locally in their own patch as a project requires. MMixte targets intermediate to advanced Max users. Less experienced programmers can learn how to organize what is generally called a "concert patcher" (a Max patcher used in concert to manage the electronics of a piece), improve their programming technique, or simply avoid the risk of a crash during a concert caused by badly programmed modules of their own. Advanced Max users can rely on the extreme simplicity of this collection, which uses only the basic Max library, and build in a few minutes an environment for a piece that they can then develop further on their own. The time needed to prepare a concert patcher can be reduced dramatically; after a few steps, the programmer can move on to the audio treatments, the spatialization, and the other creative parts of Max patching.
This collection grew out of a personal need. I had started to develop, for my own pieces, the core of a standard concert patcher, and I began to formalize these modules so as not to have to rewrite them from scratch each time. MMixte is, in this sense, a collection made by a composer for other composers.
The presentation will show the architecture of the package, its modules, and their use.
Over the past decade, first the SATIS department, then the ASTRAM laboratory, and now the PRISM laboratory (Perception Représentation Image Son Musique) have developed "Sons du Sud", a browsable online library of ambient sounds. The project is currently entering a new phase of development with the creation of a specially developed thesaurus and of an interface that provides an ergonomic, didactic tool for indexing sounds (sounds intended for audio-visual professionals, more specifically sound editors). Rémi Adjiman will present his current work, research subjects, and possibilities. He may also present the current "Sons du Sud" website, keeping in mind that further development is scheduled for the end of 2018. This project is supported by the SATT PACA, PRIMI, and AFSI (Association Professionnel du Son à l'Image).
|Peter CONNELLY, Eduardo FOUILLOUX, Jan-Marc HECKMANN
MuX: Create your own musical instruments and soundscapes in VR.
||Session: Artificial intelligence & sound design
|| Sahin KURETA
This presentation is intended as an introduction to deep learning and its applications in music. It features the use of deep autoencoders to generate novel sounds from a hidden representation of an audio corpus, audio style transfer as demonstrated by Ulyanov (https://dmitryulyanov.github.io/audio-texture-synthesis-and-style-transfer/), and future directions based on CycleGAN, WaveNet, etc., as well as a high-level introduction to some basic concepts in machine learning.
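To make the autoencoder idea concrete, here is a minimal sketch of the technique in PyTorch: spectrogram frames are compressed into a small hidden (latent) representation and reconstructed from it, after which new frames can be decoded from arbitrary latent points. The frame size, layer widths, and training data below are illustrative assumptions, not details of the presenter's actual model.

```python
# Minimal sketch of a deep autoencoder over magnitude-spectrogram frames.
# NOT the presenter's actual model; sizes and data are placeholders.
import torch
import torch.nn as nn

N_BINS = 513   # e.g. |STFT| bins for a 1024-sample FFT (assumption)
LATENT = 32    # size of the hidden ("latent") representation

encoder = nn.Sequential(
    nn.Linear(N_BINS, 256), nn.ReLU(),
    nn.Linear(256, LATENT),
)
decoder = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, N_BINS), nn.Softplus(),  # magnitudes are non-negative
)

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
frames = torch.rand(4096, N_BINS)  # stand-in for an analyzed audio corpus

for epoch in range(10):
    recon = decoder(encoder(frames))          # compress, then reconstruct
    loss = nn.functional.mse_loss(recon, frames)
    opt.zero_grad(); loss.backward(); opt.step()

# Novel sounds: decode points sampled or interpolated in the latent space,
# then resynthesize audio from the predicted magnitudes (e.g. Griffin-Lim).
z = torch.randn(1, LATENT)
novel_frame = decoder(z)
```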
I wish to present my ongoing work with the Modalys Induction Connection, created as a result of my Musical Research Residency in 2013. This presentation (an updated version of the one I gave in Santiago at a previous Forum) will begin with a general overview, followed by a series of audio examples drawn from my recently created project documentation pages.
This will be followed by additional examples from a piece I am currently composing for piano and electronics, commissioned by Keith Kirchoff as part of his EAPiano Project. These new examples feature two main components: my experiments with modeling a grand piano bichord struck by a hammer, and additional experiments with the Induction Connection. For most of these examples, I first created a ModaLisp text document to generate audio files, then added code to generate an mlys script for real-time use in a Max patch.
|| Karolína KOTNOUR
In this project, I aim to define a mutual synthesis of sound and the position of borderlines in space. Space, like a word, has its own shape, meaning, and phrasing. How does it affect our perception of architecture as an acoustic, experiential space? What is the relationship between sound and vision, and how does the brain interpret multidimensional space? How does the sound spectrum affect brain activity and spatial perception, for instance within the audible frequency range determined by the physiological structures of a living organism's auditory organ?
The main purpose of this project is the implementation of a structure that, on the basis of this interaction, creates a spatial envelope (the "veil") that changes in time and space. This creates an elastic, fluid structure of matter and the sound reflecting within it. The situation is modeled using information obtained from sensors located in the space and a sensory evaluation of human auditory perception in that space. The realization of this structure will be preceded by the testing and visualization of such structures in interactive virtual reality, using 3D and 4D tools such as Rhino, Grasshopper, and Max 7/MSP for Ableton Live, which allow the creative representation of a sound as a shape in a shared space.
Summary of fundamental objectives:
Visualization of human sensory perception of an environment in relation to architectural forms.
The specifics of psychoacoustic space, and visualization of reality and of its inner structure.
Possibilities and contexts of the subjective visualization of sound in art, and subjective graphical scores for musical compositions. All objectives will be evaluated while refining the design details of the structure of "The Veil."
| Richard Albert BRETSCHNEIDER
Project GPSES (Gestural Parameter Space Explorer for Synthesizers) proposes an approach to supporting the divergent thinking processes of artists navigating a synthesizer's enormous space of possible sounds. The project lives at the intersection of music and interaction design.
Using the Leap Motion hand-tracking device, the parameter space of a synthesizer is translated into a physical 3D space on the artist's desk, where the musician can navigate through the possible combinations with a set of different hand postures and hand movements.
At the IRCAM Forum 2016, Richard Bretschneider presented the theoretical foundations of this project in a talk. He will now present the enhancements as well as a live performance of a working prototype.
The main elements of the prototype consist of:
– A Leap Motion device
– Wekinator (software used to recognize hand postures via machine learning)
– A Python script (to translate hand posture and hand position into synthesizer parameters, sent to a synthesizer via OSC; see the sketch after this list)
– A software synthesizer
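As a rough illustration of the Python script's role, the following hedged sketch (using the python-osc library) receives Wekinator's OSC output and forwards synthesizer parameters over OSC. The ports, the number of outputs, and the mapping itself are assumptions for illustration; only Wekinator's default output address /wek/outputs is standard.

```python
# Hedged sketch of the translation step: receive Wekinator's OSC output
# (assumed here: posture class + hand position x/y/z) and forward
# synthesizer parameters over OSC. Ports and mappings are assumptions.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

synth = SimpleUDPClient("127.0.0.1", 9000)   # hypothetical synth OSC port

def on_wekinator(address, posture, x, y, z):
    # The recognized posture selects which parameter the position controls.
    if int(posture) == 1:
        synth.send_message("/synth/cutoff", 200.0 + 8000.0 * y)
    elif int(posture) == 2:
        synth.send_message("/synth/resonance", max(0.0, min(1.0, x)))

dispatcher = Dispatcher()
dispatcher.map("/wek/outputs", on_wekinator)  # Wekinator's default address
BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher).serve_forever()
```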
Richard Albert Bretschneider is an internationally working artist and user experience consultant living in Germany. He believes that technology is not only a translator for ideas – it is a source of inspiration as well.
|| Robert B. LISEK
We observe the success of artificial neural networks in matching human performance on a number of tasks, such as image recognition and natural language processing. However, there are limits to state-of-the-art AI that separate it from human-like intelligence. Humans can learn a new skill without forgetting what they have already learned, and they can improve their activity and gradually become better learners. Today's AI algorithms are limited in how much previous knowledge they can keep through each new training phase and how much of it they can reuse. In practice, this means that a new algorithm must be built for each new specific task. The domain in which solutions to this problem may be found is artificial general intelligence (AGI): research that aims to create machines capable of general intelligent action. "General" means that one AI program performs a number of different tasks and the same code can be used in many applications. We must focus on self-improvement techniques, e.g. reinforcement learning, and integrate them with deep learning and recurrent networks.
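As a purely illustrative aside, the following minimal tabular Q-learning sketch (not Lisek's system) shows the kind of self-improvement loop reinforcement learning provides: the agent repeatedly updates its value estimates from its own experience rather than being re-programmed for the task.

```python
# Minimal tabular Q-learning on a 1-D corridor (numpy only).
# Shown only to make "self-improvement by reinforcement" concrete.
import numpy as np

n_states, n_actions = 8, 2        # corridor cells; actions: 0=left, 1=right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    while s != n_states - 1:                     # goal is the last cell
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == n_states - 1 else 0.0
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])  # TD update
        s = s2

print(Q.argmax(axis=1))  # learned policy: move right toward the goal
```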
The multidisciplinary study of the trumpets of Pompeii (cornua in Latin), ancestors of today's brass instruments, involves the humanities, acoustics, materials science, instrument making, and sound and music synthesis; it is part of the project Paysages sonores et espaces urbains de la Méditerranée ancienne supported by the École Française de Rome. The presentation will focus primarily on one aspect of the study: the creation of virtual copies of five trumpets discovered in 1852 and 1884. These models help us understand their performance as well as their sonic and musical possibilities. The study began by analyzing the current reconstitutions of these instruments, which have undergone several restorations, and then establishing, from a number of indicators, a profile that is as exhaustive and as accurate as possible. This profile made it possible to calculate, using Resonans (a software program created to assist in the design of wind instruments), the resonances of these trumpets, yielding information about the notes that could be played on them and about characteristics such as pitch accuracy, timbre, and the ease and power of sound emission.
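For a sense of the kind of computation involved, here is a hedged sketch of the textbook first approximation: the resonance frequencies of a cylindrical tube closed at the mouthpiece end, f_n = (2n-1)c/4L. Resonans works from the full measured bore profile of each instrument; the tube length used below is only an assumed placeholder.

```python
# Resonances of a simple cylinder closed at one end: f_n = (2n-1)c / 4L.
# A textbook approximation only; the real cornu is long, curved, and
# partly conical, which shifts these modes considerably.
C = 343.0   # speed of sound in air, m/s (at ~20 degrees C)
L = 3.0     # assumed tube length in metres (a cornu is several metres long)

resonances = [(2 * n - 1) * C / (4 * L) for n in range(1, 9)]
for n, f in enumerate(resonances, start=1):
    print(f"mode {n}: {f:7.1f} Hz")
```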
After this, thanks to the software program Modalys, a full real-time model of these instruments was created in Max, letting us effectively test their sonic and musical possibilities.
|| Knut KAULKE
Where do beats begin and where does tonality end? "You can find the answer in a forest of steel trees composed of resonant bark during a spectral thunderstorm." The aim of my research project is the merger of melodic and percussive elements into one conglomeration. Drum sets become more tonal without losing their characteristic noise-like sound.
The ambition is to create entirely new sounds and complex forms, driven by beat sequences, that can be played via a keyboard.
During my doctoral thesis in science, I was driven by a passion to discover something new and to deepen the understanding of complex molecular networks. In the life sciences, hypotheses often have to be modified because of the functional complexity of living things. Experimental results may disprove theoretical assumptions, and a novel, surprising thought can appear. The curiosity and excitement about organic, natural complexity that I feel as a scientist also emerge when I perform sound studies as a musician.
My musical research focuses on merging melodic and percussive elements into one conglomeration. I approach my theme on two levels: 1. the sound design of percussive elements and 2. the tonal sequencing in and of these special sounds.
The Kyma instruments and effects I use are almost exclusively based on the SlipStick model. SlipStick operates as an engine of sound synthesis; at the same time, it drives and influences other forms of synthesis such as frequency modulation, physical modelling, and resynthesis, including their combination and mutual influence. Single sounds are then multiplied by the Replicator, creating highly complex and lively sounds.
The intended tonality of the percussive sounds is underpinned by the modification of Euclidean and non-Euclidean rhythms. These rhythms were chosen because the structure of the underlying algorithms generates very natural percussive sequences, which can be modified by shifting pitches and other variances, leading to a merging of tonality and its subsequent collapse. A specially developed Kyma sequencer, the core of my setup, can be played in half-tone steps and creates instrumental sounds that will surprise everyone.
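For reference, a Euclidean rhythm distributes k onsets as evenly as possible across n steps. The sequencer itself is built in Kyma; the short Python sketch below is only a reference implementation of that pattern structure.

```python
# Euclidean rhythm generation: onset at step i iff (i*k) mod n < k,
# which spreads k onsets as evenly as possible across n steps.
def euclid(k, n, rotation=0):
    """Return an n-step pattern (list of 0/1) with k evenly spread onsets."""
    return [int(((i + rotation) * k) % n < k) for i in range(n)]

print(euclid(3, 8))  # [1, 0, 0, 1, 0, 0, 1, 0] -- the 'tresillo'
print(euclid(5, 8))  # [1, 0, 1, 0, 1, 1, 0, 1] -- a rotation of the 'cinquillo'
```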
The content of the demo:
The Kyma instruments and effects in my demo performance are almost exclusively based on the SlipStick model, which operates as an engine of sound synthesis. Simultaneously, it drives and influences other forms of synthesis such as frequency modulation, physical modelling, and GrainCloud resynthesis, including their combination and mutual influence. Single sounds are then multiplied by the Replicator, which can be used in Kyma's programming environment to multiply variables (e.g. voices, controller values, instruments, etc.).
In my demo, I will show some techniques for exciting SlipStick and for using SlipStick as an exciter of sound synthesis. Furthermore, I will share some modification hints that lead to a livelier sound.
The percussion instrument is a wooden box on legs. It can be played on its top surface with the hands (the surface can be hit or rubbed), and the box itself works as a sound box. Of course, the table makes sound on its own, but the addition of electronics makes it possible to hear a larger range of timbres. Sensors detect when the surface is touched, and the information is sent to a microprocessor (a BELA board). After analysis, the board sends an audio signal that vibrates the sound box, transforming the sound created by the instrument.
This table is part of my final student project, a mixed composition for solo instrument and spatialized electronics using Max with Antescofo, Spat, and the FAUST language. It also includes video projected on several screens. The work is presented here as an interactive installation: the audience can play on the table, and their actions are echoed in the electronic sounds and the image.
||Installation: MuX
||Session: Career paths with secondary-school pupils
||Session: From the lab to the scene
8 March, Women's Day: Women in sound professions – the composer Violeta CRUZ
Since 2011, I have been working on a research and writing project based on the musical and staged dialogue between symphonic instruments and electroacoustic sonic objects. Following up on the creation of three objects (the electroacoustic fountain, the little man machine, and the light rattle) that led to 7 concert works and 4 installations and performances, the project has recently expanded to encompass the conception and construction of the sonic décor of my opera La Princesse Légère. This décor was designed in collaboration with the set designer Oria Puppo and the director Jos Houben. In the context of an opera, the theatrical dimension of the objects becomes more important, providing new leads for their musical exploitation and offering new challenges.
|Franck VIGROUX and Antoine SCHMITT
Chronostasis is a temporal illusion that affects the neurons responsible for predicting the immediate future, including when listening to music: time seems to stand still. But time is elastic, and a stretched elastic always returns to its original form. The audio-visual performance Chronostasis pushes this logic to its limits by diluting a catastrophic moment through temporal stretches and inversions over the course of the performance. The present is frozen and diffracts forever; the past and the future cease to exist. The music is performed live with electronic instruments; the video is generative.
|| Suguru GOTO
These works are based on sensor technology, the programming of mapping interfaces, and robotics, in order to construct instruments virtually; at the same time, they explore the relationship between interfaces and humans (Man and Machine). Suguru Goto has been Associate Professor at the Tokyo University of the Arts since April 2017, where he has been further developing this research and these productions. Based on an interactive environment, the work reacts to images and sound in a virtual-reality space. As a feature of the reproduction of virtual space within this research, a simulation of zero gravity was conceived during development. The results and knowledge gained may be applied to new devices and new forms of artistic expression, creating a new current in the field of experiential expression using sounds and images.
|| Julia BLONDEAU (IRCAM)
This presentation will focus on the creation of the work Namenlosen, premiered at the Philharmonie 2 de Paris in June 2017 by the Ensemble Intercontemporain. I will discuss the use of the Antescofo language and its connections with Panoramix (the new graphical interface for Spat) and with Csound (for the generation of synthesis in real time). I will give a few examples of the use of multiple musical times in the writing, as well as of a spatialization library with automatic source assignment.
||Pedro GARCIA-VELASQUEZ and Augustin MULLER (IRCAM)
Artistic research residency release, IRCAM/ZKM
The aim of this project is to explore the possibilities of characterizing imaginary and virtual spaces. Rather than trying to create an acoustic simulation of a place, we will explore the expressive and musical possibilities of particular acoustics connected to the evocative power of sound and the expressivity of memory.
After numerous concerts and binaural sound experiences, we created a library of acoustic imprints in High Order Ambisonics (HOA) format that can be used in a variety of situations, capturing the acoustic and oneiric characteristics of existing venues and stressing the immersive possibilities of spatialized listening. The library focuses on the remarkable acoustics of certain places, but also on their poetic nature and their evocative power. These acoustic imprints, together with a few ambiences captured in situ, are used here as études, or sketches, that explore certain possibilities and offer an acoustic journey through these re-imagined spaces.
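As a hedged illustration of how such an acoustic imprint can be applied, the sketch below convolves a dry recording with a measured impulse response (mono here for brevity; the library described above is in HOA format, i.e. multichannel, and the file names are hypothetical).

```python
# Apply a captured acoustic imprint to a dry sound by convolution.
# Mono-only sketch with hypothetical file names.
import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = sf.read("dry_voice.wav")          # hypothetical dry input (mono)
ir, sr_ir = sf.read("chapel_imprint.wav")   # hypothetical impulse response
assert sr == sr_ir, "resample first if the sample rates differ"

wet = fftconvolve(dry, ir)                  # the room 'stamped' onto the sound
wet /= abs(wet).max()                       # normalize to avoid clipping
sf.write("voice_in_chapel.wav", wet, sr)
```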
||Session: Collective interaction and geolocation
||Session: Gesture – therapy – bioart
|| Jan DIETRICH
StreamCaching is a musical project that began in June 2017 in Hamburg. It was launched at the Blurred Edges festival of contemporary music: ten compositions were commissioned, digitized, tagged with GPS data, and located across the city. The public could search for the tracks with a smartphone and listen to them directly on site.
The presentation will cover the history and the idea of the StreamCaching project: explaining the concept of locating ten works of art along the satellite's ground tracks, showing excerpts of the compositions and visual art, giving an overview of how the project will continue, and leaving room for discussion of technical and social questions.
|Andreas BERGSLAND and Robert WECHSLER
The MotionComposer is a therapy device that turns movement into music. The newest version uses passive stereo-vision motion-tracking technology and offers a number of musical environments, each with a different mapping. (Previous versions used a hybrid CMOS/ToF technology.) In serving persons of all abilities, we face the challenge of providing the kinesic and musical conditions that afford sonic embodiment, in other words, that give users the impression of hearing their movements and shapes. A successful therapeutic device must (a) have a low entry fee, offering an immediate and strong causal relationship, and (b) offer an evocative dance/music experience, to assure motivation and interest over time. To satisfy both of these priorities, the musical environment "Particles" uses a mapping in which small discrete movements trigger short, discrete sounds, while larger flowing movements make rich conglomerations of those same sounds, which are then further modified by the shape of the user.
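The following hedged sketch illustrates the "Particles" mapping logic in Python: a per-frame quantity-of-motion value from the tracker is turned into grain-trigger decisions, sparse for small gestures and dense for large flowing ones. The thresholds and the trigger_grain callback are assumptions for illustration, not the device's actual code.

```python
# Hypothetical per-frame mapping: quantity of motion -> grain triggers.
import random

def particles_mapping(qom, trigger_grain):
    """qom: quantity of motion in [0, 1] for the current camera frame.
    trigger_grain: hypothetical callback into the sound engine."""
    if qom < 0.02:
        return                      # stillness -> silence
    # The probability of a grain this frame grows with movement, so small
    # gestures yield single grains, sweeping ones a dense conglomeration.
    if random.random() < qom:
        trigger_grain(amplitude=min(1.0, 4 * qom), duration=0.05)
```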
|| Oeyvind BRANDTSEGG
The project explores cross-adaptive processing as a drastic intervention in the modes of communication between performing musicians. Digital audio analysis methods are used to let features of one sound modulate the electronic processing of another, allowing one performer's musical expression on his or her instrument to effect radical changes in another performer's sound. This affects the performance conditions for both musicians. The project's method is based on iterative practical experimentation sessions: the processing tools and the composed interaction mappings are refined with each iteration, and different performative strategies are explored. All documentation and software are available online as open source and open access.
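A minimal offline sketch of the cross-adaptive idea, under assumed signal names: a feature extracted from performer A's signal (here an RMS envelope) modulates the processing applied to performer B's signal (here a time-varying one-pole lowpass). The project's real tools run in real time; this mapping is illustrative only.

```python
# Cross-adaptive processing sketch: A's loudness controls B's brightness.
import numpy as np

def rms_envelope(x, win=1024):
    kernel = np.ones(win) / win
    return np.sqrt(np.convolve(x**2, kernel, mode="same"))

def cross_adaptive_lowpass(a, b, sr, f_lo=200.0, f_hi=8000.0):
    """a, b: equal-length mono signals; returns B filtered under A's control."""
    env = rms_envelope(a)
    env = env / (env.max() + 1e-12)         # 0..1 control signal from A
    cutoff = f_lo * (f_hi / f_lo) ** env    # louder A -> brighter B
    y = np.zeros_like(b)
    z = 0.0
    for n in range(len(b)):                 # time-varying one-pole lowpass
        g = 1.0 - np.exp(-2.0 * np.pi * cutoff[n] / sr)
        z += g * (b[n] - z)
        y[n] = z
    return y
```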
The project is run by the Music Technology group of the Norwegian University of Science and Technology, Trondheim. Collaboration partners include De Montfort University, Maynooth University, Queen Mary University of London, the Norwegian Academy of Music, the University of California San Diego, and a range of fine freelance performers.
The presentation will look at key findings, artistic and technical issues, and future potential.
| Thomas DEUEL
The Encephalophone is a hands-free musical instrument and musical prosthetic. It measures the EEG "brain-wave" signal to let users generate music in real time using thought alone, without movement. Using unique brain-computer interface (BCI) algorithms, it harnesses the user's electrical brain signals, driven by mental imagery, so it can serve as a musical prosthetic for paralyzed individuals. It has been experimentally shown to work with reasonable accuracy and is now being used in clinical trials with patients paralyzed by stroke, MS, ALS, or spinal cord injury. Patients who have lost their musical abilities to neurological disease are thus empowered to create music in real time for the first time since their injury, with no movement required.
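As a generic, heavily hedged illustration of one BCI building block (not Deuel's actual algorithm), the sketch below estimates mu/alpha-band (8–12 Hz) power from a short EEG window and maps it to a note in a scale; motor imagery modulates this band, which is what permits control without movement.

```python
# Generic BCI building block: alpha-band power from one EEG channel
# mapped to a scale degree. Purely illustrative.
import numpy as np
from scipy.signal import welch

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]      # MIDI note numbers

def eeg_to_note(window, sr=256):
    """window: 1-D array of EEG samples from one electrode."""
    freqs, psd = welch(window, fs=sr, nperseg=min(len(window), sr))
    band = (freqs >= 8) & (freqs <= 12)
    power = psd[band].mean()
    rel = power / (power + psd.mean() + 1e-12)  # squash into (0, 1)
    degree = int(rel * len(C_MAJOR)) % len(C_MAJOR)
    return C_MAJOR[degree]
```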
Garth Paine presents his artistic residency project in the context of the IRCAM and ZKM residencies.
Future Perfect will be a concert performance using smartphone virtual-reality technologies and ambisonic/wave-field sound diffusion.
Future Perfect explores the seam between virtual reality as a documentation format for environmental research and as a way of archiving nature, combining the thoughts that:
1) 'nature' as we know it may, in the near future, exist only in virtual-reality archives, and
2) the notion of the virtual, a hyper-real imaginative world contained by technological mediation, can be presented to individuals as a personal experience.
The Future Perfect performance will not have a fixed point of view. Interactive crowd mapping using smartphone beacons will generate personal journeys through the work and determine each audience member's own viewing and listening perspectives. The work will draw on IRCAM's deep expertise in Wave Field Synthesis, which, through the smartphone tracking, will allow sonic objects to be attached to and follow people within the concert space. HOA ambisonics rendered with Spat will create an immersive sound field. Smartphone tracking will follow people within the concert space, using flocking and spatial spread to drive interactive musical and animation parameters.
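As a hedged sketch of the crowd-to-parameter idea, the code below reduces audience positions (as might come from smartphone beacons) to flock statistics, centroid and spread, and maps them to hypothetical musical parameters.

```python
# Crowd positions -> flock statistics -> control parameters.
# The parameter names and scalings are illustrative placeholders.
import numpy as np

def crowd_parameters(positions):
    """positions: (n_people, 2) array of beacon coordinates in metres."""
    pts = np.asarray(positions, dtype=float)
    centroid = pts.mean(axis=0)                             # where the flock is
    spread = np.linalg.norm(pts - centroid, axis=1).mean()  # how dispersed
    return {
        "pan_azimuth": float(np.arctan2(centroid[1], centroid[0])),
        "reverb_mix": float(np.clip(spread / 10.0, 0.0, 1.0)),
    }
```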
The work will be made from 360° VR footage shot by Paine in nature preserves in Paris and Karlsruhe, blended with procedural animations derived from plant images and with HOA recordings made by the composer at the same locations. Participants will be able to walk freely through the space, with vector lines drawn between people according to proximity and movement vectors. Other individuals will be indicated in the VR space as outlines, to make movement safe and to help develop a collective consciousness.
|| Emanuele PALUMBO
I would like to present my research work as a performative installation addressing several areas of interest: the relationship between the musical gesture and the physiological response of a saxophonist; the relationship between listening and the physiological "resonance" of a physiological performer; and, finally, the relationship between the two. These different types of interaction will be explored throughout the spaces and moments of an installation that, via a computer, automatically generates its own form. The physiological parameters of the musician and of the physiological interpreter are captured via the LISTEN system, processed by a computer, and used to create, in real time, both the electronic sounds and the score. The physiological interpreter is a dancer who takes different positions in the space: standing, sitting, lying down. Here my work is combined with that of a colleague, the choreographer Zdenka Brungot Svitekovà. Zdenka works on the somatic power of certain techniques for manipulating the body: another dancer is responsible for guiding the "physiological interpreter", creating changes in the quality of the fabric of the body; the result is a corresponding change in the music generated. People entering the installation space are also invited to interact with the physiological interpreter. In a third part of the installation, we explore the relationship between the saxophonist and the physiological interpreter.
The installation will also display the technology used and the data captured: monitors showing this information will be visible to the participants.
At the end of the 30-minute performance, I will present the installation.