
IRCAM Forum – WOCMAT Workshops Preliminary Program – Taiwan

At Kainan University (Taoyuan, Taiwan)

From 14th to 16th December 2016

Please note: this preliminary program is subject to future modifications or additions.

Detailed information with abstracts / biographies:

Music presentation


Wednesday 14th, December


Parallel Sessions

Time Program Chair
08:40-09:30 Registration Preparatory Office
09:30-10:00 Opening ceremony President Jung-Hui Liang, Dean Yao-Ming Yeh
10:00-10:20 Jeff HUANG (Taiwan University), Paola PALUMBO and Emmanuel JOURDAN (IRCAM)
General presentation of the WOCMAT-IRCAM Forum conferences and presentation of the IRCAM Forum
Prof. Yu-Chung Tseng
10:20-11:10 Ken PAOLI (College of Du Page, USA)
Macrostructural Aspects of Algorithmic Composition: Using Schenkerian Concepts to Shape Sections of a Composition and to Construct Larger Compositions
Prof. Chao-Ming Tung
11:10-11:20 Break
11:20-12:10 Emmanuel JOURDAN (IRCAM)
Born in 1980, Emmanuel Jourdan began studying the clarinet at the age of 8, and later studied computer music at the National Music School of Montbéliard and at the Conservatory of Besançon. He received many awards, honors, and scholarships in clarinet, chamber music, conducting, computer music, and computer-assisted composition. From 1998 to 2001 he taught clarinet and played with several orchestras in eastern France. Since 2001 he has been working in the IRCAM pedagogy department, teaching computer music to young students and music educators, and was involved in the development of “Music Lab”, a project for the French National Ministry of Education. Since 2003, he has been principally involved in teaching Max/MSP/Jitter to the students of IRCAM’s year-long cursus, as well as conducting workshops and seminars on IRCAM software. In 2006, he worked on Kaija Saariaho’s opera “Adriana Mater”, premiered at the Opéra Bastille in Paris. Since 2006, he has been working as a developer for Cycling ’74, working on Max, Max for Live, and Mira. He founded e—j dev in 2013 to deepen his work at the frontier between artists and scientists.


Max was born at IRCAM in the eighties to help composers realize their ideas. Almost three decades later, Cycling ’74 continues to develop the software, which is used in every production at IRCAM and around the world. This presentation will cover a brief history of the development of Max and focus on the technologies that have been developed at IRCAM to extend Max’s capabilities.

Prof. Chih-Fang Huang
12:10-13:40 Lunch buffet
13:40-14:20 Thibaut CARPENTIER (IRCAM)
New Trends in IRCAM Development
Prof. Yu-Chung Tseng
14:20-15:10 Grégoire LORIEUX (IRCAM)

Since its foundation by Pierre Boulez in 1977, IRCAM has developed manifold activities around creation, research, and transmission. Many of the products we offer to the public are both the results and the mirror of this intensive activity in diverse fields of research, and serve as a repository of this knowledge. All of this technological, musical, and intellectual experience is transmitted to younger musicians, composers, and scientists through a series of pedagogical activities.
Prof. Chao-Ming Tung
15:10-16:00 Leigh LANDY (Music, Technology and Innovation Research Centre, De Montfort University)

As an experimental composer/musicologist who has been responsible for a modest number of technological developments, I feel it is my role to help create a musical future based on a strong foundation of the past. The fact is that a significant amount of experimental music involving technology seems to be pursuing technological goals more than musical ones. Perhaps analogously, some technology is being developed that is not (yet) relevant to music. Why can’t they be merged somehow? For whom is this music, and this technology, being made?
Furthermore, there are interesting changes taking place in music making in this early phase of a new century, at least in terms of experimentation. This involves a shift of focus from new musical languages, content, and use of space to the means of production (e.g., of sounds/samples, instruments, and music) and dissemination, all of this coexisting with commercial culture, of course. Given my interest in making experimental music accessible to a broad audience and in inviting greater participation, this keynote will focus on a number of problem areas and, more importantly, some opportunities for artists, scholars, and developers, in order to help us get ahead of the game within this field.
Prof. Takeyoshi Mori
16:00-16:10 Break
16:10-17:00 Benoit MEUDIC (IRCAM)

In May 2016, Thierry De Mey’s latest musical piece and first choreography, “Simplexity”, was premiered. This piece is the concretization of three years of research and exploration of different ways to give the composer’s imagination a concrete language, thanks to IRCAM’s software.
This presentation will focus on three aspects of the piece. First, we will present the algorithms and generative processes written in OpenMusic for generating sounds, musical scores, and gestures. In particular, a theoretical model of the multiphonics of a perfect string was implemented, and a generative compositional algorithm was developed that can build hundreds of musical scores from a given database of specific chords and rhythms. Then we will focus on different gestures, such as pendulum oscillations, snake displacements, or gibbons playing, that served as sources of inspiration for building physical models in Max/MSP. Last, we will present an interactive video gesture sound generation system that was used during the concert. A live demonstration of this system will be given.
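The generative idea described above, building many candidate scores from a small database of chords and rhythms, can be sketched as follows. This is a hedged Python illustration, not the piece's actual OpenMusic code; the chords, rhythmic cells, and function name are invented for the example.

```python
import random

# Hypothetical material standing in for the piece's database:
# chords as MIDI note lists, rhythmic cells as durations in beats.
CHORDS = [[60, 64, 67], [62, 65, 69], [59, 62, 67], [57, 60, 64]]
RHYTHMS = [[1, 1, 2], [0.5, 0.5, 1, 2], [2, 1, 1]]

def generate_score(n_events, seed=None):
    """Build one candidate score by pairing randomly chosen chords with
    randomly chosen rhythmic cells; each seed yields a different score,
    so hundreds of scores can be drawn from one small database."""
    rng = random.Random(seed)
    score, beat = [], 0.0
    for _ in range(n_events):
        chord = rng.choice(CHORDS)
        for dur in rng.choice(RHYTHMS):
            score.append((beat, chord, dur))  # (onset, pitches, duration)
            beat += dur
    return score
```

Running such a generator over a range of seeds and filtering the results against compositional constraints is one plausible way an algorithm of this kind can yield hundreds of distinct scores from the same material.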
Prof. Takeyoshi Mori
17:00-17:15 Conclusion of the day

Time Program Chair
8:40-18:00 Rehearsal
18:00-19:30 Dinner (on your own)
19:30-21:30 Concert Prof. Shing-Kwei Tzeng

Time Program Chair
8:40-12:10 Sound Gallery / Paper Poster
Prof. Ting-Yu Wang
12:10-13:40 Lunch buffet
13:40-16:00 Sound Gallery / Paper Poster Prof. Ting-Yu Wang

Lobby, International Conference Hall, Zhuo Ye Hall B110

19:30-21:30 Concert Prof. Shing-Kwei Tzeng

Thursday 15th, December


Parallel Sessions

Time Program Chair
9:30 – 10:00 Grégoire LORIEUX (IRCAM)
Presentation of Orchids and orchestration tools
Prof. Takeyoshi Mori
10:00 – 10:45 Gilbert NOUNO (IRCAM)
Gilbert Nouno will present research on concatenative synthesis originally developed by composer Ben Hackbarth in collaboration with IRCAM in the musical research residency program. The software, AudioGuide, analyzes databases of sound segments and arranges them to follow a target sound according to audio descriptors. The presentation of AudioGuide will be followed by a workshop in which participants will experiment with the software, which is based on the Python language and on IRCAM audio descriptors by the Sound Analysis and Synthesis research team. Gilbert Nouno will also present a general approach to a composing workflow for electronic events in the Antescofo environment, developed at IRCAM over the last years. The Antescofo language has now become a unique tool for real-time event management and enables new ways to think about electronic music scores. Recent musical works and compositions using this new approach will be discussed, followed by a workshop for composers and sound artists.
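The core idea of descriptor-driven concatenative synthesis (match each frame of a target sound to the corpus segment whose descriptors are closest) can be sketched in a few lines of Python. This is a toy illustration using just two descriptors, RMS energy and spectral centroid; it is not AudioGuide's actual implementation, and the function names are invented for the example.

```python
import numpy as np

def descriptors(frame, sr=44100):
    """Toy audio descriptors for one frame: RMS energy and spectral centroid."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1 / sr)
    rms = np.sqrt(np.mean(frame ** 2))
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    return np.array([rms, centroid])

def select_units(target_frames, corpus_segments, sr=44100):
    """For each target frame, return the index of the corpus segment whose
    descriptors are closest in Euclidean distance (the heart of unit selection)."""
    corpus_desc = np.array([descriptors(s, sr) for s in corpus_segments])
    # Normalize each descriptor dimension so distances are comparable.
    mean = corpus_desc.mean(axis=0)
    std = corpus_desc.std(axis=0) + 1e-12
    corpus_norm = (corpus_desc - mean) / std
    choices = []
    for frame in target_frames:
        d = (descriptors(frame, sr) - mean) / std
        choices.append(int(np.argmin(np.linalg.norm(corpus_norm - d, axis=1))))
    return choices
```

A real system such as AudioGuide works with many more descriptors and with overlap-add concatenation of the selected segments, but the matching step follows this nearest-neighbour logic.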
Prof. Takeyoshi Mori
10:45 – 11:45 Paper Presentation: Multimedia Interactions

1. Chih-Fang Huang and Yajun Cai: Real time automated accompaniment system
2. Ken Paoli: Phil Winsor’s Formosan Aboriginal Legends
3. Naotoshi Osaka and Kazuho Hara: A rule-based automatic music arrangement
4. Zhibo Xu and Dalei Fang: Body music: an attempt at hyper-musical representation through multiple sound processing approaches
5. Shing-Kwei Tzeng: The Arts of Tai-chi 42 Postures with Hoomei as Interactive Performance
11:45 – 12:00 Break
12:00 – 12:30 Yi-Cheng Lin (Composer)

Zoe (Yi-Cheng) Lin (1982- ), a contemporary classical and electronic music composer and iOS developer, received her Doctor of Musical Arts degree in composition from the University of Wisconsin-Madison at the age of 26. She currently works as an Assistant Professor in the Music Department of Fu-Jen University in Taipei, Taiwan. She is not only a composer whose works have been performed in the United States, France, the Czech Republic, Malaysia, and Taiwan, but also a pianist, percussionist, Japanese Tsugaru shamisen player, and Chinese qin (seven-string zither) player. A self-directed learner, she is fascinated by various kinds of knowledge, including sound design, quantum physics, Zen, Eastern culture and philosophy, as well as programming languages including Swift, Python, C#, HTML5, CSS, and JavaScript. She thus composes music with fresh sonorities and styles evoking oriental voices and imaginary science-fictional scenes.


Most of us believe that we are living in a three-dimensional world. However, based on recent quantum physics research, we might live in a world with eleven dimensions, of which we can sense only three, since the other eight are hiding inside them. Can you imagine a multi-dimensional world? What if we could sense all these other eight dimensions? Would other dimensions lead us into different worlds? Though I am not a scientist, I very much enjoy entertaining the possibility that other dimensions are hiding in our world. As a composer and an iOS developer, I thought I could explore this notion of multi-dimensionality in the form of electronic music, and through my work, which uses a lot of panning to give the audience a sense of space, together with headphones, a VR device, and an iOS app telling the story, provide the audience a “private and personal” experience of traveling through a series of imaginary multi-dimensional worlds.

The work contains six movements. Movement one is about Earth; the middle movements explore different worlds or dimensions; and the final movement relates to Earth again, but in new ways that are reflective of the musical and dimensional journey this piece sets out to pilot. The instrumental parts of this composition express the emotions associated with a multi-dimensional journey. The electronic parts represent the different kinds of scenery I imagine to be present in all of the new worlds. Though a world with more than three dimensions is intangible and beyond many people’s imagination, I do not compose a hodgepodge of nonsense sounds or notes to represent that which defies logic. I compose based on sounds generated in the real world and organize those sounds in an illogical-logical way in order to represent the different worlds in each movement.

I draw upon research suggesting that while dreaming, people are not limited by their neural system as much as when they are awake, and therefore can sense information from other dimensions, i.e. visualize their dreams. I intend to conceptualize, in audible ways, how the brain transfers (or translates) such information into recognizable images. The human brain, unfortunately, does not always accurately translate images and sounds; this is why, when we recall our dreams, the narrative, imagery, and the like are foggy. As such, the four middle movements of my piece will be distinct in their sounds in order to capture the diverse experience of this journey:

Mvt. II – A Metal World with Metal Air: The sound of metal suggests a world containing metal rocks and air composed of metallic materials.
Mvt. III – A Dry and Windy World: Sand and wind sounds suggest a world that is dry, barren, and even a bit boring, without any living things.
Mvt. IV – A Violent World with Fire: Suggests a world that is hot and violent, consumed by warfare and fires.
Mvt. V – A World Composed of Spirits: Suggests a world that is not tangible, containing nothing but spirits or will-o’-the-wisps. The sound of “spirit air” starts with short note values, which gradually become longer, in order to convey a world in which time (or the measurement of time) is not stable. I want to express the idea that this world is beyond our space-time dimensions and completely imaginary. I want to create a world whose existence scientists can neither prove nor disprove.

From Mvt. II to Mvt. V, the electronic music progresses from sounds that are hard and concise, gradually dissolving into sounds that are intangible and more akin to the spirits. Since these sounds are recognizable in our reality but organized in a way that is not quite logical, I am taking a page from surrealism.

In order to present the idea that all of these worlds and dimensions are unified under a grand physical rule, musical elements such as dynamics, pitch intervals, rhythmic patterns, register, and articulation are carefully interlocked, based upon total-serialism compositional methods.

Projecting an unknown multi-dimensional cosmos is not an easy thing to do, but I hope that through this demo I can lead the audience to a scientific world beyond our imagination.

12:30 – 13:45 Lunch
13:45 – 14:15 Yu-Chung Tseng
Yu-Chung Tseng, D.M.A., is an associate professor of computer music composition and the director of the music technology master’s program and of the laptop orchestra CLOrk at National Chiao Tung University in Taiwan, R.O.C.


This presentation introduces MusFit, an integrated wireless wearable interactive music system built in 2016 at the Music Technology Lab of National Chiao-Tung University in Taiwan. The system was designed with the intention of integrating the motion of the hands (fingers), head, and feet of a performer into music performance.
Wearable devices with embedded sensors have been widely adopted in the human-computer interaction and new interfaces for musical expression communities, and wireless wearable interactive music devices controlled by body posture are increasingly being developed. A well-known example is Laetitia Sonami’s Lady’s Glove, built and developed by Stein. Unlike the Lady’s Glove, MusFit integrates the motion of the hands (fingers), head, and feet of a performer.
The device consists of a pair of gloves, a pair of shoes, and a cap, in which various sensors are embedded to detect the body motion of the performer; the hardware includes an Arduino, Bluetooth, and the sensors themselves. The sensor data are transmitted to a computer via Bluetooth and mapped, through the algorithms of a Max program, into various parameters of sound effectors built in Max/MSP for interactive music performance. The mapped values are also used to trigger mode switches of the sound effectors or of the sound diffusion.
The ultimate goal of the system is to free the performing space of the player and to increase the technological transparency of performance and, as a result, to promote interest in interactive music performance. At the present stage, the prototyping of the system has reached the goal we expected. Further studies are needed to assess and improve the playability and stability of the system, so that it can eventually be employed effectively in concerts.
This research was supported by the Ministry of Science and Technology in Taiwan (R.O.C.) (Project No. 104WFA0650423).
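The kind of sensor-to-parameter mapping described above can be sketched in Python. This is a hedged illustration only: the mapping table, sensor names, and ranges below are invented for the example and are not MusFit's actual configuration (in the real system this mapping is done inside Max/MSP).

```python
def scale(value, in_lo, in_hi, out_lo, out_hi, clamp=True):
    """Linearly map a raw sensor reading into an effect-parameter range,
    much like Max's [scale] object."""
    if clamp:
        value = max(in_lo, min(in_hi, value))
    norm = (value - in_lo) / (in_hi - in_lo)
    return out_lo + norm * (out_hi - out_lo)

# Hypothetical mapping table: sensor name -> ((input range), (parameter range)).
MAPPINGS = {
    "glove_flex":    ((0, 1023), (0.0, 1.0)),   # finger bend -> filter cutoff (normalized)
    "cap_tilt":      ((-90, 90), (-1.0, 1.0)),  # head tilt -> stereo pan
    "shoe_pressure": ((0, 1023), (0.0, 2.0)),   # foot pressure -> reverb send
}

def map_sensors(readings):
    """Translate a dict of raw readings into effect-parameter values,
    ignoring any sensor without a mapping entry."""
    return {name: scale(v, *MAPPINGS[name][0], *MAPPINGS[name][1])
            for name, v in readings.items() if name in MAPPINGS}
```

The same scaled values could then be thresholded to trigger effector or diffusion mode switches, as the abstract describes.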

14:15 – 15:30 IRCAM – Pavlos ANTONIADIS

Pavlos Antoniadis is a Berlin-based pianist specializing in complex contemporary and experimental music, as well as a doctoral researcher at IRCAM and LabEx GREAM, Université de Strasbourg.


The proposed paper and demo introduce a model of embodied interaction with complex piano notation and a prototype interactive system for the gestural processing and control of musical scores.
In conclusion, we wish to present a performer’s perspective on the osmosis between contemporary performance practice, embodied cognition, and computer music interaction, by way of a theoretical model of embodied navigation of complex notation and an interactive system dedicated to it. This presentation affirms the centrality of gesture as an interface between physical energy and symbolic representations, and hopes to contribute to the discussion concerning the ontological status of gesture and notation in a digitally mediated world.

15:30 – 16:00 Clinton Watkins (artist and lecturer at Colab, Auckland)


The Invisible Narratives presentation is a 15-20 minute live sound performance that uses a specialized modular synthesizer system, a MIDI interface, and associated sequencing software. The focus of Invisible Narratives is on creating narratives that are purely sonic and imageless, using field recordings captured between 2012 and 2016 at various international locations in China, New Zealand, Australia, Europe, and America. I will apply the collected sounds to my newly established composition techniques and customized electronic hardware and software to produce compositions and performances that focus on the macrocosm of a particular location. The purpose of the performance is to evoke a visceral sense of isolation within another environment via new sound technologies.


Clinton Watkins investigates affect through the construction of combined immersive experiences of sound, colour, and scale. His work focuses on the characteristics, structures, phenomena, and processing of sonic and visual material. His installations incorporate found and custom-made audio and video hardware to create repetition, distortion, duration, and form, distilled via a minimalist sensibility. He has exhibited in solo and group exhibitions throughout New Zealand, Australia, Europe, Asia, and the United States. He is represented by Starkwhite Gallery, Auckland. Watkins is also a practicing experimental musician who regularly produces and performs as a solo artist and collaboratively, most recently working with artist Santiago Sierra and performing alongside free jazz saxophonist Peter Brötzmann. He holds a doctoral degree from the Elam School of Fine Arts, lectures in experimental time-based media, and is the Programme Leader of the Bachelor of Creative Technologies degree at AUT.

16:00 – 16:15 Break
16:15 – 16:45 Jongwoo Yim
16:45-17:00 Conclusions of the day

Time Program Chair
9:30-12:30 Sound Gallery / Paper Poster
Prof. Ting-Yu Wang
12:30-13:45 Lunch buffet
13:45-16:45 Sound Gallery / Paper Poster Prof. Ting-Yu Wang

Time Program Chair
9:30-12:30 Rehearsal
12:30-13:45 Lunch Buffet
13:45-16:45 Rehearsal Prof. Shing-Kwei Tzeng

Time Program Chair
10:45-12:30 Workshop on Max libraries Olivier PASQUET
Emmanuel JOURDAN
12:30-13:45 Lunch
13:45-16:45 Workshop on AudioSculpt Grégoire LORIEUX


International Conference Hall, Zhuo Ye Hall B110

19:30-21:30 Concert 2 (free admission) Prof. Chien-Wen Cheng

Friday 16th, December


Parallel Sessions

Time Program Chair
9:30-10:20 IRCAM – Olivier PASQUET (Computer Music Designer and artist)
Presentation of the jTol rhythm library
Prof. Chao-Ming Tung
10:20-11:10 Paper Presentation: Music Data Processing

1. Stone Cheng, Shi-Shiang Niu and Cheng-Kai Hsu: Study of soundscape emotion alteration by a blend of music signals
2. Dong Zhou: Interactive Environmental Sound Installation for Music Therapy Purpose
3. Ladislav Marsik: Java library and tools for chordal analysis
4. Anna Terzaroli: The Dissonance Notation
5. Byeongwon Ha: Diligent Operator: the resurrection of musique concrète with Max/MSP/Jitter and Arduino
11:10-11:50 Break
11:50 – 12:30 Chow Jun Yan


For the past few decades, researchers have investigated how musicians communicate and coordinate with each other as a performance unfolds. In general, verbal, non-verbal (eye contact, body language, etc.), and musical cues are employed by the musicians as a necessity for unifying the piece as a whole. In this proposal, I have extended the investigation from a single-modality performance (music) to a multi-disciplinary improvisation performance, in which one musician and one visual artist have been invited to improvise together within their own disciplines. A composition with minimal structure acts as the common ground and has been provided to the performers as general guidance. The performers use the materials of their own disciplines to coordinate, communicate, and interact with each other as the performance unfolds. Furthermore, instead of a laboratory setting, the investigation has been conducted within a ‘quasi-naturalistic’ setting of rehearsals and performances.

In this presentation, the strategies for preparation during the rehearsals will be presented, followed by a discussion of the communication strategies between a percussionist with live electronics and a digital visual artist during their live interactions. This identification will help provide a general understanding of how performers from different modalities manage to develop a moment-by-moment acute sense of coordination and communicate with each other as the performance unfolds. It will also shed light on criteria useful for constructing a media platform which can support multi-modality improvisation.

12:30-13:45 Lunch buffet
13:45-14:45 Paper Presentation: Audio Data Processing

1. Ho-Chun Herbert Chang and Spencer Topel: Sideband: An Acoustic Amplitude Modulation Synthesizer
2. Li-Chuan Tang: A design for a spectral-resolved music-colour display scheme
3. Natalie Yu-Hsien Wang, Fan-Pei Gloria Yang, Chen-Pei Lin, Tung-Mao Chiang and Yen-Ting Lai: Enhancement of brain networks after music therapy
4. 妍苑 高, Shen Lin, Chih-Fang Huang and Yancong Su: A pilot study on the interactive music biorobot integration
5. Chow Jun Yan: Exploring Co-performer Communication in Sound-Visual Improvisatory Performance
14:45-15:00 Break
15:00-16:00 Award ceremony
Award ceremony of the Sound Installation and Multimedia Exhibition and of the 2016 Joint WOCMAT-IRCAM Forum Conferences (free admission)
16:00-16:20 Conclusion of the Forum IRCAM-WocMat

Time Program Chair
10:20-12:30 Workshop on OpenMusic Benoit MEUDIC
12:30-13:45 Lunch
13:45-16:00 Concatenative synthesis Workshop Gilbert NOUNO


program (pdf)


Last Update : December 12, 2016