
alarm/will/sound: A Multidisciplinary Research and Installation Project


An overview of the IRC Résidence en Recherche Musicale project “alarm/will/sound,” realised as a collaboration between composer Alexander Sigman, the IRCAM Sound Perception and Design (SPD) research team, and Stuttgart-based product designer/visual artist Matthias Megyeri. This article provides a bird’s-eye view of the project’s objectives, phases, and results, peppered with media examples.

Car alarms. They’re loud, they’re annoying, and frankly, no one even listens to them.

For these reasons, the audible car alarm has not proven to be as effective a deterrent as it has been a source of noise pollution.
While other sonorous components of the automobile (e.g., the audio system, engine, cabin, doors, turn signal, horn, exhaust system) have developed significantly in recent decades, particularly with the advent of the electric car, the car alarm system has received comparatively little attention. Needless to say, non-audible car alarm systems such as LoJack and others employing GPS technology have been associated with comparatively high vehicle recovery rates. In the context of this project, we have not been (exclusively) interested in improving the performance of car alarms in general, but rather in investigating hitherto unexplored potentials of expanding the audible car alarm with respect to sound vocabulary and interaction paradigms.

Who are “we” and why are we interested in car alarms? 

The IRCAM Sound Perception and Design (SPD) team has a history of collaborations with the auto industry. Research topics have ranged from car horn sound quality, to perceived urgency in audio features and its application to the design of in-car human-computer interfaces, to the sonification of electric cars. Recently, the SPD team’s sound design (with composer A. Cera) for the Renault Frendzy concept car received substantial publicity.
Matthias Megyeri’s work has spanned the artistic and commercial sectors. With Sweet Dreams Security®, his commercial home security company and brand established in 2003, Megyeri proposes a change of climate in the way one approaches the “institution of security”: he rethinks the way security is traditionally represented and creates a line of alternative products, all through a simple gesture. This gesture replaces fear of others with a friendly proposal, smiling at them (sometimes literally, sometimes conceptually) and positioning them not as potential criminals, but as potential friends. Since 2006, five works from the Sweet Dreams Security® line have been part of the permanent collection of the Museum of Modern Art in New York.
A Stuttgart native, Megyeri has also maintained a long-standing interest in directly engaging with the automobile industry dominant in the region (Stuttgart is home to both Mercedes and Porsche)—with respect to both car design practices and relationships between car manufacturers and artistic institutions and artists.
Here are a few popular items from his Sweet Dreams Security line:
[Images: CCTV camera, padlock, and fence designs]
As a composer, Alexander Sigman has recently been interested in the influence of the sounds of physical environments on the aesthetics of composers and sound artists, as well as the impact of composers and sound artists on physical environments. Many of his recent ensemble, electroacoustic, installation, and media works deal with the reconstruction of, interaction with, and importation of sonic source materials from urban and industrial environments. In addition to his compositional activities, Sigman has a background in cognitive science (music cognition and timbre perception in particular), and has thus approached the project from both artistic and research perspectives.

The Interesting Thing about Car Alarms [sic!]: Public/Private Space 

When activated, the car alarm effectively creates an invisible, nebulous boundary between the private space of the vehicle and the public space surrounding it, extending beyond the vehicle’s visible, physical boundary. Judging by the number of YouTube videos of people dancing to car alarms, this grey zone between private property and public space has often been creatively navigated.
Who is ultimately the target of the audible car alarm? The perpetrator? Unlikely, as audible alarms have been proven to be ineffective deterrents. The car owner, who may be out of range of the alarm? The public, which tends to ignore or flee sounding car alarms?

Fundamental Questions and Alternative Models

Wherein lies the kernel identity of the car alarm? Does it lie in the hardware components and the context? Or in the sounds that it produces and the mode of interaction by which it is activated? If the latter, is it possible to transform the alarm from a deterrence mechanism into a mechanism of engagement: a sort of virtual instrument that the car owner or passerby learns to play and manipulate, that generates audio-visual feedback, and that responds intelligently and dynamically to the “performer’s” actions? Returning to the basic mechanics of the alarm system, why restrict an audio-producing sensor system to sensitivity to physical proximity alone? Why not expand the array of physical parameters to which the alarm is sensitive, as well as the information transmitted to the car owner (or the unsuspecting passerby)?

Project Phases and Milestones

In order to work towards achieving the large-scale, utopian vision outlined in the previous paragraph, the project was segmented into four primary phases. The first phase (January-February 2013) was devoted to the elaboration of a multi-category sound corpus and the conceptualisation of the project. The second (September 2013-February 2014) was devoted to conducting a sound perception experiment on sound source identifiability, and to constructing an acoustic descriptor space in which to situate the sounds used in the experiment. At the moment, we are in the midst of preparing a second experiment on urgency vs. attraction ratings of newly synthesised auditory warnings. Once the new car alarm prototype software and hardware components (informed by the results of the experiments and the acoustic modelling) have been sufficiently developed and several proposed interaction designs fully implemented, the prototypes will be presented in art installation contexts.
Here is an overview:

Phase I: Sound Catalogue Construction and Characterisation

A sound corpus was constructed, consisting of the entries belonging to the following categories: natural/vocal (human/animal), industrial/mechanical, synthetic/electroacoustic, typical horror film danger signals, real car alarm sounds, and “auditory scenes” comprised of recorded and synthetic complexes of individual sounds. The sounds were either derived from auditory databases and field recordings of urban centres, or generated/processed via a variety of synthesis techniques realised in AudioSculpt, CataRT, AudioGuide, and Max/MSP.
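A corpus of this kind is easiest to work with when each entry carries its category and provenance as structured metadata. The sketch below is purely illustrative: the class and field names are assumptions, not the project's actual schema, and only the category labels come from the article.

```python
from dataclasses import dataclass, field

# Hypothetical corpus entry; field names are illustrative, not IRCAM's schema.
@dataclass
class CorpusEntry:
    name: str
    category: str          # one of the taxonomy categories below
    source: str            # e.g. "field recording", "database", "synthesis"
    tags: list = field(default_factory=list)

# Category labels taken from the article's taxonomy
CATEGORIES = {
    "natural/vocal",
    "industrial/mechanical",
    "synthetic/electroacoustic",
    "horror-film danger signal",
    "real car alarm",
    "auditory scene",
}

def by_category(corpus, category):
    """Return all entries belonging to one taxonomy category."""
    if category not in CATEGORIES:
        raise ValueError(f"unknown category: {category}")
    return [e for e in corpus if e.category == category]

corpus = [
    CorpusEntry("drill_01", "industrial/mechanical", "field recording"),
    CorpusEntry("siren_hybrid", "synthetic/electroacoustic", "synthesis"),
]
print([e.name for e in by_category(corpus, "industrial/mechanical")])
```

Keeping the category vocabulary closed (a fixed set rather than free-text strings) is what later makes systematic search and tagging of the corpus possible.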
Here is the current sound corpus taxonomy:
And here is a sampling of the sound corpus:

Phase II: Experiment 1 and Acoustic Modelling 

A selection of sounds from the industrial/mechanical category served as stimuli for an experiment on sound causality (source) identifiability and typicality; this category was chosen for its size and scope. Conducted via an online, crowdsourced platform, the experiment required subjects to a) describe the sound source and b) rate their confidence in identifying the source on a 1-5 scale.
The experiment is still “live,” and may be accessed here.  If you complete the trial, you will get the gist of the tasks and the interface.
The results of the experiment enabled us to order the 39 stimuli used on an abstractness-iconicity scale, and to establish thresholds between abstract, iconic, and in-between stimuli.
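The ordering step can be sketched very simply: average each stimulus's 1-5 confidence ratings and sort. The data, the threshold values, and the three-way labelling below are all invented for illustration; the article only states that thresholds between abstract, iconic, and in-between stimuli were established.

```python
# Illustrative ratings: stimulus -> list of 1-5 confidence scores
ratings = {
    "hammer": [5, 4, 5, 4],
    "hybrid_noise": [1, 2, 1, 2],
    "motor_hum": [3, 3, 2, 4],
}

def mean(xs):
    return sum(xs) / len(xs)

# Order stimuli from most abstract (low confidence) to most iconic (high)
ordered = sorted(ratings, key=lambda s: mean(ratings[s]))

def classify(score, lo=2.0, hi=4.0):
    """Hypothetical thresholds splitting the scale into three regions."""
    if score < lo:
        return "abstract"
    if score > hi:
        return "iconic"
    return "in-between"

for s in ordered:
    print(s, round(mean(ratings[s]), 2), classify(mean(ratings[s])))
```

With these toy numbers, `hybrid_noise` (mean 1.5) falls below the abstractness threshold, `hammer` (mean 4.5) above the iconicity threshold, and `motor_hum` (mean 3.0) in between.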
We then constructed an acoustic descriptor space in which to situate the stimuli that fell below the iconicity threshold. After considering multiple approaches, we selected two perceptually salient acoustic dimensions: Perceptual Spectral Centroid (x axis) and Harmonic/Noise Energy Ratio (y axis). This descriptor space was also populated with cross-synthesised hybrids of the stimuli. A third dimension, Amplitude Modulation Depth, was later added.
In this diagram, the numbers highlighted in blue correspond to hybrid sounds:
Besides representing relative perceived distances between sounds, this acoustic descriptor space enables one to generate and situate new stimuli for research and artistic purposes, to tag the stimuli with acoustic descriptors, and to search the sound corpus in a systematic way.
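To make the first axis concrete: the spectral centroid is the amplitude-weighted mean frequency of a signal's spectrum. The sketch below computes a plain (non-perceptual) centroid with a naive DFT; the project's Perceptual Spectral Centroid involves perceptual weighting beyond this, so treat the code as a simplified stand-in.

```python
import cmath, math

def dft_mag(signal):
    """Magnitude spectrum via a naive DFT (fine for short illustrative signals)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def spectral_centroid(signal, sr):
    """Amplitude-weighted mean frequency: a simple stand-in for the
    perceptual spectral centroid used in the project."""
    mags = dft_mag(signal)
    freqs = [k * sr / len(signal) for k in range(len(mags))]
    total = sum(mags)
    return sum(f * m for f, m in zip(freqs, mags)) / total if total else 0.0

sr = 8000
# 437.5 Hz sits exactly on a DFT bin (28 cycles in 512 samples),
# so the centroid of a pure tone lands on the tone's frequency.
tone = [math.sin(2 * math.pi * 437.5 * t / sr) for t in range(512)]
print(round(spectral_centroid(tone, sr), 1))  # 437.5
```

A noisy signal spreads energy across the spectrum and pulls the centroid towards the middle of the band, which is one intuition for why centroid and harmonic/noise energy ratio together separate these sounds reasonably well.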

Phase III: Auditory Warning Construction and Experiment 2

A new series of auditory warnings has been assembled from a) nine source sounds positioned towards the centre and at the extremes of the acoustic descriptor space; b) six typical alarm profiles (morphologies) derived from a 2010 study by Minard, Misdariis et al.; and c) the inter-onset intervals (IOIs) of the six sounds comprising the standard car alarm repertoire.
[Figures: the six morphologies and the six IOIs (car alarm sounds)]
The auditory warnings thus generated will be tested for levels of perceived urgency vs. comfort (or repulsion vs. engagement) in a second experiment, to be launched within the next couple of months. As with Experiment 1, it will be conducted entirely online.
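Structurally, each candidate warning pairs one source sound with one morphology and one IOI pattern. The sketch below enumerates the full cross product of the three factors; the labels are placeholders, and the article does not state that every combination was actually synthesised, so this only illustrates the size of the design space.

```python
from itertools import product

# Placeholder labels; the actual sources come from the descriptor space,
# and the morphologies from Minard, Misdariis et al. (2010).
sources = [f"src_{i}" for i in range(1, 10)]                    # 9 source sounds
morphologies = ["m1", "m2", "m3", "m4", "m5", "m6"]             # 6 profiles
ioi_patterns = ["ioi1", "ioi2", "ioi3", "ioi4", "ioi5", "ioi6"]  # 6 IOI patterns

# Each candidate warning = one source shaped by one morphology,
# repeated according to one IOI pattern
candidates = [
    {"source": s, "morphology": m, "ioi": i}
    for s, m, i in product(sources, morphologies, ioi_patterns)
]
print(len(candidates))  # 9 * 6 * 6 = 324 candidate warnings
```

Even with only three factors, 324 candidates is far too many to rate exhaustively online, which is presumably why a subset must be selected for the urgency-vs.-attraction experiment.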

Future Work: Interaction Design, Presentation, and User Experience Documentation 

The results obtained from the experiments and the methodologies employed in constructing the acoustic descriptor spaces will inform the development of our own car alarm prototypes. These prototypes will be exhibited as interactive multimedia installations both in the gallery context, and in public spaces (i.e., embedded in parked vehicles).
Aside from presenting the prototypes to the public, these installations will serve as a testing ground for various human-alarm interaction models. The alarms may be triggered directly or remotely (via iPad/laptop control stations), will respond to an array of physical parameters, and will in some cases be self-triggering (i.e., non-interactive). User experiences with each interaction model will be evaluated by collecting data from the control stations, distributing visitor surveys, and analysing camera images.
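Evaluating interaction models across direct, remote, and self-triggered modes presupposes a common event log. The minimal sketch below shows one way such logging might look; the class, trigger-source names, and parameter fields are all assumptions for illustration, not the project's actual telemetry.

```python
import time

class AlarmPrototype:
    """Hypothetical prototype that records every triggering event."""

    def __init__(self, name):
        self.name = name
        self.log = []

    def trigger(self, source, parameters=None):
        """Record one event: direct touch, a remote control station,
        a sensor reading, or a self-triggered timer."""
        event = {
            "alarm": self.name,
            "source": source,              # e.g. "proximity", "ipad", "timer"
            "parameters": parameters or {},
            "timestamp": time.time(),
        }
        self.log.append(event)
        return event

    def events_by_source(self, source):
        return [e for e in self.log if e["source"] == source]

alarm = AlarmPrototype("prototype_A")
alarm.trigger("proximity", {"distance_m": 0.8})
alarm.trigger("ipad", {"pattern": "pulsed"})
alarm.trigger("timer")  # self-triggering, non-interactive mode
print(len(alarm.events_by_source("ipad")))  # 1
```

Logging all modes into one structure makes it straightforward to compare, say, how often visitors engage via the control stations versus walking into a sensor's range.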
In the future, we intend to introduce an open-source aspect to this project. Namely, members of the public will be able to upload their own recordings, which will be systematically edited, tagged, and catalogued.

Presentations and Publications: 
