“Like an auto-tune, but for emotions” (Brian Resnick, for Vox.com)
DAVID (Da Amazing Voice Inflection Device) is a free, real-time voice transformation tool that can “colour” any voice with an emotion that was not intended by its speaker.
DAVID is a Max project available for free to the Forum community after registration (follow download link to proceed).
Voice processing for cognitive neuroscience
DAVID was especially designed with the affective psychology and neuroscience community in mind, and aims to provide researchers with new ways to produce and control affective stimuli, both for offline listening and for real-time paradigms.
DAVID was extensively validated for use in psychological and neuroscientific experiments (Rachman, L., Liuni, M., Arias, P., et al., Behav Res, 2017). In listening experiments conducted at IRCAM (France), UCL (UK), Lund University (Sweden) and Waseda University (Japan), we found that emotional transformations made with DAVID were well recognized and sounded as natural as non-modified expressions by the same speaker; that the emotional intensity of the transformation could be controlled; and that the transformations remained valid across several languages, namely French, English, Swedish and Japanese. In fact, even the speakers themselves mistook the manipulated speech for their own.
In addition, DAVID works in real time: speech can be transformed as it is spoken, e.g. over the phone. With modern audio interfaces, we reached in/out latencies as low as 15 ms, which makes it usable even as vocal feedback to a speaker without disrupting speech production.
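DAVID itself is implemented as a Max patch, so its exact processing chain is not reproduced here. Purely as an illustration of the kind of effect involved, the sketch below implements a simple vibrato (periodic pitch modulation, one cue used in emotional speech) in Python with NumPy, by reading the signal through a sinusoidally modulated delay line. The rate and depth values are arbitrary illustrations, not DAVID's parameters, and the code runs offline on a buffer rather than on a live stream.

```python
import numpy as np

def vibrato(signal, sr, rate_hz=6.0, depth_ms=0.5):
    """Apply vibrato by reading `signal` through a delay line whose
    length oscillates sinusoidally. Rate/depth are illustrative only."""
    n = len(signal)
    t = np.arange(n) / sr
    base = depth_ms / 1000.0  # base delay in seconds
    # Delay oscillates between 0 and 2*base seconds
    delay = base * (1.0 + np.sin(2 * np.pi * rate_hz * t))
    # Fractional read positions into the original signal
    read_pos = np.clip(np.arange(n) - delay * sr, 0, n - 1)
    # Linear interpolation between neighbouring samples
    i0 = np.floor(read_pos).astype(int)
    i1 = np.minimum(i0 + 1, n - 1)
    frac = read_pos - i0
    return (1 - frac) * signal[i0] + frac * signal[i1]

# Example: one second of a 220 Hz tone at 44.1 kHz
sr = 44100
tone = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)
out = vibrato(tone, sr)
```

A real-time version would apply the same delay-line idea to small successive input buffers, which is what keeps round-trip latencies in the low-millisecond range.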
Design and Development
DAVID was developed by the CREAM Neuroscience Lab at IRCAM with funding from the European Research Council, in collaboration with Petter Johansson and Lars Hall (Lund University, Sweden), Rodrigo Segnini (Siemens, Japan), Katsumi Watanabe (Waseda University, Japan), and Daniel Richardson (University College London, UK). DAVID was so named after Talking Heads’ frontman David Byrne, whom we were privileged to count among our early users in March 2015.
See the readme.
The following video is a tutorial presented at the 4th International Conference on Music and Emotion, October 2015 in Geneva (Switzerland).