

Toolbox for multimodal analysis of sound and motion, interactive sound synthesis and machine learning.

MuBu for Max is a toolbox for multimodal analysis of sound and motion, interactive sound synthesis, and machine learning. It includes real-time and batch data processing (sound descriptors, motion features, filtering); granular, concatenative, and additive synthesis; data visualization; static and temporal recognition; and regression algorithms.

The MuBu multi-buffer is a container providing structured memory for real-time processing. The buffers of a MuBu container associate multiple tracks of aligned data, forming complex data structures such as:

  • segmented audio data with audio descriptors and annotations
  • annotated motion capture data
  • aligned audio and motion capture data

Each track of a MuBu buffer represents a regularly sampled data stream or a sequence of time-tagged events such as audio samples, audio descriptors, motion capture data, markers, segments, and musical events.

Apart from its SDIF-based native file format, a MuBu container can import data from various file formats, including common audio file formats (AIFF, RIFF, Ogg/Vorbis, etc.), SDIF, plain text, and standard MIDI files.

MuBu for Max is distributed as a set of core externals providing functions for visualizing, manipulating, and recording MuBu data structures:

  • mubu … multi-buffer container
  • imubu … container with graphical user interface
  • mubu.track … optimized access to a track
  • mubu.record, mubu.record~ … recording data streams and sequences
  • mubu.process … processing data streams

Further externals are dedicated to data modeling and sound synthesis:

  • mubu.knn … k-NN unit selection using a kD-Tree
  • mubu.concat~ … segment-based synthesis
  • mubu.granular~ … granular synthesis
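The selection step behind mubu.knn can be illustrated with a short sketch: given a target descriptor vector, return the k stored units whose descriptors are closest. This brute-force Python version (hypothetical data, not the Max API) shows only the principle; the actual external uses a kD-tree to make the search efficient.

```python
import heapq
import math

# Brute-force k-NN unit selection, illustrating what mubu.knn computes
# conceptually. The real external accelerates this search with a kD-tree;
# the data here is hypothetical.

def knn_select(units, target, k=3):
    """Return indices of the k units whose descriptors are closest to target."""
    def dist(u):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, target)))
    return heapq.nsmallest(k, range(len(units)), key=lambda i: dist(units[i]))

# Units described by (pitch, loudness) pairs, e.g. from segmented audio:
units = [(60.0, -12.0), (65.0, -10.0), (72.0, -20.0), (61.0, -11.0)]
print(knn_select(units, target=(61.0, -11.0), k=2))  # → [3, 0]
```

In a concatenative-synthesis patch, the returned indices would then drive mubu.concat~ to play back the selected segments.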

In addition, the distribution includes an experimental set of stream processing and analysis operators implemented as PiPo* modules:

  • pipo.mel … extract mel-frequency bands from an audio stream
  • pipo.mfcc … extract mel-frequency cepstral coefficients from an audio stream
  • pipo.psy … extract pitch-synchronous YIN-based markers from an audio stream
  • pipo.mvavrg, pipo.median … moving average and median filters on arbitrary streams
  • pipo.slice, pipo.fft, pipo.bands, pipo.dct … low-level modules calculating audio frames, fast Fourier transform, spectral bands, and discrete cosine transform
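As a rough sketch of what pipo.mvavrg and pipo.median compute, the snippet below reduces a sliding window over the last few frames of a stream by mean or median. This is illustrative Python only; the real PiPo modules are C++ stream operators.

```python
from collections import deque
import statistics

# Sliding-window filtering over a stream, as pipo.mvavrg (mean) and
# pipo.median (median) do conceptually. Illustrative only.

def moving_filter(stream, size, reduce):
    """Apply `reduce` over a sliding window of at most `size` recent values."""
    window = deque(maxlen=size)
    out = []
    for x in stream:
        window.append(x)
        out.append(reduce(window))
    return out

signal = [0.0, 1.0, 10.0, 1.0, 0.0]  # spike at index 2
print(moving_filter(signal, 3, statistics.mean))
print(moving_filter(signal, 3, statistics.median))
```

Note how the median filter suppresses the single-sample spike while the moving average only smears it, which is why median filtering is often preferred for removing outliers from motion-capture or descriptor streams.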

These PiPo modules can be used as operators with the mubu.process external as well as with the Max externals pipo and pipo~ that are distributed with MuBu for Max.

* The PiPo plugin interface for processing objects is a plugin API for modules processing multi-dimensional data streams. The formalization of PiPo data streams is very close to that of MuBu tracks and the SDIF file format.