## AudioSculpt

Public group. User Group for *AudioSculpt*, AudioSculpt Lite and Analysis/Synthesis Command-line Tool users.

### Yves Meyer's theory

Author | 2 subscribed users
---|---

Antoine ESCUDIER
Hello,

May 23, 2017 at 17:31 #22436

Axel Roebel
Hello Antoine, the different types of transforms, including Fourier and wavelets, all have strengths and weaknesses. Signal transformation with spectral representations can be significantly simplified if the spectral representation is adapted to the type of operations that are to be performed on the signals. Transforming harmonic signals, for example, is significantly simplified if the sinusoids are individually resolved in the representation. Now, music and speech are basically composed of harmonic signals, and therefore the use of a standard DFT representation (with a constant, but optimally time-adaptive, frequency resolution) makes the mathematics required for signal manipulation much easier than the equivalent operations on wavelet-based representations. On the other hand, there are many other tasks for which other types of frequency scales, closer to wavelets, are more suitable and more efficient.

So before I elaborate a little more on the question, I'd like to say that, especially for the modification of music and speech, wavelets are very impractical, and I therefore don't think that Meyer's work will end up in AudioSculpt or similar programs at some point in the future.

Looking a little further around the direct question, I would like to stress that the AAAS analysis, which you have been able to perform in AudioSculpt for about two years now, creates a signal representation with signal-adaptive resolution. These signal-adaptive representations have been made possible by the groundbreaking work on non-stationary Gabor frames, here notably with time-varying frequency resolution, building on results established by Monika Doerfler and colleagues. Compared to the frequency-dependent but nevertheless fixed time and frequency resolution of wavelets, we have here an approach that allows the frequency resolution to be adapted over time, which, from my perspective and for music and speech signals, is much more interesting.
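To make the point about resolved sinusoids concrete, here is a toy numpy sketch (not AudioSculpt code; the sample rate, window length and threshold are arbitrary choices for illustration): with a long enough analysis window, a plain Hann-windowed DFT separates the partials of a harmonic signal into individual spectral peaks, which is what keeps per-partial manipulation mathematically simple.

```python
import numpy as np

# A harmonic signal: fundamental at 200 Hz plus two harmonics.
sr = 16000                       # sample rate in Hz (arbitrary)
n = 4096                         # analysis window length (arbitrary)
t = np.arange(n) / sr
f0 = 200.0
x = sum(np.sin(2 * np.pi * k * f0 * t) for k in (1, 2, 3))

# Hann-windowed DFT: each partial shows up as its own peak.
win = np.hanning(n)
spec = np.abs(np.fft.rfft(x * win))
freqs = np.fft.rfftfreq(n, 1 / sr)

# Local maxima above a relative threshold: one per resolved sinusoid.
peaks = [freqs[i] for i in range(1, len(spec) - 1)
         if spec[i] > spec[i - 1] and spec[i] > spec[i + 1]
         and spec[i] > 0.1 * spec.max()]
# peaks lands near 200, 400 and 600 Hz (within one bin of 3.9 Hz).
```

Because the three partials occupy disjoint groups of bins, an operation such as attenuating or transposing one partial reduces to editing a small, well-identified region of the spectrum.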
Based on the work of Monika and colleagues, my former PhD student Marco Liuni has in fact developed an algorithm that performs this adaptation of the frequency resolution automatically (leading to the AAAS analysis). For the moment the integration of the algorithm into AudioSculpt is still quite rudimentary, but I hope we will find the means to integrate it better in the future.

I'd like to mention further that multi-resolution approaches have also been used to establish a significantly enhanced f0 estimation algorithm (the SWIPE algorithm by Arturo Camacho), which I hope we will be able to introduce into AudioSculpt in the not too distant future, and that we use frequency-dependent time and frequency resolution (not wavelets, but perceptual frequency scales), for example, in recent work on texture analysis/synthesis. Multi-resolution signal analysis is a rather large and complex field, but over time all of this will enter the mainstream of audio signal analysis. I hope this answers your question. With kind regards
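The idea of adapting resolution over time can be caricatured in a few lines. The sketch below is emphatically not Liuni's AAAS algorithm (which is entropy-based): the crest-factor transient detector, the window sizes and the thresholds are all invented for this example. It only illustrates the principle that frames around a transient want a short window (good time localization), while steady tonal frames want a long one (good frequency resolution).

```python
import numpy as np

sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440 * t)   # one second of a steady tone...
x[12000] += 5.0                   # ...with a click added in the middle

def is_transient(seg, crest=4.0):
    """Crude detector: a transient makes the peak sample stick far
    above the segment's RMS level (threshold invented for the demo)."""
    rms = np.sqrt(np.mean(seg ** 2)) + 1e-12
    return np.max(np.abs(seg)) > crest * rms

def window_choices(x, hop=512, short=256, long_=2048):
    """For each analysis frame, return the window length to use."""
    return [short if is_transient(x[s:s + long_]) else long_
            for s in range(0, len(x) - long_, hop)]

choices = window_choices(x)
# Only the frames whose span covers the click select the short window;
# the purely tonal frames all keep the long, high-resolution window.
```

A real adaptive analysis additionally has to reconcile the differently-resolved frames into one consistent, invertible representation, which is exactly what the non-stationary Gabor frame theory mentioned above provides.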

May 23, 2017 at 22:05 #22438

Antoine ESCUDIER
Hello Axel,

May 24, 2017 at 22:10 #22444
