Workshop on Sound Morphing and the Sonic Continuum: From Music Creation to Psychoacoustics

Organizers: Marcelo Caetano (McGill, Canada) and Charalampos Saitis (QMUL, London, UK)

Call for participation

The twentieth century witnessed a compositional paradigm shift from pitch and duration to timbre. The advent of the digital computer revolutionized the representation and manipulation of sounds, opening up new avenues of artistic and scientific exploration. The quest for “new timbres” led to the development of sound transformation techniques usually referred to as sound morphing. Uniquely situated at the crossroads of art and science, and thus highly relevant to the CMMR community, sound morphing makes it possible to create hybrid timbres along the sonic continuum between two sounds, with great creative and research potential.

This workshop will feature short tutorials and a group-based activity to stimulate cross-community knowledge exchange between art and science on sound morphing applications in computer music creative practice and psychoacoustical research. The workshop has three main goals:

  • to compile and openly share a list of available resources for sound morphing techniques and software;
  • to offer a hands-on demonstration of musical instrument sound morphing using the freely available Sound Morphing Toolbox;
  • to draft a research agenda for sound morphing that addresses issues such as interdisciplinary synergies and open access resources.

Much like the infamous Bush/Obama morph, this workshop promises to be thought-provoking and fun. We look forward to your participation!

Workshop motivations

Sound morphing is a sound transformation that gradually blurs the categorical distinction between the sounds being morphed by blending their sensory attributes. Sound-morphing techniques allow synthesizing sounds with intermediate timbral qualities, for example by interpolating sounds from different musical instruments. Sound morphing finds applications in music composition and performance because it makes it possible to create hybrid sounds that are intermediate between a source and a target sound.
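To make the interpolation idea concrete, the sketch below is a minimal illustration (not the SMT itself) of morphing the parameters of a sinusoidal model: the frequencies and amplitudes of a few partials are interpolated between a source and a target tone. The partial values and the choice of geometric interpolation for frequency are illustrative assumptions, not prescriptions from the workshop materials.

```python
import numpy as np

# Hypothetical partial parameters for two instrument tones:
# frequencies in Hz and linear amplitudes of the first three partials.
freq_src = np.array([220.0, 440.0, 660.0])
amp_src = np.array([1.00, 0.50, 0.25])
freq_tgt = np.array([261.6, 523.2, 784.8])
amp_tgt = np.array([0.80, 0.60, 0.10])

def morph_partials(f_a, a_a, f_b, a_b, alpha):
    """Interpolate partial parameters between two sounds.

    alpha = 0 returns the source, alpha = 1 the target,
    intermediate values yield hybrid timbres.
    """
    # Geometric interpolation of frequency tracks pitch perception
    # (equal steps in alpha give equal steps in log-frequency).
    f = f_a ** (1.0 - alpha) * f_b ** alpha
    # Linear interpolation of amplitudes.
    a = (1.0 - alpha) * a_a + alpha * a_b
    return f, a

# Halfway along the sonic continuum between the two tones.
freq_mid, amp_mid = morph_partials(freq_src, amp_src, freq_tgt, amp_tgt, 0.5)
```

Resynthesizing the interpolated partials as a sum of sinusoids would then produce the hybrid sound; tools such as the SMT automate the analysis, interpolation, and resynthesis steps.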

The Sound Morphing Toolbox (SMT) contains MATLAB implementations of sound modeling and transformation algorithms used to morph musical instrument sounds. The SMT is open-source and freely available on GitHub, making it highly flexible, controllable, and customizable by the user. This hands-on workshop is aimed mainly at participants without a technical background, such as composers. During the workshop, participants will be guided through using the SMT step by step. Our aim is to provide an intuitive rather than technical understanding of the audio processing algorithms used. By the end of the workshop, participants will be able to make informed decisions about audio processing algorithms and parameter values and use the SMT on their own. Additionally, the workshop will draft a research agenda for sound morphing that covers technical aspects as well as aesthetic and perceptual issues. Finally, we will identify shortcomings of currently available morphing software and corresponding research opportunities.

Technical Requirements

The SMT requires MATLAB to run. Participants should therefore bring a laptop with MATLAB and the SMT code installed (a link to download the SMT code and workshop-related examples will be provided prior to the workshop). Participants without access to MATLAB can still take part in the hands-on session by pairing up with someone who has it; in this case, both participants would work collaboratively.


Marcelo Caetano

E-mail: mcaetano (at)

received the Ph.D. degree in signal processing from UPMC Paris 6 University in 2011 under the supervision of Prof. Xavier Rodet, then head of the Analysis/Synthesis group at IRCAM. In 2017, he received competitive funding from the Portuguese Foundation for Science and Technology to develop a three-year project on sound modeling and transformation in the SMC group at INESC-TEC. He has published over 40 peer-reviewed articles in international journals and conferences, with more than 300 citations (h-index 11). His current research interests are computer-aided musical orchestration and audio processing, including musical instrument sound modeling and sinusoidal parameter estimation with applications in sound analysis, synthesis, and transformation.

Charalampos Saitis

holds a Ph.D. in Music Technology from McGill University. He is currently Lecturer in Digital Music Processing in the Centre for Digital Music at Queen Mary University of London. He has published over 40 articles on the topics of auditory perception and cognition, musical acoustics, and musician-instrument interaction. His research aims to quantify how listeners process and conceptualize sound quality, focusing especially on crossmodal semantic processing and the link between perception, language, and meaning. His edited books include Timbre: Acoustics, Perception, and Cognition (Springer, 2019) and Musical Haptics (Springer, 2018).


Date: Friday 18 October (morning session)

Capacity: 20 participants

Location: CNRS Campus Joseph Aiguier, 31 chemin Joseph Aiguier, 13009 Marseille