CoCAVS: Concatenative Corpus-based Audio–Visual Synthesis
Summary of the research project
The proposal is about extending the principle of interactive corpus-based concatenative synthesis to the visual domain: instead of creating music by navigating through sound grains in a space of audio descriptors, we build a corpus of still images, calculate image descriptors (colour, texture, brightness, entropy, etc.), and navigate through that corpus interactively with gestural control via movement sensors. This evokes an aesthetic of visual collage or cut-up, juxtaposing similar images when navigation is local, and opposing contrasting images when navigation jumps to different parts of the image-descriptor space. The central artistic question to be explored during the residency is how to link the image descriptors to the sound descriptors in order to create multi-modal AV performances and installations. Here, the questions of multi-sensory correspondences and synaesthesia explored in the mapping might even open new research directions in multi-modal perception.
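The navigation principle described above can be sketched in a few lines: compute a descriptor vector per corpus image and select the image nearest to a target point in descriptor space. This is a minimal illustration under assumed choices (NumPy arrays in [0, 1], three toy descriptors: brightness, grey-level histogram entropy, and a crude colourfulness measure); the actual CoCAVS descriptor set and navigation interface are not specified here.

```python
import numpy as np

def descriptors(img):
    """Toy descriptor vector for an RGB image array with values in [0, 1].

    Returns [brightness, entropy, colourfulness]; a real system would
    add texture, hue statistics, and other descriptors.
    """
    gray = img.mean(axis=2)
    brightness = float(gray.mean())
    # Shannon entropy of a 32-bin grey-level histogram
    hist, _ = np.histogram(gray, bins=32, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = float(-(p * np.log2(p)).sum())
    # crude colourfulness: mean channel deviation from grey
    colourfulness = float(np.abs(img - gray[..., None]).mean())
    return np.array([brightness, entropy, colourfulness])

def nearest(corpus_desc, target):
    """Index of the corpus image whose descriptors are closest to target."""
    dist = np.linalg.norm(corpus_desc - target, axis=1)
    return int(np.argmin(dist))

# toy corpus: a dark flat image, a bright flat image, a noisy one
rng = np.random.default_rng(0)
corpus = [
    np.full((64, 64, 3), 0.1),   # dark, uniform
    np.full((64, 64, 3), 0.9),   # bright, uniform
    rng.random((64, 64, 3)),     # noisy, high entropy
]
desc = np.stack([descriptors(im) for im in corpus])

# a "gestural" navigation target: high brightness, low entropy
print(nearest(desc, np.array([0.9, 0.0, 0.0])))  # → 1 (the bright image)
```

Local navigation then amounts to moving the target point smoothly (juxtaposing similar images), while a jump to a distant target retrieves a contrasting image.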
Diemo Schwarz is an improvising musician who composes for installations, dance and video, as well as a researcher in real-time musical interaction at Ircam and a developer in digital arts. He plays with the electronics of materials rich in timbre and texture, exploring different corpora of sounds with gestural controllers, allowing expressiveness and the body to dialogue with the digital instrument. His use of concatenative synthesis recomposes the space of sounds and questions their intrinsic qualities. His scientific research at Ircam focuses on the interaction between musician and machine and on the exploitation of large masses of sounds for real-time, interactive sound synthesis, in collaboration with composers or for general-public installations with intuitive, tangible interfaces.