Hybrid Real/Mimetic Sound Works
Anglia Ruskin University
Western Australian Academy of Performing Arts
This paper describes a project to construct a process for data interchange between visual and sonic media: to create a continuum in which sound could be visualised and then resonified by both live performers and digital means. A number of processes were developed to support this visualisation/sonification “ecosystem”. Software was built to generate scores from the sonic features of “field recordings” through spectral analysis, rendering the frequency of the strongest detected sinusoidal peak of a recording vertically, and its timbral characteristics by luminance, hue and saturation, on a scrolling score. Following similar principles, a second process was developed to generate a real-time score using graphical symbols to represent detected accents in “found sound” speech recordings. In the other direction, software was built to render greyscale images (including sonograms) as sound, and a second iteration to generate audio from detected analysis parameters. The imperfections in the various transcription processes are intriguing in themselves, as they throw into relief the distinctions between the various forms of representation and, in particular, the timescales in which they are perceived. The implied circularity of the processes also opened the potential for re-interrogation of materials through repeated transmutation. This discussion explores these implications in the context of the analysis of field recordings to generate visual representations that can be resonified through both performative (notation-based) and machine (visual data-based) processes, creating hybrid real/mimetic sound works through the combination (and recombination) of those processes.
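As a minimal sketch of the kind of mapping the first process describes — strongest detected sinusoidal peak to vertical position, with a simple timbral proxy driving hue — the following is an illustrative assumption, not the authors' actual software; the function name, spectral-centroid hue proxy, and all parameters are hypothetical:

```python
import numpy as np

def frame_to_score_point(frame, sr, height=512):
    """Map one audio frame to a (y, hue) score point:
    y encodes the frequency of the strongest spectral peak
    (higher pitch drawn nearer the top of the score), and hue
    encodes a crude timbral proxy (the spectral centroid)."""
    win = np.hanning(len(frame))
    mag = np.abs(np.fft.rfft(frame * win))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    peak_hz = float(freqs[int(np.argmax(mag))])        # strongest sinusoidal peak
    centroid = float(np.sum(freqs * mag) / (np.sum(mag) + 1e-12))
    y = int(round((1.0 - peak_hz / (sr / 2)) * (height - 1)))
    hue = centroid / (sr / 2)                          # normalised 0..1
    return y, hue, peak_hz

# quick check with a 440 Hz sine frame
sr = 44100
t = np.arange(2048) / sr
y, hue, peak_hz = frame_to_score_point(np.sin(2 * np.pi * 440.0 * t), sr)
```

In a scrolling score, successive frames would advance horizontally while `y` and `hue` place and colour each mark; luminance and saturation could be driven by further analysis parameters in the same way.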
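For the reverse direction — rendering a greyscale image as sound — a naive sonogram-style additive resynthesis gives the flavour of the approach; again this is a hedged sketch under assumed parameters (frequency range, column duration, log spacing), not the project's implementation:

```python
import numpy as np

def image_to_audio(img, sr=22050, col_dur=0.05, f_lo=100.0, f_hi=8000.0):
    """Render a greyscale image (2-D array, values 0..1, row 0 = top)
    as audio: each column becomes a short additive-synthesis frame in
    which row position sets an oscillator's frequency (top = high, on
    a log scale, as in a sonogram) and pixel brightness sets its
    amplitude."""
    rows, cols = img.shape
    n = int(sr * col_dur)
    t = np.arange(n) / sr
    freqs = np.geomspace(f_hi, f_lo, rows)     # row index -> frequency
    out = np.zeros(cols * n)
    for c in range(cols):
        frame = np.zeros(n)
        for r in range(rows):
            amp = img[r, c]
            if amp > 0.01:                      # skip near-black pixels
                frame += amp * np.sin(2 * np.pi * freqs[r] * t)
        out[c * n:(c + 1) * n] = frame
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out

# quick check: a single bright pixel renders as a single tone
img = np.zeros((8, 2))
img[3, 0] = 1.0
audio = image_to_audio(img)
```

Feeding a sonogram through such a renderer, and the renderer's output back through spectral analysis, is what creates the circularity (and the revealing imperfections) discussed above.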