Author Identifier

Matthew Bray

http://orcid.org/0000-0001-7155-3698

Date of Award

2024

Document Type

Thesis

Publisher

Edith Cowan University

Degree Name

Doctor of Philosophy

School

Western Australian Academy of Performing Arts

First Supervisor

Lindsay Vickery

Second Supervisor

Stuart James

Abstract

The practice of Telematic Music Performance (TMP), a subset of Network Music Performance (NMP), enables geographically remote musicians to engage in real-time musical interplay, typically over a Wide Area Network (WAN: the Internet). However, the transmission of data between remote collaborators introduces millisecond latencies that obstruct the nuance of traditional human-to-human musical interaction; consequently, the music generated by TMP often lacks even rudimentary degrees of syncopation. Reducing network latency has been a critical focus of research in this field, wherein effective networks support low-latency, multi-modal data transmission to enhance sensations of immersion and telepresence for participants: the sense of sharing the ‘same space’ as their collaborators. The author previously established Telemidi as an effective TMP approach by exchanging only MIDI performance data as a primary Latency Accepting Solution (LAS), in which MIDI data triggers a series of loop-based rhythmic devices upon which a range of improvised and performative musical actions may occur. The current research undertakes a broad exploration of the literature to identify optimal TMP operations and system designs, before implementing a practice-led methodology to iteratively refine and innovate upon the Telemidi TMP system by reducing latency, minimising its influence, and enhancing a performer’s musical actions. By synchronising activity between distributed, near-identical Digital Audio Workstations (DAWs), this research explores approaches that enable remote collaborators to co-create stable Pulse-Based Music (PBM) genres, characterised by syncopated textures and progressive musical structures. MIDI data facilitates detailed metric analysis of performative actions and network function, and provides opportunities to generate a reactive, multimedia output drawing on the pluralistic nature of MIDI’s representation of musical performance actions. Such an output may be experienced by observers using immersive VR headwear, offering a hyper-mediated delivery of the actions used to create the music. A successful implementation of this process holds potential for a wide array of applications, including music performance, music education, online gaming, remote medical therapeutics, and virtual wellness services.
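
To illustrate the general principle of exchanging compact MIDI performance data rather than audio, the following minimal Python sketch is offered as an illustrative assumption only, not the Telemidi implementation: the host address, port, and bar-quantisation comment are hypothetical. It packs a raw MIDI note-on message with a send-time stamp and transmits it to a remote collaborator over UDP, leaving the receiver free to align the event with its own local loop grid as a latency-accepting strategy.

# Minimal illustrative sketch (not the Telemidi system): send a raw MIDI
# note-on event plus a timestamp over UDP, so that only a few bytes of
# performance data, rather than an audio stream, cross the WAN.
import socket
import struct
import time

REMOTE_HOST = "203.0.113.10"   # hypothetical collaborator address
REMOTE_PORT = 5004             # hypothetical port

def midi_note_on(channel: int, note: int, velocity: int) -> bytes:
    """Pack a 3-byte MIDI note-on message (status byte 0x90 | channel)."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def send_performance_event(sock: socket.socket, note: int, velocity: int) -> None:
    """Send one MIDI event stamped with local send time; the receiver may
    quantise it to the next bar of its own loop, accepting latency rather
    than attempting to eliminate it."""
    payload = struct.pack("!d", time.time()) + midi_note_on(0, note, velocity)
    sock.sendto(payload, (REMOTE_HOST, REMOTE_PORT))

if __name__ == "__main__":
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_performance_event(udp, note=60, velocity=100)  # middle C
    udp.close()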

DOI

10.25958/e2y6-z825

Included in

Music Commons
