Dr Thomas Deacon


Research Fellow in Design Research for Sound Sensing
PhD

Publications

Thomas Deacon, Mathieu Barthet (2023) Invoke: A Collaborative Virtual Reality Tool for Spatial Audio Production Using Voice-Based Trajectory Sketching, In: Proceedings of the 18th International Audio Mostly Conference, pp. 161-168. ACM

VR could transform creative engagement with spatial audio, given its affordances for spatial visualisation and embodied interaction. However, open questions remain about how to support collaboration in spatial audio production (SAP). To explore this problem, we built Invoke, a VR voice-based trajectory sketching tool that allows two users to shape sonic ideas together. In this paper, thematic analysis is used to review two areas of a formative evaluation with expert users: (i) video analysis of VR interactions; and (ii) analysis of open questions about using the tool. The implications present new opportunities to explore co-creative VR tools for SAP.
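To make the abstract's core idea concrete, the snippet below is a minimal sketch of how a vocal sketch could steer a 3D trajectory for a sound source. It is not Invoke's implementation: the feature-to-axis mappings (loudness to distance, spectral brightness to height, elapsed time to azimuth), the function names and all numeric ranges are assumptions for illustration only.

```python
# A minimal sketch of voice-based trajectory sketching: frame-wise vocal
# features drive a 3D path for a sound source. All mappings are assumed.
import numpy as np

def spectral_centroid(frame, sr):
    """Brightness of one audio frame, in Hz."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    return float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))

def voice_to_trajectory(audio, sr, frame_len=1024, hop=512):
    """Map a mono vocal sketch to a sequence of (x, y, z) source positions."""
    points = []
    for start in range(0, len(audio) - frame_len, hop):
        frame = audio[start:start + frame_len]
        rms = np.sqrt(np.mean(frame ** 2))           # frame loudness
        bright = spectral_centroid(frame, sr)        # frame brightness
        azimuth = 2 * np.pi * (start / len(audio))   # time sweeps around listener
        radius = 1.0 + 4.0 * rms                     # louder -> further (assumed)
        height = np.interp(bright, [200, 4000], [-1.0, 1.0])  # brighter -> higher
        points.append((radius * np.cos(azimuth), radius * np.sin(azimuth), height))
    return np.array(points)

# Example: a one-second synthetic "voice" sweep traced into a trajectory.
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
sketch = np.sin(2 * np.pi * (200 + 300 * t) * t) * (0.2 + 0.8 * t)
print(voice_to_trajectory(sketch, sr).shape)  # (n_frames, 3)
```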

Thomas Deacon, Patrick Healey, Mathieu Barthet (2022) “It’s cleaner, definitely”: Collaborative Process in Audio Production, In: Computer Supported Cooperative Work, pp. 1-31. Springer Netherlands

Working from vague client instructions, how do audio producers collaborate to diagnose what specifically is wrong with a piece of music, where the problem is and what to do about it? This paper presents a design ethnography that uncovers some of the ways in which two music producers coordinate their understanding of complex representations of pieces of music while working together in a studio. Our analysis shows that audio producers constantly make judgements based on audio and visual evidence while working with complex digital tools, which can lead to ambiguity in assessments of issues. We show how multimodal conduct guides the process of work and how complex media objects are integrated as elements of interaction by the music producers. The findings provide an understanding of how people currently collaborate when producing audio, supporting the design of better tools and systems for collaborative audio production in the future.

Thomas Deacon, Mark D Plumbley (2024) Working with AI Sound: Exploring the Future of Workplace AI Sound Technologies, In: CHIWORK '24: Proceedings of the 3rd Annual Meeting of the Symposium on Human-Computer Interaction for Work, Article 2, pp. 1-21

The workplace is a site for the rapid development and deployment of Artificial Intelligence (AI) systems. However, our research suggests that their adoption could already be hindered by critical issues such as trust, privacy, and security. This paper examines the integration of AI-enabled sound technologies in the workplace, with a focus on enhancing well-being and productivity through a soundscape approach while addressing ethical concerns. To explore these concepts, we ran scenario-based design and structured feedback sessions with knowledge workers from open-plan offices and those working from home, in which we presented initial design concepts for AI sound analysis and control systems. Based on the perspectives gathered, we report user requirements and concerns, particularly regarding privacy and the potential for workplace surveillance, emphasising the need for user consent and appropriate levels of transparency in AI deployments. Navigating these ethical considerations is a key implication of the study. We advocate for novel ways to involve people in the design process, through co-design and serious games, to shape the future of AI audio technologies in the workplace.

Thomas Deacon, Arshdeep Singh, Gabriel Bibbo, Mark D Plumbley (2024) Soundscape Personalisation at Work: Designing AI-Enabled Sound Technologies for the Workplace

Poor workplace soundscapes can negatively impact productivity and employee satisfaction. While current regulations and physical acoustic treatments are beneficial, the potential of AI sound systems to enhance worker wellbeing has not been fully explored. This paper investigates the use of AI-enabled sound technologies in workplaces, aiming to boost wellbeing and productivity through a soundscape approach while addressing user concerns. To evaluate these systems, we used scenario-based design and focus groups with knowledge workers from open-plan offices and those working remotely. Participants were presented with initial design concepts for AI sound analysis and control systems. This paper outlines the user requirements and recommendations gathered from these focus groups, with a specific emphasis on soundscape personalisation and the creation of relevant datasets.

Arshdeep Singh, Thomas Edward Deacon, Mark David Plumbley (2024) Environmental Sound Classification Using Raw-audio Based Ensemble Framework

Environmental sound classification (ESC) aims to automatically recognize the underlying environment of an audio recording, such as "urban park" or "city centre". Most existing methods for ESC use hand-crafted time-frequency features such as the log-mel spectrogram to represent audio recordings. However, hand-crafted features rely on transformations that are defined beforehand and do not account for variability in the environment due to differences in recording conditions or recording devices. To overcome this, we present an alternative representation framework that leverages SoundNet, a convolutional neural network pre-trained on a large-scale audio dataset, to represent raw audio recordings. We observe that the representations obtained from the intermediate layers of SoundNet lie in a low-dimensional subspace, but the dimensionality of that subspace is not known a priori. To address this, an automatic compact dictionary learning framework is used to estimate the dimensionality of the underlying subspace. The low-dimensional embeddings are then aggregated in a late-fusion ensemble to incorporate the hierarchical information learned at the various intermediate layers of SoundNet. We perform experimental evaluation on the publicly available DCASE 2017 and 2018 ASC datasets, where the proposed ensemble framework improves performance by 1 to 4 percentage points over existing time-frequency representations.
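Since this abstract describes a concrete pipeline (intermediate-layer embeddings, subspace reduction, per-layer classifiers, late fusion), a compact sketch can make the structure explicit. The Python below is a schematic stand-in, not the paper's code: random arrays replace SoundNet's intermediate activations, and variance-threshold PCA stands in for the automatic dictionary learning that estimates the subspace dimensionality.

```python
# Schematic late-fusion ensemble over per-layer embeddings (shapes assumed).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_clips, n_classes = 200, 10
labels = rng.integers(0, n_classes, n_clips)

# Stand-in for embeddings from three intermediate SoundNet layers.
layer_embeddings = [rng.normal(size=(n_clips, d)) for d in (256, 512, 1024)]

fused = np.zeros((n_clips, n_classes))
for emb in layer_embeddings:
    # Compact subspace: keep enough components for 95% variance, an assumed
    # proxy for the dimensionality found by the paper's dictionary learning.
    reduced = PCA(n_components=0.95).fit_transform(emb)
    clf = LogisticRegression(max_iter=1000).fit(reduced, labels)
    fused += clf.predict_proba(reduced)     # late fusion: sum class probabilities

predictions = fused.argmax(axis=1)          # ensemble decision across layers
print("fused ensemble training accuracy:", (predictions == labels).mean())
```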

Thomas Deacon, Tony Stockman, Mathieu Barthet (2017) User Experience in an Interactive Music Virtual Reality System: An Exploratory Study, In: M Aramaki, R Kronland-Martinet, S Ystad (eds.), Bridging People and Sound, vol. 10525, pp. 192-216. Springer Nature

The Objects VR interface and its exploratory study examine interactive music in virtual reality, focusing on user experience, understanding of musical functionality, and interaction issues. The system offers spatiotemporal music interaction using 3D geometric shapes and their designed relationships. Control is provided by hand tracking, and the experience is rendered on a head-mounted display with binaural sound presented over headphones. The evaluation of the system uses a mixed-methods approach based on semi-structured interviews, surveys and video-based interaction analysis. On average, the system was positively received in interview self-reports and in metrics for spatial presence and creative support. Interaction analysis and thematic analysis of the interviews also revealed instances of frustration with interaction and confusion about system functionality. Our results allow reflection on design criteria and discussion of implications for facilitating music engagement in virtual reality. Finally, we discuss the effectiveness of the measures with respect to future evaluation of novel interactive music systems in virtual reality.
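As a rough illustration of the headphone spatialisation the abstract mentions, the sketch below applies simple interaural time and level differences to place a mono source at a given azimuth. This is a crude approximation for illustration only, not the binaural renderer used in Objects VR; the Woodworth ITD model and equal-power gains are standard textbook choices.

```python
# Crude ITD/ILD stereo approximation of binaural placement (illustrative only).
import numpy as np

def pan_by_azimuth(mono, sr, azimuth_rad, head_radius=0.0875, c=343.0):
    """Place a mono signal at an azimuth (-pi/2 left .. +pi/2 right, 0 front)."""
    # Woodworth's spherical-head approximation of the interaural time difference.
    itd = (head_radius / c) * (abs(azimuth_rad) + np.sin(abs(azimuth_rad)))
    lag = int(round(itd * sr))                      # far-ear delay in samples
    delayed = np.concatenate([np.zeros(lag), mono])[:len(mono)]
    # Equal-power interaural level difference.
    theta = (azimuth_rad + np.pi / 2) / 2           # map azimuth to [0, pi/2]
    left_gain, right_gain = np.cos(theta), np.sin(theta)
    if azimuth_rad >= 0:                            # source on the right: left ear is far
        left, right = left_gain * delayed, right_gain * mono
    else:                                           # source on the left: right ear is far
        left, right = left_gain * mono, right_gain * delayed
    return np.stack([left, right], axis=1)          # (n_samples, 2) stereo

# Example: a 440 Hz tone placed 45 degrees to the right.
sr = 44100
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
print(pan_by_azimuth(tone, sr, np.pi / 4).shape)    # (44100, 2)
```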

Thomas Deacon, Nick Bryan-Kinns, Patrick G. T. Healey, Mathieu Barthet (2019) Shaping Sounds: The Role of Gesture in Collaborative Spatial Music Composition, In: Proceedings of the 2019 Conference on Creativity and Cognition (C&C '19), pp. 121-132. Association for Computing Machinery

This paper presents an observational study of collaborative spatial music composition. We uncover the practical methods two experienced music producers use to coordinate their understanding of multi-modal and spatial representations of music as part of their workflow. We show that embodied spatial referencing is a significant feature of the music producers' interactions. Our analysis suggests that gesture is used to understand, communicate and form action through a process of shaping sounds in space. This metaphor highlights how aesthetic assessments are collaboratively produced and developed through coordinated spatial activity. The implications establish the need for sensitivity to embodied action in the development of collaborative workspaces for creative, spatial-media production of music.