Dr Thomas Deacon
About
Biography
I am a design researcher with a special interest in understanding the impact of new technologies on people. With a background in sound and immersive technology, my research interests include distributed cognition, immersion, creativity support tools, co-design, and user experience design. My work centres on applied research, immersive design, and participatory design research.
News and Events
16 - 17 Jan 2024 - Invited to give a flash talk at the Workshop on Interdisciplinary Perspectives on Soundscapes and Wellbeing, University of Surrey, UK. (Day 1: Workshop & Webinar; Day 2: World Café & Sandpit Sessions)
30 August - 1 September 2023 - Attended Audio Mostly 2023 at Edinburgh Napier University, UK, to present a paper.
24 July 2023 - VITALISE 3rd Call winning submission - a collaboration with the LiCalab team at Thomas More University to conduct soundscape research with older adults in Belgium.
19 - 21 April 2023 - Attended Urban Sound Symposium 2023, Barcelona, Spain.
Research
Research projects
I am working on the "AI for Sound" project, funded by a £2.21 million Fellowship Award from the Engineering and Physical Sciences Research Council (EPSRC) to the University of Surrey's Professor Mark Plumbley. My part in this project involves:
- Investigating the deployment of sound sensors and AI systems for the home, smart buildings, and smart cities, as well as the creative sector.
- Designing and undertaking participatory research engagement activities with stakeholders.
- Pitching project ideas to potential industry partners.
Research collaborations
The Sound Wellbeing in Later Life study was selected for funding by the VITALISE Consortium. The research is a collaboration between the AI for Sound project and the Digital World Research Centre. https://www.surrey.ac.uk/digital-world-research-centre/funded-projects/sound-wellbeing-later-life
Publications
VR could transform creative engagement with spatial audio, given its affordances for spatial visualisation and embodied interaction. However, open questions remain about how to support collaboration in spatial audio production (SAP). To explore this problem, we built Invoke, a VR voice-based trajectory sketching tool that allows two users to shape sonic ideas together. In this paper, thematic analysis is used to review two areas of a formative evaluation with expert users: (i) video analysis of VR interactions; and (ii) analysis of open questions about using the tool. The implications present new opportunities to explore co-creative VR tools for SAP.
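To make the core idea of voice-based trajectory sketching concrete, here is a minimal Python sketch. It is an illustration, not Invoke's actual mapping (which the abstract does not specify): the choices of pitch driving height, loudness driving proximity, and time sweeping the azimuth, as well as the ranges, are assumptions.

```python
import numpy as np
import librosa

def voice_to_trajectory(path, hop_length=512):
    """Turn a recorded vocal gesture into an (n, 3) array of source positions."""
    y, sr = librosa.load(path, mono=True)

    # Frame-wise loudness and fundamental-frequency estimates.
    rms = librosa.feature.rms(y=y, hop_length=hop_length)[0]
    f0, voiced, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"),
        sr=sr, hop_length=hop_length)

    n = min(len(rms), len(f0))
    rms, f0, voiced = rms[:n], f0[:n], voiced[:n]

    # Hold pitch through unvoiced frames by interpolating between voiced ones.
    idx = np.arange(n)
    f0 = np.interp(idx, idx[voiced], f0[voiced]) if voiced.any() else np.full(n, 220.0)

    def norm(x):
        return (x - x.min()) / (x.max() - x.min() + 1e-9)

    t = np.linspace(0.0, 1.0, n)
    azimuth = 2.0 * np.pi * t                # time sweeps the source around the listener
    height = 2.0 * norm(np.log(f0))          # higher pitch -> higher placement (0-2 m)
    radius = 1.0 + 3.0 * (1.0 - norm(rms))   # louder -> closer to the listener (1-4 m)

    x = radius * np.cos(azimuth)
    z = radius * np.sin(azimuth)
    return np.stack([x, height, z], axis=1)  # positions in metres, y-up
```

The resulting positions could then be streamed to any spatial audio renderer; the point of the sketch is only how continuous vocal features can parameterise a 3D trajectory.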
Working from vague client instructions, how do audio producers collaborate to diagnose what specifically is wrong with a piece of music, where the problem is, and what to do about it? This paper presents a design ethnography that uncovers some of the ways in which two music producers coordinate their understanding of complex representations of pieces of music while working together in a studio. Our analysis shows that audio producers constantly make judgements based on audio and visual evidence while working with complex digital tools, which can lead to ambiguity in assessments of issues. We show how multimodal conduct guides the process of work and how complex media objects are integrated as elements of interaction by the music producers. The findings provide an understanding of how people currently collaborate when producing audio, to support the design of better tools and systems for collaborative audio production in the future.
The workplace is a site for the rapid development and deployment of Artificial Intelligence (AI) systems. However, our research suggests that their adoption could already be hindered by critical issues such as trust, privacy, and security. This paper examines the integration of AI-enabled sound technologies in the workplace, with a focus on enhancing well-being and productivity through a soundscape approach while addressing ethical concerns. To explore these concepts, we presented initial design concepts for AI sound analysis and control systems in scenario-based design and structured feedback sessions with knowledge workers from open-plan offices and those working from home. Based on the perspectives gathered, we present user requirements and concerns, particularly regarding privacy and the potential for workplace surveillance, emphasising the need for user consent and levels of transparency in AI deployments. Navigating these ethical considerations is a key implication of the study. We advocate for novel ways to incorporate people's involvement in the design process through co-design and serious games to shape the future of AI audio technologies in the workplace.
Poor workplace soundscapes can negatively impact productivity and employee satisfaction. While current regulations and physical acoustic treatments are beneficial, the potential of AI sound systems to enhance worker wellbeing has not been fully explored. This paper investigates the use of AI-enabled sound technologies in workplaces, aiming to boost wellbeing and productivity through a soundscape approach while addressing user concerns. To evaluate these systems, we used scenario-based design and focus groups with knowledge workers from open-plan offices and those working remotely. Participants were presented with initial design concepts for AI sound analysis and control systems. This paper outlines the user requirements and recommendations gathered from these focus groups, with a specific emphasis on soundscape personalisation and the creation of relevant datasets.
Environmental sound classification (ESC) aims to automatically recognize the underlying environment of an audio recording, such as "urban park" or "city centre". Most existing methods for ESC use hand-crafted time-frequency features such as the log-mel spectrogram to represent audio recordings. However, hand-crafted features rely on transformations that are defined beforehand and do not account for variability in the environment due to differences in recording conditions or recording devices. To overcome this, we present an alternative representation framework that leverages SoundNet, a convolutional neural network pre-trained on a large-scale audio dataset, to represent raw audio recordings. We observe that the representations obtained from the intermediate layers of SoundNet lie in a low-dimensional subspace, but the dimensionality of that subspace is not known in advance. To address this, an automatic compact dictionary learning framework is used to estimate the dimensionality of the underlying subspace. The low-dimensional embeddings are then aggregated in a late-fusion manner in an ensemble framework, to incorporate the hierarchical information learned at the various intermediate layers of SoundNet. We perform experimental evaluations on the publicly available DCASE 2017 and 2018 ASC datasets. The proposed ensemble framework improves performance by 1 to 4 percentage points over existing time-frequency representations.
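The following Python sketch illustrates the general shape of such a pipeline, under stated assumptions: `model` stands in for a SoundNet-style pretrained audio CNN (SoundNet itself is not distributed as a standard package), PCA stands in for the paper's automatic compact dictionary learning (which, unlike PCA, estimates the subspace dimensionality itself), and a per-layer SVM with averaged class probabilities provides the late fusion.

```python
import numpy as np
import torch
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def layer_embeddings(model, layer_names, waveforms):
    """Pooled activations from named intermediate layers, one array per layer."""
    feats = {name: [] for name in layer_names}
    modules = dict(model.named_modules())
    hooks = [modules[name].register_forward_hook(
                 lambda mod, inp, out, name=name: feats[name].append(
                     out.mean(dim=-1).detach().cpu().numpy()))  # pool over time
             for name in layer_names]
    with torch.no_grad():
        for wav in waveforms:          # each wav: (1, n_samples) float tensor
            model(wav.unsqueeze(0))
    for h in hooks:
        h.remove()
    return {name: np.concatenate(v) for name, v in feats.items()}

def train_late_fusion(train_feats, y_train, n_components=64):
    """Fit one (PCA, SVM) head per intermediate layer."""
    heads = {}
    for name, X in train_feats.items():
        pca = PCA(n_components=min(n_components, *X.shape)).fit(X)
        clf = SVC(probability=True).fit(pca.transform(X), y_train)
        heads[name] = (pca, clf)
    return heads

def predict_late_fusion(heads, test_feats):
    """Late fusion: average the per-layer class probabilities, then decide."""
    probs = [clf.predict_proba(pca.transform(test_feats[name]))
             for name, (pca, clf) in heads.items()]
    return np.mean(probs, axis=0).argmax(axis=1)
```

Averaging probabilities after per-layer classification, rather than concatenating embeddings before it, is what makes this late fusion: each layer's view of the audio contributes an independent vote.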
The Objects VR interface and study explore interactive music and virtual reality, focusing on user experience, understanding of musical functionality, and interaction issues. Our system offers spatiotemporal music interaction using 3D geometric shapes and their designed relationships. Control is provided by hand tracking, and the experience is rendered across a head-mounted display with binaural sound presented over headphones. The evaluation of the system uses a mixed-methods approach based on semi-structured interviews, surveys, and video-based interaction analysis. On average, the system was positively received in interview self-reports and in metrics for spatial presence and creativity support. Interaction analysis and thematic analysis of the interviews also revealed instances of frustration with interaction and confusion about system functionality. Our results allow reflection on design criteria and discussion of implications for facilitating music engagement in virtual reality. Finally, our work discusses the effectiveness of the measures with respect to the future evaluation of novel interactive music systems in virtual reality.
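As a rough illustration of what per-frame binaural rendering in a system like this consumes, the sketch below derives a sounding object's head-relative direction and a distance gain from the head pose reported by a head-mounted display. This is not the Objects VR implementation; the axis convention and the inverse-distance attenuation are assumptions for illustration.

```python
import numpy as np

def source_params(source_pos, head_pos, head_rot):
    """Azimuth/elevation (radians) and distance gain for one sounding object.

    source_pos, head_pos: (3,) world-space positions in metres.
    head_rot: (3, 3) rotation matrix mapping head-local axes to world axes.
    Assumed head-local convention: +x right, +y up, -z forward.
    """
    # Express the source position in the listener's head-local frame.
    local = head_rot.T @ (np.asarray(source_pos, float) - np.asarray(head_pos, float))
    x, y, z = local
    dist = max(float(np.linalg.norm(local)), 1e-6)

    azimuth = np.arctan2(x, -z)                      # 0 = straight ahead, +ve right
    elevation = np.arcsin(np.clip(y / dist, -1.0, 1.0))
    gain = min(1.0, 1.0 / dist)                      # inverse-distance attenuation,
                                                     # clamped inside 1 m
    return azimuth, elevation, gain
```

Recomputing these parameters every frame as the head and hands move is what keeps the binaural image stable while the user manipulates the 3D shapes.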
This paper presents an observational study of collaborative spatial music composition. We uncover the practical methods two experienced music producers use to coordinate their understanding of multimodal and spatial representations of music as part of their workflow. We show that embodied spatial referencing is a significant feature of the music producers' interactions. Our analysis suggests that gesture is used to understand, communicate, and form action through a process of shaping sounds in space. This metaphor highlights how aesthetic assessments are collaboratively produced and developed through coordinated spatial activity. The implications establish the importance of sensitivity to embodied action when developing collaborative workspaces for creative, spatial-media production of music.