Dr Philip Coleman


Senior Lecturer in Audio
PhD, FHEA

Academic and research departments

Music and Media.

About

Areas of specialism

Sound Zones; Microphone Arrays; Spatial Audio; Object-Based Audio

University roles and responsibilities

  • Admissions Tutor for Tonmeister Course

Affiliations and memberships

Research

Research interests

Research projects

Indicators of esteem

  • Presented a tutorial on "Personalising Sound Over Loudspeakers" at ICASSP 2019

Supervision

Postgraduate research supervision

Teaching

Publications

      P Coleman, L Remaggi, PJB Jackson (2020) S3A Room Impulse Responses University of Surrey
      Philip Coleman, Andreas Franck, Jon Francombe, Qingju Liu, Teofilo de Campos, Richard Hughes, Dylan Menzies, Marcos Simón Gálvez, Yan Tang, James Woodcock, Frank Melchior, Chris Pike, Filippo Fazi, Trevor Cox, Adrian Hilton, Philip J. B. Jackson (2020) S3A Audio-Visual System for Object-Based Audio University of Surrey
      Philip Jackson, Filippo Fazi, Philip Coleman (2019) Personalising sound over loudspeakers University of Surrey

      In our information-overloaded daily lives, unwanted sounds create confusion, disruption and fatigue in what we do and experience. By taking control of your own sound environment, you can choose what information to hear and how. Providing personalised sound to different people over loudspeakers enables communication, human connection and social activity in a shared space, while addressing each individual's needs. Recent developments in object-based audio, robust sound zoning algorithms, computer vision, device synchronisation and electronic hardware facilitate personal control of immersive and interactive reproduction techniques. Accordingly, the creative sector is seeing growing demand for personalisation and personalisable content. This tutorial offers participants a novel and timely introduction to the increasingly valuable capability to personalise sound over loudspeakers, alongside resources for the audio signal processing community. Presenting the science behind personalising sound technologies and providing insights for making sound zones in practice, we hope to create better listening experiences. The tutorial attempts a holistic exposition of techniques for producing personal sound over loudspeakers. It incorporates a practical step-by-step guide to digital filter design for real-world multizone sound reproduction and relates various approaches to one another, thereby enabling comparison of the listener benefits.

      P Coleman, PJB Jackson, L Remaggi, A Franck (2020) Data: Object-Based Reverberation for Spatial Audio University of Surrey
      Q Zhu, Philip Coleman, M Wu, J Yang (2016) Robust personal audio reproduction based on acoustic transfer function modelling, In: AES Sound Field Control Conference Proceedings Audio Engineering Society

      Personal audio systems generate a local sound field for a listener while attenuating the sound energy at pre-defined quiet zones. Their performance can be sensitive to errors in the acoustic transfer functions between the sources and the zones. In this paper, we model the acoustic transfer functions as a superposition of multipoles with a term to describe errors in the actual gain and phase. We then propose a design framework for robust reproduction, incorporating additional prior knowledge about the error distribution where available. We combine acoustic contrast control with worst-case and probability-model optimization, exploiting limited knowledge of the error distribution. Monte-Carlo simulations over 10000 test cases show that the method increases system robustness when errors are present in the assumed transfer functions.
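
      To make the method concrete, the sketch below (our illustration, not the paper's implementation) computes acoustic contrast control weights from a regularized generalized eigenproblem and probes their robustness by Monte-Carlo perturbation of the assumed transfer functions with random gain and phase errors. The array sizes, error magnitudes and random transfer functions are hypothetical stand-ins for the paper's multipole model.

```python
# Sketch of regularized acoustic contrast control with a Monte-Carlo
# robustness check; array sizes and error magnitudes are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_src, n_mic = 8, 16

# Stand-in complex transfer functions (sources -> zone microphones);
# the paper models these as multipole superpositions with error terms.
G_b = rng.standard_normal((n_mic, n_src)) + 1j * rng.standard_normal((n_mic, n_src))
G_d = rng.standard_normal((n_mic, n_src)) + 1j * rng.standard_normal((n_mic, n_src))

def contrast_weights(G_b, G_d, reg=1e-2):
    """Dominant generalized eigenvector of (G_b^H G_b, G_d^H G_d + reg*I)."""
    A = G_b.conj().T @ G_b
    B = G_d.conj().T @ G_d + reg * np.eye(G_b.shape[1])
    vals, vecs = np.linalg.eig(np.linalg.solve(B, A))
    return vecs[:, np.argmax(vals.real)]

def contrast_dB(q, G_b, G_d):
    """Bright-to-dark ratio of mean squared pressures, in dB."""
    return 10 * np.log10(np.mean(np.abs(G_b @ q) ** 2) /
                         np.mean(np.abs(G_d @ q) ** 2))

def perturb(G, gain_sd_dB=0.5, phase_sd_deg=5.0):
    """Random gain/phase errors standing in for transfer-function mismatch."""
    g = 10 ** (gain_sd_dB * rng.standard_normal(G.shape) / 20)
    p = np.exp(1j * np.deg2rad(phase_sd_deg) * rng.standard_normal(G.shape))
    return G * g * p

q = contrast_weights(G_b, G_d)
print(f"nominal contrast: {contrast_dB(q, G_b, G_d):.1f} dB")
trials = [contrast_dB(q, perturb(G_b), perturb(G_d)) for _ in range(1000)]
print(f"mean contrast under errors: {np.mean(trials):.1f} dB")
```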

      M Olik, PJ Jackson, P Coleman (2013) Influence of low-order room reflections on sound zone system performance, In: Proceedings of Meetings on Acoustics 19

      Studies on sound field control methods able to create independent listening zones in a single acoustic space have recently been undertaken due to the potential of such methods for various practical applications, such as individual audio streams in home entertainment. Existing solutions to the problem have been shown to be effective in creating high and low sound energy regions under anechoic conditions. Although some case studies in a reflective environment can also be found, the capabilities of sound zoning methods in rooms have not been fully explored. In this paper, the influence of low-order (early) reflections on the performance of key sound zone techniques is examined. Analytic considerations for small-scale systems reveal strong dependence of performance on parameters such as source positioning with respect to zone locations and room surfaces, as well as the parameters of the receiver configuration. These dependencies are further investigated through numerical simulation to determine system configurations which maximize the performance in terms of acoustic contrast and array control effort. Design rules for source and receiver positioning are suggested for improved performance under a given set of constraints, such as the number of available sources, zone locations and the direction of the dominant reflection.

      Philip Coleman, A Franck, Jon Francombe, Qingju Liu, Teofilo de Campos, R Hughes, D Menzies, M Simon Galvez, Y Tang, J Woodcock, Philip Jackson, F Melchior, C Pike, F Fazi, T Cox, Adrian Hilton (2018) An Audio-Visual System for Object-Based Audio: From Recording to Listening, In: IEEE Transactions on Multimedia 20(8), pp. 1919-1931 IEEE

      Object-based audio is an emerging representation for audio content, where content is represented in a reproduction-format-agnostic way and thus produced once for consumption on many different kinds of devices. This affords new opportunities for immersive, personalized, and interactive listening experiences. This article introduces an end-to-end object-based spatial audio pipeline, from sound recording to listening. A high-level system architecture is proposed, which includes novel audiovisual interfaces to support object-based capture and listener-tracked rendering, and incorporates a proposed component for objectification, i.e., recording content directly into an object-based form. Text-based and extensible metadata enable communication between the system components. An open architecture for object rendering is also proposed. The system’s capabilities are evaluated in two parts. First, listener-tracked reproduction of metadata automatically estimated from two moving talkers is evaluated using an objective binaural localization model. Second, object-based scene capture with audio extracted using blind source separation (to remix between two talkers) and beamforming (to remix a recording of a jazz group), is evaluated with perceptually-motivated objective and subjective experiments. These experiments demonstrate that the novel components of the system add capabilities beyond the state of the art. Finally, we discuss challenges and future perspectives for object-based audio workflows.

      Andreas Franck, Jon Francombe, James Woodcock, Richard Hughes, Philip Coleman, Robert Menzies-Gow, Trevor J. Cox, Philip J. B. Jackson (2019) A System Architecture for Semantically Informed Rendering of Object-Based Audio, In: Journal of the Audio Engineering Society 67(7/9), pp. 1-11 Audio Engineering Society

      Object-based audio promises format-agnostic reproduction and extensive personalization of spatial audio content. However, in practical listening scenarios, such as in consumer audio, ideal reproduction is typically not possible. To maximize the quality of listening experience, a different approach is required, for example modifications of metadata to adjust for the reproduction layout or personalization choices. In this paper we propose a novel system architecture for semantically informed rendering (SIR), which combines object audio rendering with high-level processing of object metadata. In many cases, this processing uses novel, advanced metadata describing the objects to optimally adjust the audio scene to the reproduction system or listener preferences. The proposed system is evaluated with several adaptation strategies, including semantically motivated downmix to layouts with few loudspeakers, manipulation of perceptual attributes, perceptual reverberation compensation, and orchestration of mobile devices for immersive reproduction. These examples demonstrate how SIR can significantly improve the media experience and provide advanced personalization controls, for example by maintaining smooth object trajectories on systems with few loudspeakers, or providing personalized envelopment levels. An example implementation of the proposed system architecture is described and provided as an open, extensible software framework that combines object-based audio rendering and high-level processing of advanced object metadata.
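
      To illustrate what one adaptation rule might look like, the sketch below is a hypothetical example in the spirit of SIR, not code from the paper or its released framework: on a sparse loudspeaker layout, an object's azimuth trajectory is smoothed and its elevation is collapsed to the horizontal plane. The rule, its threshold and the metadata field names are invented for illustration.

```python
# Hypothetical semantic adaptation rule (illustrative only): smooth an
# object's azimuth trajectory and remove elevation on sparse layouts.
import numpy as np

def adapt_object(metadata, n_loudspeakers, window=5):
    """metadata: dict with per-frame 'azimuth_deg' and 'elevation_deg'."""
    adapted = dict(metadata)
    if n_loudspeakers <= 5:                       # assumed sparse-layout rule
        az = np.unwrap(np.deg2rad(metadata["azimuth_deg"]))
        kernel = np.ones(window) / window         # moving-average smoother
        adapted["azimuth_deg"] = np.rad2deg(np.convolve(az, kernel, mode="same"))
        adapted["elevation_deg"] = np.zeros(len(az))  # 2D layout: no height
    return adapted

obj = {"azimuth_deg": np.linspace(-30.0, 30.0, 100) + np.random.randn(100),
       "elevation_deg": np.full(100, 15.0)}
adapted = adapt_object(obj, n_loudspeakers=5)
print(adapted["elevation_deg"][:3], adapted["azimuth_deg"][:3])
```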

      Qiaoxi Zhu, Xiaojun Qiu, Philip Coleman, Ian Burnett (2021) An experimental study on transfer function estimation using acoustic modelling and singular value decomposition, In: The Journal of the Acoustical Society of America Acoustical Society of America

      Transfer functions relating sound source strengths and the sound pressure at field points are important for sound field control. Recently, two modal domain methods for transfer function estimation have been compared using numerical simulations. One is the spatial harmonic decomposition (SHD) method, which models a sound field with a series of cylindrical waves; while the other is the singular value decomposition (SVD) method, which uses prior sound source location information to build an acoustic model and obtain basis functions for sound field modelling. In this paper, the feasibility of the SVD method using limited measurements to estimate transfer functions over densely-spaced field samples within a target region is demonstrated experimentally. Experimental results with various microphone placements and system configurations are reported to demonstrate the geometric flexibility of the SVD method compared to the SHD method. It is shown that the SVD method can estimate broadband transfer functions up to 3099 Hz for a target region with a radius of 0.083 m using three microphones, and allow flexibility in system geometry. Furthermore, an application example of acoustic contrast control is presented, showing that the proposed method is a promising approach to facilitating broadband sound zone control with limited microphones.
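
      The sketch below illustrates our reading of the SVD approach (it is not the authors' code): free-field point sources placed around the region form a model matrix, its left singular vectors act as spatial basis functions, and three microphones suffice to interpolate a transfer function over a dense grid of target points. The geometry, frequency and truncation order are assumptions.

```python
# Our sketch of SVD-based transfer-function estimation over a small
# target region; geometry and frequency are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
k = 2 * np.pi * 1000 / 343.0                  # wavenumber at 1 kHz

def green(src, pts):
    """Free-field point-source transfer functions, points x sources."""
    r = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=2)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

# Dense sample points within a 0.08 m radius target region
n_pts = 200
rad = 0.08 * np.sqrt(rng.uniform(0, 1, n_pts))
ang = rng.uniform(0, 2 * np.pi, n_pts)
grid = np.stack([rad * np.cos(ang), rad * np.sin(ang)], axis=1)

# Assumed prior knowledge: model sources on a ring around the region
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
ring = np.stack([np.cos(theta), np.sin(theta)], axis=1)

U, s, _ = np.linalg.svd(green(ring, grid), full_matrices=False)
mics = [0, 80, 160]                           # three measurement points
U_r = U[:, :len(mics)]                        # truncated spatial basis

p_true = green(np.array([[1.2, 0.4]]), grid)[:, 0]   # hypothetical source
c, *_ = np.linalg.lstsq(U_r[mics], p_true[mics], rcond=None)
p_est = U_r @ c                               # field estimate on the grid
print("relative error:", np.linalg.norm(p_est - p_true) / np.linalg.norm(p_true))
```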

      Sound field control to create multiple personal audio spaces (sound zones) in a shared listening environment is an active research topic. Typically, sound zones in the literature have aimed to reproduce monophonic audio programme material. The planarity control optimization approach can reproduce sound zones with high levels of acoustic contrast, while constraining the energy flux distribution in the target zone to impinge from a certain range of azimuths. Such a constraint has been shown to reduce problematic self-cancellation artefacts such as uneven sound pressure levels and complex phase patterns within the target zone. Furthermore, multichannel reproduction systems have the potential to reproduce spatial audio content at arbitrary listening positions (although most exclusively target a 'sweet spot'). By designing the planarity control to constrain the impinging energy rather tightly, a sound field approximating a plane wave can be reproduced for a listener in an arbitrarily-placed target zone. In this study, the application of planarity control for stereo reproduction in the context of a personal audio system was investigated. Four solutions, to provide virtual left and right channels for two audio programmes, were calculated and superposed to achieve the stereo effect in two separate sound zones. The performance was measured in an acoustically treated studio using a 60-channel circular array, and compared against a least-squares pressure matching solution whereby each channel was reproduced as a plane wave field. Results demonstrate that planarity control achieved 6 dB greater mean contrast than the least-squares case over the range 250-2000 Hz. Based on the principal directions of arrival across frequency, planarity control produced azimuthal RMSE of 4.2/4.5 degrees for the left/right channels respectively (least-squares 2.8/3.6 degrees). Future work should investigate the perceived spatial quality of the implemented system with respect to a reference stereophonic setup.

      P Coleman, P Jackson, M Olik, JA Pedersen (2014) Personal audio with a planar bright zone, In: Journal of the Acoustical Society of America 136(4), pp. 1725-1735 Acoustical Society of America

      Reproduction of multiple sound zones, in which personal audio programs may be consumed without the need for headphones, is an active topic in acoustical signal processing. Many approaches to sound zone reproduction do not consider control of the bright zone phase, which may lead to self-cancellation problems if the loudspeakers surround the zones. Conversely, control of the phase in a least-squares sense comes at a cost of decreased level difference between the zones and frequency range of cancellation. Single-zone approaches have considered plane wave reproduction by focusing the sound energy into a point in the wavenumber domain. In this article, a planar bright zone is reproduced via planarity control, which constrains the bright zone energy to impinge from a narrow range of angles via projection into a spatial domain. Simulation results using a circular array surrounding two zones show the method to produce superior contrast to the least-squares approach, and superior planarity to the contrast maximization approach. Practical performance measurements obtained in an acoustically treated room verify the conclusions drawn under free-field conditions.
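
      For reference, the two baseline optimizations recurring throughout these abstracts can be written in standard notation (ours, not transcribed from the paper). Acoustic contrast control maximizes a generalized Rayleigh quotient, and pressure matching is a regularized least-squares fit, where G_b and G_d map the loudspeaker weights q to bright- and dark-zone microphone pressures and p_t is a target pressure vector:

```latex
% Acoustic contrast control: maximize bright-zone energy relative to
% the regularized dark-zone energy (a generalized eigenproblem in q).
\mathbf{q}_{\mathrm{ACC}}
  = \arg\max_{\mathbf{q}}
    \frac{\mathbf{q}^{H}\mathbf{G}_{b}^{H}\mathbf{G}_{b}\,\mathbf{q}}
         {\mathbf{q}^{H}\left(\mathbf{G}_{d}^{H}\mathbf{G}_{d}
           + \lambda\mathbf{I}\right)\mathbf{q}}
\qquad
% Pressure matching: regularized least squares to target pressures.
\mathbf{q}_{\mathrm{PM}}
  = \left(\mathbf{G}^{H}\mathbf{G} + \lambda\mathbf{I}\right)^{-1}
    \mathbf{G}^{H}\mathbf{p}_{t}
```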

      Miguel Blanco Galindo, Philip Coleman, Philip Jackson (2020) Microphone array geometries for horizontal spatial audio object capture with beamforming, In: Journal of the Audio Engineering Society (AES) Audio Engineering Society

      Microphone array beamforming can be used to enhance and separate sound sources, with applications in the capture of object-based audio. Many beamforming methods have been proposed and assessed against each other. However, the effects of compact microphone array design on beamforming performance have not been studied for this kind of application. This study investigates how to maximize the quality of audio objects extracted from a horizontal sound field by filter-and-sum beamforming, through appropriate choice of microphone array design. Eight uniform geometries with practical constraints of a limited number of microphones and maximum array size are evaluated over a range of physical metrics. Results show that baffled circular arrays outperform the other geometries in terms of perceptually relevant frequency range, spatial resolution, directivity and robustness. Moreover, a subjective evaluation of microphone arrays and beamformers is conducted with regard to the quality of the target sound, interference suppression and overall quality of simulated music performance recordings. Baffled circular arrays achieve higher target quality and interference suppression than alternative geometries with wideband signals. Furthermore, subjective scores of beamformers regarding target quality and interference suppression agree well with beamformer on-axis and off-axis responses; with wideband signals the superdirective beamformer achieves the highest overall quality.

      Qiaoxi Zhu, Xiaojun Qiu, Philip Coleman, Ian Burnett (2020) A comparison between two modal domain methods for personal audio reproduction, In: Journal of the Acoustical Society of America 147(1), pp. 161-173 Acoustical Society of America

      Personal audio provides private and personalized listening experiences by generating sound zones in a shared space with minimal interference between zones. One challenge of the design is to achieve the best performance with a limited number of microphones and loudspeakers. In this paper, two modal domain methods for personal audio reproduction are compared. One is the spatial harmonic decomposition (SHD) based method and the other is the singular value decomposition (SVD) based method. It is demonstrated that the SVD based method provides a more efficient modal domain decomposition than the SHD method for 2.5 dimensional personal audio design. Simulation results show that the SVD based method outperforms the SHD one by up to 10 dB in terms of acoustic contrast and up to 17 dB in terms of reproduction error for a compact arc array with five loudspeakers, while requiring fewer microphones around the zone boundaries. The SVD based method retains the inherent efficiency of optimizing in a modal domain while avoiding the inherent geometric limitations of using SHD basis functions. Thus, this approach is advantageous for applications with flexible system geometries and a small number of loudspeakers and microphones.

      Milap Rane, Philip Coleman, Russell Mason, Søren Bech (2022) Quantifying headphone listening experience in virtual sound environments using distraction, In: EURASIP Journal on Audio, Speech and Music Processing 2022(30) Springer

      Headphones are commonly used in various environments including at home, outside and on public transport. However, the perception and modelling of the interaction of headphone audio and noisy environments is relatively unresearched. This work investigates the headphone listening experience in noisy environments using the perceptual attributes of distraction and quality of listening experience. A virtual sound environment was created to simulate real-world headphone listening, with variations in foreground sounds, background contexts and busyness, headphone media content and simulated active noise control. Listening tests were performed, where 15 listeners rated both distraction and quality of listening experience across 144 stimuli using a multiple-stimulus presentation. Listener scores were analysed and compared to a computational model of listener distraction. The distraction model was found to be a good predictor of the perceptual distraction rating, with a correlation of 0.888 and an RMSE of 13.4%, despite being developed to predict distraction in the context of audio-on-audio interference in sound zones. In addition, perceived distraction and quality of listening experience had a strong negative correlation of -0.953. Furthermore, the busyness and type of the environment, headphone media, loudness of the foreground sound and active noise control on/off were significant factors in determining the distraction and quality of listening experience scores.

      Giacomo Costantini, Andreas Franck, Chris Pike, Jon Francombe, James Woodcock, Richard J. Hughes, Philip Coleman, Eloise Whitmore, Filippo Maria Fazi (2019) A Dataset of High-Quality Object-Based Productions

      Object-based audio is an emerging paradigm for representing audio content. However, the limited availability of high-quality object-based content and the need for usable production and reproduction tools impede the exploration and evaluation of object-based audio. This engineering brief introduces the S3A object-based production dataset. It comprises a set of object-based scenes as projects for the Reaper digital audio workstation (DAW). They are accompanied by a set of open-source DAW plugins, the VISR Production Suite, for creating and reproducing object-based audio. In combination, these resources provide a practical way to experiment with object-based audio and facilitate loudspeaker and headphone reproduction. The dataset is provided to enable a larger audience to experience object-based audio, for use in perceptual experiments, and for audio system evaluation.

      Dylan Menzies, Philip Coleman, Filippo Maria Fazi (2020) A Room Compensation Method by Modification of Reverberant Audio Objects, In: IEEE/ACM Transactions on Audio, Speech and Language Processing 29, pp. 239-252 Institute of Electrical and Electronics Engineers (IEEE)

      Conventional channel-based room equalisation can reduce overall colouration caused by the room response; however, it cannot separately correct the colouration caused by the late and early parts of the response, or consider the reverberance in the source signal. A room compensation method is developed here for a source signal in which the dry source sound and the associated target reverberant response are encoded separately, which is possible in an object-based audio framework. The target response is modified using the reproduction room response. Subject to some conditions, the combined response approximates the target, with accurate early and late equalisations, reverberant balance, and decay timing. Stochastic assumptions are used to simplify the processing, enabling efficient real-time processing of the encoded audio.

      Craig Cieciura, Russell David Mason, Philip Coleman, Jon Francombe (2020) Understanding Users' Choices and Constraints when Positioning Loudspeakers in Living Rooms Zenodo

      Dataset pertaining to an experiment concerning positions of ad-hoc loudspeakers and mobile phones in domestic living rooms. This forms part of the PhD research of Craig Cieciura. This was experiment-based research to determine how to render object-based audio in the domestic environment using ad-hoc, audio-capable devices. References AES148 (2020): Cieciura, C., Mason, R., Coleman, P. and Francombe, J. 2020. Understanding users’ choices and constraints when positioning loudspeakers in living rooms, Audio Engineering Society Preprint, 148th Convention, Engineering Brief (number tbc).

      M Olik, PJB Jackson, P Coleman, JA Pedersen (2014) Optimal source placement for sound zone reproduction with first order reflections, In: Journal of the Acoustical Society of America 136(6), pp. 3085-3096 Acoustical Society of America
      P Coleman, M Møller, M Olsen, M Olik, PJB Jackson, JA Pedersen (2012) Performance of optimized sound field control techniques in simulated and real acoustic environments, In: J Acoust Soc Am 131(4), pp. 3465 Acoustical Society of America

      It is of interest to create regions of increased and reduced sound pressure ('sound zones') in an enclosure such that different audio programs can be simultaneously delivered over loudspeakers, thus allowing listeners sharing a space to receive independent audio without physical barriers or headphones. Where previous comparisons of sound zoning techniques exist, they have been conducted under favorable acoustic conditions, utilizing simulations based on theoretical transfer functions or anechoic measurements. Outside of these highly specified and controlled environments, real-world factors including reflections, measurement errors, matrix conditioning and practical filter design degrade the realizable performance. This study compares the performance of sound zoning techniques when applied to create two sound zones in simulated and real acoustic environments. In order to compare multiple methods in a common framework without unduly hindering performance, an optimization procedure for each method is first used to select the best loudspeaker positions in terms of robustness, efficiency and the acoustic contrast deliverable to both zones. The characteristics of each control technique are then studied, noting the contrast and the impact of acoustic conditions on performance.

      Philip Coleman, Philip Jackson (2016) Planarity-based sound field optimization for multi-listener spatial audio, In: AES Sound Field Control Conference Proceedings

      Planarity panning (PP) and planarity control (PC) have previously been shown to be efficient methods for focusing directional sound energy into listening zones. In this paper, we consider sound field control for two listeners. First, PP is extended to create spatial audio for two listeners consuming the same spatial audio content. Then, PC is used to create highly directional sound and cancel interfering audio. Simulation results compare PP and PC against pressure matching (PM) solutions. For multiple listeners listening to the same content, PP creates directional sound at lower effort than the PM counterpart. When listeners consume different audio, PC produces greater acoustic contrast than PM, with excellent directional control except for frequencies where grating lobes generate problematic interference patterns.

      Qiaoxi Zhu, Philip Coleman, Xiaojun Qiu, Ming Wu, Jun Yang, Ian Burnett (2018) Robust Personal Audio Geometry Optimization in the SVD-Based Modal Domain, In: IEEE/ACM Transactions on Audio, Speech, and Language Processing 27(3), pp. 610-620 Institute of Electrical and Electronics Engineers (IEEE)

      Personal audio generates sound zones in a shared space to provide private and personalized listening experiences with minimized interference between consumers. Regularization has been commonly used to increase the robustness of such systems against potential perturbations in the sound reproduction. However, the performance is limited by the system geometry, such as the number and location of the loudspeakers and controlled zones. This paper proposes a geometry optimization method to find the most geometrically robust approach for personal audio amongst all available candidate system placements. The proposed method aims to approach the most “natural” sound reproduction so that the solo control of the listening zone coincidentally accompanies the preferred quiet zone. Being formulated in the SVD-based modal domain, the method is demonstrated by applications in three typical personal audio optimizations, i.e., acoustic contrast control, pressure matching, and planarity control. Simulation results show that the proposed method can obtain the system geometry with better avoidance of “occlusion,” improved robustness to regularization, and improved broadband equalization.

      Milap Dilip Rane, Russell David Mason, Philip Coleman, Søren Bech (2022) Survey of User Perspectives on Headphone Technology

      Headphones are widely used to consume media content at home and on the move. Developments in signal processing technology and object-based audio media formats have raised new opportunities to improve the user experience by tailoring the audio rendering depending on the characteristics of the listener's environment. However, little is known about what consumers consider to be the deficiencies in current headphone-based listening, and therefore how best to target new developments in headphone technology. More than 400 respondents worldwide took part in a headphone listening experience survey. They were asked about how headphones could be improved, considering various contexts (home, outside, and public transport) and content (music, spoken word, radio drama/TV/film/online content, and telecommunication). The responses were coded into themes covering technologies (e.g. noise cancellation and transparency) and features (e.g. 3D audio) that they would like to see in future headphones. These observations highlight that users' requirements differ depending on the listening environment, but also that the majority are satisfied with their headphone listening experience at home. The type of programme material also caused differences in the users' requirements, indicating that there is most scope for improving users' headphone listening experience for music. The survey also presented evidence of users' desire for newer technologies and features including 3D audio and sharing of multiple audio streams.

      Q Zhu, Philip Coleman, M Wu, J Yang (2017) Robust Acoustic Contrast Control with Reduced In-situ Measurement by Acoustic Modelling, In: Journal of the Audio Engineering Society 65(6), pp. 460-473 Audio Engineering Society

      Personal audio systems generate a local sound field for a listener while attenuating the sound energy at pre-defined quiet zones. In practice, system performance is sensitive to errors in the acoustic transfer functions between the sources and the zones. Regularization is commonly used to improve robustness; however, selecting a regularization parameter is not always straightforward. In this paper, a design framework for robust reproduction is proposed, combining transfer function and error modelling. The framework allows a physical perspective on the regularization required for a system, based on the bound of assumed additive or multiplicative errors, which is obtained by acoustic modelling. Acoustic contrast control is separately combined with worst-case and probability-model optimization, exploiting limited knowledge of the potential error distribution. Monte-Carlo simulations show that these approaches give increased system robustness compared to state-of-the-art approaches for regularization parameter estimation, and experimental results verify that robust sound zone control is achieved in the presence of loudspeaker gain errors. Furthermore, by applying the proposed framework, in-situ transfer function measurements were reduced to a single measurement per loudspeaker, per zone, with limited acoustic contrast degradation of less than 2 dB over 100–3000 Hz compared to the fully measured regularized case.

      M Olik, P Coleman, PJB Jackson, J Francombe, R Mason, M Olsen, M Møller, S Bech (2013) A comparative performance study of sound zoning methods in a reflective environment, In: Proceedings of the 52nd AES International Conference, pp. 214-223

      Whilst sound zoning methods have typically been studied under anechoic conditions, it is desirable to evaluate the performance of various methods in a real room. Three control methods were implemented (delay and sum, DS; acoustic contrast control, ACC; and pressure matching, PM) on two regular 24-element loudspeaker arrays (line and circle). The acoustic contrast between two zones was evaluated and the reproduced sound fields compared for uniformity of energy distribution. ACC generated the highest contrast, whilst PM produced a uniform bright zone. Listening tests were also performed using monophonic auralisations from measured system responses to collect ratings of perceived distraction due to the alternate audio programme. Distraction ratings were affected by control method and programme material.
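
      For orientation, the sketch below sets up the three methods from this abstract in a free-field simulation (our stand-in; the paper used measured responses from 24-element arrays): DS uses phase-conjugate focusing weights, ACC solves a regularized generalized eigenproblem, and PM solves regularized least squares. Positions, frequency and regularization values are assumptions.

```python
# Free-field comparison sketch of DS, ACC and PM weights (illustrative).
import numpy as np

rng = np.random.default_rng(2)
k = 2 * np.pi * 500 / 343.0                      # 500 Hz

def tf(srcs, mics):
    """Free-field transfer functions, microphones x sources."""
    r = np.linalg.norm(mics[:, None, :] - srcs[None, :, :], axis=2)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

# Hypothetical 24-element line array and two zones with 8 mics each
srcs = np.stack([np.linspace(-1.2, 1.2, 24), np.full(24, 2.0)], axis=1)
mics_b = np.array([-0.5, 0.0]) + 0.05 * rng.standard_normal((8, 2))
mics_d = np.array([0.5, 0.0]) + 0.05 * rng.standard_normal((8, 2))
G_b, G_d = tf(srcs, mics_b), tf(srcs, mics_d)

# DS: phase-conjugate weights focusing on the bright-zone centre
g0 = tf(srcs, np.array([[-0.5, 0.0]]))[0]
q_ds = g0.conj() / np.abs(g0)

# ACC: dominant generalized eigenvector (regularized)
B = G_d.conj().T @ G_d + 1e-4 * np.eye(24)
vals, vecs = np.linalg.eig(np.linalg.solve(B, G_b.conj().T @ G_b))
q_acc = vecs[:, np.argmax(vals.real)]

# PM: least squares to unit pressure in the bright zone, zero in the dark
G = np.vstack([G_b, G_d])
p_t = np.concatenate([np.ones(8), np.zeros(8)])
q_pm = np.linalg.solve(G.conj().T @ G + 1e-4 * np.eye(24), G.conj().T @ p_t)

for name, q in [("DS", q_ds), ("ACC", q_acc), ("PM", q_pm)]:
    c = 10 * np.log10(np.mean(np.abs(G_b @ q) ** 2) /
                      np.mean(np.abs(G_d @ q) ** 2))
    print(f"{name}: contrast {c:.1f} dB")
```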

      PJ Jackson, F Jacobsen, P Coleman, JA Pedersen (2013) Sound field planarity characterized by superdirective beamforming, In: Proceedings of Meetings on Acoustics 19

      The ability to replicate a plane wave represents an essential element of spatial sound field reproduction. In sound field synthesis, the desired field is often formulated as a plane wave and the error minimized; for other sound field control methods, the energy density or energy ratio is maximized. In all cases and further to the reproduction error, it is informative to characterize how planar the resultant sound field is. This paper presents a method for quantifying a region's acoustic planarity by superdirective beamforming with an array of microphones, which analyzes the azimuthal distribution of impinging waves and hence derives the planarity. Estimates are obtained for a variety of simulated sound field types, tested with respect to array orientation, wavenumber, and number of microphones. A range of microphone configurations is examined. Results are compared with delay-and-sum beamforming, which is equivalent to spatial Fourier decomposition. The superdirective beamformer provides better characterization of sound fields, and is effective with a moderate number of omni-directional microphones over a broad frequency range. Practical investigation of planarity estimation in real sound fields is needed to demonstrate its validity as a physical sound field evaluation measure.
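
      A compact sketch of the planarity measure as we understand it (notation and parameter values ours): a regularized superdirective beamformer scans candidate azimuths, and planarity is the energy-weighted cosine alignment of the estimated directional energy distribution with its principal direction, so a single plane wave scores close to 1.

```python
# Planarity estimation sketch: superdirective azimuth scan on a
# circular array; geometry, frequency and noise level are assumed.
import numpy as np

rng = np.random.default_rng(3)
k = 2 * np.pi * 1000 / 343.0                     # 1 kHz
n_mic = 16
phi = np.linspace(0, 2 * np.pi, n_mic, endpoint=False)
mics = 0.1 * np.stack([np.cos(phi), np.sin(phi)], axis=1)   # 0.1 m radius

def steer(theta):
    """Plane-wave steering vector for azimuth theta."""
    u = np.array([np.cos(theta), np.sin(theta)])
    return np.exp(-1j * k * (mics @ u))

# Diffuse-field coherence matrix (3D sinc model as a simple stand-in)
d = np.linalg.norm(mics[:, None] - mics[None, :], axis=2)
Ri = np.linalg.inv(np.sinc(k * d / np.pi) + 1e-3 * np.eye(n_mic))

# Simulated measurement: plane wave from 30 degrees plus sensor noise
p = steer(np.deg2rad(30.0)) + 0.05 * (rng.standard_normal(n_mic)
                                      + 1j * rng.standard_normal(n_mic))

thetas = np.deg2rad(np.arange(0.0, 360.0, 5.0))
E = []
for t in thetas:                                  # MVDR-like azimuth scan
    a = steer(t)
    w = (Ri @ a) / (a.conj() @ Ri @ a)
    E.append(np.abs(w.conj() @ p) ** 2)
E = np.array(E)

# Energy-weighted alignment with the principal direction of arrival
u = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
princ = (E[:, None] * u).sum(axis=0)
cosines = u @ (princ / np.linalg.norm(princ))
planarity = (E * cosines.clip(min=0)).sum() / E.sum()
print(f"planarity: {planarity:.2f}  (1.0 = single plane wave)")
```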

      Craig Cieciura, Russell Mason, Philip Coleman, Jon Francombe (2020) Understanding users' choices and constraints when positioning loudspeakers in living rooms

      A study was conducted in participants’ homes to ascertain how they would position one to eight compact wireless loudspeakers, with the goal of enhancing their existing system. The eleven participants described three key themes, creating an arrangement that: was spatially balanced and evenly distributed; maintained the room’s aesthetics; maintained the room’s functionality. In practice, the results showed that participants prioritised aesthetics and functionality, whilst balance was not usually achieved. It was concluded that a hierarchy of preferred positions in each space exists, as the same positions were reused whilst positioning differing numbers of loudspeakers and by different participants in each location. Consistencies were observed between the locations which can be used to estimate loudspeaker positions for a given living room layout.

      J Francombe, T Brookes, R Mason, R Flindt, P Coleman, Q Liu, PJB Jackson (2015) Production and reproduction of programme material for a variety of spatial audio formats, In: Proc. AES 138th Int. Conv. (e-Brief), Warsaw, pp. 4-4

      For subjective experimentation on 3D audio systems, suitable programme material is needed. A large-scale recording session was performed in which four ensembles were recorded with a range of existing microphone techniques (aimed at mono, stereo, 5.0, 9.0, 22.0, ambisonic, and headphone reproduction) and a novel 48-channel circular microphone array. Further material was produced by remixing and augmenting pre-existing multichannel content. To mix and monitor the programme items (which included classical, jazz, pop and experimental music, and excerpts from a sports broadcast and a film soundtrack), a flexible 3D audio reproduction environment was created. Solutions to the following challenges were found: level calibration for different reproduction formats; bass management; and adaptable signal routing from different software and file formats.

      Saeid Safavi, Turab Iqbal, Wenwu Wang, Philip Coleman, Mark D. Plumbley (2020) Open-Window: A Sound Event Dataset For Window State Detection and Recognition, In: Proc. 5th International Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE 2020)

      Situated in the domain of urban sound scene classification by humans and machines, this research is the first step towards mapping urban noise pollution experienced indoors and finding ways to reduce its negative impact in people's homes. We have recorded a sound dataset, called Open-Window, which contains recordings from three different locations and four different window states: two stationary states (open and closed) and two transitional states (open to closed and closed to open). We have then built our machine recognition baselines for different scenarios (open set versus closed set) using a deep learning framework. A human listening test was also performed to compare human and machine performance in detecting the window state using only acoustic cues. Our experimental results reveal that when using a simple machine baseline system, humans and machines achieve similar average performance for closed set experiments.

      Philip Coleman, Philip Jackson (2017) Planarity analysis of room acoustics for object-based reverberation, In: ICSV24 Proceedings The International Institute of Acoustics and Vibration (IIAV)

      Recent work into 3D audio reproduction has considered the definition of a set of parameters to encode reverberation into an object-based audio scene. The reverberant spatial audio object (RSAO) describes the reverberation in terms of a set of localised, delayed and filtered (early) reflections, together with a late energy envelope modelling the diffuse late decay. The planarity metric, originally developed to evaluate the directionality of reproduced sound fields, is used to analyse a set of multichannel room impulse responses (RIRs) recorded at a microphone array. Planarity describes the spatial compactness of incident sound energy, which tends to decrease as the reflection density and diffuseness of the room response develop over time. Accordingly, planarity complements intensity-based diffuseness estimators, which quantify the degree to which the sound field at a discrete frequency within a particular time window is due to an impinging coherent plane wave. In this paper, we use planarity as a tool to analyse the sound field in relation to the RSAO parameters. Specifically, we use planarity to estimate two important properties of the sound field. First, as high planarity identifies the most localised reflections along the RIR, we estimate the most planar portions of the RIR, corresponding to the RSAO early reflection model and increasing the likelihood of detecting prominent specular reflections. Second, as diffuse sound fields give a low planarity score, we investigate planarity for data-based mixing time estimation. Results show that planarity estimates on measured multichannel RIR datasets represent a useful tool for room acoustics analysis and RSAO parameterisation.

      Q Zhu, Philip Coleman, M Wu, J Yang (2017) Robust reproduction of sound zones with local sound orientation, In: The Journal of the Acoustical Society of America 142(1), pp. EL118-EL122 Acoustical Society of America

      Pressure matching (PM) and planarity control (PC) methods can be used to reproduce local sound with a certain orientation at the listening zone, while suppressing the sound energy at the quiet zone. In this letter, regularized PM and PC, incorporating coarse error estimation, are introduced to increase the robustness in non-ideal reproduction scenarios. Facilitated by this, the interaction between regularization, robustness, (tuned) personal audio optimization and local directional performance is explored. Simulations show that under certain conditions, PC and weighted PM achieve comparable performance, while PC is more robust to a poorly selected regularization parameter.

      James Woodcock, Jon Francombe, Andreas Franck, Philip Coleman, Richard Hughes, Hansung Kim, Qingju Liu, Dylan Menzies, Marcos F Simón Gálvez, Yan Tang, Tim Brookes, William J Davies, Bruno M Fazenda, Russell Mason, Trevor J Cox, Filippo Maria Fazi, Philip Jackson, Chris Pike, Adrian Hilton (2018) A Framework for Intelligent Metadata Adaptation in Object-Based Audio, In: AES E-Library, pp. P11-3 Audio Engineering Society

      Object-based audio can be used to customize, personalize, and optimize audio reproduction depending on the specific listening scenario. To investigate and exploit the benefits of object-based audio, a framework for intelligent metadata adaptation was developed. The framework uses detailed semantic metadata that describes the audio objects, the loudspeakers, and the room. It features an extensible software tool for real-time metadata adaptation that can incorporate knowledge derived from perceptual tests and/or feedback from perceptual meters to drive adaptation and facilitate optimal rendering. One use case for the system is demonstrated through a rule-set (derived from perceptual tests with experienced mix engineers) for automatic adaptation of object levels and positions when rendering 3D content to two- and five-channel systems.

      Luca Remaggi, Philip Jackson, Philip Coleman, Wenwu Wang (2017) Acoustic Reflector Localization: Novel Image Source Reversion and Direct Localization Methods, In: IEEE Transactions on Audio, Speech and Language Processing 25(2), pp. 296-309 IEEE

      Acoustic reflector localization is an important issue in audio signal processing, with direct applications in spatial audio, scene reconstruction, and source separation. Several methods have recently been proposed to estimate the 3D positions of acoustic reflectors given room impulse responses (RIRs). In this article, we categorize these methods as “image-source reversion”, which localizes the image source before finding the reflector position, and “direct localization”, which localizes the reflector without intermediate steps. We present five new contributions. First, an onset detector, called the clustered dynamic programming projected phase-slope algorithm, is proposed to automatically extract the time of arrival for early reflections within the RIRs of a compact microphone array. Second, we propose an image-source reversion method that uses the RIRs from a single loudspeaker. It is constructed by combining an image source locator (the image source direction and range (ISDAR) algorithm), and a reflector locator (using the loudspeaker-image bisection (LIB) algorithm). Third, two variants of it, exploiting multiple loudspeakers, are proposed. Fourth, we present a direct localization method, the ellipsoid tangent sample consensus (ETSAC), exploiting ellipsoid properties to localize the reflector. Finally, systematic experiments on simulated and measured RIRs are presented, comparing the proposed methods with the state of the art. ETSAC produces lower errors than the alternative methods across our datasets. Nevertheless, the ISDAR-LIB combination performs well and has a run time 200 times faster than ETSAC.
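
      The geometric core of the loudspeaker-image bisection (LIB) step, as described in the abstract, can be stated in a few lines: once the image source has been localized, the reflector is the perpendicular bisector plane between the loudspeaker and its image. A minimal sketch (ours):

```python
# Perpendicular-bisector geometry underlying loudspeaker-image
# bisection (LIB): midpoint and normal define the reflector plane.
import numpy as np

def reflector_from_image(loudspeaker, image_source):
    """Return a point on the reflector plane and its unit normal."""
    loudspeaker = np.asarray(loudspeaker, dtype=float)
    image_source = np.asarray(image_source, dtype=float)
    point = 0.5 * (loudspeaker + image_source)   # midpoint lies on the plane
    normal = image_source - loudspeaker          # plane is normal to this axis
    return point, normal / np.linalg.norm(normal)

# Example: a loudspeaker at (1, 1, 1.2) mirrored in the wall x = 3
pt, n = reflector_from_image([1.0, 1.0, 1.2], [5.0, 1.0, 1.2])
print("point on plane:", pt, "normal:", n)       # -> (3, 1, 1.2), (1, 0, 0)
```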

      Luca Remaggi, PJB Jackson, Philip Coleman (2015) Estimation of Room Reflection Parameters for a Reverberant Spatial Audio Object, In: Proc. AES 138th Int. Convention, Warsaw, Poland

      Estimating and parameterizing the early and late reflections of an enclosed space is an interesting topic in acoustics. With a suitable set of parameters, the current concept of a spatial audio object (SAO), which is typically limited to either direct (dry) sound or diffuse field components, could be extended to afford an editable spatial description of the room acoustics. In this paper we present an analysis/synthesis method for parameterizing a set of measured room impulse responses (RIRs). RIRs were recorded in a medium-sized auditorium, using a uniform circular array of microphones representing the perspective of a listener in the front row. During the analysis process, these RIRs were decomposed, in time, into three parts: the direct sound, the early reflections, and the late reflections. From the direct sound and early reflections, parameters were extracted for the length, amplitude, and direction of arrival (DOA) of the propagation paths by exploiting the dynamic programming projected phase-slope algorithm (DYPSA) and classical delay-and-sum beamformer (DSB). Their spectral envelope was calculated using linear predictive coding (LPC). Late reflections were modeled by frequency-dependent decays excited by band-limited Gaussian noise. The combination of these parameters for a given source position and the direct source signal represents the reverberant or “wet” spatial audio object. RIRs synthesized for a specified rendering and reproduction arrangement were convolved with dry sources to form reverberant components of the sound scene. The resulting signals demonstrated potential for these techniques, e.g., in SAO reproduction over a 22.2 surround sound system.
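
      A simplified sketch of the first stage of this pipeline (our illustration; the DYPSA onset detection, beamforming and LPC envelope steps are not reproduced here): segment an RIR into direct, early and late parts by time windows, then estimate the late decay by Schroeder backward integration. The synthetic RIR and the 80 ms early/late boundary are assumptions.

```python
# RIR segmentation and Schroeder decay estimate (illustrative only).
import numpy as np

fs = 48000
rng = np.random.default_rng(4)

# Synthetic RIR: direct spike, a few early reflections, noise-like tail
h = np.zeros(fs // 2)
h[100] = 1.0
for t, a in [(700, 0.5), (1200, 0.35), (2000, 0.25)]:
    h[t] = a
h += 0.05 * rng.standard_normal(len(h)) * np.exp(-np.arange(len(h)) / (0.15 * fs))

onset = np.argmax(np.abs(h) > 0.5 * np.abs(h).max())
t_early = onset + int(0.08 * fs)          # assumed 80 ms mixing time
direct = h[max(onset - 32, 0): onset + 32]
early = h[onset + 32: t_early]            # candidate specular reflections
late = h[t_early:]                        # diffuse decay

# Schroeder backward integration on the late part -> decay rate, RT60
edc = np.cumsum(late[::-1] ** 2)[::-1]
edc_dB = 10 * np.log10(edc / edc[0] + 1e-12)
t = np.arange(len(late)) / fs
mask = edc_dB > -30                       # fit the first 30 dB of decay
slope = np.polyfit(t[mask], edc_dB[mask], 1)[0]   # dB per second
print(f"late decay {slope:.0f} dB/s -> RT60 ~ {-60.0 / slope:.2f} s")
```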

      Craig Cieciura, Russell Mason, Philip Coleman, Matthew Paradis (2019) Creating Object-Based Stimuli to Explore Media Device Orchestration Reproduction Techniques, In: Proceedings of the AES 145th Convention, New York, NY, USA, 2018 October 17-20, pp. 59-63 Audio Engineering Society

      Media Device Orchestration (MDO) makes use of interconnected devices to augment a reproduction system, and could be used to deliver more immersive audio experiences to domestic audiences. To investigate optimal rendering on an MDO-based system, stimuli were created via: 1) object-based audio (OBA) mixes undertaken in a reference listening room; and 2) up to 13 rendered versions of these employing a range of installed and ad-hoc loudspeakers with varying cost, quality and position. The programme items include audio-visual material (short film trailer and big band performance) and audio-only material (radio panel show, pop track, football match, and orchestral performance). The object-based programme items and alternate MDO configurations are made available for testing and demonstrating OBA systems.

      Craig Cieciura, Russell Mason, Philip Coleman, Matthew Paradis (2018) Survey of Media Device Ownership, Media Service Usage and Group Media Consumption in UK Households Zenodo

      Data generated from a survey produced by the authors and distributed by GfK, to gather information about: audio and audio-visual media device ownership in UK households; types of audio and audio-visual media delivery methods and services used by UK audiences; smart-device and voice-assistant ownership; individual versus household group versus visitor group weekly media consumption time. This forms part of the PhD research of Craig Cieciura. This was experiment-based research to determine how to render object-based audio in the domestic environment using ad-hoc, audio-capable devices. References Cieciura, C., Mason, R., Coleman, P. and Paradis, M. 2018. Survey of media device ownership, media service usage, and group media consumption in UK households, Audio Engineering Society Preprint, 145th Convention, Engineering Brief 456.

      The topic of sound zone reproduction, whereby listeners sharing an acoustic space can receive personalized audio content, has been researched for a number of years. Recently, a number of sound zone systems have been realized, moving the concept towards becoming a practical reality. Current implementations of sound zone systems have relied upon conventional loudspeaker geometries such as linear and circular arrays. Line arrays may be compact, but do not necessarily give the system the opportunity to compensate for room reflections in real-world environments. Circular arrays give this opportunity, and also give greater flexibility for spatial audio reproduction, but typically require large numbers of loudspeakers in order to reproduce sound zones over an acceptable bandwidth. Therefore, one key area of research standing between the ideal capability and the performance of a physical system is that of establishing the number and location of the loudspeakers comprising the reproduction array. In this study, the topic of loudspeaker configurations was considered for two-zone reproduction, using a circular array of 60 loudspeakers as the candidate set for selection. A numerical search procedure was used to select a number of loudspeakers from the candidate set. The novel objective function driving the search comprised terms relating to the acoustic contrast between the zones, array effort, matrix condition number, and target zone planarity. The performance of the selected sets using acoustic contrast control was measured in an acoustically treated studio. Results demonstrate that the loudspeaker selection process has potential for maximising the contrast over frequency by increasing the minimum contrast over the frequency range 100-4000 Hz. The array effort and target planarity can also be optimised, depending on the formulation of the objective function. Future work should consider greater diversity of candidate locations.
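
      A toy version of such a search is sketched below (our simplification, not the study's code): greedy forward selection from a 60-element circular candidate set, scoring each subset by acoustic contrast minus a weighted array-effort penalty. The published objective also included matrix condition number and target-zone planarity terms, which are omitted here.

```python
# Greedy loudspeaker selection sketch; positions, frequency and the
# objective weighting are hypothetical simplifications.
import numpy as np

rng = np.random.default_rng(5)
k = 2 * np.pi * 500 / 343.0
ang = np.linspace(0, 2 * np.pi, 60, endpoint=False)
cands = 1.8 * np.stack([np.cos(ang), np.sin(ang)], axis=1)   # candidate ring

def tf(srcs, mics):
    r = np.linalg.norm(mics[:, None, :] - srcs[None, :, :], axis=2)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

mics_b = np.array([-0.5, 0.0]) + 0.05 * rng.standard_normal((8, 2))
mics_d = np.array([0.5, 0.0]) + 0.05 * rng.standard_normal((8, 2))
G_b, G_d = tf(cands, mics_b), tf(cands, mics_d)

def objective(sel, alpha=0.1):
    """Acoustic contrast (dB) minus a weighted array-effort penalty."""
    Gb, Gd = G_b[:, sel], G_d[:, sel]
    B = Gd.conj().T @ Gd + 1e-4 * np.eye(len(sel))
    vals, vecs = np.linalg.eig(np.linalg.solve(B, Gb.conj().T @ Gb))
    q = vecs[:, np.argmax(vals.real)]
    contrast = 10 * np.log10(np.mean(np.abs(Gb @ q) ** 2) /
                             np.mean(np.abs(Gd @ q) ** 2))
    q = q / np.sqrt(np.mean(np.abs(Gb @ q) ** 2))   # unit bright-zone level
    effort = 10 * np.log10(np.sum(np.abs(q) ** 2))
    return contrast - alpha * effort

selected = [0]                            # arbitrary seed loudspeaker
while len(selected) < 8:
    rest = [i for i in range(60) if i not in selected]
    selected.append(max(rest, key=lambda i: objective(selected + [i])))
print("selected loudspeakers:", sorted(selected))
```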

      Philip Coleman, Miguel Blanco Galindo, Philip Jackson (2017) Comparison of microphone array geometries for multi-point sound field reproduction, In: ICSV 24 Proceedings International Institute of Acoustics and Vibration (IIAV)

      Multi-point approaches for sound field control generally sample the listening zone(s) with pressure microphones, and use these measurements as an input for an optimisation cost function. A number of techniques are based on this concept, for single-zone (e.g. least-squares pressure matching (PM), brightness control, planarity panning) and multi-zone (e.g. PM, acoustic contrast control, planarity control) reproduction. Accurate performance predictions are obtained when distinct microphone positions are employed for setup versus evaluation. While, in simulation, one can afford a dense sampling of virtual microphones, it is desirable in practice to have a microphone array which can be positioned once in each zone to measure the setup transfer functions between each loudspeaker and that zone. In this contribution, we present simulation results over a fixed dense set of evaluation points comparing the performance of several multi-point optimisation approaches for 2D reproduction with a 60 channel circular loudspeaker arrangement. Various regular setup microphone arrays are used to calculate the sound zone filters: circular grid, circular, dual-circular, and spherical arrays, each with different numbers of microphones. Furthermore, the effect of a rigid spherical baffle is studied for the circular and spherical arrangements. The results of this comparative study show how the directivity and effective frequency range of multi-point optimisation techniques depend on the microphone array used to sample the zones. In general, microphone arrays with dense spacing around the boundary give better angular discrimination, leading to more accurate directional sound reproduction, while those distributed around the whole zone enable more accurate prediction of the reproduced target sound pressure level.

      Miguel Blanco Galindo, Philip Coleman, Philip J. B. Jackson (2019) Robust hypercardioid synthesis for spatial audio capture: microphone geometry, directivity and regularization, In: T Tew, D Williams (eds.), Proceedings of the 2019 AES International Conference on Immersive and Interactive Audio, 49 Audio Engineering Society

      Frequency-invariant beamformers are useful for spatial audio capture since their attenuation of sources outside the look direction is consistent across frequency. In particular, the least-squares beamformer (LSB) approximates arbitrary frequency-invariant beampatterns with generic microphone configurations. This paper investigates the effects of array geometry, directivity order and regularization for robust hypercardioid synthesis up to 15th order with the LSB, using three 2D 32-microphone array designs (rectangular grid, open circular, and circular with cylindrical baffle). While the directivity increases with order, the frequency range is inversely proportional to the order and is widest for the cylindrical array. Regularization results in broadening of the mainlobe and reduced on-axis response at low frequencies. The PEASS toolkit was used to evaluate perceptually beamformed speech signals.
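
      Sketching the least-squares beamformer under our own assumptions (this is not the paper's code, and ((1 + cos θ)/2)^N is used as a simple high-order target in place of the true hypercardioid definition): sample the target directivity over azimuth, then solve a Tikhonov-regularized least-squares fit for the array weights.

```python
# Least-squares beamformer sketch: fit circular-array weights to a
# sampled target pattern; geometry, order and regularization assumed.
import numpy as np

k = 2 * np.pi * 2000 / 343.0                  # 2 kHz
n_mic, N = 32, 8                              # mics, target pattern order
phi = np.linspace(0, 2 * np.pi, n_mic, endpoint=False)
mics = 0.05 * np.stack([np.cos(phi), np.sin(phi)], axis=1)  # open circle

thetas = np.deg2rad(np.arange(0.0, 360.0, 2.0))
# Steering matrix: one plane-wave response per scanned azimuth
A = np.exp(-1j * k * (mics @ np.stack([np.cos(thetas), np.sin(thetas)])))
d = ((1 + np.cos(thetas)) / 2) ** N           # target pattern, look dir 0

lam = 1e-2                                    # Tikhonov term for robustness
w = np.linalg.solve(A @ A.conj().T + lam * np.eye(n_mic), A @ d)
b = np.abs(w.conj() @ A)                      # achieved beampattern
print(f"on-axis {b[0]:.2f}, rear {b[len(b) // 2]:.3f}")
```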

      Philip Coleman, Anesa Hosein (2023) Using voluntary laboratory simulations as preparatory tasks to improve conceptual knowledge and engagement, In: European Journal of Engineering Education 48(5), pp. 899-912 Taylor & Francis

      Laboratory tasks often focus on mechanical procedures, leaving limited time and opportunities for students to build conceptual knowledge. We investigate to what extent introducing simulation tasks to preparation work can enable students to build their conceptual knowledge. We surveyed two cohorts of students taking an electronics module. Laboratory report marks were also analysed across the two cohorts (before and after introducing simulations in the laboratory preparation). No significant difference was found between the cohorts, but the maximum marks increased after simulations were introduced. Students perceived that using simulations aided their constructive knowledge and knowledge confidence. Analysis of the free-text responses suggests that students benefitted from the simulation tasks by visualising the theory and concepts, confirming and checking results, and exploring different scenarios before and after the physical laboratory session. These results suggest that laboratory practicals should be supported with simulation software where possible.

      P Coleman, PJB Jackson, M Olik, JA Pedersen (2013) Optimizing the planarity of sound zones, In: Proceedings of the AES International Conference, pp. 204-213

      Reproduction of personal sound zones can be attempted by sound field synthesis, energy control, or a combination of both. Energy control methods can create an unpredictable pressure distribution in the listening zone. Sound field synthesis methods may be used to overcome this problem, but tend to produce a lower acoustic contrast between the zones. Here, we present a cost function to optimize the cancellation and the plane wave energy over a range of incoming azimuths, producing a planar sound field without explicitly specifying the propagation direction. Simulation results demonstrate the performance of the methods in comparison with the current state of the art. The method produces consistently high contrast and a consistently planar target sound zone across the frequency range 80-7000 Hz.

      J Francombe, K Baykaner, R Mason, M Dewhirst, P Coleman, M Olik, PJB Jackson, S Bech, JA Pedersen (2013) Perceptually optimised loudspeaker selection for the creation of personal sound zones, In: Proceedings of the 52nd AES International Conference, pp. 169-178

      Sound field control methods can be used to create multiple zones of audio in the same room. Separation achieved by such systems has classically been evaluated using physical metrics including acoustic contrast and target-to-interferer ratio (TIR). However, to optimise the experience for a listener it is desirable to consider perceptual factors. A search procedure was used to select 5 loudspeakers for production of 2 sound zones using acoustic contrast control. Comparisons were made between searches driven by physical (programme-independent TIR) and perceptual (distraction predictions from a statistical model) cost functions. Performance was evaluated on TIR and predicted distraction in addition to subjective ratings. The perceptual cost function showed some benefits over physical optimisation, although the model used needs further work.

      P Coleman, PJB Jackson, M Olik, M Møller, M Olsen, JA Pedersen (2014) Acoustic contrast, planarity and robustness of sound zone methods using a circular loudspeaker array, In: Journal of the Acoustical Society of America 135(4), pp. 1929-1940

      Since the mid 1990s, acoustics research has been undertaken relating to the sound zone problem—using loudspeakers to deliver a region of high sound pressure while simultaneously creating an area where the sound is suppressed—in order to facilitate independent listening within the same acoustic enclosure. The published solutions to the sound zone problem are derived from areas such as wave field synthesis and beamforming. However, the properties of such methods differ and performance tends to be compared against similar approaches. In this study, the suitability of energy focusing, energy cancelation, and synthesis approaches for sound zone reproduction is investigated. Anechoic simulations based on two zones surrounded by a circular array show each of the methods to have a characteristic performance, quantified in terms of acoustic contrast, array control effort and target sound field planarity. Regularization is shown to have a significant effect on the array effort and achieved acoustic contrast, particularly when mismatched conditions are considered between calculation of the source weights and their application to the system.

      P Coleman, A Franck, PJB Jackson, R Hughes, L Remaggi, F Melchior (2016)On object based audio with reverberation Audio Engineering Society

      Object-based audio is gaining momentum as a means for future audio productions to be format-agnostic and interactive. Recent standardization developments make recommendations for object formats, however the capture, production and reproduction of reverberation is an open issue. In this paper, we review approaches for recording, transmitting and rendering reverberation over a 3D spatial audio system. Techniques include channel-based approaches where room signals intended for a specific reproduction layout are transmitted, and synthetic reverberators where the room effect is constructed at the renderer. We consider how each approach translates into an object-based context considering the end-to-end production chain of capture, representation, editing, and rendering. We discuss some application examples to highlight the implications of the various approaches.

      Miguel Blanco Galindo, Philip Coleman, Philip Jackson (2019)Robust hypercardioid synthesis for spatial audio capture: microphone geometry, directivity and robustness, In: AES E-Library

      Frequency-invariant beamformers are useful for spatial audio capture since their attenuation of sources outside the look direction is consistent across frequency. In particular, the least-squares beamformer (LSB) approximates arbitrary frequency-invariant beampatterns with generic microphone configurations. This paper investigates the effects of array geometry, directivity order and regularization for robust hypercardioid synthesis up to 15th order with the LSB, using three 2D 32-microphone array designs (rectangular grid, open circular, and circular with cylindrical baffle). While the directivity increases with order, the frequency range is inversely proportional to the order and is widest for the cylindrical array. Regularization results in broadening of the mainlobe and reduced on-axis response at low frequencies. The PEASS toolkit was used to evaluate perceptually beamformed speech signals.
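
      As a sketch of the regularized least-squares design behind the LSB (a simplification under assumed names, not the paper's implementation), the weights approximating a desired beampattern d over sampled directions, given a steering matrix G at one frequency, solve a Tikhonov-regularized normal equation:

          import numpy as np

          def lsb_weights(G, d, reg):
              # Minimise ||G w - d||^2 + reg * ||w||^2 for microphone weights w.
              # G: (directions x mics) steering matrix; d: desired beampattern samples.
              M = G.shape[1]
              return np.linalg.solve(G.conj().T @ G + reg * np.eye(M), G.conj().T @ d)

      Increasing reg improves robustness at the cost of directivity, consistent with the mainlobe broadening reported above.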

      Craig Cieciura, Russell Mason, Philip Coleman, Matthew Paradis (2019)Survey of Media Device Ownership, Media Service Usage and Group Media Consumption in UK Households, In: Proceedings of the AES 145th Convention, New York, NY, USA, 2018 October 17 – 20, pp. 24-28 Audio Engineering Society

      Homes contain a plethora of devices for audio-visual content consumption, which intelligent reproduction systems can exploit to give the best possible experience. To investigate media device ownership in the home, media service-type usage and solitary versus group audio/audio-visual media consumption, a survey of UK households with 1102 respondents was undertaken. The results suggest that there is already significant ownership of wireless and smart loudspeakers, as well as other interconnected devices containing loudspeakers such as smartphones and tablets. Questions on group media consumption suggest that the majority of listeners spend more time consuming media with others than alone, demonstrating an opportunity for systems which can adapt to varying audience requirements within the same environment.

      Philip Coleman, A Franck, D Menzies, Philip Jackson (2017)Object-based reverberation encoding from first-order Ambisonic RIRs, In: Proceedings of 142nd AES International Convention Audio Engineering Society

      Recent work on a reverberant spatial audio object (RSAO) encoded spatial room impulse responses (RIRs) as object-based metadata which can be synthesized in an object-based renderer. Encoding reverberation into metadata presents new opportunities for end users to interact with and personalize reverberant content. The RSAO models an RIR as a set of early reflections together with a late reverberation filter. Previous work to encode the RSAO parameters was based on recordings made with a dense array of omnidirectional microphones. This paper describes RSAO parameterization from first-order Ambisonic (B-Format) RIRs, making the RSAO compatible with existing spatial reverb libraries. The object-based implementation achieves reverberation time, early decay time, clarity and interaural cross-correlation similar to direct Ambisonic rendering of 13 test RIRs.
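
      The RSAO representation described above (direct sound, discrete early reflections, late reverberation filter) can be pictured with a toy synthesis routine; this sketch assumes a simple parameter layout and is not the project's renderer:

          import numpy as np

          def synthesize_rir(fs, direct_delay, reflections, late_tail):
              # Build an RIR from RSAO-style parameters: a unit direct impulse,
              # (delay_seconds, gain) early reflections, then a diffuse late tail.
              last = max([direct_delay] + [d for d, _ in reflections])
              rir = np.zeros(int(fs * last) + 1)
              rir[int(fs * direct_delay)] += 1.0
              for delay, gain in reflections:
                  rir[int(fs * delay)] += gain
              return np.concatenate([rir, np.asarray(late_tail, dtype=float)])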

      Luca Remaggi, Philip Jackson, Philip Coleman, T Parnell (2018)Estimation of Object-based Reverberation using an Ad-hoc Microphone Arrangement for Live Performance, In: Proceedings of 144th AES Convention Audio Engineering Society

      We present a novel pipeline to estimate reverberant spatial audio object (RSAO) parameters given room impulse responses (RIRs) recorded by ad-hoc microphone arrangements. The proposed pipeline performs three tasks: direct-to-reverberant-ratio (DRR) estimation; microphone localization; RSAO parametrization. RIRs recorded at Bridgewater Hall by microphones arranged for a BBC Philharmonic Orchestra performance were parametrized. Objective measures of the rendered RSAO reverberation characteristics were evaluated and compared with reverberation recorded by a Soundfield microphone. Alongside informal listening tests, the results confirmed that the rendered RSAO gave a plausible reproduction of the hall, comparable to the measured response. The objectification of the reverb from in-situ RIR measurements unlocks customization and personalization of the experience for different audio systems, user preferences and playback environments.
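
      The DRR estimation step can be illustrated with a textbook definition (a sketch assuming a single dominant direct path, not the pipeline's actual estimator): energy in a short window around the direct-path peak versus everything after it.

          import numpy as np

          def drr_db(rir, fs, window_ms=2.5):
              # Direct-to-reverberant ratio: direct-path window vs remaining tail.
              rir = np.asarray(rir, dtype=float)
              peak = int(np.argmax(np.abs(rir)))
              half = int(fs * window_ms / 1000.0)
              direct = rir[max(0, peak - half):peak + half + 1]
              reverb = rir[peak + half + 1:]
              return 10.0 * np.log10(np.sum(direct ** 2) / np.sum(reverb ** 2))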

      P Coleman, PJB Jackson (2014)Planarity panning for listener-centered spatial audio, In: Proc. AES 55th Int. Conf., Helsinki pp. 8-8

      Techniques such as multi-point optimization, wave field synthesis and ambisonics attempt to create spatial effects by synthesizing a sound field over a listening region. In this paper, we propose planarity panning, which uses superdirective microphone array beamforming to focus the sound from the specified direction, as an alternative approach. Simulations compare performance against existing strategies, considering the cases where the listener is central and non-central in relation to a 60 channel circular loudspeaker array. Planarity panning requires low control effort and provides high sound field planarity over a large frequency range, when the zone positions match the target regions specified for the filter calculations. Future work should implement and validate the perceptual properties of the method.
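
      Planarity here scores how nearly the zone sound field resembles a single plane wave. A simplified stand-in for the published metric (the directional energies would come from a steered superdirective beamformer analysis of the zone) weights each arrival by its alignment with the dominant direction:

          import numpy as np

          def planarity(energies, angles_rad):
              # 1.0 for a single plane wave; lower as energy spreads across directions.
              e = np.asarray(energies, dtype=float)
              th = np.asarray(angles_rad, dtype=float)
              principal = th[np.argmax(e)]
              return float(np.sum(e * np.cos(th - principal)) / np.sum(e))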

      J Woodcock, C Pike, F Melchior, Philip Coleman, A Franck, Adrian Hilton (2016)Presenting the S3A Object-Based Audio Drama dataset, In: AES E-library Audio Engineering Society

      This engineering brief reports on the production of 3 object-based audio drama scenes, commissioned as part of the S3A project. 3D reproduction and an object-based workflow were considered and implemented from the initial script commissioning through to the final mix of the scenes. The scenes are being made available as Broadcast Wave Format files containing all objects as separate tracks and all metadata necessary to render the scenes as an XML chunk in the header conforming to the Audio Definition Model specification (Recommendation ITU-R BS.2076 [1]). It is hoped that these scenes will find use in perceptual experiments and in the testing of 3D audio systems. The scenes are available via the following link: http://dx.doi.org/10.17866/rd.salford.3043921.

      Craig Cieciura, Russell Mason, Philip Coleman, Matthew Paradis (2018)Creating Object-Based Stimuli To Explore Media Device Orchestration Reproduction Techniques Zenodo

      Dataset containing Object-based versions and rendered out MDO loudspeaker feeds of two programme items, adapted from existing material to explore Media Device Orchestration reproduction techniques. This forms part of the PhD research of Craig Cieciura. This was experiment-based research to determine how to render object-based audio in the domestic environment using ad-hoc, audio-capable devices. References Cieciura, C., Mason, R., Coleman, P. and Paradis, M. 2018. Creating Object-Based Stimuli to Explore Media Device Orchestration Reproduction Techniques, Audio Engineering Society Preprint, 145th Convention, Engineering Brief 463.

      Miguel Blanco Galindo, Philip Jackson, Philip Coleman, Luca Remaggi (2017)Microphone array design for spatial audio object early reflection parametrisation from room impulse responses, In: ICSV 24 Proceedings International Institute of Acoustics and Vibration (IIAV)

      Room Impulse Responses (RIRs) measured with microphone arrays capture spatial and nonspatial information, e.g. the early reflections’ directions and times of arrival, the size of the room and its absorption properties. The Reverberant Spatial Audio Object (RSAO) was proposed as a method to encode room acoustic parameters from measured array RIRs. As the RSAO is object-based audio compatible, its parameters can be rendered to arbitrary reproduction systems and edited to modify the reverberation characteristics, to improve the user experience. Various microphone array designs have been proposed for sound field and room acoustic analysis, but a comparative performance evaluation is not available. This study assesses the performance of five regular microphone array geometries (linear, rectangular, circular, dual-circular and spherical) to capture RSAO parameters for the direct sound and early reflections of RIRs. The image source method is used to synthesise RIRs at the microphone positions as well as at the centre of the array. From the array RIRs, the RSAO parameters are estimated and compared to the reference parameters at the centre of the array. A performance comparison among the five arrays is established as well as the effect of a rigid spherical baffle for the circular and spherical arrays. The effects of measurement uncertainties, such as microphone misplacement and sensor noise errors, are also studied. The results show that planar arrays achieve the most accurate horizontal localisation whereas the spherical arrays perform best in elevation. Arrays with smaller apertures achieve a higher number of detected reflections, which becomes more significant for the smaller room with higher reflection density.
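
      The image source method mentioned above mirrors the source in each room boundary to model reflections; this minimal first-order sketch for a shoebox room (axis-aligned walls at 0 and room_dims, an assumed geometry) conveys the idea:

          import numpy as np

          def first_order_image_sources(source, room_dims):
              # Mirror the source in each of the six walls of a shoebox room.
              images = []
              for axis in range(3):
                  for wall in (0.0, room_dims[axis]):
                      img = np.array(source, dtype=float)
                      img[axis] = 2.0 * wall - img[axis]
                      images.append(img)
              return images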

      Qingju Liu, Yong Xu, Philip Jackson, Wenwu Wang, Philip Coleman (2018)Iterative deep neural networks for speaker-independent binaural blind speech separation, In: ICASSP 2018 Proceedings IEEE

      In this paper, we propose an iterative deep neural network (DNN)-based binaural source separation scheme, for recovering two concurrent speech signals in a room environment. Besides the commonly-used spectral features, the DNN also takes non-linearly wrapped binaural spatial features as input, which are refined iteratively using parameters estimated from the DNN output via a feedback loop. Different DNN structures have been tested, including a classic multilayer perceptron regression architecture as well as a new hybrid network with both convolutional and densely-connected layers. Objective evaluations in terms of PESQ and STOI showed consistent improvement over baseline methods using traditional binaural features, especially when the hybrid DNN architecture was employed. In addition, our proposed scheme is robust to mismatches between the training and testing data.

      P Coleman, PJB Jackson, J Francombe (2015)Audio Object Separation Using Microphone Array Beamforming, In: Proc. AES 138th Int. Convention, Warsaw, Poland

      Audio production is moving toward an object-based approach, where content is represented as audio together with metadata that describe the sound scene. From current object definitions, it would usually be expected that the audio portion of the object is free from interfering sources. This poses a potential problem for object-based capture, if microphones cannot be placed close to a source. This paper investigates the application of microphone array beamforming to separate a mixture into distinct audio objects. Real mixtures recorded by a 48-channel microphone array in reflective rooms were separated, and the results were evaluated using perceptual models in addition to physical measures based on the beam pattern. The effect of interfering objects was reduced by applying the beamforming techniques.
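
      Although the paper evaluates more sophisticated beamformer designs, the underlying principle can be conveyed by a far-field delay-and-sum sketch (illustrative only; the names and geometry handling are assumptions):

          import numpy as np

          def delay_and_sum(signals, mic_positions, look_dir, fs, c=343.0):
              # Align plane-wave arrivals from look_dir across channels, then average.
              # signals: (mics x samples) array; mic_positions: (mics x 3);
              # look_dir: unit vector toward the target source.
              delays = mic_positions @ np.asarray(look_dir) / c
              shifts = np.round((delays - delays.min()) * fs).astype(int)
              n = signals.shape[1]
              out = np.zeros(n)
              for sig, s in zip(signals, shifts):
                  out[:n - s] += sig[s:]
              return out / len(signals)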

      L Remaggi, PJB Jackson, P Coleman, W Wang (2014)Room boundary estimation from acoustic room impulse responses, In: Proc. Sensor Signal Processing for Defence (SSPD 2014) pp. 1-5

      Boundary estimation from an acoustic room impulse response (RIR), exploiting known sound propagation behavior, yields useful information for various applications: e.g., source separation, simultaneous localization and mapping, and spatial audio. The baseline method, an algorithm proposed by Antonacci et al., uses reflection times of arrival (TOAs) to hypothesize reflector ellipses. Here, we modify the algorithm for 3-D environments and for enhanced noise robustness: DYPSA and MUSIC for epoch detection and direction of arrival (DOA) respectively are combined for source localization, and numerical search is adopted for reflector estimation. Both methods, and other variants, are tested on measured RIR data; the proposed method performs best, reducing the estimation error by 30%.
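
      The ellipse hypothesis at the heart of the baseline method follows from geometry: a first-order reflection with time of arrival toa constrains the reflection point p to satisfy |p - source| + |p - mic| = c * toa. A small residual function (a sketch, not the paper's code) captures this constraint:

          import numpy as np

          def ellipse_residual(p, source, mic, toa, c=343.0):
              # Zero when p lies on the reflection ellipse with foci at source and mic.
              p, source, mic = map(np.asarray, (p, source, mic))
              return np.linalg.norm(p - source) + np.linalg.norm(p - mic) - c * toa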

      Philip Coleman, Qingju Liu, Jon Francombe, Philip Jackson (2018)Perceptual evaluation of blind source separation in object-based audio production, In: Latent Variable Analysis and Signal Separation - 14th International Conference, LVA/ICA 2018, Guildford, UK, July 2–5, 2018, Proceedings pp. 558-567 Springer Verlag

      Object-based audio has the potential to enable multimedia content to be tailored to individual listeners and their reproduction equipment. In general, object-based production assumes that the objects (the assets comprising the scene) are free of noise and interference. However, there are many applications in which signal separation could be useful to an object-based audio workflow, e.g., extracting individual objects from channel-based recordings or legacy content, or recording a sound scene with a single microphone array. This paper describes the application and evaluation of blind source separation (BSS) for sound recording in a hybrid channel-based and object-based workflow, in which BSS-estimated objects are mixed with the original stereo recording. A subjective experiment was conducted using simultaneously spoken speech recorded with omnidirectional microphones in a reverberant room. Listeners mixed a BSS-extracted speech object into the scene to make the quieter talker clearer, while retaining acceptable audio quality, compared to the raw stereo recording. Objective evaluations show that the relative short-term objective intelligibility and speech quality scores increase using BSS. Further objective evaluations are used to discuss the influence of the BSS method on the remixing scenario; the scenario shown by human listeners to be useful in object-based audio is shown to be a worst-case scenario.
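
      The hybrid remixing workflow evaluated here boils down to adding a gain-controlled, panned copy of the BSS-extracted object to the original stereo bed; a toy version (the names and panning law are assumptions):

          import numpy as np

          def remix(stereo, bss_object, gain_db, pan=(1.0, 1.0)):
              # stereo: (2 x samples) bed; bss_object: mono extracted source.
              g = 10.0 ** (gain_db / 20.0)
              return stereo + g * np.outer(pan, bss_object)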

      Miguel Blanco Galindo, Philip Jackson, Philip Coleman, Luca Remaggi (2017)Microphone array design for spatial audio object early reflection parametrisation from room impulse responses University of Surrey

      Khan Baykaner, Philip Coleman, Russell Mason, Philip J. B. Jackson, Jon Francombe, Marek Olik, Søren Bech (2015)The Relationship Between Target Quality and Interference in Sound Zones, In: Journal of the Audio Engineering Society 63(1/2) pp. 78-89 Audio Engineering Society

      Sound zone systems aim to control sound fields in such a way that multiple listeners can enjoy different audio programs within the same room with minimal acoustic interference. Often, there is a trade-off between the acoustic contrast achieved between the zones and the fidelity of the reproduced audio program in the target zone. A listening test was conducted to obtain subjective measures of distraction, target quality, and overall quality of listening experience for ecologically valid programs within a sound zoning system. Sound zones were reproduced using acoustic contrast control, planarity control, and pressure matching applied to a circular loudspeaker array. The highest mean overall quality was a compromise between distraction and target quality. The results showed that the term “distraction” produced good agreement among listeners, and that listener ratings made using this term were a good measure of the perceived effect of the interferer.

      L Remaggi, PJB Jackson, P Coleman, J Francombe (2015)Visualization of compact microphone array room impulse responses, In: Proc. AES 139th Int. Convention, New York NY pp. 4-4

      For many audio applications, availability of recorded multi-channel room impulse responses (MC-RIRs) is fundamental. They enable development and testing of acoustic systems for reflective rooms. We present multiple MC-RIR datasets recorded in diverse rooms, using up to 60 loudspeaker positions and various uniform compact microphone arrays. These datasets complement existing RIR libraries and have dense spatial sampling of a listening position. To reveal the encapsulated spatial information, several state of the art room visualization methods are presented. Results confirm the measurement fidelity and graphically depict the geometry of the recorded rooms. Further investigation of these recordings and visualization methods will facilitate object-based RIR encoding, integration of audio with other forms of spatial information, and meaningful extrapolation and manipulation of recorded compact microphone array RIRs.

      L Remaggi, PJB Jackson, P Coleman (2015)Source, sensor and reflector position estimation from acoustical room impulse responses, In: 22nd International Congress of Sound and Vibration

      The acoustic environment affects the properties of the audio signals recorded. Generally, given room impulse responses (RIRs), three sets of parameters have to be extracted in order to create an acoustic model of the environment: sources, sensors and reflector positions. In this paper, the cross-correlation based iterative sensor position estimation (CISPE) algorithm is presented, a new method to estimate a microphone configuration, together with source and reflector position estimators. A rough measurement of the microphone positions initializes the process; then a recursive algorithm is applied to improve the estimates, exploiting a delay-and-sum beamformer. Knowing where the microphones lie in the space, the dynamic programming projected phase slope algorithm (DYPSA) extracts the times of arrival (TOAs) of the direct sounds from the RIRs, and multiple signal classification (MUSIC) extracts the directions of arrival (DOAs). A triangulation technique is then applied to estimate the source positions. Finally, exploiting properties of 3D quadratic surfaces (namely, ellipsoids), reflecting planes are localized via a technique ported from image processing, by random sample consensus (RANSAC). Simulation tests were performed on measured RIR datasets acquired from three different rooms located at the University of Surrey, using either a uniform circular array (UCA) or uniform rectangular array (URA) of microphones. Results showed small improvements with CISPE pre-processing in almost every case.
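
      Combining the DYPSA TOAs with the MUSIC DOAs amounts to placing each source at the estimated range along the estimated arrival direction; a schematic helper (hypothetical names, far simpler than the paper's triangulation) shows the geometry:

          import numpy as np

          def source_position(array_centre, toa, azimuth, elevation, c=343.0):
              # Range from the TOA, direction from the DOA (angles in radians).
              u = np.array([
                  np.cos(elevation) * np.cos(azimuth),
                  np.cos(elevation) * np.sin(azimuth),
                  np.sin(elevation),
              ])
              return np.asarray(array_centre, dtype=float) + c * toa * u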

      P Coleman, A Franck, P Jackson, R Hughes, L Remaggi, F Melchior (2017)Object-Based Reverberation for Spatial Audio, In: Journal of the Audio Engineering Society 65(1/2) pp. 66-77 Audio Engineering Society

      Object-based audio is gaining momentum as a means for future audio content to be more immersive, interactive, and accessible. Recent standardization developments make recommendations for object formats, however, the capture, production and reproduction of reverberation is an open issue. In this paper, parametric approaches for capturing, representing, editing, and rendering reverberation over a 3D spatial audio system are reviewed. A framework is proposed for a Reverberant Spatial Audio Object (RSAO), which synthesizes reverberation inside an audio object renderer. An implementation example of an object scheme utilising the RSAO framework is provided, and supported with listening test results, showing that: the approach correctly retains the sense of room size compared to a convolved reference; editing RSAO parameters can alter the perceived room size and source distance; and, format-agnostic rendering can be exploited to alter listener envelopment.

      Philip Jackson, Mark D Plumbley, Wenwu Wang, Tim Brookes, Philip Coleman, Russell Mason, David Frohlich, Carla Bonina, David Plans (2017)Signal Processing, Psychoacoustic Engineering and Digital Worlds: Interdisciplinary Audio Research at the University of Surrey

      At the University of Surrey (Guildford, UK), we have brought together research groups in different disciplines, with a shared interest in audio, to work on a range of collaborative research projects. In the Centre for Vision, Speech and Signal Processing (CVSSP) we focus on technologies for machine perception of audio scenes; in the Institute of Sound Recording (IoSR) we focus on research into human perception of audio quality; the Digital World Research Centre (DWRC) focusses on the design of digital technologies; while the Centre for Digital Economy (CoDE) focusses on new business models enabled by digital technology. This interdisciplinary view, across different traditional academic departments and faculties, allows us to undertake projects which would be impossible for a single research group. In this poster we will present an overview of some of these interdisciplinary projects, including projects in spatial audio, sound scene and event analysis, and creative commons audio.

      Philip Jackson, Filippo Fazi, Frank Melchior, Trevor Cox, Adrian Hilton, Chris Pike, Jon Francombe, Andreas Franck, Philip Coleman, Dylan Menzies-Gow, James Woodcock, Yan Tang, Qingju Liu, Rick Hughes, Marcos Simon Galvez, Teo de Campos, Hansung Kim, Hanne Stenzel Object-Based Audio Rendering, In: arXiv.org

      Apparatus and methods are disclosed for performing object-based audio rendering on a plurality of audio objects which define a sound scene, each audio object comprising at least one audio signal and associated metadata. The apparatus comprises: a plurality of renderers each capable of rendering one or more of the audio objects to output rendered audio data; and object adapting means for adapting one or more of the plurality of audio objects for a current reproduction scenario, the object adapting means being configured to send the adapted one or more audio objects to one or more of the plurality of renderers.
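
      The object model described here pairs each audio signal with metadata that a renderer or adapter can act on; a minimal data-structure sketch (purely illustrative, not the disclosed apparatus):

          from dataclasses import dataclass, field
          import numpy as np

          @dataclass
          class AudioObject:
              # One audio signal plus rendering metadata, e.g. {"azimuth": 30.0}.
              signal: np.ndarray
              metadata: dict = field(default_factory=dict)

          def adapt_for_scenario(obj, scenario):
              # Toy "object adapting" step: cap the object's gain for a small room.
              meta = dict(obj.metadata)
              if scenario.get("small_room"):
                  meta["gain"] = min(meta.get("gain", 1.0), 0.5)
              return AudioObject(obj.signal, meta)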

      M Olik, P Jackson, P Coleman, M Olsen, M Møller, S Bech (2013)Influence of low-order room reflections on sound zone system performance., In: J Acoust Soc Am 133(5) pp. 3349-?

      Studies on sound field control methods able to create independent listening zones in a single acoustic space have recently been undertaken due to the potential of such methods for various practical applications, such as individual audio streams in home entertainment. Existing solutions to the problem have been shown to be effective in creating high and low sound energy regions under anechoic conditions. Although some case studies in a reflective environment can also be found, the capabilities of sound zoning methods in rooms have not been fully explored. In this paper, the influence of low-order (early) reflections on the performance of key sound zone techniques is examined. Analytic considerations for small-scale systems reveal strong dependence of performance on parameters such as source positioning with respect to zone locations and room surfaces, as well as the parameters of the receiver configuration. These dependencies are further investigated through numerical simulation to determine system configurations which maximize the performance in terms of acoustic contrast and array control effort. Design rules for source and receiver positioning are suggested, for improved performance under a given set of constraints such as the number of available sources, the zone locations, and the direction of the dominant reflection.