Dr Russell Mason


Senior Lecturer (IoSR), Tonmeister Programme Director

Academic and research departments

Music and Media.

About

University roles and responsibilities

  • Programme director of the BMus/BSc Tonmeister programme
  • Admissions tutor for the BMus/BSc Tonmeister programme

    Publications

    Highlights

    • Francombe, J., Brookes, T. and Mason, R. 2018: 'Determination and validation of mix parameters for modifying envelopment in object-based audio', Journal of the Audio Engineering Society, vol. 66, issue 3 (March), pp. 127-145.
    • Francombe, J., Brookes, T., Mason, R. and Woodcock, J. 2017: 'Evaluation of spatial audio reproduction methods (part 2): analysis of listener preference', Journal of the Audio Engineering Society, vol. 65, issue 3 (March), pp. 212-225.
    • Francombe, J., Brookes, T. and Mason, R. 2017: 'Evaluation of spatial audio reproduction methods (part 1): elicitation of perceptual differences', Journal of the Audio Engineering Society, vol. 65, issue 3 (March), pp. 198-211.
    J Francombe, T Brookes, R Mason, J Woodcock (2020) Data for 'Evaluation of Spatial Audio Reproduction Methods (Part 2): Analysis of Listener Preference', In: Adrian Hilton (eds.), Evaluation of Spatial Audio Reproduction Methods (Part 2): Analysis of Listener Preference, University of Surrey

    Data accompanying the paper "Evaluation of Spatial Audio Reproduction Methods (Part 2): Analysis of Listener Preference".

    Jon Francombe (2020) Data to accompany "Automatic text clustering for audio attribute elicitation experiment responses", In: Tim Brookes, Russell Mason, Adrian Hilton (eds.), Automatic text clustering for audio attribute elicitation experiment responses, University of Surrey

    This is the dataset used for the accompanying paper "Automatic text clustering for audio attribute elicitation experiment responses".

    Daisuke Koya, Russell Mason, Martin Dewhirst, Søren Bech (2023) A Perceptual Model of Spatial Quality for Automotive Audio Systems, In: Journal of the Audio Engineering Society 71(10), pp. 689-706, Audio Engineering Society

    A perceptual model was developed to evaluate the spatial quality of automotive audio systems by adapting the Quality Evaluation of Spatial Transmission and Reproduction by an Artificial Listener (QESTRAL) model of spatial quality developed for domestic audio systems. The QESTRAL model was modified to use a combination of existing and newly created metrics, based on (in order of importance) the interaural cross-correlation, reproduced source angle, scene width, level, entropy, and spectral roll-off. The resulting model predicts the overall spatial quality of two-channel and five-channel automotive audio systems with a cross-validation R2 of 0.85 and root-mean-square error (RMSE) of 11.03%. The performance of the modified model improved considerably for automotive applications compared with that of the original model, which had a prediction R2 of 0.72 and RMSE of 29.39%. Modifying the model for automotive audio systems did not invalidate its use for domestic audio systems, which were predicted with an R2 of 0.77 and RMSE of 11.90%.
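
    As a simple illustration of how prediction-accuracy figures like those above are computed, the following Python sketch calculates R2 and RMSE between listening-test scores and model predictions. The arrays are placeholders for illustration only, not data from the paper.

    import numpy as np

    def r_squared(observed, predicted):
        # Coefficient of determination between listening-test scores and model output.
        observed = np.asarray(observed, dtype=float)
        predicted = np.asarray(predicted, dtype=float)
        ss_res = np.sum((observed - predicted) ** 2)
        ss_tot = np.sum((observed - observed.mean()) ** 2)
        return 1.0 - ss_res / ss_tot

    def rmse(observed, predicted):
        # Root-mean-square error on the same 0-100 quality scale as the scores.
        observed = np.asarray(observed, dtype=float)
        predicted = np.asarray(predicted, dtype=float)
        return float(np.sqrt(np.mean((observed - predicted) ** 2)))

    # Placeholder scores on a 0-100 spatial quality scale (illustrative only).
    subjective = [72, 55, 38, 90, 61, 47]
    predicted = [68, 59, 45, 84, 66, 51]
    print(r_squared(subjective, predicted), rmse(subjective, predicted))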

    Milap Rane, P. D. Coleman, Russell Mason, Søren Bech (2022) Future headphone technology: a survey of users' requirements
    Craig Cieciura, Russell Mason, Philip Coleman, Matthew Paradis (2019) Creating Object-Based Stimuli to Explore Media Device Orchestration Reproduction Techniques, Audio Engineering Society

    Media Device Orchestration (MDO) makes use of interconnected devices to augment a reproduction system, and could be used to deliver more immersive audio experiences to domestic audiences. To investigate optimal rendering on an MDO-based system, stimuli were created via: 1) object-based audio (OBA) mixes undertaken in a reference listening room; and 2) up to 13 rendered versions of these employing a range of installed and ad-hoc loudspeakers with varying cost, quality and position. The programme items include audio-visual material (short film trailer and big band performance) and audio-only material (radio panel show, pop track, football match, and orchestral performance). The object-based programme items and alternate MDO configurations are made available for testing and demonstrating OBA systems.

    Craig Cieciura, Russell Mason, Philip Coleman, Matthew Paradis (2019) Survey of Media Device Ownership, Media Service Usage and Group Media Consumption in UK Households, Audio Engineering Society

    Homes contain a plethora of devices for audio-visual content consumption, which intelligent reproduction systems can exploit to give the best possible experience. To investigate media device ownership in the home, media service-types usage and solitary versus group audio/audio-visual media consumption, a survey of UK households with 1102 respondents was undertaken. The results suggest that there is already significant ownership of wireless and smart loudspeakers, as well as other interconnected devices containing loudspeakers such as smartphones and tablets. Questions on group media consumption suggest that the majority of listeners spend more time consuming media with others than alone, demonstrating an opportunity for systems which can adapt to varying audience requirements within the same environment.

    Catherine Kim, Russell Mason, Timothy Brookes (2013) Head movements made by listeners in experimental and real-life listening activities, In: Journal of the Audio Engineering Society 61(6) (Jun), pp. 425-438

    Understanding the way in which listeners move their heads must be part of any objective model for evaluating and reproducing the sonic experience of space. Head movement is part of the listening experience because it allows for sensing the spatial distribution of parameters. In the first experiment, the head positions of subjects were recorded when they were asked to evaluate perceived source location, apparent source width, envelopment, and timbre of synthesized stimuli. Head motion was larger when judging source width than when judging direction or timbre. In the second experiment, head movement was observed in natural listening activities such as concerts, movies, and video games. Because the statistics of movement were similar to those observed in the first experiment, laboratory results can be used as the basis of an objective model of spatial behavior. The results were based on 10 subjects.

    LSR Simon, Russell Mason (2010) Time and level localisation curves for a regularly-spaced octagon loudspeaker array, In: Audio Engineering Society Preprint 8079, Audio Engineering Society

    Multichannel microphone array designs often use the localisation curves that have been derived for 2-0 stereophony. Previous studies showed that side and rear perception of phantom image locations require somewhat different curves. This paper describes an experiment conducted to determine localisation curves using an octagonal loudspeaker setup. Various signals with a range of interchannel time and level differences were produced between pairs of adjacent loudspeakers, and subjects were asked to evaluate the perceived sound event’s direction and its locatedness. The results showed that the curves for the side pairs of adjacent loudspeakers are significantly different to the front and rear pairs. The resulting curves can be used to derive suitable microphone techniques for this loudspeaker setup.

    The subjective spatial effect of noise signals with sinusoidal ITD fluctuations was investigated. Both verbal and non-verbal elicitation experiments were carried out to examine the subjective effect of the ITD fluctuations with a number of fluctuation frequencies and fluctuation magnitudes. It was found that the predominant effect of increasing the fluctuation magnitude was an increase in the perceived width of the sound.

    C Kim, R Mason, T Brookes (2010) Development of a head-movement-aware signal capture system for the prediction of acoustical spatial impression, In: M Burgess, C Don (eds.), Proceedings of the 20th International Congress on Acoustics, 4, pp. 2768-2775

    This research introduces a novel technique for capturing binaural signals for objective evaluation of spatial impression; the technique allows for simulation of the head movement that is typical in a range of listening activities. A subjective listening test showed that the amount of head movement made was larger when listeners were rating perceived source width and envelopment than when rating source direction and timbre, and that the locus of ear positions corresponding to the pattern of head movement formed a bounded sloped path – higher towards the rear and lower towards the front. Based on these findings, a signal capture system was designed comprising a sphere with multiple microphones, mounted on a torso. Evaluation of its performance showed that a perceptual model incorporating this capture system is capable of perceptually accurate prediction of source direction based on interaural time and level differences (ITD and ILD), and of spatial impression based on interaural cross-correlation coefficient (IACC). Investigation into appropriate parameter derivation and interpolation techniques determined that 21 pairs of spaced microphones were sufficient to measure ITD, ILD and IACC across the sloped range of ear positions.
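
    A minimal Python sketch of how the three binaural parameters named above (ITD, ILD, and IACC) can be derived from a pair of ear signals. The ±1 ms lag range is a common convention and the signals are synthetic placeholders; this is not code from the capture system itself.

    import numpy as np

    def binaural_parameters(left, right, fs, max_lag_ms=1.0):
        # Estimate ITD (s), ILD (dB) and IACC from left/right ear signals.
        max_lag = int(fs * max_lag_ms / 1000.0)
        lags = np.arange(-max_lag, max_lag + 1)
        norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
        # Normalised interaural cross-correlation over lags of +/- 1 ms.
        iacf = np.array([np.sum(left * np.roll(right, lag)) for lag in lags]) / norm
        iacc = float(iacf.max())                   # interaural cross-correlation coefficient
        itd = lags[iacf.argmax()] / fs             # lag of the correlation peak, in seconds
        ild = 10 * np.log10(np.sum(left ** 2) / np.sum(right ** 2))  # level difference in dB
        return itd, ild, iacc

    fs = 48000
    left = np.random.default_rng(0).standard_normal(fs)
    right = 0.8 * np.roll(left, int(0.0005 * fs))  # placeholder: 0.5 ms delay, lower level
    print(binaural_parameters(left, right, fs))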

    K Baykaner, C Hummersone, R Mason, S Bech (2013) The computational prediction of masking thresholds for ecologically valid interference scenarios, In: Proceedings of Meetings on Acoustics, 19

    Auditory interference scenarios, where a listener wishes to attend to some target audio while being presented with interfering audio, are prevalent in daily life. The goal of developing an accurate computational model which can predict masking thresholds for such scenarios is still incomplete. While some sophisticated, physiologically inspired, masking prediction models exist, they are rarely tested with ecologically valid programmes (such as music and speech). In order to test the accuracy of model predictions, human listener data were required. To that end, a masking threshold experiment was conducted for a variety of target and interferer programmes. The results were analysed alongside predictions made by the computational auditory signal processing and prediction model of Jepsen et al. (2008). Masking thresholds were predicted to within 3.6 dB root mean squared error, with the greatest prediction inaccuracies occurring in the presence of speech. These results are comparable to those of Glasberg and Moore (2005) for predicting the audibility of time-varying sounds in the presence of background sounds, which otherwise represent the most accurate predictions of this type in the literature. © 2013 Acoustical Society of America.

    K Baykaner, C Hummersone, R Mason, S Bech (2013) The computational prediction of masking thresholds for ecologically valid interference scenarios, In: J Acoust Soc Am 133(5), pp. 3426-?

    Auditory interference scenarios, where a listener wishes to attend to some target audio while being presented with interfering audio, are prevalent in daily life. The goal of developing an accurate computational model which can predict masking thresholds for such scenarios is still incomplete. While some sophisticated, physiologically inspired, masking prediction models exist, they are rarely tested with ecologically valid programs (such as music and speech). In order to test the accuracy of model predictions human listener data is required. To that end a masking threshold experiment was conducted for a variety of target and interferer programs. The results were analyzed alongside predictions made by the computational auditory signal processing and prediction model described by Jepsen et al. (2008). Masking thresholds were predicted to within 3 dB root mean squared error with the greatest prediction inaccuracies occurring in the presence of speech. These results are comparable to those of the model by Glasberg and Moore (2005) for predicting the audibility of time-varying sounds in the presence of background sounds, which otherwise represent the most accurate predictions of this type in the literature.

    A controlled subjective test was carried out to assess selected spatial qualities of three virtual home theatre processors. The subjective results were used to evaluate a number of objective measurements based on the interaural cross-correlation coefficient (IACC). A novel implementation of the IACC was found which appears to correlate well with the subjective data.

    J Francombe, T Brookes, RD Mason (2015) Elicitation of the differences between real and reproduced audio, In: Audio Engineering Society Preprint 9307

    To improve the experience of listening to reproduced audio, it is beneficial to determine the differences between listening to a live performance and a recording. An experiment was performed in which three live performances (a jazz duet, a jazz-rock quintet, and a brass quintet) were captured and simultaneously replayed over a nine-channel with-height surround sound system. Experienced and inexperienced listeners moved freely between the live performance and the reproduction and described the difference in listening experience. In subsequent group discussions, the experienced listeners produced twenty-nine categories using some terms that are not commonly found in the current spatial audio literature. The inexperienced listeners produced five categories that overlapped with the experienced group terms but that were not as detailed.

    J Francombe, T Brookes, RD Mason (2015) Perceptual evaluation of spatial quality: where next?, In: 22nd International Congress on Sound and Vibration Proceedings

    From the early days of reproduced sound, engineers have sought to reproduce the spatial properties of sound fields, leading to the development of a range of technologies. Two-channel stereo has been prevalent for many years; however, systems with a higher number of discrete channels (including rear and height loudspeakers) are becoming more common and, recently, there has been a move towards loudspeaker-agnostic methods using audio objects. Perceptual evaluation, and perceptually-informed objective measurement, of alternative reproduction systems can inform further development and steer future innovations. It is important, therefore, that any gaps in the field of perceptual evaluation and measurement are identified and that future work aims to fill those gaps. A standard research paradigm in the field is identification of the perceptual attributes of a stimulus set, facilitating controlled listening tests and leading to the development of predictive models. There have been numerous studies that aim to discover the perceptual attributes of reproduced spatial sound, leading to more than fifty descriptive terms. However, a literature review revealed the following key problems: (i) there is little agreement on exact definitions, nor on the relative importance of each attribute; (ii) there may be important attributes that have not yet been identified (e.g. attributes arising from differences between real and reproduced audio, or pertaining to new 3D or object-based methods); and (iii) there is no model of overall spatial quality based directly on the important attributes. Consequently, the authors contend that future research should focus on: (i) ascertaining which attributes of reproduced spatial audio are most important to listeners; (ii) identifying any important attributes currently missing; (iii) determining the relationships between the important attributes and listener preference; (iv) modelling overall spatial quality in terms of the important perceptual attributes; and (v) modelling these perceptual attributes in terms of their physical correlates.

    This research aims, ultimately, to develop a system for the objective evaluation of spatial impression, incorporating the finding from a previous study that head movements are naturally made in its subjective evaluation. A spherical binaural capture model, comprising a head-sized sphere with multiple attached microphones, has been proposed. Research already conducted found significant differences in interaural time and level differences, and cross-correlation coefficient, between this spherical model and a head and torso simulator. It is attempted to lessen these differences by adding to the sphere a torso and simplified pinnae. Further analysis of the head movements made by listeners in a range of listening situations determines the range of head positions that needs to be taken into account. Analyses of these results inform the optimum positioning of the microphones around the sphere model.

    Leslie Gaston-Bird, Russell David Mason, Enzo De Sena (2021) Inclusivity in Immersive Audio: Current Participation and Barriers to Entry

    Media and entertainment companies have embraced immersive audio technology for cinema, television, games, and music. Meanwhile, in recent years there has been a rise in the number of organizations welcoming underrepresented groups to the field of audio. However, although some disciplines such as music recording are seeing an increase in participation, others are not keeping pace. Immersive and spatial audio are disciplines in which diversity is measurably lacking. Audio-based mixed-gender social media groups comprise less than 10% women and minorities, and groups dedicated to immersive audio exhibit poorer representation. Barriers to entry are societal as well as economic; however, outreach, networking opportunities, mentoring, and affordable education are remedies that have been shown to be effective in related industries and should be adopted by the immersive audio industry.

    Christopher Hummersone, Russell Mason, Tim Brookes (2013) A Comparison of Computational Precedence Models for Source Separation in Reverberant Environments, In: Journal of the Audio Engineering Society 61(7/8), pp. 508-520, Audio Engineering Society

    Reverberation is a problem for source separation algorithms. Because the precedence effect allows human listeners to suppress the perception of reflections arising from room boundaries, numerous computational models have incorporated the precedence effect. However, relatively little work has been done on using the precedence effect in source separation algorithms. This paper compares several precedence models and their influence on the performance of a baseline separation algorithm. The models were tested in a variety of reverberant rooms and with a range of mixing parameters. Although there was a large difference in performance among the models, the one that was based on interaural coherence and onset-based inhibition produced the greatest performance improvement. There is a trade-off between selecting reliable cues that correspond closely to free-field conditions and maximizing the proportion of the input signals that contributes to localization. For optimal source separation performance, it is necessary to adapt the dynamic component of the precedence model to the acoustic conditions of the room.

    Joshua John Mannall, Paul Calamia, Lauri Savioja, Annika Neidhardt, Russell David Mason, Enzo De Sena (2024) Assessing Diffraction Perception Under Reverberant Conditions in Virtual Reality

    When a sound source is occluded, diffraction replaces direct sound as the first wavefront arrival and can influence important aspects of perception such as localisation. Few experiments have investigated how diffraction modelling influences the perceived plausibility of an acoustic simulation. In this paper, an experiment was run to investigate the plausibility of an acoustic simulation with and without diffraction in an L-shaped room in VR. The rendering was carried out using a real-time 6DOF geometrical acoustics and feedback-delay-network hybrid model, and diffraction was modelled using the physically accurate Biot-Tolstoy-Medwin model. The results show that diffraction increases the perceived plausibility of the acoustic simulation. In addition, the study compared diffraction of the direct sound alone and diffraction of both direct and reflected sound. A significant increase in plausibility was found by the addition of diffracted reflection paths, but only in the so-called shadow zone.

    Dominic Ward, Hagen Wierstorf, Russell Mason, Mark Plumbley, Christopher Hummersone (2017) Estimating the loudness balance of musical mixtures using audio source separation, In: Proceedings of the 3rd Workshop on Intelligent Music Production (WIMP 2017)

    To assist with the development of intelligent mixing systems, it would be useful to be able to extract the loudness balance of sources in an existing musical mixture. The relative-to-mix loudness level of four instrument groups was predicted using the sources extracted by 12 audio source separation algorithms. The predictions were compared with the ground truth loudness data of the original unmixed stems obtained from a recent dataset involving 100 mixed songs. It was found that the best source separation system could predict the relative loudness of each instrument group with an average root-mean-square error of 1.2 LU, with superior performance obtained on vocals.
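
    A sketch of the relative-to-mix loudness measure described above, assuming the third-party pyloudnorm package for ITU-R BS.1770 integrated loudness. The stems here are synthetic placeholders rather than separated sources.

    import numpy as np
    import pyloudnorm as pyln  # assumed dependency: BS.1770-style loudness meter

    def relative_to_mix_loudness(stem, mix, fs):
        # Loudness of a (separated) stem relative to the full mix, in LU.
        meter = pyln.Meter(fs)
        return meter.integrated_loudness(stem) - meter.integrated_loudness(mix)

    fs = 44100
    rng = np.random.default_rng(1)
    vocals = 0.1 * rng.standard_normal(fs * 5)          # placeholder vocal stem
    mix = vocals + 0.2 * rng.standard_normal(fs * 5)    # placeholder full mixture
    print(relative_to_mix_loudness(vocals, mix, fs))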

    Russell Mason, Timothy Brookes, F Rumsey (2005) Frequency dependency of the relationship between perceived auditory source width and the interaural cross-correlation coefficient for time-invariant stimuli, In: Journal of the Acoustical Society of America 117(3 Pt 1), pp. 1337-1350, Acoustical Society of America

    Previous research has indicated that the relationship between the interaural cross-correlation coefficient (IACC) of a narrow-band sound and its perceived auditory source width is dependent on its frequency. However, this dependency has not been investigated in sufficient detail for researchers to be able to properly model it in order to produce a perceptually relevant IACC-based model of auditory source width. A series of experiments has therefore been conducted to investigate this frequency dependency in a controlled manner, and to derive an appropriate model. Three main factors were discovered in the course of these experiments. First, the nature of the frequency dependency of the perceived auditory source width of stimuli with an IACC of 1 was determined, and an appropriate mathematical model was derived. Second, the loss of perceived temporal detail at high frequencies, caused by the breakdown of phase locking in the ear, was found to be relevant, and the model was modified accordingly using rectification and a low-pass filter. Finally, it was found that there was a further frequency dependency at low frequencies, and a method for modeling this was derived. The final model was shown to predict the experimental data well. (c) 2005 Acoustical Society of America.

    C Kim, R Mason, T Brookes (2007) An investigation into head movements made when evaluating various attributes of sound, In: Audio Engineering Society Preprint 7031

    This research extends the study of head movements during listening by including various listening tasks where the listeners evaluate spatial impression and timbre, in addition to the more common task of judging source location. Subjective tests were conducted in which the listeners were allowed to move their heads freely whilst listening to various types of sound and asked to evaluate source location, apparent source width, envelopment, and timbre. The head movements were recorded with a head tracker attached to the listener’s head. From the recorded data, the maximum range of movement, mean position and speed, and maximum speed were calculated along each axis of translational and rotational movement. The effects of various independent variables, such as the attribute being evaluated, the stimulus type, the number of repetitions, and the simulated source location were examined through statistical analysis. The results showed that whilst there were differences between the head movements of individual subjects, across all listeners the range of movement was greatest when evaluating source width and envelopment, less when localising sources, and least when judging timbre. In addition, the range and speed of head movement was reduced for transient signals compared to longer musical or speech phrases. Finally, in most cases for the judgement of spatial attributes, head movement was in the direction of the source.

    A binaural hearing model has been developed over a number of years that predicts the perceived width and position of sounds, over frequency and over time. The most appropriate methods for applying this model to evaluations of spatial impression are considered, including suitable test signals. Examples of a range of measurements are shown in a range of situations.

    Andrew Pearce, Tim Brookes, Russell Mason, M Dewhirst (2016) Measurements to determine the ranking accuracy of perceptual models, In: 140th Convention Proceedings

    Linear regression is commonly used in the audio industry to create objective measurement models that predict subjective data. For any model development, the measure used to evaluate the accuracy of the prediction is important. The most common measures assume a linear relationship between the subjective data and the prediction, though in the early stages of model development this is not always the case. Measures based on rank ordering (such as Spearman’s test), can alternatively be used. Spearman’s test, however, does not consider the variance of the subjective data. This paper presents a method of incorporating the subjective variance into the Spearman’s rank ordering test using Monte Carlo simulations, and shows how this can be beneficial in the development of predictive models.
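
    A minimal sketch of the general idea (not the authors' exact procedure): resample the subjective scores from their means and standard deviations, compute Spearman's rho against the model predictions for each draw, and summarise the resulting distribution. All numbers are placeholders.

    import numpy as np
    from scipy.stats import spearmanr

    def monte_carlo_spearman(means, stds, predictions, n_draws=10000, seed=0):
        # Distribution of Spearman's rho when the subjective scores are perturbed
        # according to their own variance.
        rng = np.random.default_rng(seed)
        rhos = []
        for _ in range(n_draws):
            sampled = rng.normal(means, stds)       # one plausible set of subjective scores
            rhos.append(spearmanr(sampled, predictions)[0])
        rhos = np.array(rhos)
        return rhos.mean(), np.percentile(rhos, [2.5, 97.5])

    means = np.array([20.0, 35.0, 50.0, 65.0, 80.0])   # placeholder subjective means
    stds = np.array([8.0, 10.0, 9.0, 7.0, 6.0])        # placeholder subjective standard deviations
    preds = np.array([25.0, 30.0, 55.0, 60.0, 85.0])   # placeholder model predictions
    print(monte_carlo_spearman(means, stds, preds))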

    C Kim, R Mason, T Brookes (2010) A quasi-binaural approach to head-movement-aware evaluation of spatial acoustics, In: Proceedings of the International Symposium on Room Acoustics, pp. 292-300

    This research incorporates the nature of head movement made in listening activities into the development of a quasi-binaural acoustical measurement technique for the evaluation of spatial impression. A listening test was conducted where head movements were tracked whilst the subjects rated the perceived source width, envelopment, source direction and timbre of a number of stimuli. It was found that the extent of head movements was larger when evaluating source width and envelopment than when evaluating source direction and timbre. It was also found that the locus of ear positions corresponding to these head movements formed a bounded sloped path, higher towards the rear and lower towards the front. This led to the concept of a signal capture device comprising a torso-mounted sphere with multiple microphones. A prototype was constructed and used to measure three binaural parameters related to perceived spatial impression: interaural time and level differences (ITD and ILD) and the interaural cross-correlation coefficient (IACC). Comparison of the prototype measurements to those made with a rotating Head and Torso Simulator (HATS) showed that the prototype could be perceptually accurate for the prediction of source direction using ITD and ILD, and for the prediction of perceived spatial impression using IACC. Further investigation into parameter derivation and interpolation methods indicated that 21 pairs of discretely spaced microphones were sufficient to measure the three binaural parameters across the sloped range of ear positions identified in the listening test.

    Russell Mason, Tim Brookes, F Rumsey (2005) The effect of various source signal properties on measurements of the interaural cross-correlation coefficient, In: Acoustical Science and Technology 26(2), pp. 102-113, Acoustical Society of Japan

    Measurements that attempt to predict the perceived spatial impression of musical signals in concert halls are typically conducted by calculating the interaural cross-correlation coefficient (IACC) of an impulse response. The causes of interaural decorrelation are investigated and it is found that this is affected by frequency-dependent interaural time and level differences and variations in these over time. It is found that the IACC of impulsive and of narrowband tonal signals can be very different from each other in a wide range of acoustical environments, due to the differences in the spectral content and the duration of the signals. From this, it is concluded that measurements made of impulsive signals are unsuitable for attempting to predict the perceived spatial impression of musical signals. It is suggested that further work is required to develop a set of test signals that is representative of a wide range of musical stimuli.

    Hagen Wierstorf, Dominic Ward, Russell Mason, Emad M Grais, Christopher Hummersone, Mark Plumbley (2017) Perceptual Evaluation of Source Separation for Remixing Music, In: 143rd AES Convention, Paper 9880, Audio Engineering Society

    Music remixing is difficult when the original multitrack recording is not available. One solution is to estimate the elements of a mixture using source separation. However, existing techniques suffer from imperfect separation and perceptible artifacts on single separated sources. To investigate their influence on a remix, five state-of-the-art source separation algorithms were used to remix six songs by increasing the level of the vocals. A listening test was conducted to assess the remixes in terms of loudness balance and sound quality. The results show that some source separation algorithms are able to increase the level of the vocals by up to 6 dB at the cost of introducing a small but perceptible degradation in sound quality.

    C Hummersone, R Mason, T Brookes (2010) A comparison of computational precedence models for source separation in reverberant environments, In: Audio Engineering Society Preprint 7981
    Jon Francombe, Timothy Brookes, Russell Mason, J Woodcock (2017) Evaluation of Spatial Audio Reproduction Methods (Part 2): Analysis of Listener Preference, In: Data for 'Evaluation of Spatial Audio Reproduction Methods (Part 2): Analysis of Listener Preference', Audio Engineering Society

    It is desirable to determine which of the many different spatial audio reproduction systems listeners prefer, and the perceptual attributes that are most important to listener experience, so that future systems can be perceptually optimized. A paired comparison preference rating experiment was performed alongside a free elicitation task for eight reproduction methods (consumer and professional systems with a wide range of expected quality) and seven program items (representative of potential broadcast material). The experiment was performed by groups of experienced and inexperienced listeners. Thurstone Case V modeling was used to produce preference scales. Both listener groups preferred systems with increased spatial content; nine- and five-channel systems were most preferred. The use of elicited attributes was analyzed alongside the preference ratings, resulting in an approximate hierarchy of attribute importance: three attributes (amount of distortion, output quality, and bandwidth) were found to be important for differentiating systems where there was a large preference difference; sixteen were always important (most notably enveloping and horizontal width); and seven were used alongside small preference differences.
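
    A minimal sketch of Thurstone Case V scaling as used above: convert a matrix of paired-comparison win counts into preference proportions, apply the inverse standard normal transform, and average to obtain a scale value per system. The counts are placeholders, not data from the experiment.

    import numpy as np
    from scipy.stats import norm

    def thurstone_case_v(wins):
        # wins[i, j] = number of times system i was preferred over system j.
        wins = np.asarray(wins, dtype=float)
        totals = wins + wins.T
        with np.errstate(invalid="ignore", divide="ignore"):
            p = wins / totals                  # preference proportions
        np.fill_diagonal(p, 0.5)
        p = np.clip(p, 0.01, 0.99)             # avoid infinite z-scores for unanimous pairs
        z = norm.ppf(p)                        # inverse standard normal transform
        return z.mean(axis=1)                  # mean z-score = Case V scale value per system

    # Placeholder win counts for three systems judged 20 times per pair (illustrative only).
    wins = np.array([[0, 14, 17],
                     [6, 0, 12],
                     [3, 8, 0]])
    print(thurstone_case_v(wins))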

    The spatial quality of automotive audio systems is often compromised due to their unideal listening environments. Automotive audio systems need to be developed quickly due to industry demands. A suitable perceptual model could evaluate the spatial quality of automotive audio systems with similar reliability to formal listening tests but take less time. Such a model is developed in this research project by adapting an existing model of spatial quality for automotive audio use. The requirements for the adaptation were investigated in a literature review. A perceptual model called QESTRAL was reviewed, which predicts the overall spatial quality of domestic multichannel audio systems. It was determined that automotive audio systems are likely to be impaired in terms of the spatial attributes that were not considered in developing the QESTRAL model, but metrics are available that might predict these attributes. To establish whether the QESTRAL model in its current form can accurately predict the overall spatial quality of automotive audio systems, MUSHRA listening tests using headphone auralisation with head tracking were conducted to collect results to be compared against predictions by the model. Based on guideline criteria, the model in its current form could not accurately predict the overall spatial quality of automotive audio systems. To improve prediction performance, the QESTRAL model was recalibrated and modified using existing metrics of the model, those that were proposed from the literature review, and newly developed metrics. The most important metrics for predicting the overall spatial quality of automotive audio systems included those that were interaural cross-correlation (IACC) based, relate to localisation of the frontal audio scene, and account for the perceived scene width in front of the listener. Modifying the model for automotive audio systems did not invalidate its use for domestic audio systems. The resulting model predicts the overall spatial quality of 2- and 5-channel automotive audio systems with a cross-validation performance of R^2 = 0.85 and root-mean-square error (RMSE) = 11.03%.

    Milap Rane, Philip Coleman, Russell Mason, Søren Bech (2022) Quantifying headphone listening experience in virtual sound environments using distraction, In: EURASIP Journal on Audio, Speech and Music Processing 2022(30), Springer

    Headphones are commonly used in various environments including at home, outside and on public transport. However, the perception and modelling of the interaction of headphone audio and noisy environments is relatively unresearched. This work investigates the headphone listening experience in noisy environments using the perceptual attributes of distraction and quality of listening experience. A virtual sound environment was created to simulate real-world headphone listening, with variations in foreground sounds, background contexts and busyness, headphone media content and simulated active noise control. Listening tests were performed, where 15 listeners rated both distraction and quality of listening experience across 144 stimuli using a multiple-stimulus presentation. Listener scores were analysed and compared to a computational model of listener distraction. The distraction model was found to be a good predictor of the perceptual distraction rating, with a correlation of 0.888 and an RMSE of 13.4%, despite being developed to predict distraction in the context of audio-on-audio interference in sound zones. In addition, perceived distraction and quality of listening experience had a strong negative correlation of -0.953. Furthermore, the busyness and type of the environment, headphone media, loudness of the foreground sound and active noise control on/off were significant factors in determining the distraction and quality of listening experience scores.

    Joshua Mannall, Lauri Savioja, Paul Calamia, Russell David Mason, Enzo De Sena (2023) Efficient diffraction modelling using neural networks and infinite impulse response filters, In: Journal of the Audio Engineering Society

    Creating plausible geometric acoustic simulations in complex scenes requires the inclusion of diffraction modelling. Current real-time diffraction implementations use the Uniform Theory of Diffraction (UTD) which assumes all edges are infinitely long. We utilise recent advances in machine learning to create an efficient infinite impulse response model trained on data generated using the physically accurate Biot-Tolstoy-Medwin model. We propose an approach to data generation that allows our model to be applied to higher-order diffraction. We show that our model is able to approximate the Biot-Tolstoy-Medwin model with a mean absolute level difference of 1.0 dB for 1st-order diffraction while maintaining a higher computational efficiency than the current state of the art using UTD.

    Craig Cieciura, Russell David Mason, Philip Coleman, Jon Francombe (2020) Understanding Users' Choices and Constraints when Positioning Loudspeakers in Living Rooms, Zenodo

    Dataset pertaining to an experiment concerning positions of ad-hoc loudspeakers and mobile phones in domestic living rooms. This forms part of the PhD research of Craig Cieciura. This was experiment-based research to determine how to render object-based audio in the domestic environment using ad-hoc, audio-capable devices. Reference: Cieciura, C., Mason, R., Coleman, P. and Francombe, J. 2020: 'Understanding users’ choices and constraints when positioning loudspeakers in living rooms', Audio Engineering Society Preprint, 148th Convention, Engineering Brief (number tbc).

    J Francombe, RD Mason, M Dewhirst, S Bech (2012) Determining the threshold of acceptability for an interfering audio programme, In: Audio Engineering Society Preprint 8639

    An experiment was performed in order to establish the threshold of acceptability for an interfering audio programme on a target audio programme, varying the following physical parameters: target programme, interferer programme, interferer location, interferer spectrum, and road noise level. Factors were varied at three levels in a Box-Behnken fractional factorial design. The experiment was performed in three scenarios: information gathering, entertainment, and reading/working. Nine listeners performed a method of adjustment task to determine the threshold values. The produced thresholds were similar in the information and entertainment scenarios; however, there were significant differences between subjects, and factor levels also had a significant effect: interferer programme was the most important factor across the three scenarios, whilst interferer location was the least important.

    The subjective spatial effect of decaying noise signals with interaural time difference fluctuations was investigated. These fluctuations were created by sinusoidal interchannel time difference fluctuations between signals which were presented over loudspeakers. Both verbal and non-verbal elicitation techniques were applied to examine the subjective effect. It was found that the predominant effect of increasing the fluctuation magnitude was an increase in the apparent width of the acoustical environment whilst the apparent size of the perceived sound source did not change.

    The effect of the audio frequency of narrow-band noise signals with a sinusoidal ITD fluctuation was investigated. To examine this, a subjective experiment was carried out using a match to sample method and stimuli delivered over headphones. It was found that the magnitude of the subjective effect is dependent on audio frequency and that the relationship between the audio frequency and a constant subjective effect appears to be based on equal maximum phase difference fluctuations.

    Russell Mason, N Ford, F Rumsey, B de Bruyn (2001) Verbal and non-verbal elicitation techniques in the subjective assessment of spatial sound reproduction, In: Journal of the Audio Engineering Society 49(5), pp. 366-384, Audio Engineering Society

    Current research into spatial audio has shown an increasing interest in the way subjective attributes of reproduced sound are elicited from listeners. The emphasis at present is on verbal semantics; however, studies suggest that nonverbal methods of elicitation could be beneficial. Research into the relative merits of these methods has found that nonverbal responses may result in different elicited attributes compared to verbal techniques. Nonverbal responses may be closer to the perception of the stimuli than the verbal interpretation of this perception. There is evidence that drawing is not as accurate as other nonverbal methods of elicitation when it comes to reporting the localization of auditory images. However, the advantage of drawing is its ability to describe the whole auditory space rather than a single dimension.

    Khan Baykaner, Christopher Hummersone, Russell Mason, Søren Bech (2013) Selection of temporal windows for the computational prediction of masking thresholds, In: 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 408-412, IEEE

    In the field of auditory masking threshold predictions an optimal method for buffering a continuous, ecologically valid programme combination into discrete temporal windows has yet to be determined. An investigation was carried out into the use of a variety of temporal window durations, shapes, and steps, in order to discern the resultant effect upon the accuracy of various masking threshold prediction models. Selection of inappropriate temporal windows can triple the prediction error in some cases. Overlapping windows were found to produce the lowest errors provided that the predictions were smoothed appropriately. The optimal window shape varied across the tested models. The most accurate variant of each model resulted in root mean squared errors of 2.3, 3.4, and 4.2 dB.
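
    A sketch of the kind of temporal windowing being compared above: split a programme signal into overlapping windowed frames and smooth the per-frame predictions. The window length, shape, hop size and the stand-in per-frame "prediction" are placeholder choices, not the optimal settings reported in the paper.

    import numpy as np
    from scipy.signal import get_window, medfilt

    def overlapping_frames(signal, fs, win_dur=0.5, hop_dur=0.25, shape="hann"):
        # Split a signal into overlapping windowed frames (durations in seconds).
        win_len, hop = int(win_dur * fs), int(hop_dur * fs)
        window = get_window(shape, win_len)
        starts = range(0, len(signal) - win_len + 1, hop)
        return np.array([signal[s:s + win_len] * window for s in starts])

    fs = 16000
    programme = np.random.default_rng(2).standard_normal(fs * 4)      # placeholder programme item
    frames = overlapping_frames(programme, fs)
    per_frame = 20 * np.log10(np.sqrt(np.mean(frames ** 2, axis=1)))  # stand-in per-frame prediction (dB)
    smoothed = medfilt(per_frame, kernel_size=5)                      # smooth across overlapping windows
    print(frames.shape, smoothed[:5])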

    C Pike, RD Mason, T Brookes (2014) The effect of auditory memory on the perception of timbre, In: Audio Engineering Society Preprint 9028

    Listeners are more sensitive to timbral differences when comparing stimuli side-by-side than temporally-separated. The contributions of auditory memory and spectral compensation to this effect are unclear. A listening test examined the role of auditory memory in timbral discrimination, across retention intervals (RIs) of up to 40 s. For timbrally complex music stimuli, discrimination accuracy was good across all RIs, but there was increased sensitivity to onset spectrum, which decreased with increasing RI. Noise stimuli showed no onset sensitivity, but discrimination performance declined with RIs of 40 s. The difference between program types may suggest different onset sensitivity and memory encoding (categorical vs non-categorical). The onset bias suggests that memory effects should be measured prior to future investigation of spectral compensation.

    Jon Francombe, Russell Mason, Martin Dewhirst, Søren Bech (2013) Modelling listener distraction resulting from audio-on-audio interference, In: Proceedings of Meetings on Acoustics, 19(1)

    As devices that produce audio become more commonplace and increasingly portable, situations in which two competing audio programmes are present occur more regularly. In order to support the design of systems intended to mitigate the effects of interfering audio (including sound field control, noise cancellation or source separation systems), it is desirable to model the perceived distraction in such situations. Distraction ratings were collected for a range of audio-on-audio interference situations including various target and interferer programmes at three interferer levels, with and without road noise. Time-frequency target-to-interferer ratio (TIR) maps of the stimuli were created using a simple auditory model. A number of feature sets were extracted from the TIR maps, including combinations of mean, standard deviation, minimum and maximum TIR taken across the duration of the programme item. In order to predict distraction ratings from the features, linear regression models were produced. The models were evaluated for goodness-of-fit (RMSE) and generalizability (using a K-fold cross-validation procedure). The best model performed well, with almost all predictions falling within the 95% confidence intervals of the perceptual data. A validation data set was used to test the model, suggesting areas for future improvement.
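
    A minimal sketch of the modelling step described above: summary features extracted from time-frequency TIR maps, a linear regression fitted to distraction ratings, and K-fold cross-validation of the prediction error. All data here are randomly generated placeholders.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import KFold, cross_val_score

    rng = np.random.default_rng(3)
    # Placeholder TIR maps: 60 stimuli x (time frames x frequency bands), in dB.
    tir_maps = rng.normal(0.0, 10.0, size=(60, 200, 32))

    # Summary features per stimulus: mean, standard deviation, minimum and maximum TIR.
    features = np.stack([tir_maps.mean(axis=(1, 2)),
                         tir_maps.std(axis=(1, 2)),
                         tir_maps.min(axis=(1, 2)),
                         tir_maps.max(axis=(1, 2))], axis=1)

    # Placeholder distraction ratings loosely related to mean TIR (illustrative only).
    ratings = 50.0 - 2.0 * features[:, 0] + rng.normal(0.0, 5.0, size=60)

    cv = KFold(n_splits=5, shuffle=True, random_state=0)
    scores = cross_val_score(LinearRegression(), features, ratings, cv=cv,
                             scoring="neg_root_mean_squared_error")
    print("cross-validated RMSE:", -scores.mean())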

    The subjective spatial effect of continuous noise signals with interaural time difference fluctuations was investigated. These fluctuations were created by sinusoidal interchannel time difference fluctuations between signals that were presented over loudspeakers. Both verbal and non-verbal elicitation techniques were applied to examine the subjective effect. It was found that the predominant effect of increasing the fluctuation magnitude was an increase in the apparent width of the perceived sound source.

    LSR Simon, Russell Mason, F Rumsey (2009) Localisation curves for a regularly-spaced octagon loudspeaker array, In: Audio Engineering Society Preprint 7915

    Multichannel microphone array designs often use the localisation curves that have been derived for 2-0 stereophony. Previous studies showed that side and rear perception of phantom image locations require somewhat different curves. This paper describes an experiment conducted to evaluate localisation curves using an octagonal loudspeaker setup. Interchannel level differences were produced between the loudspeaker pairs forming each of the segments of the loudspeaker array, one at a time, and subjects were asked to evaluate the perceived sound event’s direction and its locatedness. The results showed that the localisation curves derived for 2-0 stereophony are not directly applicable, and that different localisation curves are required for each loudspeaker pair.

    R Mason, T Brookes, F Rumsey (2002) The perceptual relevance of extant techniques for the objective measurement of spatial impression, In: Proceedings of the Institute of Acoustics, 24, pp. 9-?
    C Pike, T Brookes, R Mason (2013) Auditory adaptation to loudspeaker and listening room acoustics, In: 135th Audio Engineering Society Convention 2013, pp. 116-125

    Timbral qualities of loudspeakers and rooms are often compared in listening tests involving short listening periods. Outside the laboratory, listening occurs over a longer time course. In a study by Olive et al. (1995), smaller timbral differences between loudspeakers and between rooms were reported when comparisons were made over longer versus shorter time periods. This is a form of timbral adaptation, a decrease in sensitivity to timbre over time. The current study confirms this adaptation and establishes that it is not due to response bias but may be due to timbral memory, specific mechanisms compensating for transmission channel acoustics, or attentional factors. Modifications to listening tests may be required where tests need to be representative of listening outside of the laboratory.

    Two objective measurement techniques have been proposed that relate the fluctuations in interaural time difference to one or more attributes of subjective spatial perception. This paper reviews these measurements, discusses how these fluctuations may be created in a real acoustical environment, summarises the experiments carried out to elicit the subjective effect of the fluctuations, and suggests ways in which this research can be applied to sound reproduction.

    Jon Francombe, Timothy Brookes, Russell Mason (2017) Evaluation of Spatial Audio Reproduction Methods (Part 1): Elicitation of Perceptual Differences, In: Journal of the Audio Engineering Society 65(3), pp. 198-211, Audio Engineering Society

    There are a wide variety of spatial audio reproduction systems available, from a single loudspeaker to many spatially distributed loudspeakers. An important factor in the selection, development, or optimization of such systems is listener preference, and the important perceptual characteristics that contribute to this. An experiment was performed to determine the attributes that contribute to listener preference for a range of spatial audio reproduction methods. Experienced and inexperienced listeners made preference ratings for combinations of seven program items replayed over eight reproduction systems, and reported the reasons for their judgments. Automatic text clustering reduced redundancy in the responses by approximately 90%, facilitating subsequent group discussions that produced clear attribute labels, descriptions, and scale end-points. Twenty-seven and twenty-four attributes contributed to preference for the experienced and inexperienced listeners respectively. The two sets of attributes contain a degree of overlap (ten attributes from the two sets were closely related); however, the experienced listeners used more technical terms whilst the inexperienced listeners used more broad descriptive categories.

    Christopher Hummersone, Russell Mason, Tim Brookes (2010) Dynamic precedence effect modeling for source separation in reverberant environments, In: IEEE Transactions on Audio, Speech and Language Processing 18(7), pp. 1867-1871, IEEE

    Reverberation continues to present a major problem for sound source separation algorithms. However, humans demonstrate a remarkable robustness to reverberation and many psychophysical and perceptual mechanisms are well documented. The precedence effect is one of these mechanisms; it aids our ability to localize sounds in reverberation. Despite this, relatively little work has been done on incorporating the precedence effect into automated source separation. Furthermore, no work has been carried out on adapting a precedence model to the acoustic conditions under test and it is unclear whether such adaptation, analogous to the perceptual Clifton effect, is even necessary. Hence, this study tests a previously proposed binaural separation/precedence model in real rooms with a range of reverberant conditions. The precedence model inhibitory time constant and inhibitory gain are varied in each room in order to establish the necessity for adaptation to the acoustic conditions. The paper concludes that adaptation is necessary and can yield significant gains in separation performance. Furthermore, it is shown that the initial time delay gap and the direct-to-reverberant ratio are important factors when considering this adaptation.

    Milap Dilip Rane, Russell David Mason, Philip Coleman, Søren Bech (2022) Survey of User Perspectives on Headphone Technology

    Headphones are widely used to consume media content at home and on the move. Developments in signal processing technology and object-based audio media formats have raised new opportunities to improve the user experience by tailoring the audio rendering depending on the characteristics of the listener's environment. However, little is known about what consumers consider to be the deficiencies in current headphone-based listening, and therefore how best to target new developments in headphone technology. More than 400 respondents worldwide took part in a headphone listening experience survey. They were asked about how headphones could be improved, considering various contexts (home, outside, and public transport) and content (music, spoken word, radio drama/TV/film/online content, and telecommunication). The responses were coded into themes covering technologies (e.g. noise cancellation and transparency) and features (e.g. 3D audio) that they would like to see in future headphones. These observations highlight that users' requirements differ depending on the listening environment, but also that the majority are satisfied with their headphone listening experience at home. The type of programme material also caused differences in the users' requirements, indicating that there is most scope for improving users' headphone listening experience for music. The survey also presented evidence of users' desire for newer technologies and features including 3D audio and sharing of multiple audio streams.

    M Olik, P Coleman, PJB Jackson, J Francombe, R Mason, M Olsen, M Møller, S Bech (2013) A comparative performance study of sound zoning methods in a reflective environment, In: Proceedings of the 52nd AES International Conference, pp. 214-223

    Whilst sound zoning methods have typically been studied under anechoic conditions, it is desirable to evaluate the performance of various methods in a real room. Three control methods were implemented (delay and sum, DS; acoustic contrast control, ACC; and pressure matching, PM) on two regular 24-element loudspeaker arrays (line and circle). The acoustic contrast between two zones was evaluated and the reproduced sound fields compared for uniformity of energy distribution. ACC generated the highest contrast, whilst PM produced a uniform bright zone. Listening tests were also performed using monophonic auralisations from measured system responses to collect ratings of perceived distraction due to the alternate audio programme. Distraction ratings were affected by control method and programme material. Copyright © (2013) by the Audio Engineering Society.

    (2018) Latent Variable Analysis and Signal Separation: 14th International Conference, LVA/ICA 2018, Guildford, UK, July 2–5, 2018, Proceedings, In: Yannick Deville, Sharon Gannot, Russell Mason, Mark D. Plumbley, Dominic Ward (eds.), Latent Variable Analysis and Signal Separation, 10891, Springer International Publishing

    This book constitutes the proceedings of the 14th International Conference on Latent Variable Analysis and Signal Separation, LVA/ICA 2018, held in Guildford, UK, in July 2018. The 52 full papers were carefully reviewed and selected from 62 initial submissions. As research topics the papers encompass a wide range of general mixtures of latent variables models but also theories and tools drawn from a great variety of disciplines such as structured tensor decompositions and applications; matrix and tensor factorizations; ICA methods; nonlinear mixtures; audio data and methods; signal separation evaluation campaign; deep learning and data-driven methods; advances in phase retrieval and applications; sparsity-related methods; and biomedical data and methods.

    A measurement model based on the interaural cross-correlation coefficient (IACC) that attempts to predict the perceived source width of a range of auditory stimuli is currently under development. It is necessary to combine the predictions of this model with measurements of interaural time difference (ITD) to allow the model to provide its output on a meaningful scale and to allow integration of results across frequency. A detailed subjective experiment was undertaken using narrow-band stimuli with a number of centre frequencies, IACCs and ITDs. Subjects were asked to indicate the perceived position of the left and right boundaries of a number of these stimuli by altering the ITD of a pair of white noise comparison stimuli. It is shown that an existing IACC-based model provides a poor prediction of the subjective results but that modifications to the model significantly increase its accuracy.

    Andrew J. R. Simpson, Gerard Roma, Emad M. Grais, Russell D. Mason, Chris Hummersone, Antoine Liutkus, Mark D. Plumbley (2016) Evaluation of Audio Source Separation Models Using Hypothesis-Driven Non-Parametric Statistical Methods, In: 2016 24th European Signal Processing Conference (EUSIPCO), pp. 1763-1767, IEEE

    Audio source separation models are typically evaluated using objective separation quality measures, but rigorous statistical methods have yet to be applied to the problem of model comparison. As a result, it can be difficult to establish whether or not reliable progress is being made during the development of new models. In this paper, we provide a hypothesis-driven statistical analysis of the results of the recent source separation SiSEC challenge involving twelve competing models tested on separation of voice and accompaniment from fifty pieces of "professionally produced" contemporary music. Using non-parametric statistics, we establish reliable evidence for meaningful conclusions about the performance of the various models.

    R Mason, S Harrington (2007) Perception and detection of auditory offsets with single simple musical stimuli in a reverberant environment, In: Proceedings of the 30th Audio Engineering Society International Conference, pp. 331-342

    It is apparent that little research has been undertaken into the perception and automated detection of auditory offsets compared to auditory onsets. A study was undertaken which took a perceptually motivated approach to the detection of auditory offsets. Firstly, a subjective experiment was completed that investigated the effect of the sound source temporal properties, the presence or absence of reverberation, the direct-to-reverberant level, and the presence or absence of binaural cues on the perceived auditory offset time. It was found in this case that the sound source temporal properties had a small effect; the presence of reverberation caused the perceived auditory offset to be later in most cases; the direct-to-reverberant ratio had no significant effect; and the binaural cues had no significant effect on the perceived offset times. Measurements were conducted which showed that a threshold of -30 dB below the peak level of the slowest-decaying frequency bands could be used as a reasonable predictor of the subjective results.
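
    A sketch of the offset predictor described above: band-pass one frequency band, track its envelope, and report the time at which the envelope last falls 30 dB below its peak. The band split, envelope method and test signal are placeholder choices, not those used in the study.

    import numpy as np
    from scipy.signal import butter, sosfilt, hilbert

    def offset_time(band_signal, fs, threshold_db=-30.0):
        # Time (s) at which the band envelope last exceeds threshold_db relative to its peak.
        envelope = np.abs(hilbert(band_signal))
        env_db = 20 * np.log10(envelope / envelope.max() + 1e-12)
        above = np.nonzero(env_db >= threshold_db)[0]
        return above[-1] / fs

    fs = 44100
    t = np.arange(int(fs * 1.5)) / fs
    tone = np.sin(2 * np.pi * 440 * t) * np.exp(-4 * t)             # placeholder decaying source
    sos = butter(4, [300, 600], btype="bandpass", fs=fs, output="sos")
    band = sosfilt(sos, tone)                                       # one analysis band
    print(offset_time(band, fs))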

    Craig Cieciura, Russell Mason, Philip Coleman, Jon Francombe (2020) Understanding users' choices and constraints when positioning loudspeakers in living rooms

    A study was conducted in participants’ homes to ascertain how they would position one to eight compact wireless loudspeakers, with the goal of enhancing their existing system. The eleven participants described three key themes, creating an arrangement that: was spatially balanced and evenly distributed; maintained the room’s aesthetics; maintained the room’s functionality. In practice, the results showed that participants prioritised aesthetics and functionality, whilst balance was not usually achieved. It was concluded that a hierarchy of preferred positions in each space exists, as the same positions were reused whilst positioning differing numbers of loudspeakers and by different participants in each location. Consistencies were observed between the locations which can be used to estimate loudspeaker positions for a given living room layout.

    Joshua Mannall, Lauri Savioja, Paul Calamia, Russell Mason, Enzo De Sena (2023) Efficient Diffraction Modeling Using Neural Networks and Infinite Impulse Response Filters, In: Journal of the Audio Engineering Society, 71(9), pp. 566-576, Audio Engineering Society

    Creating plausible geometric acoustic simulations in complex scenes requires the inclusion of diffraction modeling. Current real-time diffraction implementations use the Uniform Theory of Diffraction, which assumes all edges are infinitely long. The authors utilize recent advances in machine learning to create an efficient infinite impulse response model trained on data generated using the physically accurate Biot-Tolstoy-Medwin model. The authors propose an approach to data generation that allows their model to be applied to higher-order diffraction. They show that their model is able to approximate the Biot-Tolstoy-Medwin model with a mean absolute level difference of 1.0 dB for first-order diffraction while maintaining a higher computational efficiency than the current state of the art using the Uniform Theory of Diffraction.

    R Mason, T Brookes, F Rumsey (2004) Evaluation of an auditory source width prediction model based on the interaural cross-correlation coefficient, In: Journal of the Acoustical Society of America, 116, pp. 2475-?

    A model based on the interaural cross-correlation coefficient (IACC) has been developed that aims to predict the perceived source width of a wide range of sounds. The following factors differentiate it from more commonly used IACC-based measurements: the use of a running measurement to quantify variations in width over time; half-wave rectification and low pass filtering of the input signal to mimic the breakdown of phase locking in the ear; compensation for the frequency and loudness dependency of perceived width; combination of a model of perceived location with a model of perceived width; and conversion of the results to an intuitive scale. Objective and subjective methods have been used to evaluate the accuracy and limitations of the resulting measurement model.
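
    The sketch below illustrates two of the listed modifications, half-wave rectification with low-pass filtering of the input and a running (windowed) IACC measurement, under assumed window, hop and cut-off values; it is not the published model and omits the frequency/loudness compensation, localisation and scaling stages.

        # Minimal sketch of the peripheral stages described above, for a binaural
        # pair `left`/`right` at `fs` Hz. Window length, hop size and the 1 kHz
        # cut-off are illustrative values, not those of the model.
        import numpy as np
        from scipy.signal import butter, sosfilt, correlate

        def running_iacc(left, right, fs, win_s=0.1, hop_s=0.05, max_lag_ms=1.0):
            sos = butter(2, 1000, btype="low", fs=fs, output="sos")
            l = sosfilt(sos, np.maximum(left, 0.0))   # half-wave rectify then LPF
            r = sosfilt(sos, np.maximum(right, 0.0))
            win, hop = int(win_s * fs), int(hop_s * fs)
            max_lag = int(max_lag_ms * 1e-3 * fs)
            values = []
            for start in range(0, len(l) - win, hop):
                lw, rw = l[start:start + win], r[start:start + win]
                xcorr = correlate(lw, rw, mode="full")[win - 1 - max_lag:win + max_lag]
                norm = np.sqrt(np.sum(lw ** 2) * np.sum(rw ** 2)) + 1e-12
                values.append(np.max(np.abs(xcorr)) / norm)
            return np.array(values)   # one IACC estimate per window

        fs = 48000
        noise = np.random.randn(2, fs)
        print(running_iacc(noise[0], noise[1], fs)[:5])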

    J Francombe, T Brookes, R Mason, R Flindt, P Coleman, Q Liu, PJB Jackson (2015) Production and reproduction of programme material for a variety of spatial audio formats, In: Proc. AES 138th Int. Conv. (e-Brief), Warsaw, pp. 4-4

    For subjective experimentation on 3D audio systems, suitable programme material is needed. A large-scale recording session was performed in which four ensembles were recorded with a range of existing microphone techniques (aimed at mono, stereo, 5.0, 9.0, 22.0, ambisonic, and headphone reproduction) and a novel 48-channel circular microphone array. Further material was produced by remixing and augmenting pre-existing multichannel content. To mix and monitor the programme items (which included classical, jazz, pop and experimental music, and excerpts from a sports broadcast and a film soundtrack), a flexible 3D audio reproduction environment was created. Solutions to the following challenges were found: level calibration for different reproduction formats; bass management; and adaptable signal routing from different software and file formats.

    J Francombe, RD Mason, M Dewhirst, S Bech (2014) Investigation of a random radio sampling method for selecting ecologically valid music programme material, In: Audio Engineering Society Preprint 9029

    When performing subjective tests of an audio system, it is necessary to use appropriately selected programme material to excite that system. Programme material is often required to be wide-ranging and representative of commonly consumed audio, whilst having minimal selection bias. A random radio sampling procedure was investigated for its ability to produce such a stimulus set. Nine popular stations were sampled at six different times of day over a number of days to produce a 200-item pool. Musical and signal-based characteristics were examined; the items were found to span a wide range of genres and years, and physical similarities were found between items in the same genre. The proposed method is beneficial for collecting a wide and representative stimulus set.

    Russell Mason, Tim Brookes, Francis Rumsey (2003) Creation and verification of a controlled experimental stimulus for investigating selected perceived spatial attributes, In: 114th AES Convention, pp. 22-25

    In order to undertake controlled investigations into perceptual effects that relate to the interaural cross-correlation coefficient, experiment stimuli that meet a tight set of criteria are required. The requirements of each stimulus are that it is narrow band, normally has a constant cross-correlation coefficient over time, and can be altered to cover the full range of values of cross-correlation coefficient, including specified variations over time if required. Stimuli created using a technique based on amplitude modulation are found to meet these criteria, and their use in a number of subjective experiments is described.

    James Woodcock, Jon Francombe, Andreas Franck, Philip Coleman, Richard Hughes, Hansung Kim, Qingju Liu, Dylan Menzies, Marcos F Simón Gálvez, Yan Tang, Tim Brookes, William J Davies, Bruno M Fazenda, Russell Mason, Trevor J Cox, Filippo Maria Fazi, Philip Jackson, Chris Pike, Adrian Hilton (2018) A Framework for Intelligent Metadata Adaptation in Object-Based Audio, In: AES E-Library, pp. P11-3, Audio Engineering Society

    Object-based audio can be used to customize, personalize, and optimize audio reproduction depending on the specific listening scenario. To investigate and exploit the benefits of object-based audio, a framework for intelligent metadata adaptation was developed. The framework uses detailed semantic metadata that describes the audio objects, the loudspeakers, and the room. It features an extensible software tool for real-time metadata adaptation that can incorporate knowledge derived from perceptual tests and/or feedback from perceptual meters to drive adaptation and facilitate optimal rendering. One use case for the system is demonstrated through a rule-set (derived from perceptual tests with experienced mix engineers) for automatic adaptation of object levels and positions when rendering 3D content to two- and five-channel systems.

    Andy Pearce, Timothy Brookes, Russell Mason (2017) Timbral attributes for sound effect library searching, In: AES E-Library, pp. 2-2, Audio Engineering Society

    To improve the search functionality of online sound effect libraries, timbral information could be extracted using perceptual models, and added as metadata, allowing users to filter results by timbral characteristics. This paper identifies the timbral attributes that end-users commonly search for, to indicate the attributes that might usefully be modelled for automatic metadata generation. A literature review revealed 1187 descriptors that were subsequently reduced to a hierarchy of 145 timbral attributes. This hierarchy covered the timbral characteristics of source types and modifiers including musical instruments, speech, environmental sounds, and sound recording and reproduction systems. A part-manual, part-automated comparison between the hierarchy and a freesound.org search history indicated that the timbral attributes hardness, depth, and brightness occur in searches most frequently.

    Jon Francombe, Russell Mason, Philip Jackson, Timothy Brookes, R Hughes, J Woodcock, A Franck, F Melchior, C Pike (2017) Media Device Orchestration for Immersive Spatial Audio Reproduction, In: Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences Proceedings, ACM

    Whilst it is possible to create exciting, immersive listening experiences with current spatial audio technology, the required systems are generally difficult to install in a standard living room. However, in any living room there is likely to already be a range of loudspeakers (such as mobile phones, tablets, laptops, and so on). "Media device orchestration" (MDO) is the concept of utilising all available devices to augment the reproduction of a media experience. In this demonstration, MDO is used to augment low channel count renderings of various programme material, delivering immersive three-dimensional audio experiences.

    R Mason, F Rumsey (2002) A comparison of objective measurements for predicting selected subjective spatial attributes, In: Audio Engineering Society Preprint 5591

    A controlled subjective experiment was undertaken to evaluate the relative merits of objective measurement techniques for predicting selected perceived spatial attributes of reproduced sound. The stimuli consisted of a number of anechoic recordings of single sound sources that were reproduced in a simulated concert hall and captured using a number of simulated multichannel microphone techniques. These were reproduced in a listening room and the subjects were asked to judge the perceived source width and perceived environment width of each stimulus. A number of objective measurements were made at the listening position and these were then compared with the subjective judgements. The results showed that a perceptually-grouped measurement of the experimental stimuli using a technique based on the interaural cross-correlation coefficient matched the subjective judgements most accurately, though the difference between this measurement and a number of other types was small.

    C Pike, RD Mason, T Brookes (2014) Auditory compensation for spectral coloration, In: Audio Engineering Society Preprint 9138

    The “spectral compensation effect” (Watkins, 1991) describes a decrease in perceptual sensitivity to spectral modifications caused by the transmission channel (e.g., loudspeakers, listening rooms). Few studies have examined this effect: its extent and perceptual mechanisms are not confirmed. The extent to which compensation affects the perception of sounds colored by loudspeakers and other channels should be determined. This compensation has been mainly studied with speech. Evidence suggests that speech engages special perceptual mechanisms, so compensation might not occur with non-speech sounds. The current study provides evidence of compensation for spectrum in nonspeech tests: channel coloration was reduced by approximately 20%.

    T Ashby, R Mason, T Brookes (2011) Prediction of perceived elevation using multiple pseudo-binaural microphones, In: Audio Engineering Society Preprint

    Computational auditory models that predict the perceived location of sound sources in terms of azimuth are already available, yet little has been done to predict perceived elevation. Interaural time and level differences, the primary cues in horizontal localisation, do not resolve source elevation, resulting in the ‘Cone of Confusion’. In natural listening, listeners can make head movements to resolve such confusion. To mimic the dynamic cues provided by head movements, a multiple microphone sphere was created, and a hearing model was developed to predict source elevation from the signals captured by the sphere. The prototype sphere and hearing model proved effective in both horizontal and vertical localisation. The next stage of this research will be to rigorously test a more physiologically accurate capture device.

    Jon Francombe, Timothy Brookes, Russell Mason (2018) Determination and validation of mix parameters for modifying envelopment in object-based audio, In: Journal of the Audio Engineering Society, 66(3), pp. 127-145, Audio Engineering Society

    Envelopment is an important attribute of listener preference for spatial audio reproduction. Object-based audio offers the possibility of altering the rendering of an audio scene in order to modify or maintain perceptual attributes - including envelopment - if the relationships between attributes and mix parameters are known. In a method of adjustment experiment, mixing engineers were asked to produce mixes of four program items at low, medium, and high levels of envelopment, in 2-channel, 5-channel, and 22-channel reproduction systems. The participants could vary a range of level, position, and equalization parameters that can be modified in object-based audio systems. The parameters could be varied separately for different semantic object categories. Nine parameters were found to have significant relationships with envelopment; parameters relating to the horizontal and vertical spread of sources were shown to be most important. A follow-on experiment demonstrated that these parameters can be adjusted to produce a range of envelopment levels in other program items.

    Craig Cieciura, Russell Mason, Philip Coleman, Matthew Paradis (2019) Creating Object-Based Stimuli to Explore Media Device Orchestration Reproduction Techniques, In: Proceedings of the AES 145th Convention, New York, NY, USA, 2018 October 17–20, 1, pp. 59-63, Audio Engineering Society

    Media Device Orchestration (MDO) makes use of interconnected devices to augment a reproduction system, and could be used to deliver more immersive audio experiences to domestic audiences. To investigate optimal rendering on an MDO-based system, stimuli were created via: 1) object-based audio (OBA) mixes undertaken in a reference listening room; and 2) up to 13 rendered versions of these employing a range of installed and ad-hoc loudspeakers with varying cost, quality and position. The programme items include audio-visual material (short film trailer and big band performance) and audio-only material (radio panel show, pop track, football match, and orchestral performance). The object-based programme items and alternate MDO configurations are made available for testing and demonstrating OBA systems.

    T Ashby, RD Mason, T Brookes (2013) Head movements in three-dimensional localisation, In: Audio Engineering Society Preprint 8881

    Previous studies give contradicting evidence as to the importance of head movements in localisation. In this study head movements were shown to increase localisation response accuracy in elevation and azimuth. For elevation, it was found that head movement improved localisation accuracy in some cases and that when pinna cues were impeded the significance of head movement cues was increased. For azimuth localization, head movement reduced front-back confusions. There was also evidence that head movement can be used to enhance static cues for azimuth localisation. Finally, it appears that head movement can increase the accuracy of listeners’ responses by enabling an interaction between auditory and visual cues.

    R Mason, T Brookes, F Rumsey (2004) Spatial impression: measurement and perception of concert hall acoustics and reproduced sound, In: Proceedings of the International Symposium on Room Acoustics, pp. ?-?

    Auditory width measurements based on the interaural cross-correlation coefficient (IACC) are often used in the field of concert hall acoustics. However, there are a number of problems with such measurements, including large variations around the centre of a room and a limited range of values at low frequencies. This paper explores how some of these problems can be solved by applying the IACC in a more perceptually valid manner and using it as part of a more complete hearing model. It is proposed that measurements based on the IACC may match the perceived width of stimuli more accurately if a source signal is measured rather than an impulse response, and when factors such as frequency and loudness are taken into account. Further developments are considered, including methods to integrate the results calculated in different frequency bands, and the temporal response of spatial perception

    Matthew Vowels, Russell Mason (2020) Comparison of pairwise dissimilarity and projective mapping tasks with auditory stimuli, In: Journal of the Audio Engineering Society, Audio Engineering Society

    Two methods for undertaking subjective evaluation were compared: a pairwise dissimilarity task (PDT) and a projective mapping task (PMT). For a set of unambiguous, synthetic, auditory stimuli the aim was to determine: whether the PMT limits the recovered dimensionality to two dimensions; how subjects respond using PMT’s two-dimensional response format; the relative time required for PDT and PMT; and hence whether PMT is an appropriate alternative to PDT for experiments involving auditory stimuli. The results of both Multi-Dimensional Scaling (MDS) analyses and Multiple Factor Analyses (MFA) indicate that, with multiple participants, PMT allows for the recovery of three meaningful dimensions. The results from the MDS and MFA analyses of the PDT data, on the other hand, were ambiguous and did not enable recovery of more than two meaningful dimensions. This result was unexpected given that PDT is generally considered not to limit the dimensionality that can be recovered. Participants took less time to complete the experiment using PMT compared to PDT (a median ratio of approximately 1:4), and employed a range of strategies to express three perceptual dimensions using PMT’s two-dimensional response format. PMT may provide a viable and efficient means to elicit up to 3-dimensional responses from listeners.
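
    For readers unfamiliar with the analysis, the following sketch shows metric MDS applied to a pairwise dissimilarity matrix using scikit-learn; the dissimilarities are synthetic distances between invented stimulus coordinates, and the MFA analysis of the projective mapping data is not shown.

        # Hypothetical example: recovering a low-dimensional stimulus space from a
        # pairwise dissimilarity matrix with metric MDS.
        import numpy as np
        from sklearn.manifold import MDS

        rng = np.random.default_rng(1)
        true_positions = rng.uniform(size=(8, 3))          # 8 stimuli, 3 latent dims
        dissim = np.linalg.norm(true_positions[:, None] - true_positions[None, :], axis=-1)

        mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
        embedding = mds.fit_transform(dissim)
        print(embedding.shape, f"stress = {mds.stress_:.3f}")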

    R Mason, T Brookes, F Rumsey (2004) Development of the interaural cross-correlation coefficient into a more complete auditory width prediction model, In: Proceedings of the 18th International Congress on Acoustics, IV, pp. 2453-2456

    Auditory width measurements based on the interaural cross-correlation coefficient (IACC) are often used in the field of concert hall acoustics. However, there are a number of problems with such measurements, including large variations around the centre of a room and a limited range of values at low frequencies. This paper explores how some of these problems can be solved by applying the IACC in a more perceptually valid manner and using it as part of a more complete hearing model. It is proposed that measurements based on the IACC may match the perceived width of stimuli more accurately if a source signal is measured rather than an impulse response, and when factors such as frequency and loudness are taken into account. Further developments are considered, including methods to integrate the results calculated in different frequency bands, and the temporal response of spatial perception

    Craig Cieciura, Russell Mason, Philip Coleman, Matthew Paradis (2018) Survey of Media Device Ownership, Media Service Usage and Group Media Consumption in UK Households, Zenodo

    Data generated from a survey produced by the authors and distributed by GfK, to gather information about: audio and audio-visual media device ownership in UK households; types of audio and audio-visual media delivery methods and services used by UK audiences; smart-device and voice-assistant ownership; individual versus household group versus visitor group weekly media consumption time. This forms part of the PhD research of Craig Cieciura. This was experiment-based research to determine how to render object-based audio in the domestic environment using ad-hoc, audio-capable devices. References Cieciura, C., Mason, R., Coleman, P. and Paradis, M. 2018. Survey of media device ownership, media service usage, and group media consumption in UK households, Audio Engineering Society Preprint, 145th Convention, Engineering Brief 456.

    J Francombe, R Mason, M Dewhirst, S Bech (2015) A model of distraction in an audio-on-audio interference situation with music program material, In: Journal of the Audio Engineering Society, 63(1-2), pp. 63-77

    Audio-on-audio interference situations are a common occurrence in everyday life; they may be naturally occurring or be a side-effect of a non-ideal personal sound zone system. In order to evaluate and optimize such situations in a perceptually relevant manner, it is desirable to develop a model of listener experience. Distraction ratings were collected for 100 randomly created audio-on-audio interference situations with music target and interferer programs. A large set of features was also extracted from the audio; the feature extraction was motivated by a qualitative analysis of subject responses. An iterative linear regression procedure was used to develop a predictive model. The selected features were related to the overall loudness, loudness ratio, perceptual evaluation of audio source separation (PEASS) toolbox interference-related perceptual score, and frequency content of the interferer. The model was found to predict accurately for the training and validation data sets (RMSE of approximately 10%), with the exception of a small number of outlying stimuli.

    Dominic Ward, Russell D. Mason, Ryan Chungeun Kim, Fabian-Robert Stöter, Antoine Liutkus, Mark D. Plumbley (2018) SiSEC 2018: state of the art in musical audio source separation - Subjective selection of the best algorithm, In: Proceedings of the 4th Workshop on Intelligent Music Production, Huddersfield, UK, 14 September 2018, University of Huddersfield

    The Signal Separation Evaluation Campaign (SiSEC) is a large-scale regular event aimed at evaluating current progress in source separation through a systematic and reproducible comparison of the participants’ algorithms, providing the source separation community with an invaluable glimpse of recent achievements and open challenges. This paper focuses on the music separation task from SiSEC 2018, which compares algorithms aimed at recovering instrument stems from a stereo mix. In this context, we conducted a subjective evaluation whereby 34 listeners picked which of six competing algorithms, with high objective performance scores, best separated the singing-voice stem from 13 professionally mixed songs. The subjective results reveal strong differences between the algorithms, and highlight the presence of song-dependent performance for state-of-the-art systems. Correlations between the subjective results and the scores of two popular performance metrics are also presented.

    H-K Lee, R Mason, F Rumsey (2007) Perceptually modelled effects of interchannel crosstalk in multichannel microphone technique, In: Audio Engineering Society Convention 123, 7200, pp. ?-?

    One of the most noticeable perceptual effects of interchannel crosstalk in multichannel microphone technique is an increase in perceived source width. The relationship between the perceived source-width-increasing effect and its physical causes was analysed using an IACC-based objective measurement model. A description of the measurement model is presented and the measured data obtained from stimuli created with crosstalk and those without crosstalk are analysed visually. In particular, frequency and envelope dependencies of the measured results and their relationship with the perceptual effect are discussed. The relationship between the delay time of the crosstalk signal and the effect of different frequency content on the perceived source width is also discussed in this paper.

    In a previous study it was discovered that listeners normally make head movements attempting to evaluate source width and envelopment as well as source location. To accommodate this finding in the development of an objective measurement model for spatial impression, two capturing models were introduced and designed in this research, based on binaural technique: 1) rotating Head And Torso Simulator (HATS), and 2) a sphere with multiple microphones. As an initial study, measurements of interaural time difference (ITD), level difference (ILD) and cross-correlation coefficient (IACC) made with the HATS were compared with those made with a sphere containing two microphones. The magnitude of the differences was judged in a perceptually relevant manner by comparing them with the just-noticeable differences (JNDs) of these parameters. The results showed that the differences were generally not negligible, implying the necessity of enhancement of the sphere model, possibly by introducing equivalents of the pinnae or torso. An exception was the case of IACC, where the reference of JND specification affected the perceptual significance of its difference between the two models.

    J Francombe, K Baykaner, R Mason, M Dewhirst, P Coleman, M Olik, PJB Jackson, S Bech, JA Pedersen (2013) Perceptually optimised loudspeaker selection for the creation of personal sound zones, In: Proceedings of the 52nd AES International Conference, pp. 169-178

    Sound field control methods can be used to create multiple zones of audio in the same room. Separation achieved by such systems has classically been evaluated using physical metrics including acoustic contrast and target-to-interferer ratio (TIR). However, to optimise the experience for a listener it is desirable to consider perceptual factors. A search procedure was used to select 5 loudspeakers for production of 2 sound zones using acoustic contrast control. Comparisons were made between searches driven by physical (programme-independent TIR) and perceptual (distraction predictions from a statistical model) cost functions. Performance was evaluated on TIR and predicted distraction in addition to subjective ratings. The perceptual cost function showed some benefits over physical optimisation, although the model used needs further work.
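
    Schematically, the search component can be thought of as choosing the loudspeaker subset that minimises a cost function; the sketch below enumerates 5-loudspeaker subsets of a hypothetical 12-loudspeaker candidate set with a placeholder cost standing in for the TIR or distraction-model cost functions used in the study.

        # Schematic of the search component: pick the 5-loudspeaker subset that
        # minimises a cost function. `cost` is a stand-in placeholder.
        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(0)
        n_loudspeakers = 12                      # illustrative candidate set size

        def cost(subset):
            # Hypothetical placeholder: replace with TIR or predicted distraction.
            return rng.normal() + 0.1 * sum(subset)

        best = min(combinations(range(n_loudspeakers), 5), key=cost)
        print("selected loudspeakers:", best)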

    R Mason, F Rumsey (1999) An investigation of microphone techniques for ambient sound in surround sound systems, In: Audio Engineering Society Preprint 4912

    A controlled subjective test was carried out to assess selected qualities of three ambient microphone techniques for surround sound. The effects of signal delay and microphone distance were explored. The tests indicate that the perceived results are programme dependent, but that a compromise can be found using delayed close microphones, giving similar quality for the range of programme items used.

    C Pike, Russell Mason, Tim Brookes (2014) Auditory adaptation to static spectra

    Auditory adaptation is thought to reduce the perceptual impact of static spectral energy and increase sensitivity to spectral change. Research suggests that this adaptation helps listeners to extract stable speech cues across different talkers, despite inter-talker spectral variations caused by differing vocal tract acoustics. This adaptation may also be involved in compensation for distortions caused by transmission channels more generally (e.g. distortions caused by the room or loudspeaker through which a sound has passed). The magnitude of this adaptation and its ecological importance has not been established. The physiological and psychological mechanisms behind adaptation are also not well understood. The current research aimed to confirm that adaptation to transmission channel spectrum occurs when listening to speech produced through two types of transmission channel: loudspeakers and rooms. The loudspeaker is analogous to the vocal tract of a talker, imparting resonances onto a sound source which reaches the listener both directly and via reflections. The room-affected speech, however, reaches the listener only via reflections – there is no direct path. Larger adaptation to the spectrum of the room was found, compared to adaptation to the spectrum of the loudspeaker. It appears that when listening to speech, mechanisms of adaptation to room reflections, and adaptation to loudspeaker/vocal tract spectrum, may be different.

    AJR Simpson, G Roma, Emad M Grais, Russell Mason, Christopher Hummersone, Mark Plumbley (2017) Psychophysical Evaluation of Audio Source Separation Methods, In: LNCS: Latent Variable Analysis and Signal Separation, 10169, pp. 211-221, Springer

    Source separation evaluation is typically a top-down process, starting with perceptual measures which capture fitness-for-purpose and followed by attempts to find physical (objective) measures that are predictive of the perceptual measures. In this paper, we take a contrasting bottom-up approach. We begin with the physical measures provided by the Blind Source Separation Evaluation Toolkit (BSS Eval) and we then look for corresponding perceptual correlates. This approach is known as psychophysics and has the distinct advantage of leading to interpretable, psychophysical models. We obtained perceptual similarity judgments from listeners in two experiments featuring vocal sources within musical mixtures. In the first experiment, listeners compared the overall quality of vocal signals estimated from musical mixtures using a range of competing source separation methods. In a loudness experiment, listeners compared the loudness balance of the competing musical accompaniment and vocal. Our preliminary results provide provisional validation of the psychophysical approach

    R Mason (1999) Microphone techniques for multichannel surround sound, In: Proceedings of the 1999 AES UK conference, Audio: the second century, pp. 15-24

    A controlled subjective test was carried out to assess selected qualities of three microphone techniques for capturing the ambient sound of a concert hall for surround sound. The effects of signal delay between the front and rear channels and microphone distance were explored. The tests indicate that the perceived results are programme-dependent, but that a compromise can be found using delayed close microphones, giving similar quality for the range of programme items used.

    Craig Cieciura, Russell Mason, Philip Coleman, Matthew Paradis (2019) Survey of Media Device Ownership, Media Service Usage and Group Media Consumption in UK Households, In: Proceedings of the AES 145th Convention, New York, NY, USA, 2018 October 17–20, 1, pp. 24-28, Audio Engineering Society

    Homes contain a plethora of devices for audio-visual content consumption, which intelligent reproduction systems can exploit to give the best possible experience. To investigate media device ownership in the home, media service-types usage and solitary versus group audio/audio-visual media consumption, a survey of UK households with 1102 respondents was undertaken. The results suggest that there is already significant ownership of wireless and smart loudspeakers, as well as other interconnected devices containing loudspeakers such as smartphones and tablets. Questions on group media consumption suggest that the majority of listeners spend more time consuming media with others than alone, demonstrating an opportunity for systems which can adapt to varying audience requirements within the same environment.

    R Mason, C Kim, T Brookes (2009) Perception of head-position-dependent variations in interaural cross-correlation coefficient, In: Audio Engineering Society Preprint 7729

    Experiments were undertaken to elicit the perceived effects of head-position-dependent variations in the interaural cross-correlation coefficient of a range of signals. A graphical elicitation experiment showed that the variations in the IACC strongly affected the perceived width and depth of the reverberant environment, as well as the perceived width and distance of the sound source. A verbal experiment gave similar results, and also indicated that the head-position-dependent IACC variations caused changes in the perceived spaciousness and envelopment of the stimuli.

    The subjective spatial effect of continuous noise signals with interaural time difference fluctuations was investigated. These fluctuations were created by sinusoidal interchannel time difference fluctuations between signals that were presented over loudspeakers. Both verbal and non-verbal elicitation techniques were applied to examine the subjective effect. It was found that the predominant effect of increasing the fluctuation magnitude was an increase in the apparent width of the perceived sound source.

    Christopher Hummersone, Russell Mason, Tim Brookes (2011) Ideal Binary Mask Ratio: a novel metric for assessing binary-mask-based sound source separation algorithms, In: IEEE Transactions on Audio, Speech and Language Processing, 19(7), pp. 2039-2045, IEEE

    A number of metrics has been proposed in the literature to assess sound source separation algorithms. The addition of convolutional distortion raises further questions about the assessment of source separation algorithms in reverberant conditions as reverberation is shown to undermine the optimality of the ideal binary mask (IBM) in terms of signal-to-noise ratio (SNR). Furthermore, with a range of mixture parameters common across numerous acoustic conditions, SNR–based metrics demonstrate an inconsistency that can only be attributed to the convolutional distortion. This suggests the necessity for an alternate metric in the presence of convolutional distortion, such as reverberation. Consequently, a novel metric—dubbed the IBM ratio (IBMR)—is proposed for assessing source separation algorithms that aim to calculate the IBM. The metric is robust to many of the effects of convolutional distortion on the output of the system and may provide a more representative insight into the performance of a given algorithm.
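
    For context, the sketch below shows how an ideal binary mask is formed from target and interferer spectrograms (a bin is assigned to the target when it dominates by a 0 dB local criterion) and computes a naive mask-agreement ratio; this ratio is only an illustrative stand-in, not the IBMR as defined in the paper.

        # Sketch of the ideal binary mask (IBM) on which the metric is based.
        import numpy as np
        from scipy.signal import stft

        fs = 16000
        rng = np.random.default_rng(0)
        target = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
        noise = rng.normal(scale=0.5, size=fs)

        _, _, T = stft(target, fs=fs, nperseg=512)
        _, _, N = stft(noise, fs=fs, nperseg=512)
        ibm = (np.abs(T) ** 2 > np.abs(N) ** 2).astype(float)    # 0 dB local criterion

        estimated_mask = np.ones_like(ibm)      # e.g. output of a separation system
        agreement = np.mean(estimated_mask == ibm)
        print(f"fraction of bins agreeing with the IBM: {agreement:.2f}")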

    Emad M. Grais, Hagen Wierstorf, Dominic Ward, Russell Mason, Mark Plumbley (2019) Referenceless Performance Evaluation of Audio Source Separation using Deep Neural Networks, In: Proceedings of the 2019 27th European Signal Processing Conference (EUSIPCO), IEEE

    Current performance evaluation for audio source separation depends on comparing the processed or separated signals with reference signals. Therefore, common performance evaluation toolkits are not applicable to real-world situations where the ground truth audio is unavailable. In this paper, we propose a performance evaluation technique that does not require reference signals in order to assess separation quality. The proposed technique uses a deep neural network (DNN) to map the processed audio into its quality score. Our experiment results show that the DNN is capable of predicting the sources-to-artifacts ratio from the blind source separation evaluation toolkit [1] for singing-voice separation without the need for reference signals.

    R Mason, F Rumsey, SK Zielinski (2005) PCA-based down-mixing

    T Neher, Tim Brookes, Russell Mason (2006) Musically Representative Signals for Use in Interaural Cross-Correlation Coefficient Measurements, In: Acta Acustica United with Acustica, 92(5), pp. 787-796, Hirzel Verlag

    Typically, measurements that aim to predict perceived spatial impression of music signals in concert halls are performed by calculating the interaural cross-correlation coefficient (IACC) of a binaurally-recorded impulse response. Previous research, however, has shown that this can lead to results very different from those obtained if a musical input signal is used. The reasons for this discrepancy were investigated, and it was found that the overall duration of the source signal, its onset and offset times, and the magnitude and rate of any spectral fluctuations, have a very strong effect on the IACC. Two test signals, synthesised to be representative of a wide range of musical stimuli, can extend the external validity of traditional IACC-based measurements.

    A Pearce, Tim Brookes, M Dewhirst, Russell Mason (2016) Eliciting the most prominent perceived differences between microphones, In: Journal of the Acoustical Society of America

    The attributes contributing to the differences perceived between microphones (when auditioning recordings made with those microphones) are not clear from previous research. Consideration of technical specifications and expert opinions indicated that recording five programme items with eight studio and two MEMS microphones could allow determination of the attributes related to the most prominent inter-microphone differences. Pairwise listening comparisons between the resulting 50 recordings, followed by multi-dimensional scaling analysis, revealed up to five salient dimensions per programme item; seventeen corresponding pairs of recordings were selected exemplifying the differences across those dimensions. Direct elicitation and panel discussions on the seventeen pairs identified a hierarchy of 40 perceptual attributes. An attribute contribution experiment on the 31 lowest-level attributes in the hierarchy allowed them to be ordered by degree of contribution and showed brightness, harshness, and clarity to always contribute highly to perceived inter-microphone differences. This work enables the future development of objective models to predict these important attributes.

    J Rämö, S Marsh, S Bech, Russell Mason, S Holdt Jensen (2016) Validation of a perceptual distraction model in a complex personal sound zone system, In: 141st Audio Engineering Society Convention Proceedings, 9665

    This paper evaluates a previously proposed perceptual model predicting user’s perceived distraction caused by interfering audio programmes. The distraction model was originally trained using a simple sound reproduction system for music-on-music interference situations and it has not been formally tested using more complex sound systems. A listening experiment was conducted to evaluate the performance of the model, using music target and speech interferer reproduced by a complex personal sound-zone system. The model was found to successfully predict the perceived distraction of a more complex sound reproducing system with different target-interferer pairs than it was originally trained for. Thus, the model can be used as a tool for personal sound-zone evaluation and optimization tasks.

    J Francombe, R Mason, M Dewhirst, S Bech (2013) Modeling listener distraction resulting from audio-on-audio interference, In: Journal of the Acoustical Society of America, 133(5), pp. 3367-3367, Acoustical Society of America

    As devices that produce audio become more commonplace and increasingly portable, situations in which two competing audio programs are present occur more regularly. In order to support the design of systems intended to mitigate the effects of interfering audio (including sound field control, noise cancelation or source separation systems), it is desirable to model the perceived distraction in such situations. Distraction ratings were collected for a range of audio-on-audio interference situations including various target and interferer programs at three interferer levels, with and without road noise. Time-frequency target-to-interferer ratio (TIR) maps of the stimuli were created using a simple auditory model. A number of feature sets were extracted from the TIR maps, including combinations of mean, standard deviation, minimum and maximum TIR taken across the duration of the program item. In order to predict distraction ratings from the features, linear regression models were produced. The models were evaluated for goodness-of-fit (RMSE) and generalizability (using a K-fold cross-validation procedure). The best model performed well, with almost all predictions falling within the 95% confidence intervals of the perceptual data. A validation data set was used to test the model, suggesting areas for future improvement.
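
    A generic sketch of the modelling step is shown below: a linear regression from extracted features to distraction ratings, assessed with K-fold cross-validation; the features and ratings are synthetic placeholders rather than the TIR-map statistics used in the study.

        # Generic sketch: linear regression from features to distraction ratings,
        # assessed with 10-fold cross-validation. Data are synthetic placeholders.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score, KFold

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 4))            # e.g. mean/std/min/max TIR per item
        y = X @ np.array([3.0, -1.0, 0.5, 0.0]) + rng.normal(scale=2.0, size=120)

        cv = KFold(n_splits=10, shuffle=True, random_state=0)
        rmse = -cross_val_score(LinearRegression(), X, y, cv=cv,
                                scoring="neg_root_mean_squared_error")
        print(f"cross-validated RMSE: {rmse.mean():.2f} +/- {rmse.std():.2f}")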

    Chungeun Kim, Emad M Grais, Russell Mason, Mark D Plumbley (2018) Perception of phase changes in the context of musical audio source separation, In: 145th AES Convention, 10031, AES

    This study investigates the perceptual consequences of phase change in conventional magnitude-based source separation. A listening test was conducted, where the participants compared three different source separation scenarios, each with two phase retrieval cases: phase from the original mix or from the target source. The participants’ responses regarding their similarity to the reference showed that 1) the difference between the mix phase and the perfect target phase was perceivable in the majority of cases with some song-dependent exceptions, and 2) use of the mix phase degraded the perceived quality even in the case of perfect magnitude separation. The findings imply that there is room for perceptual improvement by attempting correct phase reconstruction, in addition to achieving better magnitude-based separation.
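
    The two phase-retrieval cases can be reproduced in a few lines: combine a target magnitude spectrogram with either the mix phase or the target phase and invert the STFT. The sketch below does this for synthetic tones and an ideal magnitude, purely to illustrate the comparison made in the listening test.

        # Sketch of the two phase-retrieval cases: ideal target magnitude combined
        # with either the mix phase or the target phase, then inverted.
        import numpy as np
        from scipy.signal import stft, istft

        fs = 16000
        t = np.arange(fs) / fs
        target = np.sin(2 * np.pi * 440 * t)
        interferer = 0.5 * np.sin(2 * np.pi * 660 * t)
        mix = target + interferer

        _, _, MIX = stft(mix, fs=fs, nperseg=1024)
        _, _, TGT = stft(target, fs=fs, nperseg=1024)

        mag = np.abs(TGT)                                  # ideal magnitude separation
        _, y_mix_phase = istft(mag * np.exp(1j * np.angle(MIX)), fs=fs, nperseg=1024)
        _, y_tgt_phase = istft(mag * np.exp(1j * np.angle(TGT)), fs=fs, nperseg=1024)

        n = min(len(y_mix_phase), len(target))
        err = 10 * np.log10(np.mean((y_mix_phase[:n] - target[:n]) ** 2) /
                            np.mean(target[:n] ** 2))
        print(f"error with mix phase relative to target: {err:.1f} dB")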

    T Ashby, Russell Mason, Tim Brookes (2014) Elevation localisation response accuracy on vertical planes of differing azimuth, In: Audio Engineering Society Preprint 9046

    Head movement has been shown to significantly improve localisation response accuracy in elevation. It is unclear from previous research whether this is due to static cues created once the head has reached a new stationary position or dynamic cues created through the act of moving the head. In this experiment listeners were asked to report the location of loudspeakers placed on vertical planes at four different azimuth angles (0°, 36°, 72°, 108°) with no head movement. Static elevation response accuracy was significantly more accurate for sources away from the median plane. This finding, combined with the statement that listeners orient to face the source when localising, suggests that dynamic cues are the cause of improved localisation through head movement.

    K Baykaner, C Hummersone, R Mason, S Bech (2013) The prediction of the acceptability of auditory interference based on audibility, In: Proceedings of the 52nd AES International Conference, pp. 162-168

    In order to evaluate the ability of sound field control methods to generate independent listening zones within domestic and automotive environments, it is useful to be able to predict, without listening tests, the acceptability of auditory interference scenarios. It was considered likely that a relationship would exist between masking thresholds and acceptability thresholds, thus a listening test was carried out to gather acceptability thresholds to compare with existing masking data collected under identical listening conditions. An analysis of the data revealed that a linear regression model could be used to predict acceptability thresholds, from only masking thresholds, with RMSE = 2.6 dB and R = 0.86. The same linear regression model was used to predict acceptability thresholds but with masking threshold predictions as the input. The results had RMSE = 4.2 dB and R = 0.88.

    Dominic Ward, Hagen Wierstorf, Russell Mason, Emad M. Grais, Mark Plumbley (2018) BSS Eval or PEASS? Predicting the perception of singing-voice separation, In: Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 596-600, Institute of Electrical and Electronics Engineers (IEEE)

    There is some uncertainty as to whether objective metrics for predicting the perceived quality of audio source separation are sufficiently accurate. This issue was investigated by employing a revised experimental methodology to collect subjective ratings of sound quality and interference of singing-voice recordings that have been extracted from musical mixtures using state-of-the-art audio source separation. A correlation analysis between the experimental data and the measures of two objective evaluation toolkits, BSS Eval and PEASS, was performed to assess their performance. The artifacts-related perceptual score of the PEASS toolkit had the strongest correlation with the perception of artifacts and distortions caused by singing-voice separation. Both the source-to-interference ratio of BSS Eval and the interference-related perceptual score of PEASS showed comparable correlations with the human ratings of interference.

    MF Simon-Galvez, D Menzies, Russell Mason, FM Fazi (2016) Object-Based Audio Reproduction using a Listener-Position Adaptive Stereo System, In: Journal of the Audio Engineering Society, 64(10), pp. 740-751, Audio Engineering Society

    This work introduces a listener-position adaptive stereo reproduction system that allows for the reproduction of 2D object-based audio and for a more accurate localisation when the listener is located outside the sweet spot. The adaptation is composed of two parts: a compensation system that updates the loudspeaker feeds so that the loudspeaker input signals are delivered to the listener with the same magnitude and phase independently of the listening position, as would occur in a symmetric listening configuration, and an object-based rendering system using conventional panning algorithms. Robustness simulations show that an accurate localisation is possible when the audio objects are panned in the angle seen between the listener and the two loudspeakers. This has been further assessed by objective and subjective localisation experiments.
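
    As an example of the "conventional panning algorithms" referred to above (and not the proposed listener-position compensation itself), a constant-power sine/cosine pan law between two loudspeakers can be sketched as follows.

        # Constant-power stereo pan law (illustrative; not the paper's system).
        import numpy as np

        def constant_power_pan(pan):
            """pan in [-1, 1]: -1 = fully left, +1 = fully right."""
            theta = (pan + 1) * np.pi / 4          # map to [0, pi/2]
            return np.cos(theta), np.sin(theta)    # (left gain, right gain)

        for p in (-1.0, 0.0, 0.5):
            gl, gr = constant_power_pan(p)
            print(f"pan={p:+.1f}: gL={gl:.2f}, gR={gr:.2f}, power={gl**2 + gr**2:.2f}")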

    Leandro Da Silva Pires, Maurílio Nunes Vieira, Hani Camille Yehia, Alexander Mattioli Pasqual, Timothy Brookes, Russell Mason (2018) Modelo de distância auditiva percebida para o algoritmo de loudness ITU-R BS.1770 [A perceived-auditory-distance model for the ITU-R BS.1770 loudness algorithm], In: XXVIII Encontro da SOBRAC, Sociedade Brasileira de Acústica

    The International Telecommunication Union Radiocommunication Sector (ITU-R) Recommendation BS.1770 for measuring perceived loudness in multichannel content has become a de facto standard in the audio industry and a de jure standard for digital broadcasting. Although its frequency weighting accounts for the acoustic effects of the head, the model is insensitive to source-listener distance, an important localisation cue for sound objects. Subjective tests were undertaken to investigate the effect of perceived auditory distance on the loudness of noise, speech, music, and environmental sounds. Based on the variations found, an adaptation of the ITU-R algorithm is proposed and evaluated against the experiment participants' ratings. The resulting loudness-level differences fell within the confidence intervals of the level differences indicated by the participants at the source-listener distances most common in living rooms.
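
    For reference, the core of the (ungated) ITU-R BS.1770 measure that the proposed adaptation modifies can be sketched as below; the K-weighting filter is left as a pass-through placeholder and the gating and surround-channel weights of the Recommendation are omitted.

        # Simplified, ungated sketch of a BS.1770-style loudness measure:
        # L = -0.691 + 10*log10(sum_i G_i * z_i), where z_i is the mean square of
        # the K-weighted channel signal. `k_weight` is a placeholder here.
        import numpy as np

        def k_weight(x, fs):
            # Placeholder: apply the BS.1770 pre-filter and RLB filter here.
            return x

        def bs1770_loudness(channels, fs, weights=None):
            """channels: array of shape (n_channels, n_samples)."""
            if weights is None:
                weights = np.ones(len(channels))   # surround channels use higher weights
            z = [np.mean(k_weight(ch, fs) ** 2) for ch in channels]
            return -0.691 + 10 * np.log10(np.sum(weights * np.array(z)))

        fs = 48000
        stereo = 0.1 * np.random.randn(2, fs)
        print(f"loudness: {bs1770_loudness(stereo, fs):.1f} LKFS (unweighted, ungated sketch)")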

    Jon Francombe, Timothy Brookes, Russell Mason, J Woodcock (2016) Determining and Labeling the Preference Dimensions of Spatial Audio Replay, In: QoMEX 2016

    There are many spatial audio reproduction systems currently in domestic use (e.g. mono, stereo, surround sound, sound bars, and headphones). In an experiment, pairwise preference magnitude ratings for a range of such systems were collected from trained and untrained listeners. The ratings were analysed using internal preference mapping to: (i) uncover the principal perceptual dimensions of listener preference; (ii) label the dimensions based on the important perceptual attributes; and (iii) observe differences between trained and untrained listeners. To aid with labelling the dimensions, perceptual attributes were elicited alongside the preference ratings and were analysed by: (i) considering a metric derived from the frequency of use of each attribute and the magnitude of the related preference judgements; and (ii) observing attribute use for comparisons between specific methods. The first preference dimension accounted for the vast majority of the variance in ratings; it was related to multiple important attributes, including those associated with spatial capability and freedom from distortion. All participants exhibited a preference for reproduction methods that were positively correlated with the first dimension (most notably 5-, 9-, and 22-channel surround sound). The second dimension accounted for only a very small proportion of the variance, and appeared to separate the headphone method from the other methods. The trained and untrained listeners generally showed opposite preferences in the second dimension, suggesting that trained listeners have a higher preference for headphone reproduction than untrained listeners.

    C Kim, R Mason, T Brookes (2010) Validation of a simple spherical head model as a signal capture device for head-movement-aware prediction of perceived spatial impression, In: Proceedings of the 40th International AES Conference, pp. ?-?

    In order to take head movement into account in objective evaluation of perceived spatial impression (including source direction), a suitable binaural capture device is required. A signal capture system was suggested that consisted of a head-sized sphere containing multiple pairs of microphones which, in comparison to a rotating head and torso simulator (HATS), has the potential for improved measurement speed and the capability to measure time varying systems, albeit at the expense of some accuracy. The error introduced by using a relatively simple sphere compared to a more physically accurate HATS was evaluated in terms of three binaural parameters related to perceived spatial impression – interaural time and level differences (ITD and ILD) and interaural cross-correlation coefficient (IACC). It was found that whilst the error in the IACC measurements was perceptually negligible when the sphere was mounted on a torso, the differences in measured ITD and ILD values between the sphere-with-torso and HATS were not perceptually negligible. However, it was found that the sphere-with-torso could give accurate predictions of source location based on ITD and ILD, through the use of a look-up table created from known ITD-ILD-direction mappings. Therefore the validity of the multi-microphone sphere-with-torso as a binaural signal capture device for perceptually relevant measurements of source direction (based on ITD and ILD) and spatial impression (based on IACC) was demonstrated.
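
    The look-up-table step described above can be illustrated as a nearest-neighbour search over stored ITD-ILD-azimuth mappings; the table values below are synthetic curves, not measurements of the sphere-with-torso.

        # Illustrative nearest-neighbour look-up of azimuth from measured ITD/ILD.
        import numpy as np

        azimuths = np.arange(-90, 91, 5)                            # degrees
        table_itd = 0.00066 * np.sin(np.radians(azimuths))          # seconds, synthetic curve
        table_ild = 10.0 * np.sin(np.radians(azimuths))             # dB, synthetic curve

        def estimate_azimuth(itd, ild, itd_scale=0.00066, ild_scale=10.0):
            # Normalise each cue by its range so both contribute comparably.
            d = ((table_itd - itd) / itd_scale) ** 2 + ((table_ild - ild) / ild_scale) ** 2
            return azimuths[np.argmin(d)]

        print(estimate_azimuth(itd=0.0003, ild=4.0))   # -> estimated azimuth in degrees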

    Andy Pearce, Tim Brookes, Russell Mason (2019) Modelling Timbral Hardness, In: Applied Sciences, 9(3)

    Hardness is the most commonly searched timbral attribute within freesound.org, a commonly used online sound effect repository. A perceptual model of hardness was developed to enable the automatic generation of metadata to facilitate hardness-based filtering or sorting of search results. A training dataset was collected of 202 stimuli with 32 sound source types, and perceived hardness was assessed by a panel of listeners. A multilinear regression model was developed on six features: maximum bandwidth, attack centroid, midband level, percussive-to-harmonic ratio, onset strength, and log attack time. This model predicted the hardness of the training data with R2 = 0.76. It predicted hardness within a new dataset with R2 = 0.57, and predicted the rank order of individual sources perfectly, after accounting for the subjective variance of the ratings. Its performance exceeded that of human listeners.

    K Baykaner, C Hummersone, RD Mason, S Bech (2014) The acceptability of speech with interfering radio programme material, In: Audio Engineering Society Preprint 9020

    A listening test was conducted to investigate the acceptability of audio-on-audio interference for radio programmes featuring speech as the target. 21 subjects, including naïve and expert listeners, were presented with 200 randomly assigned pairs of stimuli and asked to report, for each trial, whether the listening scenario was acceptable or unacceptable. Stimuli pairs were set to randomly selected SNRs ranging from 0 to 45 dB. Results showed no significant difference between subjects according to listening experience. A logistic regression to acceptability was carried out based on SNR. The model had accuracy R2 = 0.87, RMSE = 14%, and RMSE* = 7%. By accounting for the presence of background audio in the target programme, 90% of the variance could be explained.
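
    The reported analysis type, a logistic regression from SNR to the probability of a scenario being judged acceptable, can be sketched as follows on simulated data; the paper's coefficients and accuracy figures are not reproduced.

        # Logistic regression of acceptability vs SNR (simulated data only).
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        snr = rng.uniform(0, 45, size=400).reshape(-1, 1)           # dB
        p_true = 1 / (1 + np.exp(-(snr.ravel() - 20) / 4))          # hypothetical psychometric curve
        acceptable = rng.uniform(size=400) < p_true

        model = LogisticRegression().fit(snr, acceptable)
        print("P(acceptable) at 15, 25, 35 dB:",
              model.predict_proba(np.array([[15.0], [25.0], [35.0]]))[:, 1].round(2))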

    Craig Cieciura, Russell Mason, Philip Coleman, Matthew Paradis (2018) Creating Object-Based Stimuli to Explore Media Device Orchestration Reproduction Techniques, Zenodo

    Dataset containing Object-based versions and rendered out MDO loudspeaker feeds of two programme items, adapted from existing material to explore Media Device Orchestration reproduction techniques. This forms part of the PhD research of Craig Cieciura. This was experiment-based research to determine how to render object-based audio in the domestic environment using ad-hoc, audio-capable devices. References Cieciura, C., Mason, R., Coleman, P. and Paradis, M. 2018. Creating Object-Based Stimuli to Explore Media Device Orchestration Reproduction Techniques, Audio Engineering Society Preprint, 145th Convention, Engineering Brief 463.

    Jon Francombe, Timothy Brookes, Russell Mason (2017) Automatic text clustering for audio attribute elicitation experiment responses, In: Data to accompany "Automatic text clustering for audio attribute elicitation experiment responses", Audio Engineering Society

    Collection of text data is an integral part of descriptive analysis, a method commonly used in audio quality evaluation experiments. Where large text data sets will be presented to a panel of human assessors (e.g., to group responses that have the same meaning), it is desirable to reduce redundancy as much as possible in advance. Text clustering algorithms have been used to achieve such a reduction. A text clustering algorithm was tested on a dataset for which manual annotation by two experts was also collected. The comparison between the manual annotations and automatically-generated clusters enabled evaluation of the algorithm. While the algorithm could not match human performance, it could produce a similar grouping with a significant redundancy reduction (approximately 48%).
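
    One way to implement such a reduction, shown below purely as a sketch, is to convert the free-text responses to TF-IDF features and cluster them with k-means; the paper's specific algorithm and parameters are not reproduced, and the example responses are invented.

        # Minimal sketch of text clustering for elicitation responses (not the
        # paper's algorithm): TF-IDF features + k-means.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.cluster import KMeans

        responses = [
            "sounds wider and more spacious",
            "very wide image, enveloping",
            "dull and muffled high end",
            "lacking treble, muffled",
            "harsh and distorted",
            "distortion on loud passages",
        ]

        X = TfidfVectorizer(stop_words="english").fit_transform(responses)
        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
        for text, label in zip(responses, labels):
            print(label, text)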

    RD Mason, C Kim, T Brookes (2008) Taking head movements into account in measurement of spatial attributes, In: Proceedings of the Institute of Acoustics Reproduced Sound Conference 30(6), pp. 239-246

    Measurements of the spatial attributes of auditory environments or sound reproduction systems commonly only consider a single receiver position. However, it is known that humans make use of head movement to help to make sense of auditory scenes, especially when the physical cues are ambiguous. Results are summarised from a three-year research project which aims to develop a practical binaural-based measurement system that takes head movements into account. Firstly, the head movements made by listeners in various situations were investigated, which showed that a wide range of head movements are made when evaluating source width and envelopment, and minimal head movements are made when evaluating timbre. Secondly, the effect of using a simplified sphere model containing two microphones instead of a head and torso simulator was evaluated, and methods were derived to minimise the errors in measured cues for spatial perception that were caused by the simplification of the model. Finally, the results of the two earlier stages were combined to create a multi-microphone sphere that can be used to measure spatial attributes incorporating head movements in a perceptually-relevant manner, and which allows practical and rapid measurements to be made.

    T Ashby, Tim Brookes, Russell Mason (2014) Towards a head-movement-aware spatial localisation model: Elevation, In: 21st International Congress on Sound and Vibration 2014, ICSV 2014, 4, pp. 2808-2815

    A multiple-microphone-sphere-based localisation model has been developed that predicts source location by modelling the cues given by head movement. In order to inform improvements to this model, a series of experiments was devised to investigate the impact of head movement cues on the localisation response accuracy of human listeners. It was shown that head movements improve elevation localisation response accuracy for noise sources. When pinna cues are impaired the significance of head movement cues increases. The improved localisation resulting from head movement is due to dynamic cues available during the period of movement, and not to improved static cues available once the head is turned to face the sound source. Head movements improve elevation localisation to a similar degree for band-limited sources with differing centre frequencies (500 Hz, 2 kHz and 6 kHz), which indicates that both dynamic ILDs and dynamic ITDs are used. Head movements do not improve elevation response accuracy for programme items with less than an octave bandwidth. Head movements improve elevation response accuracy to a greater degree for sources further away from the equatorial plane.

    J Francombe, TS Brookes, R Mason, F Melchior (2015) Loudness matching multichannel audio programme material with listeners and predictive models, In: 139th International AES Convention papers

    Loudness measurements are often necessary in psychoacoustic research and legally required in broadcasting. However, existing loudness models have not been widely tested with new multichannel audio systems. A trained listening panel used the method of adjustment to balance the loudnesses of eight reproduction methods: low-quality mono, mono, stereo, 5-channel, 9-channel, 22-channel, ambisonic cuboid, and headphones. Seven programme items were used, including music, sport, and a film soundtrack. The results were used to test loudness models including simple energy-based metrics, variants of ITU-R BS.1770, and complex psychoacoustically motivated models. The mean differences between the perceptual results and model predictions were statistically insignificant for all but the simplest model. However, some weaknesses in the model predictions were highlighted.

    The most common surround sound format (often known as 5.1) does not enable accurate positioning of sounds to the side or the rear. Based on a detailed analysis of the binaural hearing cues used by humans, a new surround sound loudspeaker format has been developed using 8 loudspeakers arranged in a regular octagon. Listening tests have been conducted to demonstrate the superiority of this setup compared to 5.1 in terms of accurate sound positioning around a listener. In order to enable development of microphone techniques to capture soundfields for this reproduction system, localisation curves needed to be derived that map the relationship between a range of interchannel time and level differences of signals (ICTDs and ICLDs respectively) and the perceived sound location. Various signals with a range of ICLDs and ICTDs were produced between pairs of adjacent loudspeakers, and listeners were asked to evaluate the perceived sound's direction and its locatedness. The results showed that the curves for the side pairs of adjacent loudspeakers are significantly different to the front and rear pairs. The resulting curves have been used to derive suitable microphone techniques for this loudspeaker setup.
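
    For context only: a textbook way to relate an interchannel level difference to a predicted image direction between a loudspeaker pair is the stereophonic tangent law, sketched below. The study above derived empirical localisation curves rather than relying on this law, so the example is purely illustrative.

```python
# Illustrative sketch: the classic stereophonic "tangent law" mapping an
# interchannel level difference (ICLD) to a predicted image direction between
# a pair of loudspeakers. Shown only to make the ICLD-to-angle mapping
# concrete; it is not the empirical curves derived in the work above.
import numpy as np

def tangent_law_angle(icld_db, half_base_angle_deg):
    """Predicted image angle (deg) for a given ICLD between two loudspeakers
    placed at +/- half_base_angle_deg relative to the listener."""
    g_ratio = 10.0 ** (-icld_db / 20.0)          # gain of the quieter channel
    k = (1.0 - g_ratio) / (1.0 + g_ratio)
    return np.degrees(np.arctan(k * np.tan(np.radians(half_base_angle_deg))))

# Example: 10 dB ICLD between adjacent octagon loudspeakers spaced 45 degrees apart
print(tangent_law_angle(10.0, 22.5))
```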

    Leandro Pires, Maurilio Vieira, Hani Yehia, Tim Brookes, Russell Mason (2020) A new set of directional weights for ITU-R BS.1770 loudness measurement of multichannel audio, In: ICT Discoveries

    The ITU-R BS.1770 multichannel loudness algorithm performs a sum of channel energies with weighting coefficients based on azimuth and elevation angles of arrival of the audio signal. In its current version, these coefficients were estimated based on binaural summation gains and not on subjective directional loudness. Also, the algorithm lacks directional weights for wider elevation angles (|φ| > 30°). A listening test with broadband stimuli was conducted to collect subjective data on directional effects. The results were used to calculate a new set of directional weights. A modified version of the loudness algorithm with these estimated weights was tested against its benchmark using the collected data, and using program material rendered to reproduction systems with different loudspeaker configurations. The modified algorithm performed better than the benchmark, particularly with reproduction systems with more loudspeakers positioned out of the horizontal plane.
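
    To make the algorithm's structure concrete, the sketch below shows the channel-weighted energy summation at the heart of ITU-R BS.1770 (K-frequency-weighting and gating are omitted for brevity); the per-channel weights are where direction-dependent coefficients such as those estimated in this paper would be substituted. The signals and weights used here are illustrative assumptions only.

```python
# Minimal sketch of the channel-weighted energy summation in ITU-R BS.1770
# loudness measurement. K-weighting and gating are omitted; signals are
# synthetic placeholders.
import numpy as np

def weighted_loudness(channels, weights):
    """channels: list of 1-D signal arrays; weights: per-channel gains G_i."""
    energies = [np.mean(x ** 2) for x in channels]     # mean-square power z_i
    total = sum(g * z for g, z in zip(weights, energies))
    return -0.691 + 10.0 * np.log10(total)             # loudness in LKFS-style units

# Example: a 5-channel layout with nominal BS.1770 weights
# (1.0 for L, R, C; 1.41 for the surround channels).
rng = np.random.default_rng(1)
sig = [rng.normal(0, 0.1, 48000) for _ in range(5)]
print(weighted_loudness(sig, [1.0, 1.0, 1.0, 1.41, 1.41]))
```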

    R Mason, N Ford, F Rumsey, B de Bruyn (2000) Verbal and non-verbal elicitation techniques in the subjective assessment of spatial sound reproduction, In: Audio Engineering Society Preprint 5225

    Current research into spatial audio has shown an increasing interest in the way subjective attributes of reproduced sound are elicited from listeners. The emphasis at present is on verbal semantics; however, studies suggest that non-verbal methods of elicitation could be beneficial. Research into the relative merits of these methods has found that non-verbal responses may result in different elicited attributes compared to verbal techniques. Non-verbal responses may be closer to the perception of the stimuli than the verbal interpretation of this perception. There is evidence that drawing is not as accurate as other non-verbal methods of elicitation when it comes to reporting the localisation of auditory images. However, the advantage of drawing is its ability to describe the whole auditory space rather than a single dimension.

    Khan Baykaner, Philip Coleman, Russell Mason, Philip J. B. Jackson, Jon Francombe, Marek Olik, Søren Bech (2015) The Relationship Between Target Quality and Interference in Sound Zones, In: Journal of the Audio Engineering Society 63(1/2), pp. 78-89 Audio Engineering Society

    Sound zone systems aim to control sound fields in such a way that multiple listeners can enjoy different audio programs within the same room with minimal acoustic interference. Often, there is a trade-off between the acoustic contrast achieved between the zones and the fidelity of the reproduced audio program in the target zone. A listening test was conducted to obtain subjective measures of distraction, target quality, and overall quality of listening experience for ecologically valid programs within a sound zoning system. Sound zones were reproduced using acoustic contrast control, planarity control, and pressure matching applied to a circular loudspeaker array. The highest mean overall quality was a compromise between distraction and target quality. The results showed that the term “distraction” produced good agreement among listeners, and that listener ratings made using this term were a good measure of the perceived effect of the interferer.

    Christopher Hummersone, Russell Mason, Tim Brookes (2011) A Perceptually-Inspired Approach to Machine Sound Source Separation in Real Rooms

    Automated separation of the constituent signals of complex mixtures of sound has made significant progress over the last two decades. Unfortunately, completing this task in real rooms, where echoes and reverberation are prevalent, continues to present a significant challenge. Conversely, humans demonstrate a remarkable robustness to reverberation. An overview is given of a project that set out to model some of the aspects of human auditory perception in order to improve the efficacy of machine sound source separation in real rooms. Using this approach, the models that were developed achieved a significant improvement in separation performance. The project also showed that existing models of human auditory perception are markedly incomplete and work is currently being undertaken to model additional aspects that had previously been neglected. Work completed so far has shown that an even greater improvement in separation performance will be possible. The work could have many applications, including intelligent hearing aids and intelligent security cameras, and could be incorporated into many other products that perform automated listening tasks, such as speech recognition, speech enhancement, noise reduction and medical transcription.

    C Kim, Russell Mason, Tim Brookes (2011) Head-movement-aware signal capture for evaluation of spatial acoustics, In: Building Acoustics 18(1), pp. 207-226 Multi Science Publishing

    This research incorporates the nature of head movement made in listening activities, into the development of a quasi-binaural acoustical measurement technique for the evaluation of spatial impression. A listening test was conducted where head movements were tracked whilst the subjects rated the perceived source width, envelopment, source direction and timbre of a number of stimuli. It was found that the extent of head movements was larger when evaluating source width and envelopment than when evaluating source direction and timbre. It was also found that the locus of ear positions corresponding to these head movements formed a bounded sloped path, higher towards the rear and lower towards the front. This led to the concept of a signal capture device comprising a torso-mounted sphere with multiple microphones. A prototype was constructed and used to measure three binaural parameters related to perceived spatial impression - interaural time and level differences (ITD and ILD) and interaural cross-correlation coefficient (IACC). Comparison of the prototype measurements to those made with a rotating Head and Torso Simulator (HATS) showed that the prototype could be perceptually accurate for the prediction of source direction using ITD and ILD, and for the prediction of perceived spatial impression using IACC. Further investigation into parameter derivation and interpolation methods indicated that 21 pairs of discretely spaced microphones were sufficient to measure the three binaural parameters across the sloped range of ear positions identified in the listening test.
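
    As a rough sketch of how two of these binaural parameters can be derived from a pair of ear signals (a simplified illustration with synthetic signals, not the prototype's actual processing), the IACC and ITD can be estimated from the normalised interaural cross-correlation:

```python
# Illustrative sketch: estimating the interaural cross-correlation coefficient
# (IACC) and ITD from a pair of ear signals. Signals are synthetic placeholders.
import numpy as np

def iacc_and_itd(left, right, fs, max_lag_ms=1.0):
    """Return (IACC, ITD in ms): the peak of the normalised interaural
    cross-correlation within +/- max_lag_ms, and the lag of that peak."""
    max_lag = int(fs * max_lag_ms / 1000.0)
    lags = np.arange(-max_lag, max_lag + 1)
    norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    cc = np.array([np.sum(left * np.roll(right, lag)) / norm for lag in lags])
    peak = int(np.argmax(np.abs(cc)))
    return abs(cc[peak]), 1000.0 * lags[peak] / fs

fs = 48000
rng = np.random.default_rng(2)
sig = rng.normal(0, 1, fs)
left, right = sig, np.roll(sig, 24)      # ~0.5 ms interaural delay (wrap-around ignored)
print(iacc_and_itd(left, right, fs))
```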

    Philip Jackson, Mark D Plumbley, Wenwu Wang, Tim Brookes, Philip Coleman, Russell Mason, David Frohlich, Carla Bonina, David Plans (2017) Signal Processing, Psychoacoustic Engineering and Digital Worlds: Interdisciplinary Audio Research at the University of Surrey

    At the University of Surrey (Guildford, UK), we have brought together research groups in different disciplines, with a shared interest in audio, to work on a range of collaborative research projects. In the Centre for Vision, Speech and Signal Processing (CVSSP) we focus on technologies for machine perception of audio scenes; in the Institute of Sound Recording (IoSR) we focus on research into human perception of audio quality; the Digital World Research Centre (DWRC) focusses on the design of digital technologies; while the Centre for Digital Economy (CoDE) focusses on new business models enabled by digital technology. This interdisciplinary view, across different traditional academic departments and faculties, allows us to undertake projects which would be impossible for a single research group. In this poster we will present an overview of some of these interdisciplinary projects, including projects in spatial audio, sound scene and event analysis, and creative commons audio.

    T Ashby, R Mason, T Brookes (2011) Prediction of perceived elevation using multiple pseudo-binaural microphones, In: Audio Engineering Society Preprint 8389

    Computational auditory models that predict the perceived location of sound sources in terms of azimuth are already available, yet little has been done to predict perceived elevation. Interaural time and level differences, the primary cues in horizontal localisation, do not resolve source elevation, resulting in the ‘Cone of Confusion’. In natural listening, listeners can make head movements to resolve such confusion. To mimic the dynamic cues provided by head movements, a multiple microphone sphere was created, and a hearing model was developed to predict source elevation from the signals captured by the sphere. The prototype sphere and hearing model proved effective in both horizontal and vertical localisation. The next stage of this research will be to rigorously test a more physiologically accurate capture device.

    Jon Francombe, James Woodcock, Richard J. Hughes, Russell Mason, Andreas Franck, Chris Pike, Tim Brookes, William J. Davies, Philip J.B. Jackson, Trevor J. Cox, Filippo M. Fazi, Adrian Hilton (2018) Qualitative evaluation of media device orchestration for immersive spatial audio reproduction, In: Journal of the Audio Engineering Society 66(6), pp. 414-429 Audio Engineering Society

    The challenge of installing and setting up dedicated spatial audio systems can make it difficult to deliver immersive listening experiences to the general public. However, the proliferation of smart mobile devices and the rise of the Internet of Things mean that there are increasing numbers of connected devices capable of producing audio in the home. "Media device orchestration" (MDO) is the concept of utilizing an ad hoc set of devices to deliver or augment a media experience. In this paper, the concept is evaluated by implementing MDO for augmented spatial audio reproduction using object-based audio with semantic metadata. A thematic analysis of positive and negative listener comments about the system revealed three main categories of response: perceptual, technical, and content-dependent aspects. MDO performed particularly well in terms of immersion/envelopment, but the quality of listening experience was partly dependent on loudspeaker quality and listener position. Suggestions for further development based on these categories are given.

    LSR Simon, Russell Mason (2011) Spaciousness rating of 8-channel stereophony-based microphone arrays, In: Audio Engineering Society Preprint 8340 Audio Engineering Society

    In previous studies, the localisation accuracy and the spatial impression of 3-2 stereo microphone arrays were discussed. These showed that 3-2 stereo cannot produce stable images to the side and to the rear of the listener. An octagon loudspeaker array was therefore proposed. Microphone array design for this loudspeaker configuration was studied in terms of localisation accuracy, locatedness and sound image width. This paper describes an experiment conducted to evaluate the spaciousness of 10 different microphone arrays used in different acoustical environments. Spaciousness was analysed as a function of sound signal, acoustical environment and microphone array characteristics. It showed that the height of the microphone array and the original acoustical environment are the two variables that have the most influence on the perceived spaciousness, but that microphone directivity and the position of the sound sources are also important.

    T Brookes, Russell Mason (2013) Auditory source width of paired vs single narrow-band signals in terms of centre-frequency, loudness level and interaural cross-correlation coefficient, In: Journal of the Acoustical Society of America Acoustical Society of America

    Auralisation is the process of rendering virtual sound fields. It is used in areas including acoustic design, defence, gaming and audio research. As part of a PhD project concerned with the influence of loudspeaker directivity on the perception of reproduced sound, a fully-computed auralisation system has been developed. For this, acoustic modelling software is used to synthesise and extract binaural impulse responses of virtual rooms. The resulting audio is played over headphones and allows listeners to experience the excerpt being reproduced within the synthesised environment. The main advance with this system is that impulse responses are calculated for a number of head positions, which allows the listeners to move when listening to the recreated sounds. This allows for a much more realistic simulation, and makes it especially useful for conducting subjective experiments on sound reproduction systems and/or acoustical environments which are either not available or are even impractical to create. Hence, it greatly increases the range and type of experiments that can be undertaken at Surrey. The main components of the system are described, together with the results from a validation experiment which demonstrate that this system provides similar results to experiments conducted previously using loudspeakers in an anechoic chamber.
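
    At its core, such an auralisation system convolves dry programme material with a binaural room impulse response for each head position. The sketch below illustrates only that convolution step, with synthetic signals standing in for measured or modelled impulse responses; it is not the system described above.

```python
# Illustrative sketch: rendering binaural ear signals by convolving a dry
# mono signal with a left/right binaural room impulse response (BRIR) pair.
# All signals here are synthetic placeholders.
import numpy as np
from scipy.signal import fftconvolve

def auralise(dry, brir_left, brir_right):
    """Convolve a mono signal with a BRIR pair to give two-channel ear signals."""
    left = fftconvolve(dry, brir_left)
    right = fftconvolve(dry, brir_right)
    return np.stack([left, right], axis=-1)

# Example with synthetic data: a click auralised through decaying-noise "BRIRs".
fs = 48000
dry = np.zeros(fs); dry[0] = 1.0
decay = np.exp(-np.arange(fs // 2) / (0.3 * fs))
rng = np.random.default_rng(3)
brir_l = rng.normal(0, 1, fs // 2) * decay
brir_r = rng.normal(0, 1, fs // 2) * decay
ears = auralise(dry, brir_l, brir_r)
print(ears.shape)
```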

    Additional publications