Professor Sabine Braun


Professor of Translation Studies, Director of the Centre for Translation Studies, Co-Director (FASS) of the Surrey Institute for People-Centred Artificial Intelligence
MA (Heidelberg), Dr Phil (Tübingen)

About

Areas of specialism

Human-machine interaction and integration in translation and interpreting; Technologies in interpreting; Video-mediated/Distance interpreting; Audio description; Intersemiotic and audiovisual translation; Educational technologies; Virtual reality

University roles and responsibilities

  • Director of the Centre for Translation Studies
  • Associate Dean (Research & Innovation), 2017-21
  • Co-Director, Surrey Institute for People-Centred Artificial Intelligence

    My qualifications

    MA Translation
    University of Heidelberg
    Dr Phil in Applied English Linguistics
    University of Tübingen

    Research

    Research interests

    Research projects

    Active projects

    Supervision

    Postgraduate research supervision

    Teaching

    Publications

    G. Hieke, E. D. Williams, P. Gill, G. Black, L. Islam, C. Vindrola-Padros, J. Yargawa, Sabine Braun, K. L. Whitaker (2024) Uptake and experience of professional interpreting services in primary care in a South Asian population: a national cross-sectional study, In: BMC Primary Care. BMC

    Background: Interpreting services bridge language barriers that may prevent patients and clinicians from understanding each other, impacting quality of care and health outcomes. Despite this, there is limited up-to-date evidence regarding the barriers to and facilitators of uptake in primary care. The aim of this study was to ascertain current national uptake and experience of interpreting services in primary care (general practice) by South Asian communities in England. Methods: We conducted a national cross-sectional survey in 2023 with people with limited or no English language proficiency (n=609). Multilingual researchers interviewed people from Bangladeshi (n=213), Indian (n=200), and Pakistani (n=196) backgrounds from four regions in England (Greater London, Midlands, Yorkshire and the Humber, North West). Results: Sixty-three percent of participants reported using professional interpreting services in primary care. The most common modality was face-to-face interpreting (55%), followed by telephone (17%) and video (8%). Multivariable analysis identified several correlates of lower uptake: participants from Indian backgrounds, those living in the Midlands, and those whose family member/friend interpreted for them within the past year were less likely to have used a professional interpreter provided by their general/family practice. Participants who had visited primary care within the last 12 months, had requested an interpreter but were told they could not have one, were informed about professional interpreting services, and were given choice in their language support were more likely to have used a professional interpreter. Conclusions: Our approach provides novel data on professional interpreting service use and evidence about the factors that may play a role in patient uptake and experience

    Sabine Braun (2024) Distance interpreting as a professional profile, In: Handbook of the Language Industry, pp. 449-472. De Gruyter Mouton

    This chapter explores the evolving practice of distance interpreting (DI), which involves using audio or video communication technology to facilitate interpreting when the interpreter and at least one communication participant are not in the same physical space. While DI is not a new practice, the COVID-19 pandemic accelerated its adoption, making it a widespread modality of interpreting for virtual and hybrid events. The chapter begins with a comprehensive characterization of DI, covering key concepts and systematizing various DI configurations such as virtual interpreting and remote interpreting. It then examines DI practices in conference, legal and healthcare interpreting – the best-documented fields – highlighting the major developments in each field and discussing their implications for the quality and effectiveness of communication with DI. The final section explores current research topics in DI, including interpreters' perceptions, interpreting performance quality, human factors such as stress and fatigue, interactional aspects, working conditions, strategies and the potential for adapting to DI. By tracing each topic across different fields of interpreting, this section aims to highlight shared concerns regarding DI as well as differences across fields in order to gain a nuanced understanding of DI and its impact on interpreting workflows, interpreters' experiences, performance and wellbeing.

    Jaleh Delfani, Constantin Orasan, Hadeel Saadany, Ozlem Temizoz, Eleanor Taylor-Stilgoe, Diptesh Kanojia, Sabine Braun, Barbara Schouten Google Translate Error Analysis for Mental Healthcare Information: Evaluating Accuracy, Comprehensibility, and Implications for Multilingual Healthcare Communication

    This study explores the use of Google Translate (GT) for translating mental healthcare (MHealth) information and evaluates its accuracy, comprehensibility, and implications for multilingual healthcare communication through analysing GT output in the MHealth domain from English to Persian, Arabic, Turkish, Romanian, and Spanish. Two datasets comprising MHealth information from the UK National Health Service website and information leaflets from The Royal College of Psychiatrists were used. Native speakers of the target languages manually assessed the GT translations, focusing on medical terminology accuracy, comprehensibility, and critical syntactic/semantic errors. GT output analysis revealed challenges in accurately translating medical terminology, particularly in Arabic, Romanian, and Persian. Fluency issues were prevalent across various languages, affecting comprehension, mainly in Arabic and Spanish. Critical errors arose in specific contexts, such as bullet-point formatting, specifically in Persian, Turkish, and Romanian. Although improvements are seen in longer-text translations, there remains a need to enhance accuracy in medical and mental health terminology and fluency, whilst also addressing formatting issues for a more seamless user experience. The findings highlight the need to use customised translation engines for MHealth translation and the challenges of relying solely on machine-translated medical content, emphasising the crucial role of human reviewers in multilingual healthcare communication.

    Sabine Braun, Kim Starr, Jorma Laaksonen (2021) Comparing human and automated approaches to visual storytelling, In: Sabine Braun, Kim Starr (eds.), Innovation in Audio Description Research, pp. 159-196. Routledge

    This chapter focuses on the recent surge of interest in automating methods for describing audiovisual content, whether for image search and retrieval, visual storytelling or in response to the rising demand for audio description following changes to regulatory frameworks. While computer vision communities have intensified research into the automatic generation of video descriptions (Bernardi et al., 2016), the automation of still image captioning remains a challenge in terms of accuracy (Husain and Bober, 2016). Moving images pose additional challenges linked to temporality, including co-referencing (Rohrbach et al., 2017) and other features of narrative continuity (Huang et al., 2016). Machine-generated descriptions are currently less sophisticated than their human equivalents, and frequently incoherent or incorrect. By contrast, human descriptions are more elaborate and reliable but are expensive to produce. Nevertheless, they offer information about visual and auditory elements in audiovisual content that can be exploited for research into machine training. Based on our research conducted in the EU-funded MeMAD project, this chapter outlines a methodological approach for a systematic comparison of human- and machine-generated video descriptions, drawing on corpus-based and discourse-based approaches, with a view to identifying key characteristics and patterns in both types of description, and exploiting human knowledge about video description for machine training. A model for machine-generated content description is therefore likely to be a more achievable goal in the shorter term than a model for generating elaborate audio descriptions. Relevance Theory (RT) focuses on the human ability to derive meaning through inferential processes. RT asserts that these processes are highly inferential, drawing on common knowledge and cultural experience, and that they are guided by the human tendency to maximise relevance and the assumption that speakers/storytellers normally choose the optimally relevant way of communicating their intentions. Moving on from basic comprehension of events to interpretation and conjecture requires the viewer to employ ‘extradiegetic’ references such as social convention, cultural norms and life experience.

    Sabine Braun (2005) From pedagogically relevant corpora to authentic language learning contents, In: ReCALL 17(1), pp. 47-64. Cambridge University Press

    The potential of corpora for language learning and teaching has been widely acknowledged and their ready availability on the Web has facilitated access for a broad range of users, including language teachers and learners. However, the integration of corpora into general language learning and teaching practice has so far been disappointing. In this paper, I will argue that the shape of many existing corpora, designed with linguistic research goals in mind, clashes with pedagogic requirements for corpus design and use. Hence, a ‘pedagogic mediation of corpora’ is required (cf. Widdowson, 2003). I will also show that the realisation of this requirement touches on both the development of appropriate corpora and the ways in which they are exploited by learners and teachers. I will use a small English Interview Corpus (ELISA) to outline possible solutions for a pedagogic mediation. The major aspect of this is the combination of two approaches to the analysis and exploitation of a pedagogically relevant corpus: a corpus-based and a discourse-based approach.

    This paper reports on a long-term European project, a collaboration between academic researchers and non-academic institutions, to investigate the quality and viability of video-mediated interpreting in legal proceedings (AVIDICUS: Assessment of Video-Mediated Interpreting in the Criminal Justice System).

    Mira Kadrić, Sabine Braun (2014) Giving interpreters a voice: interpreting studies meets theatre studies, In: The Interpreter and Translator Trainer 8(3), pp. 452-468. Routledge

    Interpreters have to negotiate interpersonal power relations, for which their professional training often leaves them insufficiently prepared. The article outlines an approach to organising the teaching of interpreters with a view to giving them a voice under challenging social constraints. From the point of view of educational sociology this implies strengthening students' individual potential for self-determination on a number of levels, especially in taking on increased social responsibility. This provides the basis for specifying didactic strategies tailored to individual forms of interpreting, incorporating approaches adapted from other disciplines, especially practical theatre studies, into the context of interpreter training. The interdisciplinary elements are not simply used in a cumulative fashion, but are complementary to each other. From the participant's perspective they can be regarded as dialogically structured life contexts, and from the observer's perspective they are construed as a system. This idea is illustrated in more detail with methods taken from the Theatre of the Oppressed.

    P D Ritsos, R Gittins, S Braun, C Slater, J C Roberts (2013) Training Interpreters using Virtual Worlds. Springer-Verlag Berlin Heidelberg

    With the rise in population migration there has been an increased need for professional interpreters who can bridge language barriers and operate in a variety of fields such as business, legal, social and medical. Interpreters require specialized training to cope with the idiosyncrasies of each field, and their potential clients need to be aware of professional parlance. We present 'Project IVY'. In IVY, users can make a selection from over 30 interpreter training scenarios situated in the 3D virtual world. Users then interpret the oral interaction of two avatar actors. In addition to creating different 3D scenarios, we have developed an asset management system for the oral files and permit users (mentors of the trainee interpreters) to easily upload and customize the 3D environment and observe which scenario is being used by a student. In this article we present the design and development of the IVY Virtual Environment and the asset management system. Finally, we discuss our plans for further development.

    Sabine Braun (2007) Audio Description from a discourse perspective: a socially relevant framework for research and training, In: Linguistica Antverpiensia, New Series – Themes in Translation Studies 6, pp. 357-369

    The topic of this paper is Audio Description (AD) for blind and partially sighted people. I will outline a discourse-based approach to AD focussing on the role of mental modelling, local and global coherence, and different types of inferences (explicatures and implicatures). Applying these concepts to AD, I will discuss initial insights and outline questions for empirical research. My main aim is to show that a discourse-based approach to AD can provide an informed framework for research, training and practice.

    Sabine Braun (2020)"You are just a disembodied voice really": Perceptions of video remote interpreting by legal interpreters and police officers, In: Linking up with Video: Perspectives on interpreting practice and researchpp. 47-78 John Benjamins

    This contribution is devoted to the voices of users of video remote interpreting (VRI) in a particular setting, namely legal interpreters and police officers. Focusing on an aspect that has received little attention to date, viz. the interpreters' and legal stakeholders' perceptions of VRI as a novel configuration in the legal setting, we use the Social Construction of Technology (SCOT) as a theoretical framework to analyse a set of interviews that were conducted with interpreters and police officers after they had completed a simulated VRI session. As a first step, the participants were prompted to compare this simulated experience to their real-life experience to check the degree of reality of the simulated encounters. Next, they were asked to talk about attitudes towards VRI and to reflect on their experience with VRI during the simulation. Among the key outcomes of this investigation is that the two social groups - police officers and interpreters - have different views, but also that there is a considerable degree of variation among the interpreters, indicating a low degree of stabilisation of VRI as a concept and practice among the interpreters.

    Zeljko Radic, Sabine Braun, Elena Davitti (2023) Introducing Speech Recognition in Non-live Subtitling to Enhance the Subtitler Experience, In: Proceedings of the International Conference HiT-IT 2023, pp. 167-176

    Interlingual Subtitle Voicing (ISV) is a new technique that focuses on using speech recognition (SR), rather than traditional keyboard-based techniques, for the creation of non-live subtitles. SR has successfully been incorporated into intralingual live subtitling environments for the purposes of accessibility in major languages (real-time subtitles for the deaf and hard of hearing). However, it has not yet been integrated as a helpful tool for the translation of non-live subtitles to any great and meaningful extent, especially for lower-resourced languages like Croatian. This paper presents selected results from a larger PhD study entitled 'Interlingual Subtitle Voicing: A New Technique for the Creation of Interlingual Subtitles, A Case Study in Croatian'. More specifically, the paper focuses on the second supporting research question, which explores participants' feedback about the ISV technique, as a novel workflow element, and the accompanying technology. To explore this technique, purpose-made subtitling software was created, namely SpeakSubz. The tool is continually enhanced, akin to software updates, with enhancements informed by participants' empirical results and qualitative feedback and shaped by subtitlers' needs. Some of the feedback from the main ISV study is presented in this paper.

    Sabine Braun (2006) Multimedia communication technologies and their impact on interpreting, In: H Gerzymisch-Arbogast (ed.), Proceedings of the Marie Curie Euroconferences MuTra: Audiovisual Translation Scenarios, Copenhagen, 1-5 May 2006. Online

    In line with the aim of the MuTra conference to address "the multiple (multilingual, multimedia, multimodal and polysemiotic) dimensions of modern translation scenarios" and to raise "questions as to the impact of new technologies on the form, content, structure and modes of translated products" (Gerzymisch-Arbogast 2007: 7), this paper will investigate the impact of multimedia communication technologies on interpreting. The use of these technologies has led to new forms of interpreting in which interpreting takes place from a distance, aided by technical mediation. After reviewing the major new and emerging forms, I will outline a set of research questions that need to be addressed and, by way of example, discuss the results of research on interpreter adaptation in videoconference interpreting.

    S Braun, K Kohn (2012) Towards a pedagogic corpus approach to business and community interpreter training, In: B Ahrens, M Albl-Mikasa, C Sasse (eds.), Dolmetschqualität in Praxis, Lehre und Forschung. Festschrift für Sylvia Kalina [Interpreting quality in practice, teaching and research. Festschrift for Sylvia Kalina], pp. 185-204. Gunter Narr

    This paper will focus on the use of spoken corpora in this context. 'Applied Corpus Linguistics' has produced a growing body of research into the use of corpora in language pedagogy, with most recent work focusing on spoken and multimedia corpora for language teaching. We will argue that interpreter training for business and community settings can benefit immensely from this research and we discuss how these approaches can be adapted to suit the needs of business and community interpreter training. Section 2 provides further background to contextualise the idea and the concept of corpus-based interpreter training. Sections 3 and 4 outline a discourse processing model of interpreting and a range of source text related challenges of interpreting as a framework for developing appropriate annotation categories. Section 5 presents initial ideas for the design of a pedagogical corpus for interpreter training. Section 6 concludes the paper by highlighting how this approach is integrated into the wider context of the IVY project and its aim to support business and community interpreter training.

    This paper explores data from video-mediated remote interpreting (RI) which was originally generated with the aim of investigating and comparing the quality of the interpreting performance in onsite and remote interpreting in legal contexts. One unexpected finding of this comparison was that additions and expansions were significantly more frequent in RI, and that their frequency increased further after a phase of familiarisation and training for the participating interpreters, calling for a qualitative exploration of the motives and functions of the additions and expansions. This exploration requires an appropriate methodology. Whilst introspective data give insights into interpreting processes and the motivations guiding the interpreter’s choices, they tend to be unsystematic and incomplete. Micro-analytical approaches such as Conversation Analysis are a promising alternative, especially when enriched with social macro-variables. In line with this, the present paper has a dual aim. The primary aim is to explore the nature of additions and expansions in RI, examining especially to what extent they are indicative of interpreting problems, to what degree they are specific to the videoconference situation, what they reveal about, and how they affect the interpreter’s participation in RI. The secondary aim is to evaluate the micro-analytical approach chosen for this exploration.

    Eloy Rodríguez González, Muhammad Ahmed Saeed, Tomasz Korybski, Elena Davitti, Sabine Braun (2023)Reimagining the remote simultaneous interpreting interface to improve support for interpreters, In: Technological Innovation Put to the Service of Language Learning, Translation and Interpreting: Insights from Academic and Professional Contextspp. 227-246 Peter Lang

    Remote simultaneous interpreting (RSI) draws on Information and Communication Technologies (ICTs) to facilitate multilingual communication by connecting conference interpreters to in-presence, virtual or hybrid events. Early solutions for RSI involved interpreters working in interpreting booths with physical hardware. However, in recent years, cloud-based solutions for RSI have emerged, with innovative Simultaneous Interpreting Delivery Platforms (SIDPs) at their core, enabling RSI delivery from anywhere. Initial explorations of the cloud-based solutions suggest that there is room for improving many of the widely used SIDPs. This paper outlines an ongoing experimental study that investigates two aspects of SIDPs: the design of the interpreter interface and the integration of automatic speech recognition (ASR) in the interface to aid/augment the interpreter's source-text comprehension. Preliminary pilot study data suggest that interpreters prefer cleaner interfaces with a better view of the speaker's hand gestures and body language. Performance analysis of a subsample of three participants indicates that while the most experienced interpreter had a similar performance across different experimental conditions (i.e., presentation of source speech with/without ASR-generated transcript), differences were apparent for the other two interpreters. Keywords: remote simultaneous interpretation (RSI), simultaneous interpreting delivery platforms (SIDPs), presence, user experience, automatic speech recognition (ASR)

    S Braun (2010) Getting past 'Groundhog Day': Spoken multimedia corpora for student-centred corpus exploration, In: T Harris, M Moreno (eds.), Corpus Linguistics in Language Teaching, pp. 75-98. Peter Lang

    Since the pioneering work of John Sinclair on building and using corpora for researching, describing and teaching language, much thought has been given to corpora in Applied Linguistics (Hunston 2002), how to use corpora in language teaching (Sinclair 2004), teaching and learning by doing corpus analysis (Kettemann / Marko 2002) and similar themes. A look at the titles of recent papers, monographs and edited volumes—which are printed in italics in this introduction—suggests that Applied Corpus Linguistics (Connor / Upton 2004) has established itself as a specific and expanding field of study. It has provided ideas on how to manage the step from corpora to classroom (O’Keeffe et al. 2007) and has produced a growing body of research into the use of corpora in the foreign language classroom (Hidalgo et al. 2007). At face value, the enthusiasm of the research community seems to be increasingly shared by practising teachers. At many teacher training seminars at which I have discussed the use(fulness) of corpus resources, I have met teachers who—at the end of the seminar—were eager to use corpora with their students and were especially interested in the growing number of easily accessible web-based resources. But in spite of everyone’s best intentions, the use of corpora in language classrooms remains the exception, and the question of what it takes to get past ‘Groundhog Day’ in corpus-based language learning and teaching is far from being solved. Spoken corpora may not be the obvious solution. The use of Spoken corpora in Applied Linguistics (Campoy / Luzón 2007) is usually considered to be more challenging than the use of written corpora, since spoken language is often perceived to be ‘messy’, grammatically challenging and lexically poor. Moreover, spoken corpora have traditionally been more difficult to build and distribute. However, multimedia technologies have not only made this easier but they have also opened up new ways of exploiting corpus data. Against this backdrop, this paper will argue that spoken multimedia corpora are not simply an interesting type of corpus for language learning, but that they can in fact lead the way in bringing corpus technology and language pedagogy together (Braun et al. 2006). After a brief review of some of the prevailing obstacles for a more wide-spread use of corpora by students and some common approaches and solutions to the problems at hand (in section 2), one approach to designing a pedagogically viable corpus will be discussed in more detail (in section 3). The approach will then be exemplified (in section 4) using the ELISA corpus, a spoken multimedia corpus of professional English, to illustrate how corpus-based work can be expanded beyond the conventional methods of ‘data-driven learning’. The paper will be concluded with an outlook on some more recent initiatives of spoken corpus development (in section 5). The wider aim of this paper is to stimulate further discussion about, and research into, the development of pedagogically viable corpora, tools and methods which can foster student-centred corpus use in language learning and other areas such as translator / interpreter training and the study of language-based communication in general.

    S Braun, J Taylor (2012) AVIDICUS comparative studies - part I: Traditional interpreting and remote interpreting in police interviews, In: S Braun, J Taylor (eds.), Videoconference and Remote Interpreting in Criminal Proceedings, pp. 99-117. Intersentia
    Sabine Braun, Kim Starr (2020) Innovation in Audio Description Research. Routledge

    This state-of-the-art volume covers recent developments in research on Audio Description, the professional practice dedicated to making audiovisual products, artistic artefacts and performances accessible to those with supplementary visual and cognitive needs. This book is key reading for researchers, advanced students and practitioners of audiovisual translation, media, film and performance studies, as well as those in related fields including cognition, narratology, computer vision and artificial intelligence

    Diana Singureanu, Graham Hieke, Joanna Gough, Sabine Braun (2023) 'I am his extension in the courtroom': How court interpreters cope with the demands of video-mediated interpreting in hearings with remote defendants, In: Interpreting Technologies: Current and Future Trends, pp. 72-108. John Benjamins

    Video-mediated interpreting (VMI) remains perhaps one of the most controversial topics in interpreting studies. The practice of VMI has, however, grown rapidly during the Covid-19 pandemic, and the global shift towards working online for prolonged periods has also shifted the focus of research on VMI from investigating the feasibility of VMI to developing a better understanding of the factors that can contribute to sustaining it. Pre-pandemic research on VMI has spanned all fields of interpreting: conference (e.g., Moser-Mercer, 2003; Mouzourakis, 2006; Roziner and Shlesinger, 2010), legal (e.g., Braun and Taylor, 2012a; Braun, Davitti and Dicerto, 2018; Fowler, 2018) and medical (e.g., Locatis et al., 2010; De Boe, 2020; Hansen and Svennevig, 2021). This research has generated mixed, sometimes contradictory results. In relation to legal settings, the main lines of enquiry revolve around the classical aspects of interpreting quality (Balogh and Hertog, 2011; Braun, 2013, 2014; Braun and Taylor, 2012b; Miler-Cassino and Rybinska, 2012), interpreters’ working conditions (Fowler, 2018), the interpreter’s role (Devaux, 2016) as well as participants’ behaviour (Fowler, 2016), the communicative ecology of VMI (Licoppe and Verdier, 2013; Licoppe, 2015; Licoppe and Veyrier, 2017) and the impact of VMI training on interpreting performance and stakeholders’ perceptions of VMI (e.g. Braun et al., 2012; Braun, 2014). The existing research conducted within legal settings indicates that VMI creates a range of challenges, but also that some of them are specific to the actual configuration of VMI that is used, especially the distribution of participants. For court hearings in which the interpreter is not co-located with the participant requiring the interpretation, two of the main challenges identified in previous research are a lack of rapport between the interpreter and the remote end user (Fowler, 2016) and limitations regarding the mode of interpreting. Licoppe and Verdier (2013), for example, found that interpreters could only work in consecutive mode in this setting and were thus reliant on the other participants’ awareness or willingness to pause for the interpretation. However, further studies of authentic legal proceedings are needed to confirm the identified challenges and to establish best practices for each configuration. There also seems to be a lack of consensus among interpreters regarding their perceptions of VMI, with some interpreters expressing a more positive attitude towards it and others struggling to see beyond the ‘cost-cutting’ implications (Devaux, 2016; Braun, Davitti and Dicerto, 2018). This raises the question of whether there are also individual differences at play influencing how interpreters renegotiate the video-mediated modality with its added layers of complexity (Braun, 2018). In this chapter we examine one particular configuration of VMI in which a defendant takes part in the proceedings via video link from prison whilst all other participants including the interpreter are physically present in the courtroom. Drawing on observation and interview data collected between March 2019 and March 2020 in magistrates’ courts in the London area, with a focus on extradition hearings, we examine the complexities of VMI in this configuration, and the associated strategies employed by the interpreters who participated in this study. We focus on three aspects, namely a) challenges encountered by interpreters when working in the described VMI configuration, i.e. 
factors that have a negative impact on the interpreting process or outcome and/or on the proceedings in this configuration, and interpreters’ responses (strategies); b) aspects that compensate for the challenges; and c) positive aspects of VMI. This is a first step towards answering our central question in this study, namely to what extent the selection of interpreting strategies is related to individual differences between interpreters.

    A Chmiel, M Tymczyńska, S Braun, C Slater (2012) Kształcenie kooperatywne i sytuacyjne metodą projektów: zastosowanie wirtualnego środowiska IVY w szkoleniu tłumaczy ustnych [Cooperative learning and situated project-based learning: Integrating the IVY virtual environment in interpreter training], In: P Janikowski (ed.), Tłumaczenie Ustne - Teoria, Praktyka, Dydaktyka [Interpreting — theory, practice, didactics] 2: Sta, pp. 213-240. Wydawnictwo WSL

    This paper reports on an empirical case study conducted to investigate the overall conditions and challenges of integrating corpus materials and corpus-based learning activities into English-language classes at a secondary school in Germany. Starting from the observation that, in spite of the large amount of research into corpus-based language learning, hands-on work with corpora has remained an exception in secondary schools, the paper first outlines a set of pedagogical requirements for corpus integration and the approach which has formed the basis for designing the case study. Then the findings of the study are reported and discussed. As a result of the methodological challenges identified in the study, the author argues for a move from 'data-driven learning' to needs-driven corpora, corpus activities and corpus methodologies.

    Kim Starr, Sabine Braun (2020) Audio description 2.0: Re-versioning audiovisual accessibility to assist emotion recognition, In: Innovation in Audio Description Research, pp. 97-120. Routledge

    In this chapter we consider the feasibility of using audio description (AD) as a method for enhancing cognitive, rather than physical, audiovisual accessibility. It is based on a study that centres on remodelling AD for an entirely new audience: individuals experiencing emotion recognition difficulties (ERD) which make identifying the affective states of protagonists portrayed in narratively simulated social situations more challenging. As an alternative to applying standard AD techniques we adopt a purpose-driven approach, trialling bespoke AD focused on reinforcing the emotional subtext and causal links in audiovisual material, with the intention of improving narrative comprehension and thus enhancing general cognitive availability. Working with an audience of young individuals on the autism spectrum, our study considers two approaches to remodelling AD for emotion recognition. The first ERD-specific approach, derived from an endgame-oriented text typology (Reiss, in Nord, 1997: 37–38) that was primarily ‘operative’, resulted in target texts which were descriptive in nature (‘EMO-AD’); by contrast our second ERD variant, evolved from a text type grounded in an ‘operative-expressive’ aesthetic, took the form of a more interpretive text (‘CXT-AD’). For the sake of completeness each of these ERD-AD variants was compared, according to content and style, with standard AD designed for sight-impaired audiences (‘BVI-AD’).

    Wei Zhang, Elena Davitti, Sabine Braun (2024) Charting the landscape of remote medical interpreting: an international survey of interpreters working in remote modalities in healthcare services, In: Perspectives: Studies in Translation Theory and Practice. Routledge

    The COVID-19 pandemic has accelerated the growth of remote interpreting, yet research on several aspects of remote medical interpreting (RMI) remains limited. Against this backdrop, this study reports key findings from a survey of professional healthcare interpreters with experience in RMI (N=47), addressing various gaps in RMI research, including interlocutor distribution and technology use, factors affecting interpreters’ perceived impact of RMI on their performance, medical settings in which RMI is used, and working conditions. Results indicate that most interpreters have experience with both telephone interpreting (TI) and video interpreting (VI) in the healthcare context, encountering various medical settings, distribution patterns and technological configurations. Quantitative findings reveal four similar normative configurations of interlocutor distribution in both TI and VI, each with slightly different normative technologies. TI is perceived to have a more negative impact on overall performance than VI, which receives more positive evaluations regarding source text comprehension, target text production, rapport between interlocutors, concentration, stress, and fatigue. Qualitative results reveal common challenges shared by TI and VI, with COVID-19 exacerbating some of them. This study contributes to establishing a systematic understanding of the complexity of RMI across multiple dimensions and provides a nuanced perspective on both TI and VI.

    Katriina L. Whitaker, Demi Krystallidou, Emily D. Williams, Georgia Black, Cecilia Vindrola-Padros, Paramjit Gill, Sabine Braun (2022) Understanding Uptake and Experience of Interpreting Services in Primary Care in a South Asian Population in the UK, In: JAMA Network Open 5(11), e2244092. American Medical Association

    Introduction: Addressing language barriers in accessing health care may improve equitable access in line with current United Nations Sustainable Development Goals.1 English proficiency is associated with socioeconomic position, social segregation, and employment,2 and the intersectionality of ethnicity, immigration status, and lack of language proficiency results in cumulative disadvantage.3 Guidance for commissioners in the UK states that language and communication requirements should not prevent patients from receiving equitable care.4 Limited evidence is available on interpreting service uptake and patient experience that is crucial to ensure services reduce ethnic and socioeconomic health inequalities.5 We aimed to address this evidence gap. Methods: This national, cross-sectional community-based pilot survey conducted from December 1, 2020, to January 5, 2021, adhered to the STROBE reporting guideline. Ethical approval was obtained from the University of Surrey. Survey interviews were conducted by telephone by multilingual researchers, and participants provided verbal informed consent. Eligibility criteria included self-reported limited or no English language proficiency, age older than 18 years, and self-reported Pakistani, Indian, or Bangladeshi ethnicity. Convenience and snowball sampling were undertaken to identify eligible participants across the UK, including London, Birmingham, Leicester, Manchester/Oldham, and Bradford. Measures included type(s) of interpreting service used and perceived barriers to their uptake. We evaluated differences between people who had and had not used interpreting services with χ2 and Fisher exact tests. Two-sided P < .05 indicated statistical significance. Analyses were performed using SPSS, version 28.0.1.0 (IBM Corporation). Results: Of 105 people in the sample, 35 (33.3%) each reported Indian, Bangladeshi, or Pakistani ethnicity, with ages ranging from 18 to 79 years. Fifty-four participants (51.4%) were women and 51 (48.6%) were men; 83 (79.0%) were married or cohabiting; and 17 (16.2%) had no formal education. Sixty-three participants (60.0%) reported using at least 1 type of formal interpreting service, including face-to-face (57 [54.3%]), telephone (18 [17.1%]), and video-mediated (5 [4.8%]). Forty-seven participants (44.8%) reported family or friends interpreting for them during consultations; of these, only 18 (38.3%) reported formal interpreting service uptake. Thirty-four participants (32.4%) reported having a physician or nurse who speaks their language; of these, 11 (32.4%) used a formal interpreting service. Thirty-seven participants (35.2%) reported being offered a choice of language support by primary care clinicians. Compared with participants who had never used formal interpreting services, those who had were more likely to have no formal education (16 of 63 [25.4%] vs 1 of 42 [2.4%]), report lower confidence in managing conditions (24 of 63 [38.1%] vs 7 of 42 [16.7%]), perceive a need for language support (51 of 63 [81.0%] vs 16 of 42 [38.1%]), and have been told about language support by primary care clinicians (35 of 63 [55.6%] vs 12 of 42 [28.6%]) (Table). The Figure summarizes interpreting service barriers. Discussion: This cross-sectional survey study found that most respondents reported using at least 1 type of formal interpreting service, with face-to-face interpreting being most common, followed by telephone interpreting. Video-mediated interpreting use was rare. 
However, nearly half of the respondents relied on family or friends. Raising awareness of professional interpreting services, patient education, and addressing perceived barriers to accessing formal language support services have the potential to improve access among groups who lack English proficiency. Our study has some limitations. Data were collected during the COVID-19 pandemic, which may have affected responses, although we did not restrict responses to this timescale, and some likely related to prepandemic experiences. Although we found important indications about the likely influences on interpreting service uptake, larger-scale studies are required to account for the selection bias associated with snowball sampling.6 Use of formal interpreters is known to close gaps in quality of clinical care for patients with limited English proficiency. Our survey, which was developed to understand why uptake and experiences may vary, can be used at scale to obtain this vital information to improve equitable health service access.

    S Braun (2008) Audiodescription Research: State of the Art and Beyond, In: Translation Studies in the New Millennium 6, pp. 14-30. School of Applied Languages, Bilkent University, Ankara, Turkey

    Audiodescription (AD) is a growing arts and media access service for visually impaired people. As a practice rooted in intermodal mediation, i.e. 'translating' visual images into verbal descriptions, it is in urgent need of interdisciplinary research-led grounding. Seeking to stimulate further research in this field, this paper aims to discuss the major dimensions of AD, give an overview of completed and ongoing research relating to each of these dimensions and outline questions for further academic study.

    Demi Krystallidou, Özlem Temizöz, Fang Wang, Melanie de Looper, Emilio Di Maria, Nora Gattiglia, Stefano Giani, Graham Hieke, Wanda Morganti, Cecilia Serena Pace, Barbara Schouten, Sabine Braun (2024) Communication in refugee and migrant mental healthcare: A systematic rapid review on the needs, barriers and strategies of seekers and providers of mental health services, In: Health Policy (Amsterdam) 139, 104949. Elsevier B.V.

    • There is a strong need for language support in mental health services.
    • Migrants, refugees and healthcare professionals are not aware of language support options.
    • Systemic, interpersonal, and intrapersonal factors affect uptake of language support options.
    • Improving language support and cultural competency in mental health services is essential.
    • Seeking, providing and accessing mental health services is a complex system.

    Migrants and refugees may not access mental health services due to linguistic and cultural discordance between them and health and social care professionals (HSCPs). The aim of this review is to identify the communication needs and barriers experienced by third-country nationals (TCNs), their carers, and HSCPs, as well as the strategies they use and their preferences when accessing/providing mental health services and language barriers are present. We undertook a rapid systematic review of the literature (01/01/2011 – 09/03/2022) on seeking and/or providing mental health services in linguistically discordant settings. Quality appraisal was performed, data was extracted, and evidence was reviewed and synthesised qualitatively. 58/5,650 papers met the inclusion criteria. Both TCNs (and their carers) and HSCPs experience difficulties when seeking or providing mental health services and language barriers are present. TCNs and HSCPs prefer linguistically and culturally concordant provision of mental health services but professional interpreters are often required. However, their use is not always preferred, nor is it without problems. Language barriers impede TCNs’ access to mental health services. Improving language support options and cultural competency in mental health services is crucial to ensure that individuals from diverse linguistic and cultural backgrounds can access and/or provide high-quality mental health services.

    When interpreting takes place in a videoconference setting, the intrinsic technological challenges and the very remoteness of the interpreters' location compound the complexity of the task. Existing research on remote interpreting and the problems it entails focusses on remote conference interpreting, in which the interpreters are physically separated from the conference site while the primary interlocutors are together on site as usual. In an effort to broaden the scope of research in the area of remote interpreting to include other types and to address other questions, in particular that of the interpreters' adaptability to new working conditions, this paper analyses small-group videoconferences in which the primary interlocutors as well as the interpreters all work from different locations. The findings from an empirical case study (based on recordings of videoconference sessions as well as introspective data) are used to identify and exemplify different types of interpreter adaptation.

    Sabine Braun, Kim Linda Starr (2019) Finding the Right Words: Investigating Machine-Generated Video Description Quality Using a Corpus-Based Approach, In: Journal of Audiovisual Translation

    This paper examines first steps in identifying and compiling human-generated corpora for the purpose of determining the quality of computer-generated video descriptions. This is part of a study whose general ambition is to broaden the reach of accessible audiovisual content through semi-automation of its description for the benefit of both end-users (content consumers) and industry professionals (content creators). Working in parallel with machine-derived video and image description datasets created for the purposes of advancing computer vision research, such as Microsoft COCO (Lin et al., 2015) and TGIF (Li et al., 2016), we examine the usefulness of audio descriptive texts as a direct comparator. Cognisant of the limitations of this approach, we also explore alternative human-generated video description datasets including bespoke content description. Our research forms part of the MeMAD (Methods for Managing Audiovisual Data) project, funded by the EU Horizon 2020 programme.

    Kim Starr, Sabine Braun, Jaleh Delfani (2020) Taking a Cue From the Human: Linguistic and Visual Prompts for the Automatic Sequencing of Multimodal Narrative, In: Journal of Audiovisual Translation

    Human beings find the process of narrative sequencing in written texts and moving imagery a relatively simple task. Key to the success of this activity is establishing coherence by using critical cues to identify key characters, objects, actions and locations as they contribute to plot development.

    Sabine Braun, C Slater (2014) Populating a 3D virtual learning environment for interpreting students with bilingual dialogues to support situated learning in an institutional context, In: The Interpreter and Translator Trainer 8(3), pp. 469-485. Taylor & Francis

    The point of departure of this paper is an immersive (avatar-based) 3D virtual environment which was developed in the European project IVY – Interpreting in Virtual Reality – to simulate interpreting practice. Whilst this environment is the first 3D environment dedicated to interpreter-mediated communication, research in other educational contexts suggests that such environments can foster learning (Kim, Lee and Thomas 2012). The IVY 3D environment offers a range of virtual ‘locations’ (e.g. business meeting room, tourist office, doctor’s surgery) which serve as backdrops for the practice of consecutive and dialogue interpreting in business and public service contexts. The locations are populated with relevant objects and with robot-avatars who act as speakers by presenting recorded monologues and bilingual dialogues. Students, represented by their own avatars, join them to practise interpreting. This paper focuses on the development of the bilingual dialogues, which are at the heart of many interpreter-mediated business and public service encounters but which are notoriously difficult to obtain for educational purposes. Given that interpreter training institutions usually need to offer bilingual resources of comparable difficulty levels in many language combinations, ad-hoc approaches to the creation of such materials are normally ruled out. The approach outlined here was therefore to start from available corpora of spoken language that were designed with pedagogical applications in mind (Braun 2005, Kohn 2012). The paper begins by explaining how the dialogues were created and then discusses the benefits and potential shortcomings of this approach in the context of interpreter education. The main points of discussion concern (1) the level of systematicity and authenticity that can be achieved with this corpus-based approach; (2) the potential of a 3D virtual environment to increase this sense of authenticity and thus to enable students to experience the essence of dialogue interpreting in a simulated environment.

    S Braun (2013) Keep your distance? Remote interpreting in legal proceedings: A critical assessment of a growing practice, In: F Pöchhacker, M Liu (eds.), Interpreting: International Journal of Research and Practice in Interpreting 15(2), pp. 200-228. Benjamins

    Remote interpreting, whereby the interpreter is physically separated from those who need the interpretation, has been investigated in relation to conference and healthcare settings. By contrast, very little is known about remote interpreting in legal proceedings, where this method of interpreting is increasingly used to optimise interpreters’ availability. This paper reports the findings of an experimental study investigating the viability of videoconference-based remote interpreting in legal contexts. The study compared the quality of interpreter performance in traditional and remote interpreting, both using the consecutive mode. Two simulated police interviews of detainees, recreating authentic situations, were interpreted by eight interpreters with accreditation and professional experience in police interpreting. The languages involved were French (in most cases the interpreter’s native language) and English. Each interpreter interpreted one of the interviews in remote interpreting, and the other in a traditional face-to-face setting. Various types of problem in the interpretations were analysed, quantitatively and qualitatively. Among the key findings are a significantly higher number of interpreting problems, and a faster decline of interpreting performance over time, in remote interpreting. The paper gives details of these findings, and discusses the potential legal consequences of the problems identified.

    Mariachiara Russo, Emilia Iglesias Fernandez, Sabine Braun (2020) Introduction, In: The Interpreter and Translator Trainer 14(3), pp. 235-239. Taylor & Francis
    Sabine Braun (2016) The importance of being relevant? A cognitive-pragmatic framework for conceptualising audiovisual translation, In: Target: International Journal of Translation Studies 28(2), pp. 302-313. John Benjamins Publishing

    Inspired by the belief that cognitive and pragmatic models of communication and discourse processing offer great potential for the study of Audiovisual Translation (AVT), this paper will review such models and discuss their contribution to conceptualising the three inter-related sub-processes underlying all forms of AVT: the comprehension of the multimodal discourse by the translator; the translation of selected elements of this discourse; and the comprehension of the newly formed multimodal discourse by the target audience. The focus will be on two models, Relevance Theory, which presents the most comprehensive pragmatic model of communication and Mental Model Theory, which underlies cognitive models of discourse processing. The two approaches will be used to discuss and question common perceptions of AVT as being ‘constrained’ and ‘partial’ translation.

    S Braun, J Taylor (2012) Video-mediated interpreting in criminal proceedings: two European surveys, In: S Braun, J Taylor (eds.), Videoconference and Remote Interpreting in Criminal Proceedings, pp. 69-98. Intersentia
    S Braun (2003) Kommunikation unter widrigen Umständen? – Optimierungsstrategien in zweisprachigen Videokonferenz-Gesprächen [Communication under adverse conditions? Optimisation strategies in bilingual videoconference conversations], In: Connecting Perspectives. Videokonferenz: Beiträge zu ihrer Erforschung und Anwendung [Videoconferencing: contributions to its research and application], pp. 167-185. Shaker
    Sabine Braun, Angela Chambers (2006) Elektronische Korpora als Resource für den Fremdsprachenunterricht [Electronic corpora as a resource for foreign language teaching], In: UOH Jung (ed.), Praktische Handreichung für Fremdsprachenlehrer [Practical handbook for foreign language teachers], pp. 330-337. Lang

    This contribution deals with possibilities for using corpora at secondary school level. Following an overview of relevant corpus resources, analysis methods and tools, the basics of corpus use in the language learning context are briefly sketched, and various ways of using corpora of spoken and written language are then illustrated.

    Sabine Braun, Elena Davitti, Sara Dicerto (2018) Video-Mediated Interpreting in Legal Settings: Assessing the Implementation, In: J Napier, R Skinner, Sabine Braun (eds.), Here or There: Research on Interpreting via Video Link, pp. 144-179. Gallaudet

    This chapter reports the key findings of the European AVIDICUS 3 project,1 which focused on the use of video-mediated interpreting in legal settings across Europe. Whilst judicial and law enforcement authorities have turned to videoconferencing to minimise delays in legal proceedings, reduce costs and improve access to justice, research into the use of video links in legal proceedings has called for caution. Sossin and Yetnikoff (2007), for example, contend that the availability of financial resources for legal proceedings cannot be disentangled from the fairness of judicial decision-making. The Harvard Law School (2009: 1193) warns that, whilst the use of video links may eliminate delays, it may also reduce an individual’s “opportunity to be heard in a meaningful manner”. In proceedings that involve an interpreter, procedural fairness and “the opportunity to be heard in a meaningful manner” are closely linked to the quality of the interpretation. The use of video links in interpreter-mediated proceedings therefore requires a videoconferencing solution that provides optimal support for interpreting as a crucial prerequisite for achieving the ultimate goal, i.e. fairness of justice. Against this backdrop, the main aim of AVIDICUS 3 was to identify institutional processes and practices of implementing and using video links in legal proceedings and to assess them in terms of how they accommodate and support bilingual communication mediated through an interpreter. The focus was on spoken-language interpreting. The project examined 12 European jurisdictions (Belgium, Croatia, England and Wales, Finland, France, Hungary, Italy, the Netherlands, Poland, Scotland, Spain and Sweden). An ethnographic approach was adopted to identify relevant practices, including site visits, in-depth and mostly in-situ interviews with over 100 representatives from different stakeholder groups, observations of real-life proceedings, and the analysis of a number of policy documents produced in the justice sector. The chapter summarises and systematises the findings from the jurisdictions included in this study. The assessment focuses on the use of videoconferencing in both national and cross-border proceedings, and covers different applications of videoconferencing in the legal system, including its use for links between courts and remote participants (e.g. witnesses, defendants in prison) and its use to access interpreters who work offsite (see Braun 2015; Skinner, Napier & Braun in this volume).

    When interpreting takes place in a videoconference setting, the intrinsic technological challenges and the very remoteness of the interpreters’ location compound the complexity of the task. Existing research on remote interpreting and the problems it entails focusses on remote conference interpreting, in which the interpreters are physically separated from the conference site while the primary interlocutors are together on site as usual. In an effort to broaden the scope of research in the area of remote interpreting to include other types and to address other questions, in particular that of the interpreters’ adaptability to new working conditions, this paper analyses small-group videoconferences in which the primary interlocutors as well as the interpreters all work from different locations. The findings from an empirical case study (based on recordings of videoconference sessions as well as introspective data) are used to identify and exemplify different types of interpreter adaptation.

    PD Ritsos, R Gittins, JC Roberts, S Braun, C Slater (2012)Using virtual reality for interpreter-mediated communication and training, In: Proceedings of the 2012 International Conference on Cyberworlds, Cyberworlds 2012pp. 191-198

    As international businesses adopt social media and virtual worlds as media for conducting business, there is an increasing need for interpreters who can bridge the language barriers and work within these new spheres. The recent rise in migration (within the EU) has also increased the need for professional interpreters in business, legal, medical and other settings. Responding to this increased demand, Project IVY provides bespoke 3D virtual environments tailored to train interpreters to work in these new digital environments. In this paper we present the design and development of the IVY Virtual Environment. We present past and current design strategies, our implementation progress and our plans for further development.

    S Braun, J Taylor (2012)Video-mediated interpreting: an overview of current practice and research, In: S Braun, J Taylor (eds.), Videoconference and Remote Interpreting in Criminal Proceedingspp. 33-68 Intersentia
    Sabine Braun, Catherine Slater, N Botfield (2015)Evaluating the pedagogical affordances of a bespoke 3D virtual learning environment for interpreters and their clients, In: J Napier, S Ehrlich (eds.), Interpreter Education in the Digital Age: Innovation, Access, and Changepp. 39-67 Gallaudet University Press

    Computer-generated 3D virtual worlds offer a number of affordances that make them attractive and engaging sites for learning, such as providing learners with a sense of presence, opportunities for synchronous and asynchronous interaction (e.g. in the form of voice or text chat, document viewing and sharing), and possibilities for collaborative work. Some of the research into educational uses of 3D virtual environments has engaged with how the learning opportunities they offer can be evaluated, experimenting both with what needs to be evaluated to understand how learning takes place in virtual worlds and with the methods that can be used for such an evaluation. Whilst some studies evaluate the design of the virtual world, its usability and its link to learning tasks (e.g. Chang et al. 2009, Deutschmann et al. 2009, Wiecha et al. 2010), others have sought to find out more about the interaction that takes place within virtual worlds. Peterson (2010), for example, focuses on learner participation patterns and interaction strategies in a language learning context, using qualitative methods including discourse analysis of learner transcripts (of text chat output in the target language) as the main research instrument, complemented by observation, field notes, pre- and post-study questionnaires and interviews. Lorenzo et al. (2012), by contrast, compare collaborative work on a learning object in a virtual world with the same task in a conventional learning content management system. Other studies have looked more specifically at the learning processes that take place in virtual environments and in so doing have started to bring together theoretical frameworks from virtual world education with the psychological or cognitive aspects involved in learning (Henderson et al. 2012, Jarmon et al. 2009). Based on such approaches, especially the mixed-methods approach adopted by Jarmon et al., this chapter reports on the pedagogical evaluation of the learning processes of trainee interpreters and clients of interpreting services (i.e. professionals who (may) communicate through interpreters in their everyday working lives) using a bespoke 3D Virtual Learning Environment.

    S Braun, P Orero (2010)Audio Description with Audio Subtitling – an emergent modality of audiovisual localisation, In: Perspectives: Studies in Translatology18(3)pp. 173-188 Routledge, Taylor & Francis Group

    Audio description (AD) has established itself as a media access service for blind and partially sighted people across a range of countries, for different media and types of audiovisual performance (e.g. film, TV, theatre, opera). In countries such as the UK and Spain, legislation has been implemented for the provision of AD on TV, and the European Parliament has requested that AD for digital TV be monitored in projects such as DTV4ALL (www.psp-dtv4all.org) in order to be able to develop adequate European accessibility policies. One of the drawbacks is that in their current form, AD services largely leave the visually impaired community excluded from access to foreign-language audiovisual products when they are subtitled rather than dubbed. To overcome this problem, audio subtitling (AST) has emerged as a solution. This article will characterise audio subtitling as a modality of audiovisual localisation which is positioned at the interface between subtitling, audio description and voice-over. It will argue that audio subtitles need to be delivered in combination with audio description and will analyse, systematise and exemplify the current practice of audio description with audio subtitling using commercially available DVDs.

    S Braun, C Slater, R Gittins, PD Ritsos, JC Roberts (2013)Interpreting in Virtual Reality: designing and developing a 3D virtual world to prepare interpreters and their clients for professional practice, In: D Kiraly, S Hansen-Schirra, K Maksymski (eds.), New Prospects and Perspectives for Educating Language Mediators(5)pp. 93-120 Gunter Narr Verlag

    This paper reports on the conceptual design and development of an avatar-based 3D virtual environment in which trainee interpreters and their potential clients (e.g. students and professionals from the fields of law, business, tourism, medicine) can explore and simulate professional interpreting practice. The focus is on business and community interpreting and hence the short consecutive and liaison interpreting modes. The environment is a product of the European collaborative project IVY (Interpreting in Virtual Reality). The paper begins with a state-of-the-art overview of the current uses of ICT in interpreter training (section 2), with a view to showing how the IVY environment has evolved out of existing knowledge of these uses, before exploring how virtual worlds are already being used for pedagogical purposes in fields related to interpreting (section 3). Section 4 then shows how existing knowledge about learning in virtual worlds has fed into the conceptual design of the IVY environment and introduces that environment, its working modes and customised digital content. This is followed by an analysis of the initial evaluation feedback on the first environment prototype (section 5), a discussion of the main pedagogical implications (section 6) and concluding remarks (section 7). The more technical aspects of the IVY environment are described in Ritsos et al. (2012).

    Sabine Braun (2007)Designing and exploiting small multimedia corpora for autonomous learning and teaching, In: E Hidalgo, L Quereda, J Santana (eds.), Corpora in the Foreign Language Classroom. Selected papers from TaLC6. Language and Computers Vol. 1616pp. 31-46 Rodopi

    The use of corpora in the second-language learning context requires the availability of corpora which are pedagogically relevant with regard to choice of discourse, choice of media, annotation and size. I here describe a pedagogically motivated corpus design which supports a direct and efficient exploitation of the corpus by learners and teachers. One of the major guidelines is Widdowson's (2003) claim that the successful use of corpora requires a learner's (and teacher's) ability to 'authenticate' the corpus materials. In line with this, I argue for the development of small and pedagogically annotated corpora which enable us to combine two methods of analysis and exploitation to mutual benefit: a corpus-based approach (i.e. 'vertical reading' of e.g. concordances), which provides patterns of language use, and a discourse-based approach, which focuses on the analysis of the individual texts in the corpus and of linguistic means of expression in relation to their communicative (situational) and cultural embedding. To illustrate my points, I use a small multimedia corpus of spoken English which is currently being developed as a model corpus with pedagogical goals in mind.
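
    To illustrate the corpus-based 'vertical reading' of concordance lines mentioned above, the following minimal Python sketch generates key-word-in-context (KWIC) lines for a search word. The sample sentences and the search word are invented for illustration and are not taken from the ELISA corpus or any other existing resource.

```python
# Minimal KWIC (key-word-in-context) sketch: the 'vertical reading' of
# concordance lines that a corpus-based approach relies on.
def kwic(tokens, keyword, window=4):
    """Return concordance lines with `window` tokens of left/right context."""
    lines = []
    for i, tok in enumerate(tokens):
        if tok.lower() == keyword.lower():
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            lines.append(f"{left:>30}  [{tok}]  {right}")
    return lines

# Toy 'corpus' of invented sentences, tokenised naively on whitespace.
corpus = (
    "I have been working on the farm since I was a child . "
    "She has been living in York for ten years . "
    "They have been waiting for the bus all morning ."
).split()

for line in kwic(corpus, "been"):
    print(line)
```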

    S Braun (2007)Audio Description from a discourse perspective: a socially relevant framework for research and training, In: Linguistica Antverpiensia NS6pp. 357-369 University Press Antwerp (UPA)

    The topic of this paper is Audio Description (AD) for blind and partially sighted people. I will outline a discourse-based approach to AD focussing on the role of mental modelling, local and global coherence, and different types of inferences (explicatures and implicatures). Applying these concepts to AD, I will discuss initial insights and outline questions for empirical research. My main aim is to show that a discourse-based approach to AD can provide an informed framework for research, training and practice.

    Maria Andreea Deleanu, Constantin Orasan, Sabine Braun (2024)Accessible Communication: a systematic review and comparative analysis of official English Easy-to-Understand (E2U) language guidelines, In: Proceedings of the 3rd Workshop on Tools and Resources for People with REAding DIfficulties (READI) pp. 70-92 ELRA and ICCL

    Easy-to-Understand (E2U) language varieties have been recognized by the United Nations Convention on the Rights of Persons with Disabilities (2006) as a means to guarantee the fundamental right to Accessible Communication. Increased awareness has driven changes in European (European Commission, 2015, 2021; European Parliament, 2016) and International legislation (ODI, 2010), prompting public-sector and other institutions to offer domain-specific content in E2U language to prevent communicative exclusion of those facing cognitive barriers (COGA, 2017; Maaß, 2020; Perego, 2020). However, guidance on what it is that makes language actually 'easier to understand' is still fragmented and vague. For this reason, we carried out a systematic review of official guidelines for English Plain Language and Easy Language to identify the most effective lexical, syntactic and adaptation strategies that can reduce complexity in verbal discourse according to official bodies. This article will present the methods and preliminary results of the guidelines analysis.
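
    As a purely illustrative sketch (not part of the guidelines analysis itself), the snippet below shows how one common strand of Plain/Easy Language guidance, such as avoiding long sentences and long words, could be operationalised as a simple automatic check. The thresholds and the sample text are arbitrary examples, not values taken from any official guideline.

```python
# Illustrative toy checker for two features commonly discouraged by
# Plain/Easy Language guidance: long sentences and long words.
# Thresholds are arbitrary examples, not official recommendations.
import re

MAX_SENTENCE_WORDS = 20   # illustrative threshold
MAX_WORD_LENGTH = 12      # illustrative threshold

def flag_complexity(text):
    findings = []
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    for sentence in sentences:
        words = re.findall(r"[A-Za-z'-]+", sentence)
        if len(words) > MAX_SENTENCE_WORDS:
            findings.append(f"Long sentence ({len(words)} words): {sentence[:60]}...")
        for word in words:
            if len(word) > MAX_WORD_LENGTH:
                findings.append(f"Long word: {word}")
    return findings

sample = ("Applicants must ensure that the documentation accompanying their "
          "submission is furnished in its entirety prior to the commencement "
          "of the adjudication process. Incomprehensibilities should be avoided.")
print(*flag_complexity(sample), sep="\n")
```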

    S Braun (2011)Creating coherence in Audio Description, In: Meta: Journal des Traducteurs/ Meta: Translator's Journal56(3)pp. 645-662 Les Presses de l'Université de Montréal

    As an emerging form of intermodal translation, audio description (AD) raises many new questions for Translation Studies and related disciplines. This paper will investigate the question of how the coherence of a multimodal source text such as a film can be re-created in audio description. Coherence in film characteristically emerges from links within and across different modes of expression (e.g. links between visual images, image-sound links and image-dialogue links). Audio describing a film is therefore not simply a matter of substituting visual images with verbal descriptions. It involves ‘translating’ some of these links into other appropriate types of links. Against this backdrop, this paper aims to examine the means available for the re-creation of coherence in an audio described version of a film, and the problems arising. To this end, the paper will take a fresh look at coherence, outlining a model of coherence which embraces verbal and multimodal texts and which highlights the important role of both source text author (viz. audio describer as translator) and target text recipients in creating coherence. This model will then be applied to a case study focussing on the re-creation of various types of intramodal and intermodal relations in AD.

    Tomasz Grzegorz Korybski, Elena Davitti, Constantin Orăsan, Sabine Braun (2022)A Semi-Automated Live Interlingual Communication Workflow Featuring Intralingual Respeaking: Evaluation and Benchmarking, In: LREC 2022: 13th International Conference on Language Resources and Evaluationpp. 4405-4413 European Language Resources Association (ELRA)

    In this paper, we present a semi-automated workflow for live interlingual speech-to-text communication which seeks to reduce the shortcomings of existing ASR systems: a human respeaker works with speaker-dependent speech recognition software (e.g., Dragon Naturally Speaking) to deliver punctuated same-language output of higher quality than that obtained using out-of-the-box automatic speech recognition of the original speech. This is fed into a machine translation engine (the EU's eTranslation) to produce live-caption ready text. We benchmark the quality of the output against the output of best-in-class (human) simultaneous interpreters working with the same source speeches from plenary sessions of the European Parliament. To evaluate the accuracy and facilitate the comparison between the two types of output, we use a tailored annotation approach based on the NTR model (Romero-Fresco and Pöchhacker, 2017). We find that the semi-automated workflow combining intralingual respeaking and machine translation is capable of generating outputs that are similar in terms of accuracy and completeness to the outputs produced in the benchmarking workflow, although the small scale of our experiment requires caution in interpreting this result.
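
    The following Python sketch is a schematic illustration, under stated assumptions, of the workflow described above (respoken speech, speaker-dependent recognition, machine translation, live captions). The functions recognise_respoken_audio and machine_translate are hypothetical placeholders, not the project's actual components or the eTranslation API.

```python
# Schematic sketch (assumptions, not the project's code) of a respeaking-based
# live captioning pipeline: respoken speech -> speaker-dependent ASR ->
# machine translation -> caption-ready text.
from typing import Iterator

def recognise_respoken_audio(audio_chunks) -> Iterator[str]:
    """Hypothetical: yields punctuated same-language segments from the respeaker."""
    for chunk in audio_chunks:
        yield chunk  # in reality: output of a speaker-dependent ASR profile

def machine_translate(segment: str, source: str, target: str) -> str:
    """Hypothetical: stands in for an MT service call; here it just tags the segment."""
    return f"[{source}->{target}] {segment}"

def live_caption_pipeline(audio_chunks, source="en", target="pl"):
    for segment in recognise_respoken_audio(audio_chunks):
        yield machine_translate(segment, source, target)

# Toy run with pre-segmented 'respoken' text standing in for audio:
for caption in live_caption_pipeline(["Good morning, colleagues.",
                                      "The session is now open."]):
    print(caption)
```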

    S Braun (2006)ELISA–a pedagogically enriched corpus for language learning purposes, In: S Braun, K Kohn, J Mukherjee (eds.), Corpus Technology And Language Pedagogy: New Resources, New Tools, New Methodspp. 25-47 Lang

    The aim of this paper is to introduce a methodological solution for the design and exploitation of a corpus which is dedicated to pedagogical goals. In particular, I will argue for a pedagogically appropriate corpus annotation and query, and for the enrichment of such a corpus with additional materials (including corpus-based tasks and exercises). The solution will be illustrated with the help of ELISA, a small spoken corpus of English containing video interviews with native speakers. However, the methodology is transferable to the creation of pedagogically relevant corpora with other contents and for other languages.

    Muhammad Ahmed Saeed, Eloy Rodriguez Gonzalez, Tomasz Korybski, Elena Davitti, Sabine Braun (2023)Comparing Interface Designs to Improve RSI platforms: Insights from an Experimental Study, In: Proceedings of the International Conference HiT-IT 2023pp. 147-156

    Remote Simultaneous Interpreting (RSI) platforms enable interpreters to provide their services remotely and work from various locations. However, research shows that interpreters perceive interpreting via RSI platforms to be more challenging than on-site interpreting in terms of performance and working conditions [1]. While poor audio quality is a major concern for RSI [2,3], another issue that has been frequently highlighted is the impact of the interpreter's visual environment on various aspects of RSI. However, this aspect has received little attention in research. The study reported in this article investigates how various visual aids and methods of presenting visual information can aid interpreters and improve their user experience (UX). The study used an experimental design and tested 29 professional conference interpreters on different visual interface options, as well as eliciting their work habits, perceptions and working environments. The findings reveal a notable increase in the frequency of RSI since the beginning of the COVID-19 pandemic. Despite this increase, most participants still preferred on-site work. The predominant platform for RSI among the interpreters sampled was Zoom, which has a minimalist interface that contrasts with interpreter preferences for maximalist, information-rich bespoke RSI interfaces. Overall, the study contributes to supporting the visual needs of interpreters in RSI.

    Sabine Braun, R Skinner, J Napier (2018)Here or there: Research on interpreting via video link.16 Gallaudet Press.

    The field of sign language interpreting is undergoing an exponential increase in the delivery of services through remote and video technologies. The nature of these technologies challenges established notions of interpreting as a situated, communicative event and of the interpreter as a participant. As a result, new perspectives and research are necessary for interpreters to thrive in this environment. This volume fills that gap and features interdisciplinary explorations of remote interpreting from spoken and signed language interpreting scholars who examine various issues from linguistic, sociological, physiological, and environmental perspectives. Here or There presents cutting-edge empirical research that informs the professional practice of remote interpreting, whether it be video relay service, video conference, or video remote interpreting. The research is augmented by the perspectives of stakeholders and deaf consumers on the quality of the interpreted work. Among the topics covered are professional attitudes and motivations, interpreting in specific contexts, and adaptation strategies. The contributors also address potential implications for relying on remote interpreting, discuss remote interpreter education, and offer recommendations for service providers.

    Sabine Braun (2015)Remote Interpreting, In: F Pöchhacker, N Grbic, P Mead, R Setton (eds.), Routledge Encyclopedia of Interpreting Studiespp. 346-348 Routledge

    The term ‘remote interpreting’ (RI) refers to the use of communication TECHNOLOGY for gaining access to an interpreter who is in another room, building, city or country and who is linked to the primary participants by telephone or videoconference. RI by telephone is nowadays often called TELEPHONE INTERPRETING or over-the-phone interpreting. RI by videoconference is often simply called remote interpreting when it refers to spoken-language interpreting. In SIGNED LANGUAGE INTERPRETING, the term VIDEO REMOTE INTERPRETING has become established. RI is best described as a modality or method of delivery. It has been used for SIMULTANEOUS INTERPRETING, CONSECUTIVE INTERPRETING and DIALOGUE INTERPRETING. This entry focuses on RI by videoconference in spoken-language interpreting.

    Kim Starr, Sabine Braun (2023)Omissions and inferential meaning-making in audio description, and implications for automating video content description, In: Universal access in the information society Springer

    There is broad consensus that audio description (AD) is a modality of intersemiotic translation, but there are different views in relation to how AD can be more precisely conceptualised. While Benecke (Audiodeskription als partielle Translation. Modell und Methode, LIT, Berlin, 2014) characterises AD as ‘partial translation’, Braun (T 28: 302–313, 2016) hypothesises that what audio describers appear to ‘omit’ from their descriptions can normally be inferred by the audience, drawing on narrative cues from dialogue, mise-en-scène, kinesis, music or sound effects. The study reported in this paper tested this hypothesis using a corpus of material created during the H2020 MeMAD project. The MeMAD project aimed to improve access to audiovisual (AV) content through a combination of human and computer-based methods of description. One of the MeMAD workstreams addressed human approaches to describing visually salient cues. This included an analysis of the potential impact of omissions in AD, which is the focus of this paper. Using a corpus of approximately 500 audio described film extracts, we identified the visual elements that can be considered essential for the construction of the filmic narrative and then performed a qualitative analysis of the corresponding audio descriptions to determine how these elements are verbally represented and whether any omitted elements could be inferred from other cues that are accessible to visually impaired audiences. We then identified the most likely source of these inferences and the conditions upon which retrieval could be predicated, preparing the ground for future reception studies to test our hypotheses with target audiences. In this paper, we discuss the methodology used to determine where omissions occur in the analysed audio descriptions, consider worked examples from the MeMAD500 film corpus, and outline the findings of our study, namely that various strategies are relevant to inferring omitted information, including the use of proximal and distal contextual cues, and reliance on the application of common knowledge and iconic scenarios. To conclude, consideration is given to overcoming significant omissions in human-generated AD, such as using extended AD formats, and mitigating similar gaps in machine-generated descriptions, where incorporating dialogue analysis and other supplementary data into the computer model could resolve many omissions.

    R Skinner, J Napier, Sabine Braun (2018)Interpreting via video link: Mapping of the field, In: J Napier, R Skinner, Sabine Braun (eds.), Here or there: research on interpreting via video link.(16)pp. 11-35 Gallaudet

    This special volume Here or There: Research on interpreting via video link aims to bring together a collection of international research on remote interpreting mediated by an audio-video link, covering both spoken language and sign-language interpreting experiences. There is still much to be learnt about how we define and describe the needs of all stakeholders and how best to use the technology to enable interpreting services to function as intended. As in other areas of study, we already see a number of discrepancies when it comes to interpreting by video link, and we have yet to reach clear and conclusive answers. This chapter aims to give an overview of the emerging field of remote interpreting by video link and review the empirical research that has come from this sector.

    Sabine Braun (2015)Remote Interpreting, In: H Mikkelson, R Jourdenais (eds.), Routledge Handbook of Interpretingpp. 352-367 Routledge

    The development of communication technologies such as telephony, videoconferencing and web-conferencing in interpreter-mediated communication has led to alternative ways of delivering interpreting services. Several uses of these technologies can be distinguished in connection with interpreting. ‘Remote interpreting’ in the narrow sense often refers to their use to gain access to an interpreter in another location, but similar methods of interpreting are required for interpreting in virtual meetings in which the primary participants themselves are distributed across different sites. In spite of their different underlying motivations, these methods of interpreting all share elements of remote working from the interpreter’s point of view and will therefore be subsumed here under one heading. Although the practice of remote interpreting (in all its forms) is controversial among interpreters, the last two decades have seen an increase in this practice in all fields of interpreting. As such, it has also caught the attention of scholars, who have begun to investigate remote interpreting, for example, with a view to the quality of the interpreter’s performance and a range of psychological and physiological factors. This chapter will begin by explaining the key terms and concepts associated with remote interpreting and then give an overview of the historical development and current trends of remote interpreting in supra-national institutions, legal, healthcare and other settings, referring to current and emerging practice and to insights from research. This will be followed by the presentation of recommendations for practice and an outlook on future directions for this practice and for research.

    Eloy Rodríguez González, Muhammad Ahmed Saeed, Tomasz Korybski, Elena Davitti, Sabine Braun (2023)Assessing the impact of automatic speech recognition on remote simultaneous interpreting performance using the NTR Model, In: Proceedings of the International Workshop on Interpreting Technologies SAY IT AGAIN 2023

    The emergence of Simultaneous Interpreting Delivery Platforms (SIDPs) has opened up new opportunities for interpreters to provide cloud-based remote simultaneous interpreting (RSI) services. Similar to booth-based RSI, which has been shown to be more tiring than conventional simultaneous interpreting and more demanding in terms of information processing and mental modelling [11; 12], cloud-based RSI configurations are perceived as more stressful than conventional simultaneous interpreting and potentially detrimental to interpreting quality [2]. Computer-assisted interpreting (CAI) tools, including automatic speech recognition (ASR) [8], have been advocated as a means to support interpreters during cloud-based RSI assignments, but their effectiveness is under-explored. The study reported in this article experimentally investigated the impact of providing interpreters with access to an ASR-generated live transcript of the source speech while they were interpreting, examining its effect on their performance and overall user experience. As part of the experimental design, 16 professional conference interpreters performed a controlled interpreting test consisting of a warmup speech (not included in the analysis), and four speeches, i.e., two lexically dense speeches and two fast speeches, presented in two different interpreting conditions, i.e., with and without ASR support. This article presents initial quantitative findings from the analysis of the interpreters' performance, which was conducted using the NTR Model [17]. Overall, the findings reveal a reduction in the total number of interpreting errors in the ASR condition. However, this is accompanied by a loss in stylistic quality in the ASR condition.

    Jaleh Delfani, Constantin Orasan, Hadeel Saadany, Özlem Temizöz, Eleanor Taylor-Stilgoe, Diptesh Kanojia, Sabine Braun, Barbara C. Schouten Google Translate Error Analysis for Mental Healthcare Information: Evaluating Accuracy, Comprehensibility, and Implications for Multilingual Healthcare Communication, In: arXiv.org

    This study explores the use of Google Translate (GT) for translating mental healthcare (MHealth) information and evaluates its accuracy, comprehensibility, and implications for multilingual healthcare communication through analysing GT output in the MHealth domain from English to Persian, Arabic, Turkish, Romanian, and Spanish. Two datasets comprising MHealth information from the UK National Health Service website and information leaflets from The Royal College of Psychiatrists were used. Native speakers of the target languages manually assessed the GT translations, focusing on medical terminology accuracy, comprehensibility, and critical syntactic/semantic errors. GT output analysis revealed challenges in accurately translating medical terminology, particularly in Arabic, Romanian, and Persian. Fluency issues were prevalent across various languages, affecting comprehension, mainly in Arabic and Spanish. Critical errors arose in specific contexts, such as bullet-point formatting, specifically in Persian, Turkish, and Romanian. Although improvements are seen in longer-text translations, there remains a need to enhance accuracy in medical and mental health terminology and fluency, whilst also addressing formatting issues for a more seamless user experience. The findings highlight the need to use customised translation engines for MHealth translation and the challenges when relying solely on machine-translated medical content, emphasising the crucial role of human reviewers in multilingual healthcare communication.
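
    By way of illustration only, the snippet below shows how manually assigned error labels of the kind used in such an assessment could be tallied per target language. The categories and records are invented examples and do not reproduce the study's annotation scheme or data.

```python
# Illustrative tally of per-language error annotations (made-up data),
# of the kind that could underpin a cross-language comparison of MT output.
from collections import Counter, defaultdict

annotations = [
    {"lang": "ar", "category": "terminology"},
    {"lang": "ar", "category": "fluency"},
    {"lang": "fa", "category": "formatting"},
    {"lang": "es", "category": "fluency"},
    {"lang": "ro", "category": "terminology"},
]

by_language = defaultdict(Counter)
for record in annotations:
    by_language[record["lang"]][record["category"]] += 1

for lang, counts in sorted(by_language.items()):
    print(lang, dict(counts))
```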

    Sabine Braun, Khetam Al Sharou, Özlem Temizöz (2023)Technology use in language-discordant interpersonal healthcare communication, In: Laura Gavioli, Cecilia Wadensjö (eds.), The Routledge Handbook of Public Service Interpreting Routledge

    Linguistically and culturally competent human interpreters play a crucial role in facilitating language-discordant interpersonal healthcare communication. Traditionally, interpreters work alongside patients and healthcare providers to provide in-person interpreting services. However, problems with access to professional interpreters, including time pressure and a lack of local availability of interpreters, have led to an exploration and implementation of alternative approaches to providing language support. They include the use of communication technologies to access professional interpreters and volunteers but also the application of various language and translation technologies. This chapter offers a critical review of four different approaches, all of which are conceptualised as different types of human-machine interaction: technology-mediated interpreting, crowdsourcing of volunteer language mediators via digital platforms, machine translation, and the use of translation apps populated with pre-translated phrases and sentences. Each approach will be considered in a separate section, beginning with a review of the relevant scholarly literature and main practical developments, followed by a discussion of critical issues and challenges arising. The focus is on dialogic communication and interaction. Technology-assisted methods of translating written texts are not included.

    Sabine Braun (2016)Videoconferencing as a tool for bilingual mediation, In: B Townsley (eds.), Understanding Justice: An enquiry into interpreting in civil justice and mediationpp. 194-227 Middlesex University
    Sabine Braun (2019)Technology in interpreting, In: Routledge Encyclopedia of Translation Studies Routledge

    The spread of communication technologies has generated a demand for interpreting services in situations where participants are in different locations. Examples include virtual meetings, online conferences, video links between courts and prisons for pre-trial hearings, and phone calls between doctors and patients in tele-healthcare. Whilst interpreters working in these situations are often co-located with one of the participants, the same technologies have also enabled the rise of remote interpreting, where the interpreters are physically separated from all other participants in the communication. The different configurations of technology-mediated interpreting all share an element of remote working for the interpreter but differ with regard to a range of parameters, including: the location and distribution of participants; the modes of interpreting supported (consecutive, simultaneous, or both); the medium of communication (audio-only or audio-video); the technological basis or platform (hardware-based, i.e. telephone or videoconferencing systems, or software-based, i.e. cloud-based conferencing applications); and the connection type (satellite, ISDN, broadband internet, mobile network). In terms of the technology, telephone- and video-mediated interpreting are the two established modalities at the time of writing. They are mainly used for consecutive/dialogue interpreting, catering for bilingual public service and business settings. In settings where simultaneous interpreting is needed, telephone- and video-mediated solutions require additional equipment and/or functionality. Such solutions have been developed for conference interpreting and, to a lesser extent, court interpreting. They were initially hardware-based, using videoconferencing systems, but more recently, interpreting delivery platforms using cloud-based web conferencing applications have begun to emerge. These platforms focus on simultaneous interpreting for multilingual events. The modality of interpreting associated with them has become known as webcast interpreting or remote simultaneous interpreting. The terminology used to refer to different modalities and configurations of technology-mediated interpreting is not yet standardized. This entry uses distance interpreting as an umbrella term; telephone-mediated, video-mediated and webcast interpreting for its different modalities; and remote and teleconference interpreting for the two main configurations arising from the distribution of participants and interpreters.

    Sabine Braun, K Balogh (2015)Bilingual videoconferencing in legal proceedings: Findings from the AVIDICUS projects, In: Proceedings of the conference ‘Elektroniczny protokół – szansą na transparentny i szybki proces’ (Electronic Protocol – a chance for transparent and fast trial)pp. 21-34 Polish Ministry of Justice
    S Braun, K Kohn (2005)Sprache(n) in der Wissensgesellschaft [Language(s) in the Knowledge Society]. Peter Lang
    Demitra Krystallidou, Sabine Braun (2022)Risk and Crisis Communication during COVID-19 in Linguistically and Culturally Diverse Communities, In: The Languages of COVID-19pp. 128-144 Routledge

    This chapter reports on a scoping review of the literature on risk and crisis communication during the COVID-19 pandemic in linguistically and culturally diverse communities. Three hundred studies were screened against inclusion and exclusion criteria. Forty studies were included in the review and underwent thematic analysis in terms of availability, accessibility, acceptability and adaptability. Five themes were identified, relating to the role of technologies and to top-down, bottom-up and hybrid approaches to risk and crisis communication in linguistically and culturally diverse communities, as well as to gaps in the literature with regard to the quality and appropriateness of translated materials. The chapter concludes with a set of recommendations for commissioners of translation services, as well as professional translators and interpreters.

    Muhammad Ahmed Saeed, Eloy Rodriguez Gonzalez, Tomasz Grzegorz Korybski, Elena Davitti, Sabine Braun (2022)Connected yet Distant: An Experimental Study into the Visual Needs of the Interpreter in Remote Simultaneous Interpreting, In: 24th HCI International Conference (HCII 2022) Proceedings, Part III Springer

    Remote simultaneous interpreting (RSI) draws on Information and Communication Technologies to facilitate multilingual communication by connecting conference interpreters to in-presence, virtual or hybrid events. Early solutions for RSI involved interpreters working in interpreting booths with ISO-standardised equipment. However, in recent years, cloud-based solutions for RSI have emerged, with innovative Simultaneous Interpreting Delivery Platforms (SIDPs) at their core, enabling RSI delivery from anywhere. SIDPs recreate the interpreter's console and work environment (Braun 2019) as a bespoke software/videoconferencing platform with interpretation-focused features. Although initial evaluations of SIDPs were conducted before the Covid-19 pandemic (e.g., DG SCIC 2019), research on RSI (booth-based and software-based) remains limited. Pre-pandemic research shows that RSI is demanding in terms of information processing and mental modelling (Braun 2007; Moser-Mercer 2005), and suggests that the limited visual input available in RSI constitutes a particular problem (Mouzourakis 2006; Seeber et al. 2019). In addition, initial explorations of the cloud-based solutions suggest that there is room for improving the interfaces of widely used SIDPs (Bujan and Collard 2021; DG SCIC 2019). The experimental project presented in this paper investigates two aspects of SIDPs: the design of the interpreter interface and the integration of supporting technologies. Drawing on concepts and methods from user experience research and human-computer interaction, we explore what visual information is best suited to support the interpreting process and the interpreter-machine interaction, how this information is best presented in the interface, and how automatic speech recognition can be integrated into an RSI platform to aid/augment the interpreter's source-text comprehension.

    Sabine Braun (2015)Videoconference Interpreting, In: F Pöchhacker, N Grbic, P Mead, R Setton (eds.), Routledge Encyclopedia of Interpreting Studiespp. 437-439 Routledge
    Elena Davitti, Sabine Braun (2020)Analysing Interactional Phenomena in Video Remote Interpreting in Collaborative Settings: Implications for Interpreter Education, In: The Interpreter and Translator Trainer Taylor & Francis (Routledge)

    Video Remote Interpreting (VRI) is a modality of interpreting where the interpreter interacts with the other parties-at-talk through an audiovisual link without sharing the same physical interactional space. In dialogue settings, existing research on VRI has mostly drawn on the analysis of verbal behaviour to explore the complex dynamics of these ‘triadic’ exchanges. However, understanding the complexity of VRI requires a more holistic analysis of its dynamics in different contexts as a situated, embodied activity where resources other than talk (such as gaze, gestures, head and body movement) play a central role in the co-construction of the communicative event. This paper draws on extracts from a corpus of VRI encounters in collaborative contexts (e.g. nurse-patient interaction, customer services) to investigate how specific interactional phenomena which have been explored in traditional settings of dialogue interpreting (e.g. turn management, dyadic sequences, spatial management) unfold in VRI. In addition, the paper will identify the coping strategies implemented by interpreters to deal with various challenges. This fine-grained, microanalytical look at the data will complement the findings provided by research on VRI in legal/adversarial contexts and provide solid grounds to evaluate the impact of different moves. Its systematic integration into training will lead to a more holistic approach to VRI education.

    This paper reports on an empirical case study conducted to investigate the overall conditions and challenges of integrating corpus materials and corpus-based learning activities into English-language classes at a secondary school in Germany. Starting from the observation that, in spite of the large amount of research into corpus-based language learning, hands-on work with corpora has remained an exception in secondary schools, the paper first outlines a set of pedagogical requirements for corpus integration and the approach which formed the basis for designing the case study. The findings of the study are then reported and discussed. As a result of the methodological challenges identified in the study, the author argues for a move from ’data-driven learning’ to needs-driven corpora, corpus activities and corpus methodologies.

    Tomasz Korybski, Elena Davitti, Constantin Orasan, Sabine Braun (2023)MATRIC: Machine Translation and Respeaking in Interlingual Communication. Unpublished
    PD Ritsos, R Gittins, S Braun, C Slater, JC Roberts (2013)Training Interpreters using Virtual Worlds, In: ML Gavrilova, CJK Tan, A Kuijper (eds.), Transactions on Computational Science XVIIILNCS 7pp. 21-40 Springer-Verlag Berlin Heidelberg

    With the rise in population migration there has been an increased need for professional interpreters who can bridge language barriers and operate in a variety of fields such as business, legal, social and medical. Interpreters require specialized training to cope with the idiosyncrasies of each field, and their potential clients need to be aware of professional parlance. We present 'Project IVY'. In IVY, users can make a selection from over 30 interpreter training scenarios situated in the 3D virtual world. Users then interpret the oral interaction of two avatar actors. In addition to creating different 3D scenarios, we have developed an asset management system for the oral files and permit users (mentors of the trainee interpreters) to easily upload and customize the 3D environment and observe which scenario is being used by a student. In this article we present the design and development of the IVY Virtual Environment and the asset management system. Finally, we discuss our plans for further development.

    Sabine Braun, Elena Davitti, Catherine Slater (2020)'It's like being in bubbles': affordances and challenges of virtual learning environments for collaborative learning in interpreter education, In: The Interpreter and Translator Trainer14(3)pp. 259-278 Routledge

    We report on a study evaluating the educational opportunities that highly multimodal and interactive Virtual Learning Environments (VLEs) provide for collaborative learning in the context of interpreter education. The study was prompted by previous research into the use of VLEs in interpreter education, which showed positive results but which focused on preparatory or ancillary activities and/or individual interpreting practice. The study reported here, which was part of a larger project on evaluating the use of VLEs in educating interpreters and their potential clients, explored the affordances of a videoconferencing platform and a 3D virtual world for collaborative learning in the context of dialogue interpreting. The participants were 13 student-interpreters, who conducted role-play simulations in both environments. Through a mix of methods such as non-participant observation, reflective group discussions, linguistic analysis of the recorded simulations, and a user experience survey, several dimensions of using the VLEs were explored, including the linguistic/discursive dimension (interpreting), the interactional dimension (communication management between the participants), the ergonomic dimension (human-computer interaction) and the psychological dimension (user experience, sense of presence). Both VLEs were found to be capable of supporting situated and autonomous learning in the interpreting context, although differences arose regarding the reported user experience.

    S Braun (2014)Comparing traditional and remote interpreting in police settings: quality and impact factors, In: M Viezzi, C Falbo (eds.), Traduzione e interpretazione per la società e le istituzionipp. 161-176 Edizioni Università di Trieste

    Translating and interpreting for society and the institutions means meeting the new language needs characterising everyday life. As a result of growing mobility and constantly increasing migration flows, institutions are often required to communicate with people who speak languages of lesser diffusion in Europe's multicultural and multilingual context. These needs are also felt in the legal sector. The articles included in this volume show clearly that meeting language needs in the legal sector means guaranteeing citizens' rights and strengthening democracy in our societies. [Source: Editors]

    Sabine Braun (2018)Video-mediated interpreting in legal settings in England: Interpreters’ perceptions in their sociopolitical context, In: Translation and Interpreting Studies13(3)pp. 393-420 John Benjamins Publishing

    The increasing use of videoconferencing technology in legal proceedings has led to different configurations of video-mediated interpreting (VMI). Few studies have explored interpreter perceptions of VMI, each focusing on one country, configuration (e.g. interpreter-assisted video links between courts and remote participants) and setting (e.g. immigration). The study reported here is the first study drawing on multiple data sets, countries, settings and configurations to investigate interpreter perceptions of VMI. It compares perceptions in England with other countries, covering common configurations (e.g. court-prison video links, links to remote interpreters) and settings (e.g. police, court, immigration), and taking into account the sociopolitical context in which VMI has emerged. The aim is to gain systematic insights into the factors shaping the interpreters’ perceptions as a step towards improving VMI.

    S Braun (2003)Dolmetschen in der Videokonferenz. Kommunikative Kompetenz und Monitoringstrategien [Interpreting in videoconferences: communicative competence and monitoring strategies], In: S&K Braun (eds.), Kultur und Übersetzung: Methodologische Probleme des Kulturtransfers - mit Ausgewählten Beiträgen des Saarbrücker Symposiums 1999 (Jahrbuch Übersetzen und Dolmetschen 2/2001)pp. 3-32 Narr
    Sabine Braun (2019)Technology and interpreting, In: The Routledge Handbook of Translation and Technologypp. 271-288 Routledge

    This chapter provides an overview of technologies that have been used to deliver, support, enhance, or extend the reach of, human interpreting and a brief overview of the main developments in machine interpreting. In relation to human interpreting, the focus is on distance interpreting (technology-mediated interpreting), i.e. modalities of interpreting whereby interpreters are physically separated from some or all of their clients and use communication technologies such as tele- and videoconferencing to deliver their services. In addition, still in relation to human interpreting, the chapter outlines various types of technology-supported interpreting, showing how digital technologies have also begun to change the way in which onsite interpreting is performed (e.g. through simconsec interpreting). With regard to machine interpreting, the chapter outlines the major milestones in the evolution of speech-to-text and speech-to-speech translation and current limitations. In addition to introducing and explaining the technologies themselves, the chapter explores how they have been adopted by the community of interpreters and their clients, what the main challenges are in this process, which approaches research has taken to illuminate different aspects of technology interfacing with interpreting, and which areas warrant further research.

    In response to increasing mobility and migration in Europe, the European Directive 2010/64/EU on strengthening the rights to interpretation and translation in criminal proceedings has highlighted the importance of quality in legal translation and interpreting. At the same time, the economic situation is putting pressure on public services and translation/interpreting service providers alike, jeopardizing quality standards and fair access to justice. With regard to interpreting, the use of videoconference technology is now being widely considered as a potential solution for gaining cost-effective and timely access to qualified legal interpreters. However, this gives rise to many questions, including: how technological mediation through videoconferencing affects the quality of interpreting; how this is related to the actual videoconference setting and the distribution of participants; and ultimately whether the different forms of video-mediated interpreting are sufficiently reliable for legal communication. It is against this backdrop that the AVIDICUS Project (2008-11), co-funded by the European Commission’s Directorate-General Justice, set out to research the quality and viability of video-mediated interpreting in criminal proceedings. This volume, which is based on the final AVIDICUS Symposium in 2011, presents a cross-section of the findings from AVIDICUS and complementary research initiatives, as well as recommendations for judicial services, legal practitioners and police officers, and legal interpreters.

    Katriina L. Whitaker, Demi Krystallidou, Emily D. Williams, Georgia Black, Cecilia Vindrola-Padros, Sabine Braun, Paramjit Gill (2021)Addressing language as a barrier to healthcare access and quality, In: British Journal of General Practice72(714)pp. 4-5 Royal College of General Practitioners

    International migration has increased rapidly over the past 20 years, with an estimated 281 million people living outside their country of birth. Similarly, migration to the UK has continued to rise over this period; current migration is estimated to be over 700,000 per year (net migration of over 300,000). With migration comes linguistic diversity, and in healthcare, this often translates into linguistic discordance between patients and healthcare professionals. This can result in communication difficulties that lead to lower quality of care and poor outcomes. COVID-19 has heightened inequalities in relation to language: communication barriers, defined as barriers in understanding or accessing key information on healthcare and challenges in reporting on health conditions, are known to have compounded risks for migrants in the context of COVID-19. Digitalisation of healthcare has further amplified inequalities in primary care for migrant groups.

    Sabine Braun, JL Taylor, J Miler-Cassino, Z Rybińska, K Balogh, E Hertog, Y vanden Bosch, D Rombouts (2012)Training in video-mediated interpreting in criminal proceedings: modules for interpreting students, legal interpreters and legal practitioners, In: Sabine Braun, J Taylor (eds.), Videoconference and Remote Interpreting in Criminal Proceedingspp. 233-288 Intersentia

    Because of the scarcity of training opportunities in legal interpreting, and the non-existence of training in video-mediated legal interpreting per se, both from the point of view of the legal interpreters themselves and that of the legal professionals who work with interpreters, the AVIDICUS Project included as one of its core objectives to devise and pilot three training modules on video-mediated interpreting: one for legal practitioners, including the police; one for interpreters working in the legal services; and one for interpreting students. This chapter presents the three training modules, designed and developed by the AVIDICUS Project. Following a discussion of the background context to the need for training and the technological aspects of such training, the module for student interpreters is presented, followed by the legal interpreters' module, and finally the module aimed at legal practitioners and police officers.

    Michael Carl, Sabine Braun (2017)Translation, interpreting and new technologies, In: K Malmkjaer (eds.), The Routledge Handbook of Translation Studies and Linguisticspp. 374-390 Routledge

    The translation of written language, the translation of spoken language and interpreting have traditionally been separate fields of education and expertise, and the technologies that emulate and/or support those human activities have been developed and researched using different methodologies and by different groups of researchers. Although recent increase in synergy between these well-established fields has begun to blur the boundaries, this section will adhere to the three-fold distinction and begin by giving an overview of key concepts in relation to written-language translation and technology, including computer-assisted translation (CAT) and fully automatic machine translation (MT). This will be followed by an overview of spoken-language translation and technology, which will make a distinction between written translation products (speech-to-text translation, STT) and spoken translation products (speech-to-speech translation, SST). The key concepts of information and communications technology (ICT) supported interpreting, which is currently separate from the technological developments in written- and spoken-language translation, will be outlined in a third section and a fourth will provide an overview of current usages of translation and interpreting technologies.

    S Braun (2012)Recommendations for the use of video-mediated interpreting in criminal proceedings, In: S Braun, J Taylor (eds.), "Videoconference and Remote Interpreting in Criminal Proceedings"pp. 301-328 Intersentia

    Aims: Popular and professional media play a significant role in shaping the social construction of prescription medication. How people understand medication is largely determined by how it is represented in the texts that they read. Different cultural and national contexts create significant variation in these constructions. This research looks at the social constructions of the five most frequently discussed antidepressants in Chinese and British newspapers and medical journals during a 15-year period: fluoxetine, citalopram, sertraline, venlafaxine and duloxetine. We aimed to discover themes of importance in different national contexts and how Chinese and British people's understanding of and attitudes towards antidepressants have been influenced by such constructions. Methodology: This research makes use of a corpus linguistics tool called Sketch Engine to discover patterns in the discussion of antidepressants. Four corpora were built for this research project:
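
    As a simplified, hypothetical stand-in for the kind of pattern-finding supported by a tool such as Sketch Engine (whose own interface and query language are not shown here), the snippet below counts the words that co-occur near a drug name in a toy corpus of invented sentences.

```python
# Simplified stand-in (not Sketch Engine): count words co-occurring near a
# drug name within a small window, using invented example sentences.
from collections import Counter
import re

corpus = [
    "Doctors increasingly prescribe fluoxetine for depression and anxiety.",
    "The newspaper questioned whether fluoxetine is overprescribed.",
    "Patients reported side effects after taking sertraline.",
]

def collocates(sentences, node, window=3):
    counts = Counter()
    for sentence in sentences:
        tokens = re.findall(r"[a-z]+", sentence.lower())
        for i, tok in enumerate(tokens):
            if tok == node:
                neighbours = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
                counts.update(neighbours)
    return counts

print(collocates(corpus, "fluoxetine").most_common(5))
```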

    S Braun (2010)“These people I was taking care of their horses for, they owned Tennessee Walkers”: on ‘spokenness’ in English, its acceptance and pedagogical implications, In: M Albl-Mikasa, S Braun, S Kalina (eds.), Dimensionen der Zweitsprachenforschung – Dimensions of Second Language Research. Festschrift für Kurt Kohn zum 65. Geburtstag Narr

    Spoken language is often perceived as a deviation from the norm. This chapter highlights some of the characteristic features of ‘spokenness’ and the rationale behind them. Using English as the exemplar case, it then reports the findings of a study that investigated how the perception and acceptance of such features is influenced by the medium and mode in which spoken language is encountered (face-to-face, video, transcript) and how this differs between native speakers and non-native speakers. At the end, the pedagogical implications of the study will be discussed.

    Sabine Braun, Kim Starr (2022)Automating Audio Description, In: Christopher Taylor, Elisa Perego (eds.), The Routledge Handbook of Audio Descriptionpp. 391-406 Routledge

    Audio description (AD) has established itself as a media accessibility service but its reliance on the specialised skills of audio describers poses challenges to broadening the service in response to changing legislation and exponential growth of audiovisual content across different media and platforms. At the same time, research on automating the description of images and video scenes has shown initial successes owing to advances in computer vision and machine learning. Although the machine's ability to capture and coherently describe the nuances and sequencing characteristic of audiovisual narratives is currently limited, the developments in computer vision have raised the question of whether automated or semi-automated methods of describing audiovisual content can be used to produce AD without compromising quality. This chapter analyses the state of the art and challenges of machine-generated image and video description and examines current approaches to advancing this field. It then reports on early practical initiatives and outlines future directions in this area. The focus is on complementarity and additionality, such as the use of automated methods to increase the availability of meaningful AD and the use of human knowledge about AD to advance such methods, as opposed to focussing on attempts to replace the human effort.

    Sabine Braun, Kim Starr, Jaleh Delfani, Liisa Tiittula, Jorma Laaksonen, Karel Braeckman, Dieter Van Rijsselbergen, Sasha Lagrillière, Lauri Saarikoski (2021)When Worlds Collide: AI-Created, Human-Mediated Video Description Services and the User Experience, In: HCI International 2021 - Late Breaking Papers: Cognition, Inclusion, Learning, and Culture. HCII 2021pp. 147-167 Springer

    This paper reports on a user-experience study undertaken as part of the H2020 project MeMAD (‘Methods for Managing Audiovisual Data: Combining Automatic Efficiency with Human Accuracy’), in which multimedia content describers from the television and archive industries tested Flow, an online platform designed to assist the post-editing of automatically generated data, in order to enhance the production of archival descriptions of film content. Our study captured the participant experience using screen recordings, the User Experience Questionnaire (UEQ), a benchmarked interactive media questionnaire and focus group discussions, reporting a broadly positive post-editing environment. Users described the platform’s role in the collation of machine-generated content descriptions, transcripts, named entities (location, persons, organisations) and translated text as helpful and likely to enhance creative outputs in the longer term. Suggestions for improving the platform included the addition of specialist vocabulary functionality, shot-type detection, film-topic labelling, and automatic music recognition. The limitations of the study are, most notably, the current level of accuracy achieved in computer vision outputs (i.e. automated video descriptions of film material), which has been hindered by the lack of reliable and accurate training data, and the need for a more narratively oriented interface which allows describers to develop their storytelling techniques and build descriptions which fit within a platform-hosted storyboarding functionality. While this work has value in its own right, it can also be regarded as paving the way for the future (semi)automation of audio descriptions to assist audiences experiencing sight impairment, cognitive accessibility difficulties or for whom ‘visionless’ multimedia consumption is their preferred option.

    Sabine Braun (2016)The European AVIDICUS projects: Collaborating to assess the viability of video-mediated interpreting in legal proceedings, In: European Journal of Applied Linguistics4(1)pp. 173-180 Walter de Gruyter

    This paper reports on a long-term European project collaboration between academic researchers and non-academic institutions in Europe to investigate the quality and viability of video-mediated interpreting in legal proceedings (AVIDICUS: Assessment of Video-Mediated Interpreting in the Criminal Justice System).