Dr Tomasz Korybski PhD


About

Areas of specialism

Remote Simultaneous Interpreting, Interpreting and Technologies

University roles and responsibilities

  • Research Fellow

    My qualifications

    2013
    PhD in Applied Linguistics
    University of the West of England

    Previous roles

    01 October 2016 - 30 December 2019
    Adjunct Professor
    Institute of Applied Linguistics, University of Warsaw

    Publications

    Elena Davitti, Tomasz Korybski, Sabine Braun (2025) Video-Mediated Interpreting, In: The Routledge Handbook of Interpreting, Technology and AI. Routledge

    This handbook provides a comprehensive overview of the history, development, use, and study of the evolving relationship between interpreting and technology, addressing the challenges and opportunities brought by advances in AI and digital tools. Encompassing a variety of methods, systems, and devices applied to interpreting as a field of practice as well as a study discipline, this volume presents a synthesis of current thinking on the topic and an understanding of how technology alters, shapes, and enables the interpreting task. The handbook examines how interpreting has evolved through the integration of both purpose-built and adapted technologies that support, automate, or even replace (human) interpreting tasks and offers insights into their ethical, practical, and socio-economic implications. Addressing both signed and spoken language interpreting technologies, as well as technologies for language access and media accessibility, the book draws together expertise from varied areas of study and illustrates overlapping aspects of research. Authored by a range of practicing interpreters and academics from across five continents, this is the essential guide to interpreting and technology for both advanced students and researchers of interpreting and professional interpreters.

    Elena Davitti, Tomasz Korybski, Constantin Orasan, Sabine Braun (2025) Quality-related aspects, In: Elena Davitti, Tomasz Grzegorz Korybski, Sabine Braun (eds.), The Routledge Handbook of Interpreting, Technology and AI, pp. 305-326. Routledge

    This chapter reviews quality as a multifaceted and highly debated yet central concept within interpreting studies. It highlights the crucial relevance of quality with regard to deepening our understanding of how different technology-related modalities of interpreting influence current and future practice. The chapter begins by revisiting foundational dimensions of quality in interpreting and examines the key challenges posed by the intersection of technology and interpreting. It then synthesises how quality can be evaluated and measured and considers the added complexities introduced by technology. To this end, the chapter provides a comprehensive review of the methods and perspectives used to assess quality in various technologised workflows, grouping them into three categories: top-down, bottom-up, and automated methods. The chapter also demonstrates how the integration of technology into different interpreting practices makes the process of assessing quality even more intricate, yet increasingly needed, while also outlining the benefits and drawbacks of technology-driven quality assessment. The chapter concludes by discussing future developments, particularly the need for a hybrid approach to quality evaluation that balances scalability with deeper, more nuanced insights to address the complex interplay between interpreting and technology.

    Elena Davitti, Annalisa Sandrelli, Pablo Romero-Fresco, Tomasz Korybski, Zoe Moores, Anna-Stiina Wallinheimo (2023) Shaping Multilingual Access Through Respeaking Technology (2020-2023, ES/T002530/1) Unpublished

    Tomasz Grzegorz Korybski, Elena Davitti, Constantin Orăsan, Sabine Braun (2022) A Semi-Automated Live Interlingual Communication Workflow Featuring Intralingual Respeaking: Evaluation and Benchmarking, In: LREC 2022: 13th International Conference on Language Resources and Evaluation, pp. 4405-4413. European Language Resources Association (ELRA)

    In this paper, we present a semi-automated workflow for live interlingual speech-to-text communication which seeks to reduce the shortcomings of existing ASR systems: a human respeaker works with speaker-dependent speech recognition software (e.g., Dragon NaturallySpeaking) to deliver punctuated same-language output of higher quality than that obtained using out-of-the-box automatic speech recognition of the original speech. This is fed into a machine translation engine (the EU's eTranslation) to produce live-caption-ready text. We benchmark the quality of the output against the output of best-in-class (human) simultaneous interpreters working with the same source speeches from plenary sessions of the European Parliament. To evaluate the accuracy and facilitate the comparison between the two types of output, we use a tailored annotation approach based on the NTR model (Romero-Fresco and Pöchhacker, 2017). We find that the semi-automated workflow combining intralingual respeaking and machine translation is capable of generating outputs that are similar in terms of accuracy and completeness to the outputs produced in the benchmarking workflow, although the small scale of our experiment requires caution in interpreting this result.
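    The workflow described in this abstract is, at its core, a two-stage pipeline: intralingual respeaking produces a punctuated same-language transcript, which is then machine-translated and pushed out as live captions. A minimal sketch of that pipeline shape is given below; the function names (translate_segment, push_caption) and the segment-by-segment processing are hypothetical placeholders for illustration only, not the APIs of Dragon NaturallySpeaking, eTranslation, or the tooling used in the paper.

from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class CaptionSegment:
    source_text: str  # punctuated same-language segment from the respeaker + speaker-dependent ASR stage
    target_text: str  # machine-translated text ready to be displayed as a live caption

def run_captioning_pipeline(
    respoken_segments: Iterable[str],          # hypothetical stream of respoken, ASR-transcribed segments
    translate_segment: Callable[[str], str],   # hypothetical wrapper around a machine translation engine
    push_caption: Callable[[str], None],       # hypothetical caption renderer/broadcaster
) -> list[CaptionSegment]:
    """Second stage of the semi-automated workflow: translate each respoken segment and emit it as a live caption."""
    captions = []
    for source in respoken_segments:
        target = translate_segment(source)
        push_caption(target)
        captions.append(CaptionSegment(source_text=source, target_text=target))
    return captions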

    Tomasz Korybski, Elena Davitti, Constantin Orasan, Sabine Braun (2023) MATRIC: Machine Translation and Respeaking in Interlingual Communication Unpublished

    Muhammad Ahmed Saeed, Eloy Rodriguez Gonzalez, Tomasz Korybski, Elena Davitti, Sabine Braun (2023) Comparing Interface Designs to Improve RSI Platforms: Insights from an Experimental Study, In: Proceedings of the International Conference HiT-IT 2023, pp. 147-156

    Remote Simultaneous Interpreting (RSI) platforms enable interpreters to provide their services remotely and work from various locations. However, research shows that interpreters perceive interpreting via RSI platforms to be more challenging than on-site interpreting in terms of performance and working conditions [1]. While poor audio quality is a major concern for RSI [2,3], another frequently highlighted issue is the impact of the interpreter's visual environment on various aspects of RSI. However, this aspect has received little attention in research. The study reported in this article investigates how various visual aids and ways of presenting visual information can support interpreters and improve their user experience (UX). The study used an experimental design and tested 29 professional conference interpreters on different visual interface options, as well as eliciting their work habits, perceptions and working environments. The findings reveal a notable increase in the frequency of RSI since the beginning of the COVID-19 pandemic. Despite this increase, most participants still preferred on-site work. The predominant platform for RSI among the interpreters sampled was Zoom, which has a minimalist interface that contrasts with interpreter preferences for maximalist, information-rich bespoke RSI interfaces. Overall, the study contributes to supporting the visual needs of interpreters in RSI.

    Eloy Rodríguez González, Muhammad Ahmed Saeed, Tomasz Korybski, Elena Davitti, Sabine Braun (2023) Assessing the impact of automatic speech recognition on remote simultaneous interpreting performance using the NTR Model, In: Proceedings of the International Workshop on Interpreting Technologies SAY IT AGAIN 2023

    The emergence of Simultaneous Interpreting Delivery Platforms (SIDPs) has opened up new opportunities for interpreters to provide cloud-based remote simultaneous interpreting (RSI) services. Similar to booth-based RSI, which has been shown to be more tiring than conventional simultaneous interpreting and more demanding in terms of information processing and mental modelling [11, 12], cloud-based RSI configurations are perceived as more stressful than conventional simultaneous interpreting and potentially detrimental to interpreting quality [2]. Computer-assisted interpreting (CAI) tools, including automatic speech recognition (ASR) [8], have been advocated as a means to support interpreters during cloud-based RSI assignments, but their effectiveness is under-explored. The study reported in this article experimentally investigated the impact of providing interpreters with access to an ASR-generated live transcript of the source speech while they were interpreting, examining its effect on their performance and overall user experience. As part of the experimental design, 16 professional conference interpreters performed a controlled interpreting test consisting of a warm-up speech (not included in the analysis) and four speeches, i.e., two lexically dense speeches and two fast speeches, presented in two different interpreting conditions, i.e., with and without ASR support. This article presents initial quantitative findings from the analysis of the interpreters' performance, which was conducted using the NTR Model [17]. Overall, the findings reveal a reduction in the total number of interpreting errors in the ASR condition. However, this is accompanied by a loss in stylistic quality.
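    For readers unfamiliar with the NTR Model referenced above, it assesses accuracy by counting translation (T) and recognition (R) errors against the number of words (N), with errors weighted by severity. A minimal sketch of an NTR-style accuracy calculation is given below, assuming the commonly cited formulation accuracy = (N − T − R) / N × 100 and severity weights of 0.25 (minor), 0.5 (major) and 1 (critical); the data structures and weights are illustrative assumptions rather than the exact annotation scheme applied in the study.

from dataclasses import dataclass

# Illustrative severity weights; the NTR literature commonly cites 0.25 / 0.5 / 1,
# but the precise scheme used in the study is not specified here.
SEVERITY_WEIGHTS = {"minor": 0.25, "major": 0.5, "critical": 1.0}

@dataclass
class AnnotatedError:
    kind: str       # "T" for translation error, "R" for recognition error
    severity: str   # "minor", "major" or "critical"

def ntr_accuracy(word_count: int, errors: list[AnnotatedError]) -> float:
    """NTR-style accuracy rate: (N - T - R) / N * 100, using severity-weighted error scores."""
    t = sum(SEVERITY_WEIGHTS[e.severity] for e in errors if e.kind == "T")
    r = sum(SEVERITY_WEIGHTS[e.severity] for e in errors if e.kind == "R")
    return (word_count - t - r) / word_count * 100

# Hypothetical example: a 1,200-word rendition with two minor translation errors
# and one major recognition error.
example = [AnnotatedError("T", "minor"), AnnotatedError("T", "minor"), AnnotatedError("R", "major")]
print(round(ntr_accuracy(1200, example), 2))  # 99.92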

    Muhammad Ahmed Saeed, Eloy Rodriguez Gonzalez, Tomasz Grzegorz Korybski, Elena Davitti, Sabine Braun (2022) Connected yet Distant: An Experimental Study into the Visual Needs of the Interpreter in Remote Simultaneous Interpreting, In: 24th HCI International Conference (HCII 2022) Proceedings, Part III. Springer

    Remote simultaneous interpreting (RSI) draws on Information and Communication Technologies to facilitate multilingual communication by connecting conference interpreters to in-person, virtual or hybrid events. Early solutions for RSI involved interpreters working in interpreting booths with ISO-standardised equipment. However, in recent years, cloud-based solutions for RSI have emerged, with innovative Simultaneous Interpreting Delivery Platforms (SIDPs) at their core, enabling RSI delivery from anywhere. SIDPs recreate the interpreter's console and work environment (Braun 2019) as a bespoke software/videoconferencing platform with interpretation-focused features. Although initial evaluations of SIDPs were conducted before the COVID-19 pandemic (e.g., DG SCIC 2019), research on RSI (booth-based and software-based) remains limited. Pre-pandemic research shows that RSI is demanding in terms of information processing and mental modelling (Braun 2007; Moser-Mercer 2005), and suggests that the limited visual input available in RSI constitutes a particular problem (Mouzourakis 2006; Seeber et al. 2019). In addition, initial explorations of cloud-based solutions suggest that there is room for improving the interfaces of widely used SIDPs (Bujan and Collard 2021; DG SCIC 2019). The experimental project presented in this paper investigates two aspects of SIDPs: the design of the interpreter interface and the integration of supporting technologies. Drawing on concepts and methods from user experience research and human-computer interaction, we explore what visual information is best suited to support the interpreting process and the interpreter-machine interaction, how this information is best presented in the interface, and how automatic speech recognition can be integrated into an RSI platform to aid/augment the interpreter's source-text comprehension.