Dr Muhammad Ahmed Saeed
About
My research project
Exploring the visual interface in Remote Simultaneous Interpreting

Muhammad Ahmed Saeed is an engineer by background and a PhD student at the University of Surrey. Having conducted quantitative research on the impact of HMI design on chemical engineers, he is currently looking into the visual needs of interpreters. His research interests include human-machine interaction (HMI), the use of information and communication technologies, and human performance analysis tools such as eye tracking, ECG, and wearables.
My research design includes:
- Analyzing interpreter requirements in relation to current RSI delivery platforms, based on literature and focus group findings
- Designing and piloting a user-centric RSI mockup platform
- Collecting data from the experimentation process
- Analyzing data from 30 participants
- Conducting thematic analysis of follow-up interviews with 10 participants
Supervisors
Publications
This study explores the challenges that interpreters face as a result of the immaturity of Remote Simultaneous Interpreting (RSI) platforms, and how visual design impacts user experience, by examining the visual demands placed on remote simultaneous interpreters. A survey and an experimental study were conducted with 29 remote simultaneous interpreters to gather data on their visual needs and preferences. The results of this study can inform the design of RSI platforms and training programs to improve user-friendliness and mitigate negative impacts on the interpreter's experience.
Remote simultaneous interpreting (RSI) draws on Information and Communication Technologies (ICTs) to facilitate multilingual communication by connecting conference interpreters to in-person, virtual or hybrid events. Early solutions for RSI involved interpreters working in interpreting booths with physical hardware. In recent years, however, cloud-based solutions for RSI have emerged, with innovative Simultaneous Interpreting Delivery Platforms (SIDPs) at their core, enabling RSI delivery from anywhere. Initial explorations of these cloud-based solutions suggest that there is room for improvement in many of the widely used SIDPs. This chapter outlines an ongoing experimental study that investigates two aspects of SIDPs: the design of the interpreter interface and the integration of automatic speech recognition (ASR) into the interface to aid/augment the interpreter’s source-text comprehension. Preliminary pilot study data suggest that interpreters prefer cleaner interfaces with a better view of the speaker’s hand gestures and body language. Performance analysis of a subsample of three participants indicates that while the most experienced interpreter performed similarly across the experimental conditions (i.e., presentation of the source speech with/without an ASR-generated transcript), differences were apparent for the other two interpreters.
Remote simultaneous interpreting (RSI) draws on Information and Communication Technologies to facilitate multilingual communication by connecting conference interpreters to in-person, virtual or hybrid events. Early solutions for RSI involved interpreters working in interpreting booths with ISO-standardised equipment. In recent years, however, cloud-based solutions for RSI have emerged, with innovative Simultaneous Interpreting Delivery Platforms (SIDPs) at their core, enabling RSI delivery from anywhere. SIDPs recreate the interpreter’s console and work environment (Braun 2019) as a bespoke software/videoconferencing platform with interpretation-focused features. Although initial evaluations of SIDPs were conducted before the Covid-19 pandemic (e.g., DG SCIC 2019), research on RSI (both booth-based and software-based) remains limited. Pre-pandemic research shows that RSI is demanding in terms of information processing and mental modelling (Braun 2007; Moser-Mercer 2005), and suggests that the limited visual input available in RSI constitutes a particular problem (Mouzourakis 2006; Seeber et al. 2019). In addition, initial explorations of the cloud-based solutions suggest that there is room for improving the interfaces of widely used SIDPs (Buján and Collard 2021; DG SCIC 2019). The experimental project presented in this paper investigates two aspects of SIDPs: the design of the interpreter interface and the integration of supporting technologies. Drawing on concepts and methods from user experience research and human-computer interaction, we explore what visual information is best suited to support the interpreting process and the interpreter-machine interaction, how this information is best presented in the interface, and how automatic speech recognition can be integrated into an RSI platform to aid/augment the interpreter’s source-text comprehension.
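Purely as an illustration (this is not from the paper), one way to picture the ASR integration under investigation is a rolling transcript pane that keeps the most recent finalised ASR segments visible beside the speaker video, so the interpreter can glance at recent source text without scrolling. The minimal Python sketch below uses only the standard library; the class, method names, and segment source are hypothetical.

```python
from collections import deque

class TranscriptPane:
    """Hypothetical rolling pane holding the most recent ASR segments."""

    def __init__(self, max_lines: int = 3):
        # deque(maxlen=...) silently drops the oldest line when full,
        # so the pane never grows beyond what fits on screen.
        self.lines = deque(maxlen=max_lines)

    def on_asr_segment(self, text: str) -> None:
        """Called whenever the ASR engine finalises a segment."""
        self.lines.append(text)

    def render(self) -> str:
        """Text the interface would draw next to the speaker video."""
        return "\n".join(self.lines)

# Simulated stream of finalised ASR segments.
pane = TranscriptPane(max_lines=2)
for segment in ("Good morning, everyone,",
                "and welcome to today's session.",
                "Our first speaker will present..."):
    pane.on_asr_segment(segment)
print(pane.render())  # only the two most recent segments remain visible
```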
In the modern study of system behaviour and analysis, understanding how a system will respond under given conditions is considered vital. With object behaviour as the primary objective and object detection as the core instrument, this project identifies a desired object and tracks its motion. This paper provides an overview of how a closed-loop control system was designed using a camera as visual input. The video signal is analyzed to detect color and shape similarities to a desired object. When the similarity index is high, the Raspberry Pi controller generates a pulse width modulation (PWM) signal to regulate the angular position of the rotary actuator, aligning the camera's field of view with the desired object's position. Experimental results from the realized system and the faults identified in it are discussed in the conclusion.
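As a rough sketch of the pipeline described above, the snippet below combines OpenCV color thresholding with proportional servo control over PWM on a Raspberry Pi. It simplifies the published system to color matching only (the shape-similarity check is omitted), and the HSV bounds, GPIO pin, and control gain are illustrative assumptions, not values from the project.

```python
import cv2
import RPi.GPIO as GPIO

SERVO_PIN = 18               # assumed BCM pin driving the rotary actuator
LOWER_HSV = (100, 120, 70)   # hypothetical bounds for a blue target
UPPER_HSV = (130, 255, 255)
GAIN = 0.02                  # proportional gain: pixels of error -> degrees

GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)
pwm = GPIO.PWM(SERVO_PIN, 50)   # standard 50 Hz hobby-servo signal
pwm.start(7.5)                  # ~90 degrees (centred)
angle = 90.0

cap = cv2.VideoCapture(0)
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Threshold the frame in HSV space to isolate the target color.
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            continue
        c = max(contours, key=cv2.contourArea)   # largest match = target
        m = cv2.moments(c)
        if m["m00"] > 0:
            cx = m["m10"] / m["m00"]             # target centroid (x)
            error = cx - frame.shape[1] / 2      # offset from frame centre
            # Nudge the servo so the camera re-centres on the object.
            angle = min(180.0, max(0.0, angle - GAIN * error))
            # Map 0-180 degrees onto the 2.5-12.5 % duty-cycle range.
            pwm.ChangeDutyCycle(2.5 + angle / 18.0)
finally:
    cap.release()
    pwm.stop()
    GPIO.cleanup()
```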