Professor Minhua Eunice Ma
About
Biography
Professor Minhua Eunice Ma joined the University of Surrey in June 2024 as Pro Vice-Chancellor Education. Prior to this, she was affiliated with the Medical Sciences Division at the University of Oxford. Professor Ma has previously held senior academic leadership roles, including Deputy Vice-Chancellor and Provost at Falmouth University, and Dean of the School of Computing & Digital Technologies at Staffordshire University. Her expertise spans teaching, quality assurance and research, both in the UK and internationally.
Professor Ma plays a vital role in quality assurance bodies in the UK and globally. She served on the Office for Students’ (OfS) Teaching Excellence Framework (TEF) Panel in 2023, as a review expert for the QAA, and as an Advisory Board member for the QAA Computing Subject Benchmark, Quality and Qualifications Ireland (QQI), and the Netherlands Quality Agency. She also serves on international research councils, including Horizon Europe, the French National Research Agency, the Swiss National Science Foundation, and the Academy of Finland.
Additionally, Professor Ma’s pioneering work in games technology and digital health has secured a total of £12m in research funding from sources such as UKRI, the EU, the NHS, NESTA, the UK government and charities. She has made significant contributions to digital health with her research in serious games, focusing on applications of games technology and virtual and augmented reality in stroke rehabilitation, cystic fibrosis, autism, medical simulation, preventing gender-based violence, and adolescent mental health. Her work includes 145 peer-reviewed publications and 13 books, and she has supervised 24 PhD students, 17 of whom have successfully completed.
Professor Ma is Editor-in-Chief for the Serious Games section of the Elsevier journal Entertainment Computing and Founding Chair of the International Joint Conference on Serious Games. Since 2010, she has been the elected UK representative for the International Federation for Information Processing Working Group (IFIP WG14.8) on Serious Games, shaping the future of serious games research on a global scale.
Publications
Ambient sound plays a critical part in all media related to the moving image, video games, and live performance. It defines a scene's place and time, temporalises it towards a future goal, and is key to creating audience immersion and belief in what we see. The process of recording, manipulating or designing audio elements is usually handled by competent professionals. Can a different approach be taken to the way we design sound ambiences, and what relationship and role does ambient sound have to media such as film and games? Using the object-oriented programming environment Max/MSP, a low-cost serious gaming interface was designed and implemented: the Ambience Designer. Together with a specially crafted tabletop interface, it rids the process of its esoteric nature and allows amateurs to design and interact with the ambient sounds of birds, wind and traffic for home movies and indie games. The Ambience Designer strips away the esoteric conventions of audio design in a Digital Audio Workstation (DAW) and uses intuitive user inputs that connect with our everyday subjective experience of sound, such as distance, placement, and intensity, in place of parameters that only professionals could understand and use. Future developments include moving the Ambience Designer to a commercial multi-touch table/tablet such as Microsoft Surface or Apple iPad, which will enable more intuitive multi-touch gestures such as tap, scroll, pan, rotate, and pinch. The Ambience Designer was evaluated among working professionals, amateurs and the general public, and initial findings were promising. During the survey, participants also suggested future applications of the Ambience Designer, such as a creative and educational tool for children or people with special needs, for therapeutic purposes, to trigger memories in the elderly, for digital storytelling, and for post-production sound dubbing for picture.
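The abstract does not give implementation details of how subjective controls map onto audio parameters; the sketch below is only one plausible illustration of that idea, with all function names, ranges and formulas being hypothetical rather than taken from the Ambience Designer itself.

```python
import math

def ambience_params(distance, placement, intensity):
    """Map subjective controls to conventional audio parameters.

    distance:  0.0 (near) .. 1.0 (far)
    placement: -1.0 (left) .. 1.0 (right)
    intensity: 0.0 (sparse) .. 1.0 (dense)
    """
    # Farther sources are quieter and duller (a simple rolloff model).
    gain = max(0.0, 1.0 - distance) ** 2
    cutoff_hz = 20000.0 * (1.0 - 0.8 * distance)
    # Equal-power panning derived from a single placement control.
    angle = (placement + 1.0) * math.pi / 4.0  # 0 .. pi/2
    pan_left, pan_right = math.cos(angle), math.sin(angle)
    # Intensity drives how often a sound event (bird call, car pass) triggers.
    events_per_minute = 2 + intensity * 28
    return {
        "gain": gain,
        "cutoff_hz": cutoff_hz,
        "pan": (pan_left, pan_right),
        "events_per_minute": events_per_minute,
    }
```

The point of such a mapping is that the user only ever touches "how far away" and "how busy", never gain curves or filter cutoffs.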
In this paper we describe the use of 3D games technology in human anatomy education based on our MSc in Medical Visualisation and Human Anatomy teaching practice, i.e. students design and develop serious games for anatomy education using the Unity 3D game engine. Students are engaged in this process not only as consumers of serious games, but as authors and creators. The benefits of this constructionist learning approach are discussed. Five domains of learning are identified, in terms of what anatomy students, tutors, and final users (players) can learn through serious games and their development process. We also justify the 3D engine selected for serious game development and discuss the main obstacles and challenges to using this constructionist approach to teach non-computing students. Finally, we recommend that the serious-game construction approach be adopted in other academic disciplines in higher education.
This study aims at critically reviewing recently published papers on the use of games for serious purposes to support young people's mental and moral development. The objectives of this review are: (a) to present the empirical evidence for games for serious purposes as an effective vehicle to transfer moral value orientations and positive emotions; (b) to identify the explored areas of game impact and evaluate the effectiveness of game impact in previous studies; (c) to summarize the different game assessments and study designs of the previous studies; (d) to define future research perspectives. After searching several influential databases, 26 relevant articles were included in the study. This review provides empirical evidence that games for serious purposes may improve young people's prosocial behaviour, empathy, emotion regulation, mental health and moral beliefs. Furthermore, these games can change people's attitudes, affect their behaviour and even influence their psychological state. The review makes a comprehensive summary of game assessment, including in-game and out-of-game assessment, and a detailed analysis of the study designs used in previous work. The current findings reveal that studies of prosocial games are of relatively good quality, and that there is great potential for the study of games regarding empathy and moral development. In addition, in accordance with Johnson and Hall's Job Demand-Control-Support (JDCS) model, a new and innovative way of classifying games is proposed: purpose-driven, action-driven, mode-driven and game-context-driven.
In this paper we present Interact, a mixed reality virtual survivor for Holocaust education. It was created to preserve the powerful and engaging experience of listening to, and interacting with, Holocaust survivors, allowing future generations of audiences access to their unique stories. Interact demonstrates how advanced filming techniques, 3D graphics and natural language processing can be integrated and applied to specially recorded testimonies to enable users to ask questions and receive answers from the virtualised individual. This provides new and rich interactive narratives of remembrance through which to engage with primary testimony. We discuss the design and development of Interact, and argue that this new form of mixed reality is a promising medium for overcoming the uncanny valley.
Holographic immersive technology such as ‘Mixed Reality’ is now extending into the cultural heritage sector, opening new prospects for engaging visitors in museums. This paper investigates the level of engagement in the museum space by conducting observations and timing studies at the Egyptian Museum in Cairo. An interactive mixed reality system named ‘MuseumEye’ was developed, using the Microsoft HoloLens as a mixed reality head-mounted display to boost the level of engagement with the exhibited antiquities. The system was experienced by 171 visitors to the Egyptian Museum, and a further observation study recorded their behaviours and the time they spent next to each antiquity. The results of this study show that the time spent engaging with the holographic visuals and the exhibits increased fourfold compared with the time visitors spent without technological aids. These immersive technologies can be an important vehicle for driving the tourism industry towards successful, engaging experiences.
Designing computer games or adapting commercial off-the-shelf games to support learning and teaching has become a promising frontier of education, since games technology is inexpensive, widely available, and fun and entertaining for people of all ages, especially for the generation that grew up in constant contact with digital media. As a subset of serious games, computer-based edutainment dates back to the early 1960s and began to flourish around 2002. In this chapter, we provide the reader with an overview of the book and a perspective on future trends in serious games.
Since the Augmented Reality (AR) headset ‘Microsoft HoloLens’ was released in 2016, the academic and industrial communities have witnessed a marked transformation in the perception of AR applications. Despite this breakthrough, most HoloLens users have explicitly reported the narrow field of view (FOV) that crops the virtual augmentation from the viewer’s sight to a small window of 34° (Bimber & Bruns in PhoneGuide: Adaptive image classification for mobile museum guidance, 2011). This limitation can result in losing pre-made functions and visuals in an AR application. This study therefore introduces a spatial UI designed as a way around the narrow FOV from which the HoloLens suffers. The UI was a crucial part of an AR museum system, which was evaluated by 9 experts in HCI, visual communication and museum engagement studies. The results showed positive feedback on the usability of the system and the user experience. This method can help HoloLens developers to extend their applications’ functionality while avoiding missing content.
Numerous temporal relations of verbal actions have been analysed in terms of various grammatical means of expressing verbal temporalisation such as tense, aspect, duration and iteration. Here the temporal relations within verb semantics, particularly ordered pairs of verb entailment, are studied using Allen's interval-based temporal formalism. Their application to the compositional visual definitions in our intelligent storytelling system, CONFUCIUS, is presented, including the representation of procedural events, achievement events and lexical causatives. In applying these methods we consider both language modalities and visual modalities since CONFUCIUS is a multimodal system.
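Allen's formalism, which the abstract applies to ordered pairs of verb entailment, classifies any two time intervals into one of thirteen relations. As a minimal sketch (not CONFUCIUS code; the interval encoding is assumed), the classification can be written as:

```python
def allen_relation(a, b):
    """Classify two intervals (start, end), with start < end, into one
    of Allen's thirteen temporal relations."""
    a1, a2 = a
    b1, b2 = b
    if a2 < b1: return "before"
    if b2 < a1: return "after"
    if a2 == b1: return "meets"
    if b2 == a1: return "met-by"
    if a1 == b1 and a2 == b2: return "equal"
    if a1 == b1: return "starts" if a2 < b2 else "started-by"
    if a2 == b2: return "finishes" if a1 > b1 else "finished-by"
    if b1 < a1 and a2 < b2: return "during"
    if a1 < b1 and b2 < a2: return "contains"
    if a1 < b1 < a2 < b2: return "overlaps"
    return "overlapped-by"
```

For example, an achievement event such as "arrive" occupying the final instant of a "travel" activity would be classified as "finishes", which is the kind of ordered verb-entailment pair the paper discusses.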
Various English verb classifications have been analyzed in terms of their syntactic and semantic properties, and conceptual components, such as syntactic valency, lexical semantics, and semantic/syntactic correlations. Here the visual semantics of verbs, particularly their visual roles, somatotopic effectors, and level-of-detail, is studied. We introduce the notion of visual valency and use it as a primary criterion to recategorize eventive verbs for language visualization (animation) in our intelligent multimodal storytelling system, CONFUCIUS. The visual valency approach is a framework for modelling deeper semantics of verbs. In our ontological system we consider both language and visual modalities since CONFUCIUS is a multimodal system.
Simulating the motion of Virtual Reality (VR) objects and humans has seen important developments in the last decade. However, realistic virtual human animation generation remains a major challenge, even though applications are numerous, from VR games to medical training. This paper proposes different methods for animating virtual humans, including blending simultaneous animations of various temporal relations using multiple animation channels, minimal visemes for lip synchronisation, and space sites on virtual human and 3D object models for object grasping and manipulation. We present our work in our natural language visualisation (animation) system, CONFUCIUS, and describe how the proposed approaches are employed in CONFUCIUS' animation engine.
This paper describes the development of a Virtual Reality (VR) based therapeutic training system aimed at encouraging stroke patients with upper limb motor disorders to practise physical exercises. The system contains a series of physically-based VR games. Physically-based simulation provides realistic motion of virtual objects by modelling their behaviour and their responses to external force and torque according to the laws of physics. We present opportunities for applying physics simulation techniques in VR therapy and discuss their potential therapeutic benefits for motor rehabilitation. A framework for physically-based VR rehabilitation systems is described, consisting of functional tasks and game scenarios designed to encourage patients' physical activity in highly motivating, physics-enriched virtual environments where factors such as gravity can be scaled to adapt to an individual patient's abilities and in-game performance.
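The idea of scaling gravity to a patient's ability can be sketched in a few lines. This is an illustrative adaptation rule only, not the paper's actual algorithm; the function name, the success-rate input and the scaling bounds are all assumptions.

```python
def scaled_gravity(base_gravity, success_rate, lo=0.2, hi=1.0):
    """Scale the gravity applied to virtual objects so a task stays
    achievable: low recent success -> weaker gravity (slower, easier
    objects), high success -> closer to real-world gravity."""
    success_rate = min(1.0, max(0.0, success_rate))
    factor = lo + (hi - lo) * success_rate
    return base_gravity * factor
```

A game loop would recompute this each session (or each trial) from in-game performance, so difficulty tracks the patient's recovery.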
This paper details the development and testing of a serious-game based movement therapy aimed at encouraging stroke patients with upper limb motor disorders to practise physical exercises. The system contains a series of Virtual Reality (VR) games. A framework for VR movement therapy is described which consists of a number of serious games designed to encourage patients' physical activity in highly motivating virtual environments where various factors such as size and gravity can be scaled to adapt to an individual patient's abilities and in-game performance. Another goal of this study is to determine whether the provision of serious-games based interventions improves motor outcome after stroke. A pilot study with 8 participants who had a first hemispheric stroke shows improvements on impairment and functional measures both shortly after completion of the intervention and 6 weeks later. Despite its limitations, the findings of this study support the effectiveness of serious games in the treatment of participants with hemiplegia. The study also raises awareness of the benefits of using serious games in movement therapy after stroke.
Producing plays, films or animations is a complex and expensive process involving various professionals and media. Our proposed software system, SceneMaker, aims to facilitate this creative process by automatically interpreting natural language film scripts and generating multimodal, animated scenes from them. During the generation of the story content, SceneMaker gives particular attention to emotional aspects and their reflection in fluency and manner of actions, body posture, facial expressions, speech, scene composition, timing, lighting, music and camera work. Related literature and software on Natural Language Processing, in particular textual affect sensing, affective embodied agents, visualisation of 3D scenes and digital cinematography are reviewed. In relation to other work, SceneMaker follows a genre-specific text-to-animation methodology which combines all relevant expressive modalities and is made accessible via web-based and mobile platforms. In conclusion, SceneMaker will enhance the communication of creative ideas providing quick pre-visualisations of scenes.
Aesthetically, games can be technically accomplished and beautifully crafted, with surreal fantasy worlds or photorealistic recreations of people and places. A number of video games have already taken influence from art movements, utilising them to reflect the atmosphere and narrative of the game. This paper explores the concept of video games as art and discusses to what extent existing video games have taken influence from art movements, and the advantages of doing so. It also investigates the extent to which art concepts can influence their visual communication. Specifically, we utilise contemporary art as a means of creating recognisable game assets that portray a sense of time, place or identity to the player, and discuss its impact on game design and creation. We demonstrate our findings by conceptualising and producing game-quality assets that take influence from art movements and artists, and discuss how this can aid a game by generating a cohesive style and by inspiring new methods of gameplay. We discovered that by utilizing contemporary art movements, game assets can be created which evoke a particular era more strongly than a literal recreation of that period.
There has been a significant amount of recent research into methods of protecting systems from buffer overflow attacks by detecting stack-injected shellcode. The majority of this research focuses on developing algorithms or signatures for detecting polymorphic and metamorphic payloads. However, much of this problem has already been solved through the mainstream use of host-based protection mechanisms, e.g. Data Execution Prevention (DEP) and Address Space Layout Randomization (ASLR). Many hackers are now using more inventive attack methods, e.g. return-to-libc, which do not inject shellcode onto the stack and thus evade DEP and common shellcode detection mechanisms. The purpose of this work is to propose a series of generic signatures that could be used to detect network-borne return-to-libc attacks. To this end we outline how we performed a return-to-libc network-based attack, which bypasses DEP and common IDS signatures, before suggesting an efficient signature for detecting similar return-to-libc attacks.
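The paper's actual signatures are not given in the abstract. Purely as an illustration of the general shape such a detector might take, the sketch below flags payloads that pair a plausible in-libc return address with a command string; the address range, the strings, and the function are all hypothetical, and a real IDS rule would need to handle encodings, offsets and platform-specific mappings.

```python
import struct

# Hypothetical address range for a non-randomised libc mapping;
# a real signature would be tuned to the target platform.
LIBC_LO, LIBC_HI = 0xB7E00000, 0xB7F60000

def looks_like_ret2libc(payload: bytes) -> bool:
    """Flag payloads that combine a 4-byte little-endian address inside
    a known libc range with a command string such as "/bin/sh" -- the
    classic return-to-libc shape. A sketch only, not the paper's rule."""
    has_cmd = b"/bin/sh" in payload or b"cmd.exe" in payload
    has_addr = any(
        LIBC_LO <= struct.unpack_from("<I", payload, i)[0] <= LIBC_HI
        for i in range(max(0, len(payload) - 3))
    )
    return has_cmd and has_addr
```

Because no shellcode is present, byte-pattern shellcode signatures miss such traffic; keying on address-plus-argument structure is what makes a return-to-libc signature generic.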
Computer-generated three-dimensional (3D) animation is an ideal medium for accurately visualizing crime or accident scenes to viewers and in the courtroom. Based upon factual data, forensic animations can reproduce the scene and demonstrate the activity at various points in time. The use of computer animation techniques to reconstruct crime scenes is beginning to replace traditional illustrations, photographs, and verbal descriptions, and is becoming popular in today's forensics. This article integrates work in the areas of 3D graphics, computer vision, motion tracking, natural language processing, and forensic computing to investigate the state of the art in forensic visualization. It identifies and reviews areas where new applications of 3D digital technologies and artificial intelligence could be used to enhance particular phases of forensic visualization so as to create 3D models and animations automatically and quickly. Having discussed the relationships between major crime types and the level of detail in the corresponding forensic animations, we recognize that high level-of-detail animation involving human characters, which is appropriate for many major crime types but has had limited use in courtrooms, could be useful for crime investigation.
Current-generation Massively Multiplayer Online Games (MMOG), such as World of Warcraft, Eve Online, and Second Life, are mainly built on distributed client-server architectures, with server allocation based on sharding, static geographical partitioning, a dynamic micro-cell scheme, or choosing the optimal server for placing a virtual region according to the geographical dispersion of players. This paper reviews various approaches to data replication and region partitioning. Management of areas of interest (field of vision) is discussed, which reduces processing load dramatically by updating players only with those events that occur within their area of interest. This can be managed either through static geographical partitioning, on the assumption that players in one region do not see or interact with players in other regions, or through behavioural modelling based on players' behaviours. The authors investigate data storage and synchronisation methods for MMOG databases, mainly relational databases. Several attempts at peer-to-peer (P2P) architectures and protocols for MMOGs are reviewed, and critical issues such as cheat prevention on P2P MMOGs are highlighted.
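The area-of-interest idea described above (updating each player only with nearby events) can be sketched in a few lines. This is a generic illustration under assumed data shapes, not any particular MMOG's implementation:

```python
import math

def events_for_player(player_pos, events, radius):
    """Return only the events inside the player's area of interest,
    so the server sends updates for nearby events rather than for
    everything happening in the world."""
    px, py = player_pos
    return [
        e for e in events
        if math.hypot(e["pos"][0] - px, e["pos"][1] - py) <= radius
    ]
```

Per-player filtering like this is what turns the broadcast cost from "every event to every player" into "each event only to the players whose interest radius it falls within".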
A crowd simulator, which creates autonomous characters' behaviour in crowds, consists of many components such as pathfinding, collision avoidance, character creation, the behaviour system, and level of detail. The majority of these involve different levels of decision-making in order to simulate autonomous agents' behaviour. Some components have several alternative algorithms that can be adopted. For a simulator with a large number of autonomous agents, these components need to be efficient to contribute to a faster and cheaper game environment; otherwise bottlenecks may occur, which can lead to a poor representation. In this paper we investigate these areas, discuss and compare existing approaches to each component, and select the best combination on the Xbox 360 through a series of experiments on our crowd simulator within the Microsoft XNA framework. We used the Xbox 360 console for accurate testing, unaffected by other processes running in the background. We also optimised the application to overcome bottleneck issues. Our simulator is able to handle a large number of autonomous agents at a healthy frame rate of 60 FPS. Based on our implementation and testing results, some recommendations are provided in this paper, which will be useful for independent game developers creating games containing autonomous crowds for the Xbox 360 using the XNA framework.
This chapter discusses the applications and solutions of emerging Virtual Reality (VR) and video games technologies in the healthcare sector, e.g. physical therapy for motor rehabilitation, exposure therapy for psychological phobias, and pain relief. Section 2 reviews state-of-the-art interactive devices used in current VR systems and high-end games, such as sensor-based and camera-based tracking devices, data gloves, and haptic force feedback devices. Section 3 investigates recent advances and key concepts in games technology, including dynamic simulation, flow theory, and adaptive games, and their possible implementation in serious games for healthcare. Various serious games are described in this section: some were designed and developed for specific healthcare purposes, e.g. BreakAway (2009)’s Free Dive, HopeLab (2006)’s Re-Mission, and Ma et al. (2007)’s VR game series, while others utilised off-the-shelf games such as Nintendo Wii Sports for physiotherapy. A couple of experiments using VR systems and games for stroke rehabilitation are highlighted in section 4 as examples to showcase the benefits and impact of these technologies on conventional clinical practice. Finally, section 5 points to some future directions for applying emerging games technologies in healthcare, such as augmented reality, the Wiimote motion control system, and even the full-body motion capture and controller-free games technology demonstrated at E3 2009, which have great potential to treat motor disorders, combat obesity, and support other healthcare applications.
As technological approaches continue to dominate provision of education in this modern age, effective methods and techniques should be employed in the development of the supporting systems. In this paper we discuss the use of Design Methodology Management (DMM) technology in the development of a formative e-assessment system to support the learning process. DMM promotes a framework type modular approach to system development thereby promoting flexibility and extensibility of the system. Most existing applications of design methodology management, particularly in the electrical design field, have focused on automation of the design process. Our main focus is on the structural representation of the system as well as the flow of data between its components. We first discuss design of the generic e-assessment framework and then describe how we used it in the context of a Data Analysis formative assessment.
The Third International Conference on Serious Games Development and Applications (SGDA 2012) is organised this year as a satellite conference to IFIP-ICEC2012 in Bremen. SGDA 2012 follows the successes of the First International Workshop on Serious Games Development and Applications, held in Derby in 2010, and the Second International Conference on Serious Games Development and Applications, held in Lisbon in 2011. The aim of SGDA is to collect and disseminate knowledge on serious games technologies, design and development; to provide practitioners and interdisciplinary communities with a peer-reviewed forum to discuss the state of the art in serious games research, their ideas and theories, and innovative applications of serious games; to explain cultural, social and scientific phenomena by means of serious games; to concentrate on the interaction between theory and application; to share best practice and lessons learnt; to develop new methodologies in various application domains using games technologies; and to explore perspectives of future developments and innovative applications relevant to serious games and related areas.
The evolution of medical imaging technologies and computer graphics is leading to dramatic improvements for medical training, diagnosis and treatment, and patient understanding. This paper discusses how volumetric visualization and 3D scanning can be integrated with cadaveric dissection to deliver benefits in the key areas of clinician-patient communication and medical education. The specific area of medical application is a prevalent musculoskeletal disorder: iliotibial (IT) band syndrome. By combining knowledge from cadaveric dissection and volumetric visualization, a virtual laboratory was created using the Unity 3D game engine, as an interactive education tool for use in various settings. The system is designed to improve the experience of clinicians, who had commented that their earlier training would have been enhanced by key features of the system, including accurate three-dimensional models generated from computed tomography, high resolution cryosection images of the Visible Human dataset, and surface anatomy generated from a white light scan of an athlete. The finding from the virtual laboratory concept is that knowledge gained through dissection helps enhance the value of the model by incorporating more detail of the distal attachments of the IT band. Experienced clinicians who regularly treat IT band syndrome were excited by the potential of the model and keen to make suggestions for future enhancement.
This book constitutes the refereed proceedings of the 4th International Conference on Serious Games Development and Applications, SGDA 2013, held in Trondheim, Norway, in September 2013. The 32 papers (23 full papers, 9 short papers/posters and 2 invited keynotes) presented were carefully reviewed and selected from various submissions. The papers are organized in topical sections on games for health, games for education and training, games for other purposes, game design and theories, gaming interface, policy matters.
The Digital Design Studio and NHS Education Scotland have developed ultra-high-definition, real-time interactive 3D anatomy of the head and neck for dental teaching, training and simulation purposes. In this paper we present an established workflow using state-of-the-art 3D laser scanning technology and software for the design and construction of medical data, and describe the workflow practices and protocols in the head and neck anatomy project. Anatomical data was acquired through topographical laser scanning of a destructively dissected cadaver. Each stage of model development was clinically validated to produce a normalised human dataset, which was transformed into a real-time environment capable of large-scale 3D stereoscopic display in medical teaching labs across Scotland, whilst also supporting single users with laptops and PCs. Specific functionality supported within the 3D Head and Neck viewer includes anatomical labelling, guillotine tools and selection tools to expand specific local regions of anatomy. The software environment allows thorough and meaningful investigation of all major and minor anatomical structures and systems whilst providing the user with the means to record sessions and individual scenes for learning and training purposes. The model and software have also been adapted to permit interactive haptic simulation of the injection of a local anaesthetic.
This paper reviews 40 serious games designed for children with autism spectrum disorders (ASD), classifying the games and studies into four categories: technology platform, computer graphics, gaming aspect and user interaction. Moreover, the paper discusses serious games designed to improve communication skills and social behavior, social conversation, imaginative skills, sensory integration and learning accounts in children with ASD. The children usually interact with these games through ordinary I/O (input/output) devices, e.g. keyboard and mouse, or touchscreens. Previous research shows the effectiveness of playing serious games on mobile or tablet devices in helping children with ASD to express their feelings and improve their level of engagement with others. However, there are limitations in designing games for helping autistic children with sensory processing disorder (SPD), improving imaginative play, and teaching first aid. Further, there is little research that addresses repetitive behavior in children with ASD.
The human nervous system is one of the most fundamental yet most difficult topics in medical education. With millions of neurons of varied origins, pathways and types regulating different functions, it is the least understood system of the human body. Among all the organs, the face has dense nerve innervation in a compact area, making it a challenge for students to gain the required anatomical expertise. The Sense project is an effort to overcome this challenge by enabling interactive learning through an intuitive user interface. The ability to learn and test anatomical knowledge on a novel three-dimensional head, with particle-flow animations visualising nerve impulse propagation, makes this application stand out from both conventional and interactive methods of anatomy education. This paper presents the design and development of this application using the Unity 3D game engine. Possible future improvements and evaluation are also discussed.
This book constitutes the refereed proceedings of the 5th International Conference on Serious Games Development and Applications, SGDA 2014, held in Berlin, Germany, in October 2014. The 14 revised full papers presented together with 4 short papers were carefully reviewed and selected from 31 submissions. The focus of the papers was on the following: games for health, games for medical training, serious games for children, music and sound effects, games for other purposes, and game design and theories.
There have been numerous datasets, 3D models and simulations developed over the years however it is clear that there is a need to provide an anatomically accurate, flexible, user driven virtual training environment that can potentially offer significant advantages over traditional teaching methods, techniques and practices. The ability to virtually train dental trainees to navigate and interact in a repeatable format, before directly engaging with the patients can measurably reduce error rates while significantly enhancing the learner experience. Accurate dental simulation with force feedback allows dental students to familiarize with clinical procedures and master practical skills with realistic tactual sensation. In this chapter, we review the state of art of using haptics in dental training and present the development and construction of a medically validated high-definition interactive 3D head and neck anatomical dataset with a haptic interface to support and enhance dental teaching across multiple training sites for NHS Education Scotland. Data acquisition from cadaveric specimens and 3D laser scanning of precision dissection is discussed, including techniques employed to build digital models capable of real-time interaction and display. Digital anatomical model construction is briefly described, including the necessity to clinically validate each stage of development that would ensure a normalised human data set whilst removing anatomical variance arising from individual donor cadaveric material. This complex digital model was transformed into a real-time environment capable of large-scale 3D stereo display in medical teaching labs across Scotland, whilst also offering the support for single users with laptops and PC. The 3D viewer environment also supports haptic interaction through a force feedback probe device (Phantom Omni) offering the ability for users to repeatedly practise giving dental anaesthesia injections into the gum. 
Specific tools supported include guillotine tools, and picking and selection tools capable of expanding specific local regions of anatomy. Zoom camera functions and freeform rotation allow thorough and meaningful investigation of all major and minor anatomical structures and systems, whilst providing the user with the means to record sessions and individual scenes for learning and training purposes.
Sufferers of cystic fibrosis and other chronic lung diseases benefit from daily physiotherapy such as Positive Expiratory Pressure (PEP). For children, however, such repetitive daily exercises become a burden and may lead to confrontation with the family. Using a system comprising a PEP mask, a computer-connected pressure monitor and a suite of games of varying types, a series of tests will determine, with both objective statistics and subjective feedback, how effective the system is at encouraging children and young adults to participate in daily therapy. With longer and more advanced games, coupled with unobtrusive data-gathering functionality, we determine what effect long-term use of such a game system has on young sufferers. The study has shown that games-based PEP physiotherapy is a desirable, viable alternative that can perform at least as well as existing approaches in terms of the amount of time children spend engaging in breathing exercises, with potentially many additional benefits, including the capture of detailed data about the amount and quality of physiotherapy, which is currently impossible with conventional, non-computerized methods.
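The unobtrusive data capture described above amounts to breath scoring: sampling pressure from the mask and counting exhalations held within a therapeutic band. A minimal sketch of that idea follows, assuming an illustrative sample format and pressure thresholds (not the project's actual device API or clinical values):

```python
# Hypothetical sketch: scoring PEP-mask pressure samples.
# Thresholds and the sample format are illustrative assumptions.

THERAPEUTIC_MIN = 10.0   # cmH2O - example lower bound for effective PEP
THERAPEUTIC_MAX = 20.0   # cmH2O - example upper bound
MIN_DURATION = 2.0       # seconds a breath must be sustained to count

def score_breaths(samples, sample_rate=50):
    """Count exhalations held within the therapeutic pressure band.

    `samples` is a sequence of pressure readings (cmH2O) taken at
    `sample_rate` Hz; returns (valid_breaths, log) where `log` records
    the duration of every exhalation for later clinical review.
    """
    valid, log = 0, []
    run = 0  # consecutive in-band samples
    for p in list(samples) + [0.0]:  # sentinel flushes the final run
        if THERAPEUTIC_MIN <= p <= THERAPEUTIC_MAX:
            run += 1
        elif run:
            duration = run / sample_rate
            log.append(duration)
            if duration >= MIN_DURATION:
                valid += 1
            run = 0
    return valid, log
```

A game layer would then map `valid` breaths (or the live in-band signal) onto in-game rewards, while `log` gives clinicians the per-breath detail that conventional, non-computerized physiotherapy cannot capture.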
There is a wide range of sensory therapies using sound, music and visual stimuli. Some focus on soothing or distracting stimuli such as natural sounds or classical music as an analgesic, while other approaches emphasize the active performance of producing music as therapy. This paper proposes an immersive multi-sensory Exposure Therapy for people suffering from anxiety disorders, based on a rich, detailed surround-soundscape. This soundscape is composed to include the users' own idiosyncratic anxiety triggers as a form of habituation, and to provoke psychological catharsis, as a non-verbal, visceral and enveloping exposure. To accurately pinpoint the most effective sounds and to optimally compose the soundscape, we will monitor the participants' physiological responses such as electroencephalography, respiration, electromyography, and heart rate during exposure. We hypothesize that such physiologically optimized sensory landscapes will aid the development of future immersive therapies for various psychological conditions. Sound is a major trigger of anxiety, and auditory hypersensitivity is an extremely problematic symptom. Exposure to stress-inducing sounds can free anxiety sufferers from entrenched avoidance behaviors, teaching physiological coping strategies and encouraging resolution of the psychological issues agitated by the sound.
Serious games are now a multi-billion-dollar industry that is still growing steadily in many sectors. As a major subset of serious games, designing and developing Virtual Reality (VR), Augmented Reality (AR) and serious games, or adapting off-the-shelf games, to support medical education and rehabilitation or to promote health has become a promising frontier in the healthcare sector since 2004, because games technology is inexpensive, widely available, and fun and entertaining for people of all ages, with various health conditions and different sensory, motor, and cognitive capabilities. In this chapter, we provide the reader with an overview of the book and a perspective on future trends in VR and AR simulation and serious games for healthcare.
There is tremendous interest among researchers in the development of virtual reality, augmented reality and games technologies due to their widespread applications in medicine and healthcare. To date, the major applications of these technologies include medical simulation, telemedicine, medical and healthcare training, pain control, visualisation aids for surgery, and rehabilitation in cases such as stroke, phobia, trauma therapies and addictive behaviours. Many recent studies have identified the benefits of using Virtual Reality, Augmented Reality or serious games in a variety of medical applications. This research volume on Virtual, Augmented Reality and Serious Games for Healthcare 1 offers an insightful introduction to the development and applications of virtual and augmented reality and digital games technologies in medical and clinical settings and healthcare in general. It is divided into six parts. Part I presents a selection of applications in medical education and management using virtual, augmented reality and visualisation techniques. Part II relates to nursing training, health literacy and healthy behaviour. Part III presents the applications of Virtual Reality in neuropsychology. Part IV includes a selection of applications in motor rehabilitation. Part V is aimed at therapeutic games for various diseases. The final part presents the applications of Virtual Reality in healing and restoration. This book is directed to healthcare professionals, scientists, researchers, professors and postgraduate students who wish to further explore the applications of virtual, augmented reality and serious games in healthcare.
This workshop investigates the mechanisms for behaviour change and influence, focusing on the definition of requirements for pervasive gameplay and interaction (mechanics, procedures, actions, systems, story, etc.) with the purpose of informing, educating, reflecting and raising awareness. By connecting various experts such as designers, educators, developers, evaluators and researchers from both industry and academia, this workshop aims to enable participants to share, discuss and learn about existing relevant mechanisms for pervasive learning in a Serious Game (SG) context. Research in SG, as a whole, faces two main challenges in understanding: the transition between the instructional design and the actual game design implementation [1], and documenting an evidence-based mapping of game design patterns onto relevant pedagogical patterns [2]. From a practical perspective, this transition lacks methodology and requires a leap of faith from a prospective customer in the ability of an SG developer to deliver a game that will achieve the desired learning outcomes. This workshop aims to present and apply a preliminary exposition, through a purpose-processing methodology, to probe, from various SG design aspects, how SG design patterns map onto pedagogical practices.
Corrective surgery of the face, also known as orthognathic surgery, is a complex procedure performed to correct underlying facial deformities. In the case of elective surgeries such as these, patients need to make a voluntary decision whether or not to undergo the surgery. Hence, it is very important for them to understand the intricacy of the techniques and the potential side effects of the surgery before they sign the consent form. Conventional methods of patient education using leaflet-based instructions were found to be ineffective in providing them with the required information. Sur-Face, named after surgery of the face, is a healthcare app exploring a new dimension in patient education with the help of interactive 3D visualizations and serious gaming elements on a mobile platform. It demonstrates the surgical process and its after-effects using high-quality 3D animations. The aim of this study is to evaluate the efficacy of Sur-Face by comparing two methods of delivery of instructions: a mobile app with interactive 3D animations and an audio file containing only verbal instructions. To evaluate these methods, participants' ability to understand and retain the instructions was analyzed using a questionnaire. The null hypothesis was that there would be no difference between the two methods of instructions. On analysis, participants in the 'app' group performed significantly better (p
Interactive new media art and games belong to distinctive fields, but nevertheless share common ground, tools, methodologies, challenges, and goals, such as the use of applications and devices for engaging multiple participants and players, and more recently electroencephalography (EEG)-based brain-computer interfaces (BCIs). At the same time, an increasing number of new neuroscientific studies explore the phenomenon of brain-to-brain coupling: the dynamics and processes of interaction and synchronisation between multiple subjects and their brain activity. In this context, we discuss interactive works of new media art, computer and serious games that involve the interaction of brain activity, and hypothetically brain-to-brain coupling, between multiple performer/s, spectator/s, or participants/players. We also present Enheduanna - A Manifesto of Falling (2015), a new live brain-computer cinema performance, using an experimental passive multi-brain BCI system under development. The aim is to explore brain-to-brain coupling between performer/s and spectator/s as a means of controlling the audio-visual creative outputs.
Augmented Reality (AR) technology is one of the fastest growing areas in the computing field and has pervaded many applications in the market, including museums. However, there is a need for a survey exploring the effectiveness of augmented reality as a communication medium in museums. This paper reviews the development of Augmented Reality as a mass communication [1] tool in museums. We introduce a communication model that serves as a roadmap for building an AR guidance system, while ensuring that the system will be a successful method of communication with users. In addition, we propose a novel way to enhance visitors' experience and learning by combining AR with games in museums.
Sophisticated three-dimensional animation and video compositing software enables the creation of complex multimedia instructional movies. However, if the design of such presentations does not take account of cognitive load and multimedia theories, then their effectiveness as learning aids will be compromised. We investigated the use of animated images versus still images by creating two versions of a 4-min multimedia presentation on vascular neuroeffector transmission. One version comprised narration and animations, whereas the other comprised narration and still images. Fifty-four undergraduate students from level 3 pharmacology and physiology undergraduate degrees participated. Half of the students watched the full animation, and the other half watched the stills only. Students watched the presentation once and then answered a short essay question. Answers were coded and marked blind. The "animation" group scored 3.7 (SE: 0.4; out of 11), whereas the "stills" group scored 3.2 (SE: 0.5). The difference was not statistically significant. Further analysis of bonus marks, awarded for appropriate terminology use, detected a significant difference in one class (pharmacology), which scored 0.6 (SE: 0.2) versus 0.1 (SE: 0.1) for the animation versus stills groups, respectively (P = 0.04). However, when combined with the physiology group, the significance disappeared. Feedback from students was extremely positive and identified four main themes of interest. In conclusion, while increasing student satisfaction, we do not find strong evidence in favor of animated images over still images in this particular format. We also discuss the study design and offer suggestions for further investigations of this type.
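The group means and standard errors quoted above allow a quick back-of-envelope check of the main result. The sketch below is an approximate two-sample z statistic, not the study's actual analysis, and simply confirms that the main-score difference falls well short of significance:

```python
import math

# Approximate z statistic from two group means and their standard
# errors: z = (m1 - m2) / sqrt(se1**2 + se2**2).

def approx_z(m1, se1, m2, se2):
    return (m1 - m2) / math.sqrt(se1 ** 2 + se2 ** 2)

# Animation group: 3.7 (SE 0.4); stills group: 3.2 (SE 0.5).
z = approx_z(3.7, 0.4, 3.2, 0.5)
# |z| is roughly 0.78, well below the 1.96 threshold - consistent
# with the reported non-significant difference between the groups.
```

The same computation on the pharmacology bonus marks (0.6 vs 0.1, SEs 0.2 and 0.1) gives |z| above 2, again in line with the significance reported for that subgroup.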
‘Sur-face’ is an interactive mobile app illustrating different orthognathic surgeries and their potential complications. This study aimed to evaluate the efficacy of Sur-face by comparing two methods of delivering patient information on orthognathic surgeries and their related potential complications: a mobile app with interactive three-dimensional (3D) animations and a voice recording containing verbal instructions only. For each method, the participants’ acquired knowledge was assessed using a custom-designed questionnaire. Participants in the ‘app’ group performed significantly better (P
Child safety education is an urgent task in China. It is necessary to teach children to recognize unsafe factors and learn how to escape from a dangerous situation. According to the psychological characteristics of children and constructivist learning theory, 3D serious games are an effective tool to assist child safety education in primary and secondary schools. In this chapter, we summarize some of our explorations in this field. We proposed the concept of a danger zone to simulate users' risk-taking behavior, introduced Non-Player Characters (NPCs) to increase user engagement, and developed a cognitive model that can simulate the intelligent behavior of virtual agents. We also tested cases of escaping from a waterside area and from an earthquake. Results showed that children enjoyed this new safety education method. By playing the game, they learn what a danger zone is and how to escape from it effectively.
In this chapter, we present Interact—a project which builds question-answering virtual humans based on pre-recorded video testimonies for Holocaust education. It was created to preserve the powerful and engaging experience of listening to, and interacting with, Holocaust survivors, allowing future generations of audience access to their unique stories. Interact demonstrates how advanced filming techniques, 3D graphics and natural language processing can be integrated and applied to specially recorded testimonies to enable users to ask questions and receive answers from virtualised individuals. This provides a new and rich interactive narrative of remembrance to engage with primary testimony. We briefly review the literature on conversational natural language interfaces; discuss the design and development of Interact, including how we mapped the proceedings of testimony and question-answering sessions to human-computer interaction, how we generated and predicted questions for each survivor using a lifeline chart, the 3D data capture process and 3D human generation, and natural language processing; and argue that this new form of mixed reality is a promising medium for overcoming the uncanny valley. Subjective and objective evaluations are also reported. The chapter is a longer version of a short paper presented at the ACM OzCHI conference (Ma et al., Interact: a mixed reality virtual survivor for holocaust testimonies. In: The proceedings of 27th annual meeting of the Australian Special Interest Group for Computer Human Interaction (OzCHI ‘15). ACM, New York, pp 250–254, 2015).
Domestic violence is a persistent and universal problem occurring in every culture and social group, with lack of empathy identified as a contributing factor. On average, one in three women and girls in the Caribbean experience domestic violence in their lifetime. In this paper we demonstrate the techniques used during the creation of a low-cost, violence prevention game titled None in Three, targeted at enhancing empathy and awareness among young people in Barbados and Grenada. A research trip was undertaken to gather photographic reference and to meet with young people. Methods to measure the emotional state of players and awareness of characters in-game were explored. Cost-saving measures such as asset store purchases were evaluated. Custom tools were created in order to speed up production, including a bespoke event editor for multiple-choice dialogue sequences, and the use of motion capture libraries and auto-rigging tools to speed up character animation workflows.
With the continued application of gaming for training and education, which has seen exponential growth over the past two decades, this book offers an insightful introduction to the current developments and applications of game technologies within educational settings, with cutting-edge academic research and industry insights, providing a greater understanding of current and future developments and advances within this field. Following on from the success of the first volume in 2011, researchers from around the world present up-to-date research on a broad range of new and emerging topics, from serious games and emotion, games for music education and games for medical training, to gamification, bespoke serious games, the adaptation of commercial off-the-shelf games for education, and narrative design, giving readers a thorough understanding of the advances and current issues facing developers and designers of games for training and education. This second volume of Serious Games and Edutainment Applications offers further insights for researchers, designers and educators who are interested in using serious games for training and educational purposes, and provides game developers with detailed information on current topics and developments within this growing area.
This book constitutes the proceedings of the Third Joint International Conference on Serious Games, JCSG 2017, held in Valencia, Spain, in November 2017. This conference bundles the activities of the 8th International Conference on Serious Games Development and Applications, SGDA 2017, and the 7th Conference on Serious Games, GameDays 2017. A total of 23 full papers, 3 short papers, and 4 poster papers were carefully reviewed and selected from 44 submissions. The topics covered by the conference offered participants a valuable platform to discuss and learn about the latest developments, technologies and possibilities in the development and use of serious games, with a special focus on how different fields can be combined to achieve the best possible results.
VR Surgery is an immersive virtual reality operating room experience for trainee surgeons in oral and maxillofacial surgery. Using a combination of the Oculus Rift head-mounted display, Leap Motion tracking devices, high-resolution stereoscopic 3D videos and 360-degree videos, this application allows a trainee to virtually participate in a surgical procedure and interact with the patient’s anatomy. VR Surgery is highly useful for surgical trainees as a visualisation aid and for senior surgeons as a practice-based learning tool. This chapter discusses the need for reforms in existing surgical training methods and provides a brief review of simulation, serious games and virtual reality in surgical training. Following this, the principles of design and development of VR Surgery are presented.
In fires, people easily panic, and panic leads to irrational behavior and irreparable tragedy. Making contingency plans for crowd evacuation in fires is therefore of great practical significance. However, existing studies of crowd simulation have paid much attention to crowd density but little to the emotional contagion that may cause panic. Based on settings for information space and information sharing, this paper proposes an emotional contagion model for crowds in panic situations. With the proposed model, a behavior mechanism is constructed for agents in the crowd and a prototype system is developed for crowd simulation. Experiments are carried out to verify the proposed model. The results show that the spread of panic is related not only to crowd density and individual comfort level, but also to people's prior knowledge of fire evacuation. The model provides a new way to approach safety education and evacuation management, making it possible to avoid and reduce unsafe factors in a crowd at the lowest cost.
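The paper's own formulation is not reproduced here, but a generic contagion update of the kind it describes — each agent's panic drifting toward the mean panic of perceived neighbours, damped by prior evacuation knowledge — can be sketched as follows (the agent representation and all parameters are illustrative assumptions):

```python
import math

# Illustrative sketch of emotional contagion in a crowd (not the
# paper's exact model): each agent's panic rises with the mean panic
# of neighbours within its perception radius, damped by the agent's
# prior knowledge of fire evacuation.

def step_contagion(agents, radius=2.0, rate=0.5):
    """One synchronous update of panic levels.

    `agents` is a list of dicts with keys 'pos' (x, y), 'panic' in
    [0, 1] and 'knowledge' in [0, 1]; higher knowledge slows contagion.
    """
    updated = []
    for a in agents:
        neighbours = [b for b in agents
                      if b is not a
                      and math.dist(a['pos'], b['pos']) <= radius]
        if neighbours:
            mean_panic = sum(b['panic'] for b in neighbours) / len(neighbours)
            # susceptibility falls as prior knowledge rises
            susceptibility = rate * (1.0 - a['knowledge'])
            new_panic = a['panic'] + susceptibility * (mean_panic - a['panic'])
        else:
            new_panic = a['panic']
        updated.append(min(1.0, max(0.0, new_panic)))
    # apply all updates at once so order of iteration does not matter
    for a, p in zip(agents, updated):
        a['panic'] = p
```

Repeating this step while agents move reproduces the qualitative finding above: panic spreads faster in denser regions (more neighbours in range), and agents with high evacuation knowledge resist the contagion.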
The new commercial-grade Electroencephalography (EEG)-based Brain-Computer Interfaces (BCIs) have led to a phenomenal development of applications across health, entertainment and the arts, while an increasing interest in multi-brain interaction has emerged. In the arts, there are already a number of works that involve the interaction of more than one participant with the use of EEG-based BCIs. However, the field of live brain-computer cinema and mixed-media performances is rather new compared to installations and music performances that involve multi-brain BCIs. In this context, we present the particular challenges involved. We discuss Enheduanna - A Manifesto of Falling, the first demonstration of a live brain-computer cinema performance that enables the real-time brain-activity interaction of one performer and two audience members; and we take a cognitive perspective on the implementation of a new passive multi-brain EEG-based BCI system to realise our creative concept. This article also presents preliminary results and future work.
The School of Creative Arts & Technologies at Ulster University (Magee) has brought together the subject of computing with creative technologies, cinematic arts (film), drama, dance, music and design in terms of research and education. We propose here the development of a flagship computer software platform, SceneMaker, acting as a digital laboratory workbench for integrating and experimenting with the computer processing of new theories and methods in these multidisciplinary fields. We discuss the architecture of SceneMaker and relevant technologies for processing within its component modules. SceneMaker will enable the automated production of multimodal animated scenes from film and drama scripts or screenplays. SceneMaker will highlight affective or emotional content in digital storytelling with particular focus on character body posture, facial expressions, speech, non-speech audio, scene composition, timing, lighting, music and cinematography. Applications of SceneMaker include automated simulation of productions and education and training of actors, screenwriters and directors.
This paper explores the User Experience (UX) of Augmented Reality applications in museums. UX as a concept is vital to effective visual communication and interpretation in museums, and to enhancing usability during a museum tour. In the project 'MuseumEye', the augmentations generated were localized by a hybrid system that combines markerless tracking technology (SLAM) with indoor beacons using Bluetooth Low Energy (BLE). These augmentations include a combination of multimedia content and the different levels of visual information required by museum visitors. Using mobile devices to pilot this application, we developed a UX design model that can evaluate the user experience and usability of the application. This paper focuses on the multidisciplinary outcomes of the project from both a technical and a museological perspective, based on public responses. A field evaluation of the AR system was conducted after the UX model was applied. Twenty-six participants were recruited in Leeds museum and another twenty participants in the Egyptian museum in Cairo. Results showed positive responses to the system after adopting the UX design model. This study contributes to synthesizing a UX design model for AR applications that reaches the optimum level of user interaction, which ultimately reflects on the entire museum experience.
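The hybrid localisation idea can be illustrated with a toy selection step: pick the exhibit whose BLE beacon reports the strongest signal, then let SLAM-based markerless tracking anchor the AR content locally. The function below is a hypothetical sketch, not the MuseumEye implementation; beacon IDs and the RSSI threshold behaviour are assumptions:

```python
# Hypothetical sketch of coarse BLE localisation: choose the exhibit
# whose beacon has the strongest received signal (RSSI in dBm, where
# values closer to zero mean a stronger signal). Fine-grained
# anchoring of the augmentation would then be handed to SLAM tracking.

def nearest_exhibit(beacon_readings, exhibits):
    """Return the exhibit of the strongest known beacon, or None.

    `beacon_readings` maps beacon_id -> RSSI (dBm);
    `exhibits` maps beacon_id -> exhibit content identifier.
    """
    in_range = {b: rssi for b, rssi in beacon_readings.items()
                if b in exhibits}
    if not in_range:
        return None
    best = max(in_range, key=in_range.get)  # least-negative RSSI wins
    return exhibits[best]
```

In practice, a production system would smooth RSSI over time and apply a minimum-signal cutoff, since raw BLE readings fluctuate heavily indoors.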
The fields of neural prosthetic technologies and Brain-Computer Interfaces (BCIs) have witnessed unprecedented development in the past 15 years, bringing together theories and methods from different scientific fields, digital media, and the arts. In particular, artists have been amongst the pioneers of the design of relevant applications since their emergence in the 1960s, pushing the boundaries of applications in real-life contexts. With new research and advancements and, since 2007, new low-cost commercial-grade wireless devices, there is an increasing number of computer games, interactive installations, and performances that involve the use of these interfaces, combining scientific and creative methodologies. The vast majority of these works use the brain activity of a single participant. However, earlier as well as recent examples involve the simultaneous interaction of more than one participant or performer with the use of Electroencephalography (EEG)-based multi-brain BCIs. In this context, we discuss and evaluate "Enheduanna-A Manifesto of Falling," a live brain-computer cinema performance that enables for the first time the simultaneous real-time multi-brain interaction of more than two participants, including a performer and members of the audience, using a passive EEG-based BCI system in the context of a mixed-media performance. The performance was realised as a neuroscientific study conducted in a real-life setting. The raw EEG data of seven participants, one performer and two different members of the audience for each performance, were simultaneously recorded during three live events. The results reveal that the majority of the participants were able to successfully identify whether their brain activity was interacting with the live video projections or not.
A correlation was found between their answers to the questionnaires, the elements of the performance that they identified as most special, and the audience's indicators of attention and emotional engagement. Also, the results obtained from the performer's data analysis are consistent with the recall of working memory representations and an increase in cognitive load. These results thus demonstrate the effectiveness of the interaction design, as well as the importance of the directing strategy, dramaturgy and narrative structure for the audience's perception, cognitive state, and engagement.
With the increasing number of emergencies, crowd simulation technology has attracted wide attention in recent years. Existing emergencies have shown that individuals are easily influenced by others' emotions during an evacuation. This makes it easier for people to aggregate together and increases security risks. Existing evacuation models that do not consider emotion are therefore not suitable for describing crowd behaviors in emergencies. We propose a perception-based emotion contagion model and use multiagent technology to simulate crowd behaviors. Navigation points are introduced to guide the movement of the agents. Based on the proposed model, a prototype simulation system for crowd emotion contagion was developed. Comparative simulation experiments verify that the model can effectively deduce evacuation times and crowd emotion contagion. The proposed model could serve as an assistive analysis method for crowd management in emergencies.
Purpose: Surgical training methods are evolving with technological advancements, including the application of virtual reality (VR) and augmented reality. However, 28 to 40% of novice residents are not confident in performing a major surgical procedure. VR surgery, an immersive VR (iVR) experience, was developed using Oculus Rift and Leap Motion devices (Leap Motion, Inc, San Francisco, CA) to address this challenge. Our iVR is a multisensory, holistic surgical training application that demonstrates a maxillofacial surgical technique, the Le Fort I osteotomy. The main objective of the present study was to evaluate the effect of using VR surgery on the self-confidence and knowledge of surgical residents. Materials and Methods: A multisite, single-blind, parallel, randomized controlled trial (RCT) was performed. The participants were novice surgical residents with limited experience in performing the Le Fort I osteotomy. The primary outcome measures were the self-assessment scores of trainee confidence using a Likert scale and an objective assessment of the cognitive skills. Ninety-five residents from 7 dental schools were included in the RCT. The participants were randomly divided into a study group of 51 residents and a control group of 44. Participants in the study group used the VR surgery application on an Oculus Rift with Leap Motion device. The control group participants used similar content in a standard PowerPoint presentation on a laptop. Repeated measures multivariate analysis of variance was applied to the data to assess the overall effect of the intervention on the confidence of the residents. Results: The study group participants showed significantly greater perceived self-confidence levels compared with those in the control group (P=.034; alpha = 0.05). Novices in the first year of their training showed the greatest improvement in their confidence compared with those in their second and third year.
Conclusions: iVR experiences improve the knowledge and self-confidence of the surgical residents. (C) 2017 American Association of Oral and Maxillofacial Surgeons
Virtual reality (VR) surgery using Oculus Rift and Leap Motion devices is a multi-sensory, holistic surgical training experience. A multimedia combination including 360° videos, three-dimensional interaction, and stereoscopic videos in VR has been developed to enable trainees to experience a realistic surgery environment. The innovation allows trainees to interact with the individual components of the maxillofacial anatomy and apply surgical instruments while watching close-up stereoscopic three-dimensional videos of the surgery. In this study, a novel training tool for Le Fort I osteotomy based on immersive virtual reality (iVR) was developed and validated. Seven consultant oral and maxillofacial surgeons evaluated the application for face and content validity. Using a structured assessment process, the surgeons commented on the content of the developed training tool, its realism and usability, and the applicability of VR surgery for orthognathic surgical training. The results confirmed the clinical applicability of VR for delivering training in orthognathic surgery. Modifications were suggested to improve the user experience and interactions with the surgical instruments. This training tool is ready for testing with surgical trainees.
Contemporary approaches in the development of humanoid robots continually neglect holistic nuances, particularly in ocular prosthetic design. The standard solid glass and acrylic eye construction techniques implemented in humanoid robot design present the observer with an inaccurate representation of the natural human eye by utilising hardened synthetic materials that prohibit pupillary dynamics. Precise eye emulation is an essential factor in the development of a more realistic humanoid robot, as misrepresentation of ocular form and function will be distinctly apparent during close face-to-face communication, eye contact being the primary form of interpersonal communicative processing. This paper explores a new material approach in the development of a more accurate humanoid robotic eye by employing natural compounds similar in structure to those found in the organic human eye to replace traditional glass and acrylic modelling techniques. Furthermore, this paper identifies a gap in current ocular system design, as no robotic eye model can accurately replicate all the natural operations of the human iris simultaneously in reaction to light and emotive responsivity. This paper offers a new system design approach to augment future humanoid robot eye construction towards a more accurate and naturalistic eye emulation.
Simulation of behaviors in emergencies is an interesting subject that helps us understand evacuation processes and inform contingency plans. Individual and crowd behaviors in an earthquake are different from those under normal circumstances: panic spreads in the crowd and causes chaos. Most existing behavioral simulation methods analyze the movement of people from the point of view of mechanics, without considering emotion. After summarizing existing studies, a new simulation method is discussed in this paper. First, 3D virtual scenes are constructed with the proposed platform. Second, an individual cognitive architecture, which integrates perception, motivation, behavior, emotion, and personality, is proposed. Typical behaviors are analyzed and individual evacuation animations are realized with data captured by motion capture devices. Quantitative descriptions are presented to describe emotional changes during individual evacuation. Facial expression animation is used to represent individuals' emotions. Finally, a crowd behavior model is designed on the basis of a social force model. Experiments are carried out to validate the proposed method. Results showed that individuals' behavior, emotional changes, and crowd aggregation can be well simulated. Users can learn evacuation processes from many angles. The method can be an intuitive approach to safety education and crowd management.
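The crowd layer rests on a social force model. A minimal Helbing-style step — an illustrative sketch only, since the chapter's actual model adds emotion and personality terms — combines a driving force toward the exit with pairwise repulsion between agents:

```python
import math

# Minimal Helbing-style social-force step (illustrative sketch;
# all parameter values are assumptions, not the chapter's).

def social_force_step(agents, exit_pos, dt=0.1, desired_speed=1.3,
                      tau=0.5, repulsion=2.0, falloff=0.3):
    """Advance each agent one time step toward the exit.

    Each agent is a dict with 'pos' and 'vel' as (x, y) tuples.
    The driving force relaxes velocity toward the desired speed in
    the exit direction; an exponentially decaying repulsive force
    keeps agents apart.
    """
    forces = []
    for a in agents:
        # driving force toward the exit
        dx, dy = exit_pos[0] - a['pos'][0], exit_pos[1] - a['pos'][1]
        dist = math.hypot(dx, dy) or 1e-9
        fx = (desired_speed * dx / dist - a['vel'][0]) / tau
        fy = (desired_speed * dy / dist - a['vel'][1]) / tau
        # pairwise repulsion from other agents
        for b in agents:
            if b is a:
                continue
            rx, ry = a['pos'][0] - b['pos'][0], a['pos'][1] - b['pos'][1]
            r = math.hypot(rx, ry) or 1e-9
            mag = repulsion * math.exp(-r / falloff)
            fx += mag * rx / r
            fy += mag * ry / r
        forces.append((fx, fy))
    # apply all forces after computing them, so the update is synchronous
    for a, (fx, fy) in zip(agents, forces):
        a['vel'] = (a['vel'][0] + fx * dt, a['vel'][1] + fy * dt)
        a['pos'] = (a['pos'][0] + a['vel'][0] * dt,
                    a['pos'][1] + a['vel'][1] * dt)
```

An emotion-aware variant of the kind described above would modulate `desired_speed` and `repulsion` per agent according to its panic level, which is how panic produces the crowd aggregation the experiments report.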
In recent years, applications of mixed reality (MR) have become highly visible in academia and the manufacturing industry with the release of innovative technologies such as the Microsoft HoloLens. However, a crucial design issue is the HoloLens' restricted field of view (FOV): a narrow window of 34 degrees that inhibits the user's natural peripheral vision (Kress and Cummings, 2017). This visual limitation results in a loss of pre-set functions and projected visualisations in the AR application window. This paper presents an innovative methodology for designing a spatial user interface (UI) that minimises the adverse effects associated with the HoloLens' narrow FOV. The spatial UI is a crucial element of a museum-based MR system, which was evaluated by nine experts in human-computer interaction (HCI), visual communication, and museum studies. Results of this study indicate a positive user reaction towards the accessibility of the spatial UI system and its enhancement of the user experience. This approach can help current and future HoloLens developers to extend their applications' functions without visual restrictions or missing content.
Evidence demonstrates that exposure to prosocial video games can increase players' prosocial behaviour, prosocial thoughts, and empathic responses. Prosocial gaming has also been used to reduce gender-based violence among young people, but the use of video games to this end, as well as evaluations of their effectiveness, remains rare. The objective of this study was to assess the effectiveness of a context-specific, prosocial video game, Jesse, in increasing affective and cognitive responsiveness (empathy) towards victims of intimate partner violence (IPV) among children and adolescents (N = 172, age range 9-17 years, M = 12.27, SD = 2.26). A randomised controlled trial was conducted in seven schools in Barbados. Participants were randomly assigned to an experimental (prosocial video game) or control (standard school curriculum) condition, with 86 participants enrolled in each. Girls and boys in the experimental condition, but not their counterparts in the control condition, recorded a significant increase in affective responsiveness after the intervention, and this change was sustained one week after game exposure. No significant effects were recorded for cognitive responsiveness. Findings suggest that Jesse is a promising new IPV prevention tool for girls and boys that can be used in educational settings.
Augmented reality is a versatile technology with applications in many fields, including recreation and education. Technological development over the last decade has drastically improved the viability of augmented reality projects, now that most of the population possesses a mobile device capable of supporting the graphics rendering these projects require. Education in particular has benefited from these advances, and there are now many strands of research into how augmented reality can be used in schools. For Holocaust education, however, there has been remarkably little research into how augmented reality can enhance its delivery or impact. The purpose of this study is to examine the following questions: How is augmented reality currently being used to enhance history education? Does the use of augmented reality assist in developing long-term memories? Is augmented reality capable of conveying the emotional weight of historical events? Is augmented reality appropriate for teaching a complex subject such as the Holocaust? To address these questions, multiple studies have been analysed for their research methodologies and for how their findings may assist the development of Holocaust education.
In this research, the author describes a new narrative medium, the Immersive Augmented Reality Environment (IARE), realised with HoloLens. Aarseth's narrative model [17] and all available input designs in IARE were reviewed and summarised. Based on these findings, The AR Journey, a HoloLens app offering interactive narrative for moral education, was developed and assessed. Qualitative interview and observation methods were used and the results analysed. Overall, narrative in IARE proved valid for moral education, and findings on effective narrative structure, input models, and design guidelines were reported.
Although hardware and software for Augmented Reality (AR) have advanced rapidly in recent years, there is a paucity of research on the design of immersive storytelling in augmented and virtual realities, especially in AR. To fill this gap, we designed and developed an immersive HoloLens-based experience for the National Holocaust Centre and Museum in the UK to tell visitors the Kindertransport story. We propose an interactive narrative strategy, an input model for the Immersive Augmented Reality Environment (IARE), a pipeline for asset development, and the design of a character behaviour and interactive props module, and we provide guidelines for developing immersive storytelling in AR. In addition, evaluations were conducted in the lab and in situ at the National Holocaust Centre and Museum, and participants' feedback was collected and analysed.
Many public services and entertainment industries utilise Mixed Reality (MR) devices to develop highly immersive and interactive applications, and recent advances in MR processing have prompted the tourism and events industry to invest in and develop commercial applications. The museum environment provides an accessible platform for MR guidance systems, taking advantage of the ergonomic freedom of spatial holographic head-mounted displays (HMDs). MR systems in museums can enhance the typical visitor experience by presenting historical interactive visualisations alongside related physical artefacts and displays. Current approaches in MR guidance research primarily focus on visitor engagement with specific content. This paper describes the design and development of a novel museum guidance system based on immersion and presence theory. The approach examines the influence of interactivity, spatial mobility, and perceptual awareness of individuals within MR environments. The developmental framework of a prototype MR tour guide application named MuseumEye incorporates the sociological needs, behavioural patterns, and accessibility of the user. This study aims to create an alternative tour guidance system that enhances the visitor experience and reduces the number of human tour guides needed in museums. The data-gathering procedure examines the functionality of the MuseumEye application in conjunction with pre-existing pharaonic exhibits in a museum environment, using a qualitative questionnaire sampling 102 random visitors to the Egyptian Museum in Cairo. The results indicate a high rate of positive responses to the MR tour guide system and to the functionality of AR HMDs in a museum environment. This outcome reinforces the suitability of the touring system for enhancing the visitor experience in museums, galleries, and cultural heritage sites.
The visualisation of historical information and storytelling in museums is a crucial process for transferring knowledge, engaging the museum audience directly and simply. Until recently, technological limitations meant museums were restricted to 2D and 3D screen-based information displays. However, advances in Mixed Reality (MR) devices permit a virtual overlay that amalgamates real-world and virtual environments into a single spectrum. These holographic devices project a 3D space around the user which can be augmented with virtual artefacts, potentially changing the traditional museum visitor experience. Few research studies focus on utilising this virtual space to generate objects that do not visually inhibit or distract the operator. This article therefore introduces the Ambient Information Visualisation Concept (AIVC) as a new form of storytelling, which can enhance communication and interactivity between museum visitors and exhibits by measuring and sustaining an optimum spatial environment around the user. The article also investigates the perceptual influences of AIVC on users' level of engagement in the museum. It utilises the Microsoft HoloLens, one of the most cutting-edge imaging technologies available to date, to deploy the AIVC in a historical storytelling scene, "The Battle", in the Egyptian department at The Manchester Museum. The research further measures user acceptance of the MR prototype by adopting the Technology Acceptance Model (TAM), examining personal innovativeness (PI), enjoyment (ENJ), usefulness (USF), ease of use (EOU), and willingness of future use (WFU). The population sample comprised 47 participants drawn from the museum's daily visitors. Results indicate that the WFU construct is the primary outcome of this study, followed by the usefulness factor.
Further findings show that the majority of users found this technology highly engaging and easy to use. The combination of the proposed system and AIVC in museum storytelling has extensive applications in museums, galleries, and cultural heritage sites to enhance the visitor experience.
Realistic humanoid robots (RHRs) with embodied artificial intelligence (EAI) have numerous applications in society, as the human face is the most natural interface for communication and the human body the most effective form for traversing the man-made areas of the planet. Developing RHRs with high degrees of human-likeness thus provides a life-like vessel for humans to interact with technology physically and naturally, in a manner unmatched by any other form of non-biological human emulation. This study outlines a human-robot interaction (HRI) experiment employing two automated RHRs with contrasting appearances and personalities. The selective sample group comprised 20 individuals, categorised by age and gender for a diverse statistical analysis. Galvanic skin response, facial expression analysis, and AI analytics permitted cross-analysis of biometric and AI data with participant testimonies to corroborate the results. The study concludes that younger test subjects preferred HRI with a younger-looking RHR and the more senior age group with an older-looking RHR. Moreover, the female test group preferred HRI with a younger-looking RHR and male subjects with an older-looking RHR. This research is useful for modelling the appearance and personality of RHRs with EAI for specific roles such as care for the elderly and social companionship for the young, isolated, and vulnerable.
A significant ongoing issue in realistic humanoid robotics (RHRs) is inaccurate speech-to-mouth synchronisation: even the most advanced robotic systems cannot authentically emulate the natural movements of the human jaw, lips, and tongue during verbal communication. These visual and functional irregularities have the potential to propagate the Uncanny Valley Effect (UVE) and reduce speech comprehension in human-robot interaction (HRI). This paper outlines the development and testing of a novel Computer-Aided Design (CAD) robotic mouth prototype with buccinator actuators for emulating the fluid movements of the human mouth. The robotic mouth system incorporates a custom Machine Learning (ML) application that measures the acoustic qualities of speech synthesis (SS) and translates this data into servomotor triangulation for triggering jaw, lip, and tongue positions. The objective of this study is to improve current robotic mouth design and provide engineers with a framework for increasing the authenticity, accuracy, and communication capabilities of RHRs for HRI. The primary contributions of this study are the engineering of a robotic mouth prototype and the programming of a speech processing application that achieved 79.4% syllable accuracy, 86.7% lip synchronisation accuracy, and a 0.1 s speech-to-mouth articulation differential.
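The pipeline described above maps acoustic speech analysis onto servomotor positions for the jaw, lips, and tongue. A minimal sketch of that idea is shown below, assuming a hypothetical viseme-to-servo lookup table and a timed viseme sequence as input; the viseme labels, angle values, and the fixed timing offset are all illustrative assumptions, not the paper's actual ML mapping.

```python
# Hypothetical viseme-to-servo mapping: phoneme classes drive jaw,
# lip, and tongue servo angles (degrees). All values are illustrative.
VISEME_SERVOS = {
    "AA": {"jaw": 40, "lip": 10, "tongue": 0},   # open vowel
    "OO": {"jaw": 20, "lip": 35, "tongue": 0},   # rounded vowel
    "MM": {"jaw": 5,  "lip": 0,  "tongue": 0},   # bilabial closure
    "TH": {"jaw": 15, "lip": 5,  "tongue": 25},  # dental, tongue forward
    "REST": {"jaw": 0, "lip": 0, "tongue": 0},
}

def schedule_servo_frames(visemes, offset=0.1):
    """Turn a timed viseme sequence into servo keyframes.

    visemes: list of (viseme, start_seconds) pairs, e.g. derived from a
    speech-synthesis phoneme timeline. `offset` is a fixed articulation
    lead compensating for actuation lag (the 0.1 s figure is borrowed
    from the reported differential purely for illustration).
    """
    frames = []
    for viseme, start in visemes:
        angles = VISEME_SERVOS.get(viseme, VISEME_SERVOS["REST"])
        # Trigger servos slightly ahead of the audio, clamped at t = 0
        frames.append({"t": max(0.0, start - offset), **angles})
    return frames
```

In a real system the lookup table would be replaced by the learned acoustic-to-articulation model, but the scheduling step, converting a timed symbol sequence into servo keyframes that lead the audio, is the same in shape.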
Mixed reality (MR) is a cutting-edge technology at the forefront of many new applications in the tourism and cultural heritage sector. This study aims to reshape the museum visit by creating a highly engaging and immersive experience for visitors, combining real-time visual and audio information and computer-generated imagery with museum artefacts and displays. The research introduces a theoretical framework that assesses the potential of an MR guidance system in terms of usefulness, ease of use, enjoyment, interactivity, touring, and future applications. The evaluation deployed the MuseumEye MR application in the Egyptian Museum, Cairo, using mixed-method surveys and a sample of 171 participants. The questionnaire results highlighted the importance of the mediating role of the tour guide in enhancing the relationship between perceived usefulness, ease of use, multimedia, UI design, interactivity, and intention of use. Furthermore, the results revealed the potential for future use of MR in museums, supporting sustainability and engagement beyond the traditional museum visitor experience and thereby strengthening the economic position of museums and the cultural heritage sector.
Over the years, the mediums available for storytelling have progressively expanded, from spoken to written word, then to film, and now to Virtual Reality (VR) and Augmented Reality (AR). In 2016, the cutting-edge AR Head-Mounted Display (HMD) Microsoft HoloLens was released. However, several years on, the quality of the user experience of narration with HMD-based AR technology has rarely been discussed. The present study explored interactive narrative in HMD-based AR with respect to different user interfaces and their influence on users' presence, narrative engagement, and reflection. Inspired by an existing exhibition at the National Holocaust Centre and Museum in the UK, a HoloLens narrative application entitled The AR Journey was developed by the authors with two different interaction methods, a Natural User Interface (NUI) and a Graphical User Interface (GUI), which were used in an empirical study. The results of the between-subjects design experiment revealed that the NUI exhibited statistically significant advantages in creating presence for users without 3D Role-Playing Game (RPG) experience, while the GUI was superior in creating presence and increasing narrative engagement for users with 3D RPG experience. The interviews indicated that the overall narrative experience in HMD-based AR was acceptable and the branching narrative design engaging; however, HoloLens hardware issues, as well as mismatches between virtuality and reality, adversely affected the user experience. Design guidelines were proposed based on the qualitative results.
Augmented reality (AR) is a new medium with the potential to revolutionize education in both schools and museums by offering methods of immersion and engagement that would not be attainable without technology. Using augmented reality, museums can combine the atmosphere of their buildings and exhibits with interactive applications to create an immersive environment, changing the way audiences experience them and thereby enabling deeper historical perspective-taking. Holocaust museums and memorials are candidates for augmented reality exhibits; however, using this technology in this context raises concerns due to the sensitive nature of the subject. Ethically, should audiences be immersed in a setting like the Holocaust? How is augmented reality currently being used within Holocaust museums and memorials? What measures should be taken to ensure that augmented reality experiences are purely educational and neither disrespectful to the victims nor a cause of secondary trauma? These are the questions this chapter seeks to answer in order to further develop the field of augmented reality for Holocaust education. To achieve this, previous AR apps in Holocaust museums and memorials have been reviewed, and a series of studies on the use of AR for Holocaust education have been examined to identify the ethical considerations that must be made and the ramifications of using AR technology to recreate tragic periods of history.
Nowadays, both immersive VR (IVR) and non-immersive VR (NIVR) have been adopted by filmmakers and used in filmmaking education, but few studies have examined their differences in supporting learning. This article compares these two forms of technology as educational tools for learning filmmaking and offers suggestions on how to choose between them. Two applications with the same purpose and content were developed, using IVR and NIVR technologies respectively, and a within-subjects experiment was conducted using the two versions as experimental material. 39 subjects participated in the experiment, and the quantitative measures included presence, motivation, and usability. SPSS was used for data analysis, and the statistical results, together with interview reports, showed that both technologies led to positive learning experiences. IVR performed better on presence (especially the "sensory & realism" and "involvement" subscales) and intrinsic motivation (especially the "enjoyment" subscale), while NIVR was more accessible to the public and may provide more complex and powerful functions through a sophisticated GUI. In conclusion, both technologies can support the learning of filmmaking effectively when chosen for appropriate educational missions.
Background The smartphone market is saturated with apps and games purporting to promote mental wellness, and a significant number of studies have assessed the impact of these digital interventions. Motivation The majority of review papers have focused solely on the impact of the apps' strict rules and reward systems; comparatively little attention has been paid to other game techniques designed to encourage creativity, a lusory attitude, and playful experiences. Results This paper addresses that gap through an analysis of a purposive selection of six mobile games marketed for wellbeing, focusing on both the external and internal motivations these games offer, and in particular on how they balance rule-based play with creativity. We find that ludic play is highly structured, rule-bound, and goal-oriented, in contrast to paidic play, which is freeform, imaginative, and expressive. We argue that while ludic play promotes habit formation and generates feelings of accomplishment, it relies heavily on extrinsic motivation to incentivise engagement. By contrast, paidic play, specifically role-playing, improvisation, and the imaginative co-creation of fictional game worlds, can be used effectively in these games to facilitate self-regulation and self-distancing, and therefore provides intrinsically motivated engagement. In the context of games for mental wellbeing, ludic play challenges players to complete therapeutic exercises, while paidic play offers a welcoming refuge from real-world pressures and the opportunity to try on alternate selves. Conclusion Our intention is not to value paidic play over ludic play, but to consider how these two play modalities can complement and counterbalance each other to generate more effective engagement.
Technologies such as Head-Mounted Display (HMD)-based Virtual Reality (VR) and Augmented Reality (AR) have made HMD-based immersive museums possible. To investigate user acceptance and the medium and interaction experience of HMD-based immersive museums, an app entitled The Extended Journey was designed, developed, and deployed on both VR and AR headsets. A between-subjects design experiment with 62 participants was then conducted to measure user experience and learning outcomes in the HMD VR and HMD AR conditions. Quantitative results revealed that the HMD VR museum produced statistically significantly better immersion and empathy than the HMD AR museum. Qualitative data indicated that HMD-based immersive museums were embraced by most young participants, with HMD VR showing better user acceptance than HMD AR. The interviews also demonstrated that the advantage of the HMD-based immersive museum over the traditional online museum lies not only in the sensory immersion of the medium itself but also in the interactive narrative experience the HMD medium facilitates, especially natural interaction with the CG characters and the environment of the story.