
USING FACIALLY EXPRESSIVE ROBOTS TO INCREASE REALISM IN PATIENT SIMULATION

A Dissertation Submitted to the Graduate School of the University of Notre Dame in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

by Maryam Moosaei

Laurel D. Riek, Director

Graduate Program in Computer Science and Engineering
Notre Dame, Indiana
July 2017

USING FACIALLY EXPRESSIVE ROBOTS TO INCREASE REALISM IN PATIENT SIMULATION

Abstract

by Maryam Moosaei

Social robots are a category of robots that can interact socially with humans. The presence of these robots is growing in fields such as healthcare, entertainment, assisted living, rehabilitation, and education. Within these domains, human-robot interaction (HRI) researchers have worked on enabling social robots to interact more naturally with people through different verbal and nonverbal channels. Considering that most human-human interaction is nonverbal, it is worth exploring how to enable robots to both recognize and synthesize nonverbal behavior.

My research focuses on synthesizing facial expressions, a type of nonverbal behavior, on social robots. During my research, I developed several new algorithms for synthesizing facial expressions on robots and avatars, and experimentally explored how these expressions were perceived. I also developed a new control system for operators that automates synthesizing expressions on a robotic head. Additionally, I worked on building a new robotic head, capable of expressing a wide range of expressions, which will serve as a next generation of patient simulators.

My work explores the application of facially expressive robots in patient simulation. Robotic Patient Simulator (RPS) systems are the most frequent application of humanoid robots. They are human-sized robots that can breathe, bleed, react to medication, and convey hundreds of different biosignals. RPSs enable clinical learners to safely practice clinical skills without harming real patients, including patient communication, patient condition assessment, and procedural practice.

Although commonly used in clinical education, one capability of RPSs is in need of attention: enabling face-to-face interaction between RPSs and learners. Despite the importance of patients' facial expressions in making diagnostic decisions, commercially available RPS systems are not currently equipped with expressive faces. They have static faces that cannot express any visual pathological signs or distress. As a result, clinicians can fall into the poor habit of not paying attention to a patient's face, which can lead them down an incorrect diagnostic path.

One motivation behind my research is to make RPSs more realistic by enabling them to convey realistic, clinically relevant, patient-driven facial expressions to clinical trainees. We have designed a new type of RPS with a wider range of expressivity, including the ability to express pain, neurological impairment, and other pathologies in its face. By developing expressive RPSs, our work provides a next-generation educational tool for clinical learners to practice face-to-face communication, diagnosis, and treatment with patients in simulation. As the application of robots in healthcare continues to grow, expressive robots are another tool that can aid the clinical workforce by considerably improving the realism and fidelity of patient simulation.
During the course of my research, I completed four main projects and designed several new algorithms to synthesize different expressions and pathologies on RPSs.

First, I designed a framework for generalizing synthesis of facial expressions on robots and avatars with different degrees of freedom. I then implemented this framework as a ROS-based module and used it to perform facial expression synthesis on different robots and virtual avatars. Currently, researchers cannot transfer the facial expression synthesis software developed for one robot to another robot, because the underlying hardware differs. Using ROS, I developed a general solution that is one of the first attempts to address this problem.

Second, I used this framework to synthesize patient-driven facial expressions of pain on virtual avatars, and studied the effect of an avatar's gender on pain perception. We found that participants were able to distinguish pain from commonly conflated expressions (anger and disgust), and showed that patient-driven pain synthesis can be a valuable alternative to laborious key-frame animation techniques. This was one of the first attempts to perform automatic synthesis of patient-driven expressions on avatars without the need for an animation expert. Automatic patient-driven facial expression synthesis will reduce the time and cost needed for operators of RPS systems.

Third, I synthesized patient-driven facial expressions of pain on a humanoid robot and its virtual model, with the objective of exploring how having a clinical education can affect pain detection accuracy. We found that clinicians have lower overall accuracy in detecting synthesized pain than lay participants. This supports other findings in the literature showing that there is a need to improve clinical learners' skills in decoding expressions in patients. Furthermore, we explored the effect of embodiment (robot, avatar) on pain perception by both clinicians and lay participants. We found that all participants were overall less accurate in detecting pain from a humanoid robot than from a comparable virtual avatar. These effects are important to consider when using expressive robots and avatars as educational tools for clinical learners.

Fourth, I developed a computational (mask) model of atypical facial expressions, such as those caused by stroke and Bell's palsy. We used this computational model to perform masked synthesis on a virtual avatar and ran an experiment to evaluate the similarity of the synthesized expressions to those of real patients. Additionally, we used the mask model to design a shared control system for controlling an expressive robotic face.

My work makes multiple contributions to both the HRI and healthcare communities. First, I explored a novel application of facially expressive robots, patient simulation, which is a relatively unexplored area in the HRI literature. Second, I designed a generalized framework for synthesizing facial expressions on robots and avatars. My third contribution was to design new methods for synthesizing naturalistic patient-driven expressions and pathologies on RPSs. Experiments validating my approach showed that the embodiment or gender of an RPS can affect perception of its expressions. Fourth, I developed a computational model of atypical expressions and used this model to synthesize Bell's palsy on a virtual avatar, as sketched below.
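To make the mask-model idea concrete, the following is a minimal illustrative sketch, not the dissertation's actual implementation: an expression is represented as a vector of normalized facial action unit (AU) intensities, and a mask of per-AU weights attenuates the affected side of the face before the values are sent to an avatar or robot. The left/right AU naming, the specific mask weights, and the standalone script structure are assumptions made for illustration.

```python
# Illustrative sketch of a "mask model" for atypical expression synthesis.
# An expression is a dict of normalized facial action unit (AU) intensities
# in [0, 1]; a mask scales each AU to mimic unilateral facial weakness
# (e.g., Bell's palsy). AU names and weights here are hypothetical.

from typing import Dict

# A smile-like expression, split into left/right AU channels.
EXPRESSION: Dict[str, float] = {
    "AU12_left": 0.8,   # lip corner puller, left side
    "AU12_right": 0.8,  # lip corner puller, right side
    "AU6_left": 0.6,    # cheek raiser, left side
    "AU6_right": 0.6,   # cheek raiser, right side
}

# Mask for simulated left-side weakness: left AUs are strongly attenuated,
# right AUs pass through unchanged.
BELLS_PALSY_LEFT_MASK: Dict[str, float] = {
    "AU12_left": 0.1,
    "AU12_right": 1.0,
    "AU6_left": 0.1,
    "AU6_right": 1.0,
}

def apply_mask(expression: Dict[str, float],
               mask: Dict[str, float]) -> Dict[str, float]:
    """Scale each AU intensity by its mask weight (default: unmasked)."""
    return {au: value * mask.get(au, 1.0) for au, value in expression.items()}

if __name__ == "__main__":
    masked = apply_mask(EXPRESSION, BELLS_PALSY_LEFT_MASK)
    for au, value in sorted(masked.items()):
        # Masked intensities would be handed to an avatar or robot renderer.
        print(f"{au}: {value:.2f}")
```

Under this framing, swapping in a different mask (for example, right-sided weakness, or milder attenuation for partial paresis) yields a different pathology without changing the rest of the synthesis pipeline.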
This mask model can be used as part of an educational tool to help clinicians improve their diagnosis of conditions like Bell's palsy, stroke, and Parkinson's disease. Finally, I used the mask model to develop a shared control system for controlling a robotic face. This control system automates the job of controlling an RPS's expressions for simulation operators.

My work enables the HRI community to explore new applications of social robots and expand their presence in our daily lives. Moreover, my work will inform the design and development of the next generation of RPSs that are able to express visual signs of diseases, distress, and pathologies. By providing better technology, we can improve how clinicians are trained, which will ultimately improve patient outcomes.

CONTENTS

FIGURES
TABLES
ACKNOWLEDGMENTS

CHAPTER 1: INTRODUCTION
1.1 Motivation and Scope
1.2 Contribution
1.3 Publications
1.4 Ethical Procedures
1.5 Dissertation Overview

CHAPTER 2: BACKGROUND
2.1 Emotional Space
2.2 Common Techniques for Synthesizing Facial Expressions
2.2.1 Facial Feature Tracking and Extraction
2.2.2 Keyframe Synthesis
2.2.3 Performance Driven Synthesis
2.3 Synthesizing Facial Expressions on Patient Simulators
2.4 Chapter Summary

CHAPTER 3: EVALUATING FACIAL EXPRESSION SYNTHESIS ON ROBOTS
3.1 Background
3.1.1 Subjective Evaluation
3.1.2 Expert Evaluation
3.1.3 Computational Evaluation
3.2 Proposed Method Based on Combining Subjective and Computational Evaluation Methods
3.3 Challenges in Synthesis Evaluation
3.4 Chapter Summary

CHAPTER 4: PAIN SYNTHESIS ON VIRTUAL AVATARS
4.1 Background
4.2 Methodology
4.2.1 Source Video Acquisition
4.2.2 Feature Extraction
4.2.3 Avatar Model Creation
4.2.4 Stimuli Creation and Labeling
4.2.5 Main Experiment
4.3 Results
4.3.1 Summary of Key Findings
4.3.2 Regression Method
4.4 Discussion
4.5 Chapter Summary
4.6 Acknowledgment of Work Distribution

CHAPTER 5: PAIN SYNTHESIS ON ROBOTS
5.1 Introduction
5.1.1 Our Work and Contribution
5.2 Methodology
5.2.1 Background
5.2.2 Overview of Our Work
5.2.3 Source Video Acquisition
5.2.4 Feature Extraction
5.2.5 Humanoid Robot
5.2.6 Avatar Model Creation
5.2.7 Stimuli Creation and Labeling
5.2.8 Pilot Study
5.2.9 Main Experiment
5.3 Results
5.3.1 Regression Method
5.3.2 ANOVA Method
5.4 Discussion
5.5 Chapter Summary
5.6 Acknowledgment of Work Distribution

CHAPTER 6: DESIGNING AND CONTROLLING AN EXPRESSIVE ROBOT
6.1 Proposed Method
6.1.1 ROS Implementation
6.1.2 An Example of Our Method
6.2 Validation
6.2.1 Simulation-based Evaluation
6.2.2 Physical Robot Evaluation
6.3 General Discussion
6.4 Chapter Summary
6.4.1 Acknowledgment of Work Distribution

CHAPTER 7: MASKED SYNTHESIS
7.1 Background
7.2 Impact
7.3 Research Questions
7.4 Acquire Videos of Patients with Stroke and BP and Extract Facial Features Using CLMs