A Brain-Machine Interface for Assistive Robotic Control
BOSTON UNIVERSITY

GRADUATE SCHOOL OF ARTS AND SCIENCES

Dissertation

A BRAIN-MACHINE INTERFACE FOR ASSISTIVE ROBOTIC CONTROL

by

BYRON V. GALBRAITH

B.S., University of Illinois at Chicago, Chicago, Illinois, 2006
M.S., Marquette University and the Medical College of Wisconsin, Milwaukee, Wisconsin, 2010

Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy

2016

© 2016 Byron V. Galbraith
All rights reserved

Approved by

First Reader
Frank H. Guenther, Ph.D.
Professor, Department of Speech, Language, and Hearing Sciences

Second Reader
Massimiliano Versace, Ph.D.
Research Assistant Professor, Center for Computational Neuroscience and Neural Technology

Third Reader
Deniz Erdogmus, Ph.D.
Associate Professor, Electrical and Computer Engineering
Northeastern University, College of Engineering

ACKNOWLEDGMENTS

I would like to thank the following people for their contributions to the completion of this dissertation. My advisors, Frank Guenther and Max Versace, provided invaluable guidance, mentorship, and perspective during the course of my research. My committee chair, Dan Bullock, offered support and assistance throughout my time in the Cognitive and Neural Systems program. My officemates and colleagues at Boston University assisted, encouraged, and motivated me to work harder. I would especially like to thank my wife, Karen, for her patience and support through the entire process, and my children, Zephan and Tamzin, for the joy they brought me every day. This work was supported in part by the Center of Excellence for Learning in Education, Science, and Technology, a National Science Foundation Science of Learning Center (NSF SMA-0835976).

A BRAIN-MACHINE INTERFACE FOR ASSISTIVE ROBOTIC CONTROL

BYRON V. GALBRAITH

Boston University Graduate School of Arts and Sciences, 2016

Major Professor: Frank H. Guenther, Ph.D., Professor, Department of Speech, Language, and Hearing Sciences

ABSTRACT

Brain-machine interfaces (BMIs) are the only currently viable means of communication for many individuals suffering from locked-in syndrome (LIS), a profound paralysis that results in severely limited or total loss of voluntary motor control. By inferring user intent from task-modulated neurological signals and then translating those intentions into actions, BMIs can afford LIS patients increased autonomy. Significant effort has been devoted to developing BMIs over the last three decades, but only recently have combined advances in hardware, software, and methodology made it possible to translate this research from the lab into practical, real-world applications. Non-invasive methods, such as those based on the electroencephalogram (EEG), currently offer the only feasible solution for practical use, but they suffer from limited communication rates and susceptibility to environmental noise. Maximizing the efficacy of each decoded intention is therefore critical. This thesis addresses the challenge of implementing a BMI intended for practical use, with a focus on an autonomous assistive robot application. First, an adaptive EEG-based BMI strategy is developed that relies upon code-modulated visual evoked potentials (c-VEPs) to infer user intent.
As voluntary gaze control is typically not available to LIS patients, c-VEP decoding methods under both gaze-dependent and gaze-independent scenarios are explored. Adaptive decoding strategies in both offline and online task conditions are evaluated, and a novel approach for assessing ongoing online BMI performance is introduced. Next, an adaptive neural network-based system for assistive robot control is presented that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. Exploratory learning, or “learning by doing,” is an unsupervised method in which the robot builds an internal model for motor planning and coordination from real-time sensory inputs received during exploration. Finally, a software platform intended for practical BMI application use is developed and evaluated. Using online c-VEP methods, users control a simple 2D cursor control game, a basic augmentative and alternative communication tool, and an assistive robot, both manually and via high-level goal-oriented commands.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
ABSTRACT
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES
LIST OF ABBREVIATIONS
1. INTRODUCTION
   1.1. Problem Statement
   1.2. Contribution
   1.3. Organization
2. AN ADAPTIVE CODE-MODULATED VISUAL BCI METHOD FOR PRACTICAL APPLICATIONS
   2.1. Introduction
      2.1.1. Brain-Computer Interfaces
      2.1.2. Visual Evoked Potentials
      2.1.3. Gaze Independence
   2.2. Methods and Materials
      2.2.1. Data Acquisition
      2.2.2. Experimental Design
      2.2.3. Procedure
         2.2.3.1. Training
         2.2.3.2. Testing
      2.2.4. Cognitive Workload
      2.2.5. Analysis
         2.2.5.1. Signal Preprocessing
         2.2.5.2. Feature Extraction
         2.2.5.3. Classification
         2.2.5.4. Confidence Thresholding
         2.2.5.5. Reliability
   2.3. Results
      2.3.1. Spatial Filters
      2.3.4. Filters and Templates
      2.3.4. Confidence Metrics
      2.3.5. Adaptive Online Performance
      2.3.6. Cognitive Workload
   2.4. Discussion
      2.4.1. Qualitative Online Feedback
      2.4.2. Gaze-Dependent Task Performance
      2.4.3. Gaze-Independent Task Performance
      2.4.4. Adaptive Online Decoding
   2.5. Conclusion
3. A NEURAL NETWORK-BASED EXPLORATORY LEARNING AND MOTOR PLANNING SYSTEM FOR CO-ROBOTS