NEURAL REPRESENTATION OF FORCE, GRASP, AND VOLITIONAL STATE IN INTRACORTICAL BRAIN-COMPUTER INTERFACE USERS WITH TETRAPLEGIA

BY

ANISHA RASTOGI

Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy

Dissertation Advisor: Dr. A. Bolu Ajiboye

Department of Biomedical Engineering

CASE WESTERN RESERVE UNIVERSITY

January, 2021

CASE WESTERN RESERVE UNIVERSITY SCHOOL OF GRADUATE STUDIES

We hereby approve the thesis/dissertation of

Anisha Rastogi

candidate for the degree of Doctor of Philosophy*

Committee Chair Dr. Dominique Durand

Committee Member Dr. A. Bolu Ajiboye

Committee Member Dr. Dawn M. Taylor

Committee Member Dr. Jonathan P. Miller

Date of Defense 22 June 2020

*We also certify that written approval has been obtained for any proprietary material contained therein

Dedication

To Bill Kochevar, for dedicating his last days to bettering the world for people with disabilities. This research is a way to honor your faith in a brighter future.


Table of Contents

Dedication ...... i
Table of Contents ...... ii
List of Figures ...... vi
List of Tables ...... viii
List of Abbreviations ...... ix
Abstract ...... xii
Chapter 1: Introduction ...... 1
Introduction to Brain Computer Interfaces ...... 1
The Implications of Tetraplegia ...... 1
Brain Computer Interface System Overview ...... 1
Recording Modalities Used in BCIs ...... 2
Review of iBCI Research ...... 6
Brain Computer Interfaces as a Tool for Neuroscience ...... 7
Brain Computer Interfaces to Restore Reaching and Grasping ...... 10
Brain Computer Interfaces that Incorporate Force Control ...... 13
Brain Computer Interfaces that Incorporate Volitional State Information ...... 16
Dissertation Specific Aims and Hypotheses ...... 18
Aim 1: Characterize the extent to which neural activity is modulated by observed, imagined, and attempted hand grasping forces ...... 19
Aim 2: Determine whether force-related neural activity depends on the hand grasps used to produce forces ...... 20
Chapter 2: The evolution of intracortical brain-machine interfaces and trends toward restorative devices: A comprehensive review ...... 22
Introduction ...... 22
Intracortical Electrode Technology ...... 25
History of Intracortical Electrodes ...... 26
Current Electrode Technology and Developments ...... 27
Limitations in Electrode Longevity and Durability ...... 29
Efforts to improve intracortical electrode longevity ...... 30
Targeted Brain Areas and Neural Information Content ...... 37
Motor Cortex ...... 37
Other cortical targets for iBMI investigations ...... 38
Signal information content from targeted brain areas ...... 42
iBMI Signal Pre-Processing and Feature Extraction ...... 43


Extraction of Neural Features ...... 44
Open-Loop Signal Characterization ...... 49
Real-time Signal Decoding ...... 50
Decoding methods in early clinical iBMI applications ...... 51
Decoders employed in iBMI systems ...... 52
Decoder optimization ...... 54
iBMI Applications: the End-effector ...... 59
Computer cursor control for typing and tablet use ...... 60
Intracortical brain control in three-dimensional virtual environment ...... 61
Robotic arm control ...... 62
Brain controlled FES ...... 63
Bi-directional iBMI with sensory feedback ...... 66
Critical role of somatosensory feedback ...... 66
Sensory restoration options ...... 67
Intracortical Microstimulation in the human somatosensory cortex ...... 69
Future work to develop bi-directional intracortical BMIs ...... 70
Conclusion ...... 71
Tables ...... 73
Chapter 3: Neural representation of observed, imagined, and attempted grasping force in motor cortex of individuals with chronic tetraplegia ...... 85
Abstract ...... 86
Introduction ...... 86
Results ...... 89
Characterization of Individual Features ...... 89
Neural Population Analyses ...... 95
Discussion ...... 100
Force representation persists in motor cortex after tetraplegia ...... 100
Volitional state modulates neural activity to a greater extent than force ...... 101
Limitations of open-loop task ...... 106
Force representation differences across participants ...... 107
Implications for iBCI development ...... 108
Methods ...... 109
Study permissions and participants ...... 109
Neural recordings and feature extraction ...... 110
Behavioral task ...... 111


Effects of audio vs. audiovisual cues ...... 113
Assessment of kinetic vs. kinematic activity ...... 114
Characterization of individual features ...... 114
Neural population analysis and decoding ...... 115
Data availability ...... 118
Code availability ...... 118
Acknowledgements ...... 118
Author Contributions ...... 119
Competing Interests ...... 119
Supplemental Materials ...... 120
Supplemental Methods ...... 120
Supplemental Results ...... 124
Chapter 4: The neural representation of force across grasp types in motor cortex of humans with tetraplegia ...... 147
Abstract ...... 147
Significance Statement ...... 148
Introduction ...... 149
Materials and Methods ...... 151
Study permissions and participants ...... 151
Participant Task ...... 152
Neural Recordings ...... 154
Characterization of Individual Neural Feature Tuning ...... 155
Neural Population Analysis and Decoding ...... 156
Results ...... 163
Characterization of Individual Neural Features ...... 163
Neural Population Analysis and Decoding ...... 168
Discussion ...... 179
Force information persists across multiple hand grasps in individuals with tetraplegia ...... 180
Hand posture affects force representation and force classification accuracy ...... 182
Hand grasp is represented to a greater degree than force at the level of the neural population ...... 184
Implications for Force Decoding ...... 187
Concluding Remarks ...... 189
Supplemental Materials ...... 190
Chapter 5: Discussion and Conclusions ...... 197


Research Contributions ...... 197
General Contributions ...... 197
Force and Volitional State ...... 198
Force and Grasp ...... 199
Future Directions for Neuroscience: Enhancing Force-Related Representation in Motor Cortex ...... 200
Characterizing Deafferentation-Induced Changes in Force Representation ...... 201
Characterizing the Representation of Additional Kinetic Parameters ...... 203
Future Directions for Neurorehabilitation: Implications for Closed-Loop Force Decoding ...... 205
Appendix: Electrode Impedance and Signal Quality ...... 207
References ...... 211


List of Figures

Figure 1-1. Components of a BCI system. …………………………………………………... 2

Figure 2-1. Block diagram of an iBMI system. ……………………………………………… 24

Figure 3-1. Data collection scheme for research sessions. ……………………………….. 90

Figure 3-2. Single features are tuned to force and volitional state. ………………………. 94

Figure 3-3. Overall tuning of neural features. ………………………………………………. 94

Figure 3-4. Feature population activity patterns. …………………………………………… 97

Figure 3-5. Feature ensemble CSIM force and volitional state decoding accuracies as a function of window length. ……………………………………………………………………. 98

Figure 3-6. Time-dependent feature ensemble CSIM force and volitional state decoding accuracies. …………………………………………………………………………………….. 99

Figure 3-7. Feature ensemble volitional state go-phase confusion matrices. ………….. 100

Supplemental Figure 3-S1. Feature activity by volitional state and cue in participant T8. …………………………………….. 128

Supplemental Figure 3-S2. Mean correlations between audio-only (a) and audiovisual (av) trials. ………………………………………………………………………………………...... 130

Supplemental Figure 3-S3. CSIM neural population data for Session 2 (participant T8). …………………………………….. 131

Supplemental Figure 3-S4. Correlation between kinetic and kinematic activity. ………. 134

Supplemental Figure 3-S5. Session-averaged kinesthetic force imagery questionnaire (KFIQ) scores. ……………………………………………………………………………….. 135

Supplemental Figure 3-S6. Single features are tuned to force and volitional state in participant T5. ………………………………………………………………………………... 137

Supplemental Figure 3-S7. Single features are tuned to force and volitional state in participant T9. ………………………………………………………………………………... 138

Supplemental Figure 3-S8. Comparison of spike band power (SBP), threshold crossing (TC), and single unit (SU) features from a force-tuned channel. ………………………… 140

Supplemental Figure 3-S9. Comparison of spike band power (SBP), threshold crossing (TC), and single unit (SU) features from a channel tuned to volitional state and an interaction between force and volitional state. ……………………………………………. 142


Supplemental Figure 3-S10. Comparison of spike band power (SBP), threshold crossing (TC), and single unit (SU) features from a channel tuned independently to both force and volitional state. ……………………………………………………………………………….. 144

Supplemental Figure 3-S11. Comparison of spike band power (SBP), threshold crossing (TC), and single unit (SU) features from a channel tuned to force, volitional state, and an interaction between these parameters. ……………………………………………………. 146

Figure 4-1. Data collection scheme for research sessions. ……………………………… 165

Figure 4-2. Exemplary threshold crossing (TC) and spike band power (SBP) features tuned to task parameters of interest in participant T8. ……………………………………. 166

Figure 4-3. Summary of neural feature population tuning to force and grasp. ………… 167

Figure 4-4. Simulated models of independent and interacting (grasp-dependent) neural representations of force. ……………………………………………………………………. 170

Figure 4-5. Neural population-level activity patterns. …………………………………….. 172

Figure 4-6. Time-dependent classification accuracies for force (rows 1-2) and grasp (row 3). ……………………………………………………………………………………………… 175

Figure 4-7. Go-phase confusion matrices. ………………………………………………… 178

Figure 4-8. Go-phase force classification accuracy for novel (test) grasps. ………….... 179

Supplemental Figure 4-2.1. Exemplary threshold crossing (TC) and spike band power (SBP) features tuned to task parameters of interest in participant T5, presented as in Figure 4-2. ……………………………………………………………………………………. 190

Supplemental Figure 4-5.1. Neural population-level activity patterns for all sessions, presented as in Figure 5 A-B. ………………………………………………………………. 191

Supplemental Figure 4-6.1. Time-dependent classification accuracies for individual force levels and grasp types. ……………………………………………………………………… 192

Supplemental Figure 4-6.2. Time-dependent grasp classification accuracies by number of grasps attempted per session in participant T8. ………………………………………….. 193

Supplemental Figure 4-7.1. Statistics for go-phase force and grasp classifications accuracies. …………………………………………………………………………………… 194

Supplemental Figure 4-7.2. Statistics for go-phase force classification accuracies within individual grasp types. ………………………………………………………………………. 195

Supplemental Figure 4-7.3. Statistics for go-phase force classification accuracies within individual force levels. ………………………………………………………………………. 196

Figure A-1. Electrode impedances for participant T8, Session 1. ……………………….. 208

Figure A-2. Electrode signal characteristics. ……………………………………………… 209

Figure A-3. Electrode signal to noise ratios. ………………………………………………. 209


List of Tables

Table 1-1. Typical EEG, ECoG, and intracortical electrode characteristics. ……………... 3

Table 2-1. iBMI clinical trial study participants. ……………………………………..……… 73

Table 2-2. Neural signal feature types. ……………………………………………………… 77

Table 2-3. Neural signal and feature characterization studies. …………………………… 78

Table 2-4. Decoder design and optimization studies. ……………………………………… 80

Table 2-5. iBMI end-effector studies. ………………………………………………………... 82

Supplemental Table 3-S1. Session information. ………………………………………….. 125

Supplemental Table 3-S2. Supplemental session information. …………………………. 128

Table 4-1. Session information. ……………………………………………………………. 153


List of Abbreviations

ADL activity of daily living

AIP anterior intraparietal area

AIS ASIA Impairment Score

ALS amyotrophic lateral sclerosis

ASIA American Spinal Injury Association

ANOVA analysis of variance

ARAT Action Research Arm Test

BCI brain-computer interface

BMI brain-machine interface

BOLD Blood oxygen-level dependent

C4-C6 Cervical spinal cord levels 4-6

CAR common average reference

CSIM continuous similarity

CSIMS continuous similarity space

DBS deep brain stimulation

DCML dorsal column/medial lemniscus pathway

DNN deep neural network

DOF degree of freedom

dPC demixed principal component

dPCA demixed principal component analysis

ECoG electrocorticography

EEG electroencephalography

EMG electromyography

FES functional electrical stimulation


FDA Food and Drug Administration

fMRI functional magnetic resonance imaging

GP-DKF Gaussian process regression discriminative Kalman filter

iBCI intracortical brain-computer interface

iBMI intracortical brain-machine interface

ICMS intracortical microstimulation

KFIQ kinesthetic force imagery questionnaire

KVIQ kinesthetic and visual imagery questionnaire

LDA linear discriminant analysis

LFP local field potential

M1 primary motor cortex

MEA microelectrode array

MI-style Michigan-style

MOCA multiple offset correction algorithm

MRI magnetic resonance imaging

MWP mean wavelet power

NHP non-human primate

OLE optimal linear estimation

PC principal component

PCA principal component analysis

PET positron emission tomography

PLM piecewise linear model

PM premotor cortex

PMd dorsal premotor cortex

PMv ventral premotor cortex

PPC posterior parietal cortex


PSTH peristimulus time histogram

Pt platinum

rCBF regional cerebral blood flow

ReFIT-KF recalibrated feedback intention trained Kalman filter

rLDA regularized linear discriminant analysis

RMS root mean square

S1 primary somatosensory cortex

sEEG stereoelectroencephalography

SBP spike band power

SCI spinal cord injury

SIROF sputtered iridium oxide film

SMA supplementary motor area

SMG supramarginal gyrus

SNR signal to noise ratio

SPL superior parietal lobule

SSIM spike train similarity

SSIMS spike train similarity space

SU sorted single unit

SVM support vector machine

TC threshold crossing; thresholded multi-unit activity

t-SNE t-distributed stochastic neighbor embedding

UEA Utah electrode array

VoS volitional state


Neural Representation of Force, Grasp, and Volitional State in

Intracortical Brain-Computer Interface Users with Tetraplegia

BY

ANISHA RASTOGI

Abstract

Intracortical brain-computer interfaces (iBCIs) can restore functional upper limb movements to individuals with chronic tetraplegia by converting neural signals into the movement of an external effector, such as a robotic limb or a paralyzed arm reanimated through functional electrical stimulation (FES). While most human-operated iBCIs have extracted intended kinematic parameters such as position and velocity from the motor cortex, restoring natural grasping and object interaction capabilities also requires the control of kinetic parameters such as force. The development of hybrid iBCIs that incorporate both movement kinematics and kinetics requires an understanding of how kinetic information is represented within the human motor cortex, and how this representation is affected by additional motor and non-motor parameters during intended actions. Here, we investigated how motor cortical neural activity in individuals with chronic tetraplegia was modulated by three discrete force levels over a range of volitional states (i.e., observed, imagined, and attempted forces) and hand and arm postures.

Four major findings emerged from this work. First, this study showed quantitative, electrophysiological evidence that force-related activity persists in motor cortex, even after tetraplegia, in three individuals. Second, it showed that neural force representation depends on volitional state. Attempted forces were represented to a greater degree, and were more accurately predicted from neural data, than observed

and imagined forces. Third, we found that attempted forces had both grasp-independent and grasp-dependent neural representations. Specifically, while attempted forces could be predicted from the neural activity at levels above chance across up to five hand and arm postures in two participants, these arm and hand postures significantly impacted the accuracy of attempted force predictions. Finally, this study showed that force-related information was represented to a lesser degree than other parameters, including volitional state and grasp, possibly due to deafferentation-induced changes in somatosensory feedback.

This study provides relevant information towards the development of a hybrid kinetic and kinematic iBCI. While the study results indicate that incorporating force control into iBCIs is feasible, force iBCIs will likely need to account for the impacts of additional motor and non-motor parameters in order to maximize performance.


Chapter 1: Introduction

Introduction to Brain Computer Interfaces

The Implications of Tetraplegia

An estimated 149,000 – 219,000 individuals in the United States are affected by tetraplegia, or the paralysis of four limbs (NSCISC, 2020). Tetraplegia results from many etiologies, including traumatic spinal cord injuries (SCI) due to motor vehicle accidents, violence, falls, and recreational activities; iatrogenic injuries during surgery and radiation therapy; and non-traumatic disease processes including amyotrophic lateral sclerosis

(ALS), infections, ischemic events, tumors, inflammatory processes such as multiple sclerosis, and developmental disorders such as cerebral palsy (McDonald and

Sadowsky, 2002). Regardless of etiology, tetraplegia severely limits one’s ability to independently perform activities of daily living (ADLs) such as eating, toileting, bathing, and dressing (Collinger et al., 2013a; Abrams and Ganguly, 2015). In particular, the loss of upper limb function prevents individuals from reaching for and grasping objects in their environment. Across several studies that evaluated functional recovery preferences in individuals with tetraplegia, restoration of arm and hand function was consistently ranked as the first priority (Simpson et al., 2012).

Brain Computer Interface System Overview

Brain-computer interfaces (BCIs) are devices that translate electrical activity from the cerebral cortex into intended movement commands for external effectors, including computer cursors (Leuthardt et al., 2004; Kubler et al., 2005; Hochberg et al., 2006; Kim et al., 2008; Schalk et al., 2008; Kim et al., 2011; Jarosiewicz et al., 2015), prosthetic limbs (Hochberg et al., 2012; Collinger et al., 2013b; Wodlinger et al., 2015), and functional electrical stimulation (FES) to re-animate paralyzed arm muscles (Bouton et

al., 2016; Ajiboye et al., 2017). These devices essentially bypass the spinal cord by enabling direct cortical control of the paralyzed limb or a prosthetic device. As such,

BCIs have emerged as a promising technology to restore motor function – including arm and hand function – to individuals with tetraplegia.

The basic components of a BCI system (Figure 1-1) include the user, who generates neural activity when thinking about or attempting to perform a motor action

(Leuthardt et al., 2009). These neural signals are recorded by BCI electrodes, filtered to remove noise, and processed to extract informative features from the filtered signals.

The BCI then uses a decoder to mathematically map neural features to intended motor commands, which are in turn used to control the motor output of the external effector.

The BCI user typically receives visual or other sensory feedback regarding the external effector’s performance, which allows the user to adjust their neural activity in real time while operating the BCI (Leuthardt et al., 2009). These basic BCI components, including the recorded neural activity, neural features, decoding algorithms, and the control of external effectors, are described in further detail in Chapter 2.

Figure 1-1. Components of a BCI system.
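To make this processing chain concrete, the following sketch steps through a single decoding cycle of the kind described above, from a raw voltage window to a decoded cursor velocity. It is a minimal, hypothetical illustration in Python using NumPy; the bin width, channel count, threshold rule, and decoder weights are placeholder assumptions rather than parameters of any specific BCI discussed in this dissertation.

```python
import numpy as np

def extract_features(raw_window, thresholds):
    """Count threshold crossings per channel in one time bin (e.g., 20 ms)."""
    # raw_window: (n_samples, n_channels) array standing in for filtered voltage data
    crossings = (raw_window[:-1] > thresholds) & (raw_window[1:] <= thresholds)
    return crossings.sum(axis=0).astype(float)

def decode_velocity(features, weights, baseline):
    """Map a feature vector to a 2D cursor velocity with a linear decoder."""
    return weights @ (features - baseline)

# --- one illustrative decoding cycle with simulated data ---
rng = np.random.default_rng(0)
n_samples, n_channels = 600, 96            # ~20 ms at 30 kHz, 96 channels (assumed)
raw = rng.normal(0.0, 1.0, (n_samples, n_channels))
thresholds = -3.5 * raw.std(axis=0)        # assumed -3.5 x RMS spike detection rule

features = extract_features(raw, thresholds)
weights = rng.normal(0.0, 0.1, (2, n_channels))   # placeholder decoder weights
velocity = decode_velocity(features, weights, baseline=features.mean())
print("decoded cursor velocity:", velocity)
```

In a closed-loop system this cycle repeats every bin, and the user's sensory feedback of the resulting effector motion shapes the neural activity recorded on the next cycle.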

Recording Modalities Used in BCIs

BCIs acquire neural activity from the user using a variety of methods. The three most common modalities include electroencephalography (EEG) (Wolpaw et al., 2002;

He et al., 2015; Abiri et al., 2019), electrocorticography (ECoG) (Moran, 2010; Schalk

and Leuthardt, 2011; Volkova et al., 2019), and intracortical recordings from penetrating electrodes inserted directly into the brain parenchyma (Bensmaia and Miller, 2014;

Brandman et al., 2017; Bockbrader, 2019). These modalities each have their own advantages and disadvantages and all exhibit some trade-off between surgical invasiveness and signal quality, as summarized in Table 1-1.

                               EEG            ECoG                   Intracortical
Electrode Location             Scalp          Epidural or subdural   Brain parenchyma
Spatial Resolution             3.0 cm         0.125 cm               100 um
Signal Amplitude               10-20 uV       50-100 uV              ~100+ uV
Signal Bandwidth               0-50 Hz        0-500 Hz               0-1000's of Hz
Degrees of Freedom Achieved    3              3                      10

Table 1-1. Typical EEG, ECoG, and intracortical electrode characteristics.

EEG is a technique in which electrodes are placed on the scalp to record synchronous electrical activity from thousands of cortical neurons over an area of several centimeters (Milan and Carmena, 2010). This technique, which is non-invasive

(and thus low-risk), inexpensive, and highly portable, is currently the most widely used recording modality in BCI systems (Leuthardt et al., 2009; Abiri et al., 2019), and has enabled up to three dimensions of control over virtual (McFarland et al., 2010; Doud et al., 2011) and physical (LaFleur et al., 2013) effectors. However, EEG’s main drawbacks include its relatively low spatial resolution, low bandwidth, and susceptibility to artifacts

(Leuthardt et al., 2009). Due to these limitations, EEG signals lack specific information about motor parameters such as hand position and velocity, and instead exhibit non-specific changes (e.g., decreased signal amplitude) in response to executed or imagined

movements. Furthermore, since the detected signals reflect the activity of neurons integrated over centimeters of cortex, multiple degrees of independent control can only be achieved by executing or imagining gross movements of multiple body parts

(McFarland et al., 2010; Doud et al., 2011; LaFleur et al., 2013).

ECoG mitigates many of these drawbacks by placing electrode grids directly onto the cortical surface or on top of the dura mater. The resulting signals are more robust than those generated by EEG, due to their increased amplitude, bandwidth, and spatial resolution (Table 1-1), and have been shown to contain specific information about motor parameters such as arm, hand, and finger movements (Leuthardt et al., 2004; Schalk et al., 2007; Pistohl et al., 2008; Miller et al., 2009; Yanagisawa et al., 2011; Chestek et al.,

2013; Chen et al., 2014b). While ECoG-based BCIs have achieved up to three dimensions of computer cursor control (Wang et al., 2013; Degenhart et al., 2018), further increases in degrees of freedom have remained constrained by the size of ECoG electrodes, which while smaller than EEG electrodes, still record the summation of neural activity over millimeters of cortex. Additionally, ECoG trades increased signal robustness for surgical invasiveness (Moran, 2010).

Intracortical penetrating microelectrodes, which are placed directly into the brain parenchyma (typically in motor cortex) to record single unit activity from ensembles of neurons, provide the greatest temporal and spatial resolution of all the recording modalities (Table 1-1) (Leuthardt et al., 2009; Milan and Carmena, 2010). Since intracortical signals reflect the activity of individual neurons within the vicinity of the electrodes, they contain abundant and detailed information about motor parameters, such as hand and arm movements (Vargas-Irwin et al., 2010), hand postures

(Schaffelhofer et al., 2015), and movements pertaining to different body parts (Willett et al., 2020). This rich parameter space translates into the most possibilities for restoring upper limb function to individuals with neurological and motor disabilities. For example,

intracortical BCIs (iBCIs) have achieved up to 10 degrees of control over a robotic arm

(Collinger et al., 2013b; Wodlinger et al., 2015) and control of a paralyzed arm via functional electrical stimulation (FES) (Bouton et al., 2016; Ajiboye et al., 2017), which has enabled individuals with tetraplegia to perform self-feeding and other functional tasks.

As with ECoG electrodes, intracortical microelectrodes trade high signal resolution for invasiveness, which increases surgical risk to potential BCI users and results in local neural and vascular damage around the insertion site (Leuthardt et al.,

2009). Encouragingly, within 18 human participants with chronic intracortical implants, few specific surgical or electrode-related complications were noted, suggesting that this technology’s safety profile could be comparable to those of other implanted devices in current use, such as deep brain stimulation (DBS) electrodes (Bullard et al., 2019).

Another challenge results from inflammation-mediated gliosis of the implanted microelectrodes, which in turn leads to the attenuation or loss of recorded single units over time (Ryu and Shenoy, 2009; Milan and Carmena, 2010). Despite these signal longevity issues, multi-dimensional iBCI control of external effectors has been achieved years after implant in multiple participants (Hochberg et al., 2012; Ajiboye et al., 2017;

Pandarinath et al., 2017). Further strides to increase signal longevity are currently in progress. These include efforts to develop next-generation intracortical microelectrode arrays with improved biocompatibility (Jorfi et al., 2015; Bedell and Capadona, 2018); and efforts to utilize recorded local field potentials (LFPs) (Mehring et al., 2003; Bansal et al., 2011; Bansal et al., 2012; Flint et al., 2013), multiunit activity (Stark and Abeles,

2007; Chestek et al., 2011; Bansal et al., 2012; Christie et al., 2015; Trautmann et al.,

2019), and other intracortical signals with greater stability than single unit activity.
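Because threshold crossings and band-power features do not require isolating single units, they tend to remain usable even as sorted-unit yield declines. The snippet below is a minimal sketch of how two such features, spike band power and threshold crossing counts, might be computed from a simulated broadband trace; the sampling rate, band edges, bin width, and threshold multiplier are illustrative assumptions rather than the exact settings used in the studies cited above.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 30_000                     # assumed 30 kHz sampling rate
rng = np.random.default_rng(1)
raw = rng.normal(0.0, 1.0, fs)  # one second of simulated broadband data, one channel

# Spike band power (SBP): power in an assumed 250-5000 Hz band, averaged per 20 ms bin
sos = butter(4, [250, 5000], btype="bandpass", fs=fs, output="sos")
spike_band = sosfiltfilt(sos, raw)
bin_samples = int(0.020 * fs)
n_bins = len(spike_band) // bin_samples
sbp = (spike_band[: n_bins * bin_samples] ** 2).reshape(n_bins, bin_samples).mean(axis=1)

# Threshold crossings (TC): unsorted multiunit events below an assumed -4.5 x RMS threshold
threshold = -4.5 * np.sqrt(np.mean(spike_band ** 2))
below = spike_band < threshold
events = np.flatnonzero(below[1:] & ~below[:-1])          # downward crossings only
tc_rate, _ = np.histogram(events, bins=n_bins, range=(0, n_bins * bin_samples))

print("SBP (first bins):", sbp[:5], "TC counts (first bins):", tc_rate[:5])
```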

From a functional standpoint, iBCIs provide great promise for restoring multi-dimensional control of the upper limb in individuals with tetraplegia. Thus, the remainder

of this dissertation will focus primarily on iBCIs, though it will also cite relevant work that utilizes other recording modalities.

Review of iBCI Research

Though iBCI development is mainly associated with neuroprosthetic applications to restore motor function, its scientific origins stem from a large body of behavioral neurophysiology investigations that elucidated the relationship between intracortical activity and various kinetic (force- and muscle-related) and kinematic (movement- related) motor parameters (Hatsopoulos and Donoghue, 2009; Hatsopoulos and

Suminski, 2011), including force and torque (Evarts, 1968, 1969), position

(Georgopoulos et al., 1984), velocity (Moran and Schwartz, 1999), acceleration (Stark et al.,

2007b), distance (Fu et al., 1993), and movement direction (Georgopoulos et al., 1982;

Georgopoulos et al., 1986). Over the decades, these neuroscientific advancements have provided several foundational insights – including the field’s knowledge of neurally represented motor parameters and the way these parameters are encoded – that have informed the development of iBCIs. Currently, the results of neuroscience research and iBCI development are synergistic: While neuroscientific advancements inform rehabilitative possibilities and decoder design for iBCIs, iBCI research, in turn, often results in new questions about the underlying neural dynamics of movement control.

Therefore, the next several sections of this dissertation review the history of iBCIs both as a tool for neuroscience and as a neurorehabilitative means to restore the volitional control of various motor parameters to individuals with tetraplegia. The neuroscientific underpinnings of iBCIs are reviewed in general terms first, in order to provide a foundation for discussions of how motor parameters are encoded within, and can be decoded from, the cerebral cortex. In subsequent sections, we review the extent

to which some of these motor parameters – including reach, grasp, and force – have been incorporated into iBCIs, with the understanding that all progress has been informed by preceding or concurrent neuroscientific investigations.

Brain Computer Interfaces as a Tool for Neuroscience

A History of Intracortical Neuroscience

Intracortical BCIs have their roots in behavioral neurophysiology studies, the earliest of which recorded the activity of single neurons in non-human primates (NHPs) using individual metal wires (Jorfi et al., 2015). These early studies correlated neural activity within primary motor cortex (M1) to joint forces and torques (Evarts, 1968, 1969) and demonstrated that it was possible to voluntarily modulate individual M1 neurons in response to visual biofeedback and operant conditioning (Fetz, 1969). Later investigations, which used groups of microwires to record simultaneously from handfuls of neurons in M1 and other motor-related areas, were able to predict kinematic

(movement-related) and kinetic (force-related) time courses of wrist movements

(Humphrey, 1970), intended movement directions (Georgopoulos et al., 1982;

Georgopoulos et al., 1986; Georgopoulos et al., 1988), movement velocities (Moran and

Schwartz, 1999), and hand translation and rotation information (Wang et al., 2010) from the resulting neural activity.

Electrode technologies soon evolved to allow the recording of large ensembles of neurons from small areas of cortex with microelectrode arrays (MEAs), the most widely used of which is the Utah Electrode Array (UEA) (Campbell et al., 1991; Maynard et al., 1997).

Using MEAs, investigators were able to predict multiple motor parameters, such as two- and three-dimensional hand position (Wessberg et al., 2000); reaching and grasping movements (Carmena et al., 2003); position and velocity (Truccolo et al., 2008); and multiple joint angles of the hand, wrist, and arm (Vargas-Irwin et al., 2010) from high-

dimensional neural recordings. Additionally, as studies progressed from studying individual neuronal responses to groups of neurons recorded simultaneously, the field began to glean more nuanced information about motor parameters, such as movement direction, from the temporal relationships between neurons (Humphrey, 1970; Maynard et al., 1997). In light of these advancements, the field came to a consensus that M1 and other motor-related areas, including premotor (PM) and parietal cortices, represented motor parameters in a highly distributed manner, with individual neurons modulated by multiple parameters of interest (Hatsopoulos and Donoghue, 2009; Wodlinger et al.,

2015; Kobak et al., 2016; Schroeder and Chestek, 2016).

As more groups began to investigate the time-varying characteristics of neural activity – known as the neural dynamics – that underlie movement, it became evident that a low number of output control dimensions (i.e., motor parameters) were represented within the high-dimensional neural space (Schroeder and Chestek, 2016).

Thus, in addition to (or in lieu of) studying single-neuron responses to motor parameters, the field has shifted towards extracting the low-dimensional dynamics of neural ensemble activity through a variety of dimensionality reduction techniques (Churchland et al., 2012; Vargas-Irwin et al., 2015; Kobak et al., 2016; Pandarinath et al., 2018). This dynamical systems perspective (Shenoy et al., 2013) has further elucidated how the motor system initiates and executes movements. For example, Churchland and colleagues (2012) demonstrated that reaching trajectories have a rotational structure within a low-dimensional neural manifold, suggesting that non-periodic movements (such as reaching) may be controlled in a similar manner as periodic movements (such as walking), for example, by a neural pattern generator. Additional studies have demonstrated that reach kinematics (Aghagolzadeh and Truccolo, 2016), information about movement preparation (Kaufman et al., 2014), and other task parameters such as stimuli and

decisions (Kobak et al., 2016) are well represented within the low-dimensional dynamics of motor cortex. Furthermore, decoding from these low-dimensional trajectories can lead to higher prediction performance than decoding directly from high-dimensional neural activity (Aghagolzadeh and Truccolo, 2016).
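A common way to expose this low-dimensional structure is to apply principal component analysis (PCA) to the matrix of population firing rates and retain the few components that capture most of the variance. The sketch below runs this procedure on simulated data in which three latent signals drive a larger population; it illustrates the general dimensionality reduction step rather than any specific analysis from the studies cited above.

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_timepoints = 100, 200

# Simulate a population whose activity is driven by 3 shared latent signals (assumed)
latents = np.stack([np.sin(np.linspace(0, 4 * np.pi, n_timepoints)),
                    np.cos(np.linspace(0, 2 * np.pi, n_timepoints)),
                    np.linspace(-1, 1, n_timepoints)])
mixing = rng.normal(size=(n_neurons, 3))
rates = mixing @ latents + 0.2 * rng.normal(size=(n_neurons, n_timepoints))

# PCA via SVD of the mean-centered neurons x time matrix
centered = rates - rates.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
var_explained = S ** 2 / np.sum(S ** 2)
low_d_trajectory = Vt[:3]        # time courses of the top 3 components

print("variance explained by first 3 PCs:", var_explained[:3].round(3))
```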

Bridging the Gap between Neuroscience and Neurorehabilitation

Neuroscientific investigations that underlie iBCI research typically belong to one of two categories: 1) open-loop characterization of neural activity in relation to motor parameters, or 2) closed-loop control of these parameters. Most of the aforementioned works fall largely under the former (open-loop) category, as they either characterize single-neuron or population-level responses to motor parameters (Evarts, 1968;

Humphrey, 1970; Georgopoulos et al., 1986; Moran and Schwartz, 1999; Wang et al.,

2010; Churchland et al., 2012) or predict motor parameters from neural activity

(Wessberg et al., 2000; Truccolo et al., 2008; Vargas-Irwin et al., 2010) without providing the user feedback about their motor performance. In contrast, neurorehabilitative iBCIs studies that attempt to restore control of motor parameters are part of the closed-loop category, because they provide users sensory (usually visual) feedback about their control over an end effector.

Closed-loop iBCI control of parameters such as reaching, grasping, and force has been achieved to varying degrees. We first review progress in restoring kinematic reach and grasp control and then consider progress on incorporating other motor and non-motor variables, including force and volitional state, respectively, into iBCIs.


Brain Computer Interfaces to Restore Reaching and Grasping

The majority of closed-loop iBCI developments have focused on restoring control over kinematic (movement-related) variables, mostly reach – and to a lesser extent, grasp – in non-human primates and human participants (Bensmaia and Miller, 2014).

These efforts stem from the seminal work of Georgopoulos and colleagues

(Georgopoulos et al., 1982; Georgopoulos et al., 1984; Georgopoulos et al., 1986;

Georgopoulos et al., 1988) and other works that followed (Fu et al., 1993; Fu et al.,

1995; Moran and Schwartz, 1999; Wessberg et al., 2000; Mehring et al., 2003; Paninski et al., 2004; Rickert et al., 2005), which proposed that populations of neurons in motor cortex encode high-level kinematic variables such as movement directions, hand positions, and velocities in a global coordinate frame. iBCIs have since successfully decoded reach- and grasp-related movements from a variety of motor-related areas of the cerebral cortex, including M1 (Serruya et al., 2002; Taylor et al., 2002a; Carmena et al., 2003; Velliste et al., 2008; Moritz and Fetz, 2011; Gilja et al., 2012a; Chestek et al.,

2013; Schaffelhofer et al., 2015; Leo et al., 2016), PM (Carmena et al., 2003;

Santhanam et al., 2006; Carpaneto et al., 2011; Townsend et al., 2011; Carpaneto et al.,

2012; Gilja et al., 2012a; Pistohl et al., 2012; Bleichner et al., 2014; Schaffelhofer et al.,

2015), supplementary motor area (SMA) (Carmena et al., 2003), primary somatosensory cortex (S1) (Moritz and Fetz, 2011; Chestek et al., 2013), and posterior parietal cortex

(PPC) (Carmena et al., 2003; Mulliken et al., 2008; Klaes et al., 2015).

One of the first studies to achieve closed-loop iBCI control trained NHPs to volitionally modulate the activity of individual M1 neurons using visual feedback and reward (Fetz, 1969). This study was followed by several closed-loop iBCI developments, including the cortical control of a one-dimensional robotic arm by rats to acquire water droplets (Chapin et al., 1999) and the control of two-dimensional (Serruya et al., 2002)

and three-dimensional (Taylor et al., 2002a) cursor movements by NHPs. In all of these early closed-loop studies, movement information was decoded in real time from neural signals generated during natural movement execution. Subsequent iBCI studies enabled

NHPs to achieve multi-dimensional control of prosthetic limbs solely from neural activity, i.e., in the absence of executed arm movements (Carmena et al., 2003; Velliste et al.,

2008). Carmena and colleagues extracted a number of motor parameters – including three-dimensional hand position, three-dimensional hand velocity, and one-dimensional gripping force – from a variety of cortical areas (M1, PMd, S1, SMA, PPC, and ipsilateral

M1) in order to control a robotic actuator in real time. Later, Velliste and colleagues trained NHPs to achieve five dimensions of robotic arm control, including three degrees of freedom (DOFs) at the shoulder, one at the elbow, and one corresponding to hand grip.

Investigations in human participants with tetraplegia, which occurred concurrently with later non-human primate studies, progressed from achieving simple one- dimensional modulation of individual neurons (Kennedy and Bakay, 1998) to two- dimensional cursor control (Kennedy et al., 2000; Kennedy et al., 2004; Kim et al., 2008;

Kim et al., 2011; Simeral et al., 2011; Hochberg et al., 2012; Jarosiewicz et al., 2015;

Pandarinath et al., 2017). These virtual kinematic studies were followed by additional works that achieved multi-dimensional cortical control of end effectors in the physical space, including prosthetic arms (Hochberg et al., 2012; Collinger et al., 2013b;

Wodlinger et al., 2015) and paralyzed arm and hand muscles via functional electrical stimulation (FES) (Bouton et al., 2016; Ajiboye et al., 2017). These studies enabled individuals with tetraplegia to perform volitional reach-and-grasp movements in order to achieve functional tasks, including object transfers (Collinger et al., 2013b;

Wodlinger et al., 2015) and self-feeding (Hochberg et al., 2012; Ajiboye et al., 2017). In

particular, Ajiboye and colleagues developed the first combined FES+iBCI system to restore both reaching and grasping capabilities to an individual with tetraplegia.

iBCIs specifically aimed at restoring grasping capabilities evolved as an extension of iBCI systems that enabled multidimensional control of arm movements.

While most of these iBCI systems have achieved closed-loop control of only one hand- related dimension, such as grasp aperture (Velliste et al., 2008; Hochberg et al., 2012;

Collinger et al., 2013b; Ajiboye et al., 2017) or grasp force (Carmena et al., 2003), one system has achieved simultaneous control of up to four hand posture dimensions in an individual with tetraplegia (Wodlinger et al., 2015). An additional, combined FES+iBCI system has enabled a second individual with tetraplegia to volitionally switch between six different motions of his paralyzed wrist and hand (Bouton et al., 2016). This latter system evolved in part from multiple open-loop studies that investigated the neural representation of various hand postures in non-human primates (Carpaneto et al., 2011;

Townsend et al., 2011; Carpaneto et al., 2012; Hao et al., 2013; Hao et al., 2014;

Schaffelhofer et al., 2015) and human participants (Graimann et al., 2003; Chestek et al., 2013; Klaes et al., 2015; Xie et al., 2015; Flint et al., 2017).

Though previous efforts in individuals with tetraplegia have restored some ability to grasp objects, truly dexterous object manipulation capabilities require more simultaneous degrees of freedom than would be necessary for a system that merely switches between discrete grasp states. Ideally, such a system would afford simultaneous control of at least 2-3 DOFs per digit, plus additional DOFs to control wrist orientation (Bensmaia and Miller, 2014). Open-loop studies in non-human primates

(Vargas-Irwin et al., 2010; Aggarwal et al., 2013) and human participants (Miller et al.,

2009; Wang et al., 2009; Shin et al., 2010; Egan et al., 2012) have demonstrated that many of these DOFs are represented in motor cortex and could thus be predicted

simultaneously from neural activity. However, extending simultaneous control to all of these dimensions, in addition to those corresponding to the arm, poses a formidable challenge for iBCI systems. Recent work has investigated the feasibility of describing hand shape with reduced dimensionality (Vinjamuri et al., 2011; Wodlinger et al., 2015), which could be an effective compromise between increasing the functionality of the hand and keeping the dimensionality of the system manageable.

Brain Computer Interfaces that Incorporate Force Control

As discussed previously, the majority of neurorehabilitative iBCI investigations have sought to restore upper limb function by providing users with volitional control over kinematic (movement-related) parameters. These efforts have contributed much to the development of iBCIs, as they have restored up to 10-dimensional hand and arm control and have enabled individuals with tetraplegia to perform functional tasks with either prosthetic arms (Hochberg et al., 2012; Collinger et al., 2013b; Wodlinger et al., 2015) or with their own paralyzed limbs (Bouton et al., 2016; Ajiboye et al., 2017). However, while the aforementioned kinematic iBCIs have enabled some degree of object interaction for users, functional grasping and object interaction capabilities needed for ADLs such as feeding, grooming, and dressing require the user to control both the kinematics and the kinetics (forces and muscle activations) of movement (Chib et al., 2009; Flint et al.,

2014; Casadio et al., 2015). The field has thus shifted towards investigating force and other kinetic variables as a control signal for these systems. As with all iBCI research, these investigations stem from a large body of behavioral neurophysiology studies in non-human primates. A brief review of the neuroscientific and neurorehabilitative history of kinetic iBCI control is presented here.


Neural Representation of Kinetic Parameters

Early work by Evarts and others demonstrated that neural activity in the motor cortex is correlated to both force and the rate of change of force (Evarts, 1968, 1969;

Humphrey, 1970; Smith et al., 1975; Hepp-Reymond et al., 1978; Thach, 1978; Evarts et al., 1983; Georgopoulos et al., 1983; Georgopoulos et al., 1992). These studies provided a number of foundational insights about how forces are represented in NHP motor cortex, enumerated as follows. First, individual neuron responses to static and dynamic forces exhibit varying morphologies, including phasic (time-varying) and tonic (time- constant) firing rates (Kalaska et al., 1989; Wannier et al., 1991; Maier et al., 1993).

Second, within a certain range of force outputs, a monotonic relationship exists between static forces and the activity of individual neurons (Evarts, 1969; Hepp-Reymond et al.,

1978; Thach, 1978; Cheney and Fetz, 1980; Evarts et al., 1983). Third, even though muscle activity is correlated with both the magnitude and direction of intended forces,

M1 encodes the force direction to a much greater extent than force magnitude – which suggests that these two aspects of force are controlled independently (Kalaska and

Hyde, 1985; Kalaska et al., 1989; Taira et al., 1996; Boline and Ashe, 2005). Finally, both kinetic and kinematic parameters can be predicted from similar populations of neurons, indicating that these parameters have overlapping neural representations

(Thach, 1978; Kalaska et al., 1989; Carmena et al., 2003; Sergio et al., 2005; Suminski et al., 2011; Chhatbar and Francis, 2013; Milekovic et al., 2015; Intveld et al., 2018).
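The monotonic force-rate relationship noted above is often summarized by fitting a simple tuning model to each neuron. The snippet below fits a linear model of firing rate against static force level to simulated trials; the force levels, gains, and noise are fabricated for illustration and do not correspond to any dataset analyzed in this dissertation.

```python
import numpy as np

rng = np.random.default_rng(3)
force_levels = np.array([0.5, 1.0, 2.0, 4.0])        # hypothetical static forces (N)
n_trials_per_level = 20

# Simulate one force-tuned neuron: rate = baseline + gain * force + noise (assumed model)
forces = np.repeat(force_levels, n_trials_per_level)
rates = 12.0 + 3.5 * forces + rng.normal(0.0, 2.0, forces.size)

# Least-squares fit of rate = b0 + b1 * force, plus variance explained (R^2)
X = np.column_stack([np.ones_like(forces), forces])
coef, *_ = np.linalg.lstsq(X, rates, rcond=None)
predicted = X @ coef
r_squared = 1.0 - np.sum((rates - predicted) ** 2) / np.sum((rates - rates.mean()) ** 2)
print(f"baseline={coef[0]:.1f} Hz, gain={coef[1]:.2f} Hz/N, R^2={r_squared:.2f}")
```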

Subsequent studies attempted to elucidate the representation of forces in human cortex. These investigations began with non-invasive imaging studies performed using functional magnetic resonance imaging (fMRI) (Thickbroom et al., 1998; Cramer et al.,

2002; Ehrsson et al., 2002; Kuhtz-Buschbeck et al., 2008; Ward et al., 2008) and positron emission tomography (PET) (Dettmers et al., 1995), which indirectly assess cortical activity as a function of blood flow via the blood oxygen-level dependent (BOLD)

signal and the regional cerebral blood flow (rCBF) signal, respectively (Wolpaw et al.,

2002; Eliassen et al., 2008). Within these non-invasive studies, BOLD and rCBF signals exhibited a monotonically increasing relationship with increasing static force levels, which agreed with the results of previous investigations in NHPs.

Kinetic Decoding for iBCI Control

The first attempts to decode kinetic parameters from neural activity were achieved in NHP studies, which successfully reconstructed grip force (Eliassen et al.,

2008; Chen et al., 2014a) and elbow and shoulder torques (Fagg et al., 2009) from neural activity. Additional studies were able to predict electromyography (EMG) activity of arm muscles directly from M1 activity (Morrow and Miller, 2003; Santucci et al.,

2005; Pohlmeyer et al., 2007). Subsequent work successfully achieved cortical control of temporarily paralyzed arm muscles in NHPs via FES stimulation (Moritz et al., 2008;

Pohlmeyer et al., 2009; Ethier et al., 2012). Concurrent electrophysiological studies in humans were able to decode discrete forces offline, with varying levels of accuracy, from stereotactic EEG (sEEG) (Murphy et al., 2016), EEG (Wang et al., 2017), ECoG

(Degenhart et al., 2011; Paek et al., 2015), and intracortical (Downey et al., 2018) signals. Notably, Downey and colleagues were the first to predict intended forces from a human with tetraplegia at the intracortical level.
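Offline force decoding of this kind is typically framed as classification of a discrete force level from binned neural features. The sketch below shows a hypothetical version of that analysis using cross-validated linear discriminant analysis in scikit-learn on simulated features; the trial counts, feature dimensionality, and class separability are assumptions made for illustration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials, n_features, n_forces = 120, 96, 3      # assumed trial and feature counts

# Simulate features whose means shift with the intended force level
labels = np.repeat(np.arange(n_forces), n_trials // n_forces)
class_means = rng.normal(0.0, 1.0, (n_forces, n_features))
features = class_means[labels] + rng.normal(0.0, 2.0, (n_trials, n_features))

# Cross-validated classification of force level from neural features
accuracy = cross_val_score(LinearDiscriminantAnalysis(), features, labels, cv=5)
print(f"mean offline accuracy: {accuracy.mean():.2f} (chance = {1 / n_forces:.2f})")
```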

To date, few studies in humans have achieved closed-loop BCI control of force outputs. In one such study, an EEG-based BCI successfully discriminated three force levels (low, high, and rest) generated during force imagery and provided users with audio feedback indicating the imagined force predicted during each trial (Wang et al.,

2017). On the other hand, intracortical BCIs have essentially bypassed the kinetic control problem by decoding kinematic variables from cortex, and then mapping these decoder outputs to empirically-derived, patient-specific muscle stimulation patterns

designed to achieve the decoded kinematics (Bouton et al., 2016; Ajiboye et al., 2017).

While this approach does restore sufficient “force” control to enable individuals with tetraplegia to perform basic grasp and object transfers, more nuanced kinetic control is needed to enable participants to perform tasks that require manual precision. This nuanced control, in turn, should be informed by detailed investigations of how kinetic variables are represented specifically in individuals with tetraplegia, in whom deafferentation-induced cortical reorganization has occurred (Green et al., 1999;

Lacourse et al., 1999). Such investigations are the subject of this dissertation.
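The mapping from decoded kinematics to muscle stimulation described above can be pictured as interpolation over a small set of empirically calibrated stimulation patterns. The toy example below interpolates pulse widths for three hypothetical stimulation channels as a function of a decoded hand-aperture command; the channel names, calibration values, and parameterization are invented for illustration and are not taken from the cited FES systems.

```python
import numpy as np

# Hypothetical calibration: stimulation pulse widths (us) measured at a few commanded
# hand-aperture values (0 = open, 1 = closed). All values are invented placeholders.
calibration_apertures = np.array([0.0, 0.5, 1.0])
calibration_patterns = np.array([
    [0,   200, 0],      # aperture 0.0: [finger flexor, finger extensor, thumb flexor]
    [60,  120, 80],     # aperture 0.5
    [180, 0,   180],    # aperture 1.0
])

def stimulation_for(aperture):
    """Interpolate a stimulation pattern for a decoded hand-aperture command."""
    return np.array([np.interp(aperture, calibration_apertures, calibration_patterns[:, ch])
                     for ch in range(calibration_patterns.shape[1])])

print("pattern for a partially closed command:", stimulation_for(0.25))
```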

Brain Computer Interfaces that Incorporate Volitional State Information

In addition to exhibiting activity during kinematic and kinetic outputs such as reaches, grasps, and forces, motor cortex has been shown to activate in the absence of movement – for example, during movement preparation, mental rehearsal, or motor imagery (Sanes and Donoghue, 2000). We refer to these cognitive activities – in addition to movement execution, observation, and other motor-adjacent actions – as the volitional states of movement.

The first studies to investigate volitional states revealed the presence of “mirror neurons” (di Pellegrino et al., 1992; Rizzolatti and Craighero, 2004), which change their firing rates during both movement execution and movement observation (Gallese et al.,

1996; Umilta et al., 2001; Kohler et al., 2002; Keysers et al., 2003; Tkach et al., 2007) or audition (Kohler et al., 2002; Keysers et al., 2003), in non-human primates. Such neurons are typically located within premotor and parietal cortices (Rizzolatti and

Craighero, 2004), but have also been identified in other motor areas such as M1 (di

Pellegrino et al., 1992). Subsequent imaging and electrophysiological studies indicated the presence of a homologous, “mirror-like” network of neural activity in human

motor cortex (Rizzolatti and Craighero, 2004). Several of these studies elicited common patterns of neural activity during various volitional states, including observed, imagined, attempted, executed, simulated, and verbalized movements (Grezes and Decety, 2001;

Filimon et al., 2007; Filimon et al., 2015; Vargas-Irwin et al., 2018), which suggests the presence of a “core” network of neurons that are activated across multiple volitional states (Jeannerod, 2001). From a functional standpoint, this “core” network offers a crucial advantage to individuals with tetraplegia, who are incapable of executing movements, because these individuals can nonetheless generate neural signals from which intended motor parameters can be decoded. Indeed, human participants in multiple studies have achieved closed-loop iBCI control of hand, wrist, and arm movements by imagining (as opposed to executing) these actions (Hochberg et al., 2006; Felton et al., 2007; Hinterberger et al., 2008; Schalk et al., 2008; Miller et al., 2010; Hochberg et al., 2012).

The same studies that supported the concept of this “core” network also demonstrated differences between the neural activation patterns elicited by various volitional states. For example, action observation and imagery typically elicit weaker, less widespread neural responses than executed (Grezes and Decety, 2001; Filimon et al., 2007; Miller et al., 2010; Filimon et al., 2015) or attempted (Vargas-Irwin et al., 2018) actions. In particular, Vargas-Irwin and colleagues found that observed, imagined, and attempted arm reaches elicited overlapping but unique, statistically distinct intracortical activity patterns in two individuals with tetraplegia. Moreover, while this study successfully detected reach-related information during all volitional states, the most accurate decoding results were achieved during attempted (as opposed to observed or imagined) reaches. These results suggest that non-motor parameters such as volitional

state may directly influence how robustly motor parameters are represented within motor cortex, and should thus be considered during iBCI development.

Dissertation Specific Aims and Hypotheses

As previously stated, both kinematic and kinetic parameters are necessary to restore functional control of the upper limb to individuals with tetraplegia (Chib et al.,

2009; Flint et al., 2014; Casadio et al., 2015). However, even though the motor cortex encodes both kinematics and kinetics, only kinematic variables such as movement directions and velocities have been consistently incorporated into human iBCIs

(Bensmaia and Miller, 2014). Furthermore, while kinematic neural representation in individuals with tetraplegia has been thoroughly vetted, as evidenced by the existence of high-performance kinematic iBCIs in humans (Hochberg et al., 2012; Collinger et al.,

2013b; Wodlinger et al., 2015; Ajiboye et al., 2017; Pandarinath et al., 2017), the neural representation of kinetic parameters within this patient population, and the degree to which this representation is affected by additional movement-related parameters, has yet to be fully explored. Such investigations are necessary to inform the feasibility of incorporating force control into these systems.

The main hypotheses of this dissertation are that force-related information is neurally represented within the motor cortex in individuals with tetraplegia, and that additional parameters like hand grasp and volitional state affect this representation. These hypotheses lead to two specific aims.


Aim 1: Characterize the extent to which neural activity is modulated by observed, imagined, and attempted hand grasping forces.

Rationale: This study elucidates the representation of both forces and volitional states within the motor cortex, at the resolution of multi-unit intracortical activity, in individuals with tetraplegia. Existing literature has demonstrated that volitional state

(VoS) affects the representation of kinematic motor parameters, such as arm reaches and grasps. Specifically, attempted and executed movements recruit stronger, more widespread neural activation patterns than observed or imagined movements in both able-bodied individuals (Grezes and Decety, 2001; Filimon et al., 2007; Miller et al.,

2010; Filimon et al., 2015) and in individuals with tetraplegia (Vargas-Irwin et al., 2018).

Furthermore, in two individuals with tetraplegia, attempted actions were decoded more accurately than observed and imagined actions, which supports the idea that a user’s volitional state could influence closed-loop iBCI performance. However, only a handful of studies have investigated how volitional state affects kinetic neural representation

(Cramer et al., 2005; Murphy et al., 2016). While these kinetic studies are generally in agreement with their kinematic counterparts in that attempted and executed forces recruit stronger neural activations than imagined forces, the trends are less clear in individuals who are paralyzed (Cramer et al., 2005). Furthermore, while open-loop force decoding was achieved in a single individual with tetraplegia (Downey et al., 2018), the extent to which force is neurally represented within these individuals remains largely unresolved.

Therefore, Aim 1 was addressed by characterizing intracortical responses to observed, imagined, and attempted static force levels. This was accomplished by evaluating the tuning properties of individual neural features using ANOVA-based statistical methods, characterizing the response of the neural population using a

dimensionality reduction technique called continuous similarity (CSIM) analysis, and predicting discrete force levels from the neural activity during individual volitional states.
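The ANOVA-based characterization mentioned here can be illustrated with a per-feature two-way analysis of variance, asking whether a feature's activity varies with force, with volitional state, or with their interaction. The snippet below runs such a test on one simulated feature using statsmodels; the condition labels, effect sizes, and trial counts are assumptions for illustration, and the code is a conceptual sketch rather than the exact statistical pipeline of Chapter 3.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(5)
forces = ["light", "medium", "hard"]
states = ["observe", "imagine", "attempt"]

# Simulate one neural feature modulated by force, volitional state, and noise (assumed)
rows = []
for f_idx, force in enumerate(forces):
    for s_idx, state in enumerate(states):
        rates = 10 + 2 * f_idx + 4 * s_idx + rng.normal(0, 1.5, 15)
        rows += [{"force": force, "state": state, "rate": r} for r in rates]
df = pd.DataFrame(rows)

# Two-way ANOVA: main effects of force and volitional state, plus their interaction
model = ols("rate ~ C(force) * C(state)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```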

Aim 2: Determine whether force-related neural activity depends on the hand grasps used to produce forces

Rationale: During a motor action, forces are often produced in the context of other parameters related to the task, such as the grasp postures used to generate these forces

(Murphy et al., 2016). As discussed previously, a body of work has revealed that neurons in the motor cortex have “mixed selectivity” to both kinematic and kinetic parameters

(Thach, 1978; Kalaska et al., 1989; Carmena et al., 2003; Sergio et al., 2005; Suminski et al., 2011; Chhatbar and Francis, 2013; Milekovic et al., 2015; Intveld et al., 2018), including force and grasp. Currently, there is a lack of consensus regarding how these various parameters influence each other. While some studies suggest that forces and grasps are encoded independently within the motor cortex (Chib et al., 2009; Hendrix et al.,

2009; Pistohl et al., 2012; Intveld et al., 2018), others present evidence that force representation and decoding performance are grasp-dependent (Hepp-Reymond et al.,

1999; Degenhart et al., 2011; Murphy et al., 2016). A clear understanding of how forces and grasps interact within the neural space in individuals with tetraplegia, and how such interactions affect force decoding performance, is necessary for the next phase of iBCI development.

Therefore, Aim 2 was addressed in a manner similar to Aim 1, by evaluating feature- and population-level neural responses to discrete force levels attempted with up to five arm and hand postures. Population-level responses were assessed with a dimensionality reduction algorithm called demixed principal component analysis (dPCA), which performs an ANOVA-like decomposition of the data into task-dependent sources

of variance (namely, force and grasp). The resulting task-dependent dimensions of neural activity were used to predict intended forces and grasps from the data offline.
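The decomposition that dPCA performs builds on a marginalization step: averaging the population data over one task variable isolates the variance attributable to the other. The toy example below applies only this marginalization, followed by a singular value decomposition of each marginalized matrix, to simulated force- and grasp-dependent activity; it is a conceptual sketch of the idea rather than a reimplementation of the dPCA algorithm used in Chapter 4.

```python
import numpy as np

rng = np.random.default_rng(6)
n_neurons, n_forces, n_grasps, n_time = 60, 3, 4, 50

# Simulated trial-averaged activity: neurons x forces x grasps x time (assumed shapes)
force_signal = rng.normal(size=(n_neurons, n_forces, 1, 1)) * np.linspace(0, 1, n_time)
grasp_signal = rng.normal(size=(n_neurons, 1, n_grasps, 1)) * np.linspace(1, 0, n_time)
X = force_signal + grasp_signal + 0.3 * rng.normal(size=(n_neurons, n_forces, n_grasps, n_time))
X = X - X.mean(axis=(1, 2, 3), keepdims=True)

# Marginalize: average over grasps to isolate force-related variance, and vice versa
force_marg = X.mean(axis=2)              # neurons x forces x time
grasp_marg = X.mean(axis=1)              # neurons x grasps x time

def top_component_variance(marg):
    """Fraction of a marginalization's variance captured by its top component."""
    flat = marg.reshape(marg.shape[0], -1)
    s = np.linalg.svd(flat, compute_uv=False)
    return (s[0] ** 2) / np.sum(s ** 2)

print("force-marginalized variance in top component:", round(top_component_variance(force_marg), 3))
print("grasp-marginalized variance in top component:", round(top_component_variance(grasp_marg), 3))
```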


Chapter 2: The evolution of intracortical brain-machine interfaces and trends toward restorative devices: A comprehensive review

Note that this chapter is modified from a review article in preparation, requested by the

Journal of :

Cole, N.M.*, Rastogi, A.*, Hutchinson, B.C., Alfaro, K.E., Conlan, E.C., Krall, J.T.,

and Ajiboye, A.B. The evolution of intracortical brain-machine interfaces and

trends toward restorative devices: a comprehensive review.

*Both authors contributed equally to this work.

The article is reproduced below.

Introduction

Tetraplegia caused by spinal cord injury, stroke, or motor neuron disease leads to loss of independence due to the inability to physically interact with many objects or complete activities of daily living related to self-care. Intracortical brain machine interfaces (iBMIs) are being developed and optimized for restoring function to those affected by loss of limb movement and the ability to speak. The key motivations for iBMI clinical research studies are to improve the functionality, longevity, robustness, and generalizability of these systems in order to eventually create systems that can be deployed for home use. Clinical intervention with an iBMI system can restore speech to a person affected by locked-in syndrome, enable computer control of applications and

the user’s physical environment, or restore arm and hand function, all of which greatly increase the user’s independence and improve their quality of life.

An iBMI incorporates four key elements: (1) Collecting neural signals from the user; (2) Translating the signals into control parameters; (3) Using the control parameters to actuate an end-effector; and (4) A feedback mechanism to provide closed-loop control for the user. The iBMI system can include a wide variety of components for each of these key elements, thus changing how the user interacts with the system, how the system interprets the user’s intentions, and how the control parameters relate to the end-effector. Starting with the user, a closed-loop iBMI system typically records neural signals from an intracortical microelectrode array and these signals are processed in the computer. After initial preprocessing, the selected neural features (firing rates, local field potentials, etc.) are fed into the decoding algorithm which transforms neural feature modulation into control variables (cursor velocity, three- dimensional end-point velocity, etc.). These decoded movements go into the controller for the end-effector leading to movement or environment interaction. In some systems, the controller may also incorporate additional processing to convert the decoded movement variables into physical control variables or to provide movement assistance based on sensor feedback from the end effector. The resulting actions of the end- effector are fed back to the user in some way. Typically, feedback is simply visual; the user can see the apparatus move, or where the cursor is on the screen. Intracortical interfacing also allows for the incorporation of stimulating somatosensory feedback, where sensor information is fed back to the computer, processed into stimulation parameters, and then stimulated through another intracortical microelectrode. All sensory information is then integrated with the user’s volition, leading to the neural control signal modulation which is sensed by the recording electrode. This entire

process, including optional components, is diagrammed in Figure 2-1. The following review seeks to explore the field of brain-machine interfaces with regard to research investigations for each of the numerous components of the iBMI system, thus presenting a comprehensive review of the entire iBMI system.
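The data flow described above can also be summarized in code. The following minimal Python sketch is purely illustrative; every function name is a hypothetical placeholder for a system component rather than an actual API.

```python
# Illustrative sketch of one pass through the closed-loop iBMI data flow.
# All function names are hypothetical placeholders for system components.

def ibmi_loop_step(acquire_features, decoder, controller, end_effector, feedback):
    features = acquire_features()      # e.g., binned neural features from the recording array
    intent = decoder(features)         # e.g., cursor or endpoint velocity
    command = controller(intent)       # optional conversion/assistance based on sensor data
    state = end_effector(command)      # cursor, robotic arm, or FES-driven limb moves
    feedback(state)                    # visual (or stimulated somatosensory) feedback to the user
```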

Figure 2-1. Block diagram of an iBMI system. Items listed in boxes are physical components and bold terms are key, non-physical components of the system. Because the controller does not always exist purely in software within the computer, it is denoted as both a physical and a non-physical component.

Clinical research relating to iBMIs is expanding rapidly, with investigators focusing their efforts on each of the components of the complex iBMI system. Many articles review certain aspects of iBMIs, such as improving the electrode interface (Jorfi et al.,

2015; Wellman et al., 2018), signal decoding (Kao et al., 2014; Brandman et al., 2017), movement restoration (Bockbrader, 2019), and creating more versatile systems

(Andersen et al., 2014). The scope of the present review includes each of these technical aspects of iBMI systems with an explicit focus on clinical intracortical electrode

interfaces. Additionally, a recently published review of intracortical and deep brain electrode complications includes a comprehensive list of all study participants implanted with intracortical microelectrode arrays and their implant durations (Bullard et al., 2019).

Here we reproduce that list of study participants in a table with additional details, such as location of implant, electrode characteristics, and the publications associated with each participant (see Table 2-1: iBMI clinical trial study participants).

The present manuscript is organized by the components of an iBMI, starting with intracortical electrode technology, then discussing the typically implanted brain areas, moving into how the signals are processed and characterized, leading up to closed-loop control. Closed-loop control is broken down into real-time signal decoding, an overview of iBMI end-effectors, and a discussion of the future directions of bi-directional iBMI systems that incorporate sensory feedback via intracortical stimulation. The goal for each section is to present the historical development of each component, common practices in current investigations, and the future direction of research with reference to system improvements and applications of the intracortical brain-machine interface.

Intracortical Electrode Technology

The first component of an intracortical brain-computer interface is the sensor electrode, which is implanted into the cortical tissue and typically records single or multiunit activity from individual neurons. Intracortical recording electrodes are typically used in iBMIs to record activity in motor cortex. Intracortical recording technologies fall under three broad categories: microwires, planar silicon electrodes, and microelectrode arrays (Schwartz et al., 2006; Jorfi et al., 2015). Here, we present the history of intracortical recording technologies as they pertain to clinical studies and the current efforts being made to improve the longevity and biocompatibility of the electrodes in clinical implementations of iBMI systems.


History of Intracortical Electrodes

Early intracortical sensors consisted of metal microwires constructed from silver- silver chloride (Renshaw et al., 1940), stainless steel (Grundfest and Campbell, 1942;

Grundfest et al., 1950), tungsten (Hubel, 1957), and eventually platinum-iridium alloys coated with insulating materials such as glass or Parylene-C (Wolbarsht et al., 1960;

Salcman and Bak, 1973, 1976). Individual microwires were inserted into cortical tissue and typically yielded recordings from one neuron at a time. Despite the inherent limitations of sequentially recording from single units, microwire technology nonetheless enabled decades of research in non-human primates, which elucidated key features of voluntary motor control (Evarts, 1966, 1968; Bizzi and Schiller, 1970; Georgopoulos et al., 1982; Georgopoulos et al., 1986; Hochberg et al., 2006).

Additional electrode technologies eventually followed to improve the chronicity of neural recordings. The first silicon electrode (Wise et al., 1970), which ultimately evolved into the Michigan (MI)-style silicon planar microelectrode (BeMent et al., 1986; Wise,

2005), enabled multiple studies that utilized chronic intracortical recordings (Hetke et al.,

1994; Kipke et al., 2003; Vetter et al., 2004). Another attempted strategy was to include neurotrophic factors within the electrode design. For example, neurotrophic cone electrodes, in which a Teflon-insulated gold wire was placed in a glass cone along with a segment of sciatic nerve (replaced with neurotrophic growth factors in clinical translation) (Kennedy,

1989; Kennedy et al., 1992), enabled chronic neural recordings for up to 15 months in the cortex of non-human primates (NHPs) (Kennedy and Bakay, 1997). These neurotrophic electrodes were subsequently used to record the first intracortical neural signals in human participants with tetraplegia (Kennedy and Bakay, 1998; Kennedy et al., 2000; Kennedy et al., 2004).


Further developments enabled simultaneous recording of populations of neurons from multiple electrode channels, which in turn allowed for the study of temporal relations between multiple neurons and yielded more reliable prediction of motor parameters from population activity (Humphrey, 1970; Maynard et al., 1997). The first attempts to obtain simultaneous multichannel recordings began with the manual assembly of beds of needles with various strategies to align the wires (Humphrey, 1970;

Schmidt et al., 1988; Williams et al., 1999). Improvements in fabrication techniques and semiconducting materials led to the microassembly of silicon-based MI-style electrode probes (Wise, 2005) and to the development of the Utah Electrode Array (UEA)

(Campbell et al., 1991), which remains the only chronic, high-density, penetrating recording electrode with FDA approval (Lu et al., 2012; Jorfi et al., 2015). Unlike manually assembled and micro-assembled electrode designs, UEAs are monolithic structures manufactured by etching penetrating electrode shanks from an initial silicon substrate (Campbell et al., 1991). Early versions of the UEA yielded robust chronic recordings in animal models (Maynard et al., 1997; Suner et al., 2005) and subsequently supplanted neurotrophic cone electrodes as the standard technology used to record intracortical activity in humans with tetraplegia (Hochberg et al., 2006; Hochberg et al.,

2012; Collinger et al., 2013b; Aflalo et al., 2015; Wodlinger et al., 2015; Bouton et al.,

2016; Ajiboye et al., 2017).

For further details on the history of intracortical electrode technology, readers are referred to an excellent and comprehensive review by Jorfi et al. (2015).

Current Electrode Technology and Developments

Current intracortical electrodes consist of conductive core wires that are insulated along the shank with materials such as glass, parylene, teflon, or polyimide. Recording sites are created by removing the insulation at the tip of the shank to expose the conductor (Kozai, 2018).


Electrode technology has advanced considerably for both classes of arrays with regard to materials, microfabrication techniques, and packaging, as detailed in reviews by Kozai and Wellman (Kozai, 2018; Wellman et al., 2018). A brief description of MI-style planar electrodes and microelectrode arrays is included here.

MI-style Planar Microelectrodes

Initially developed as a single electrode technology (Wise et al., 1970), planar microelectrodes have since evolved to incorporate multiple recording sites along single or multiple recording shanks (Wise, 2005). More recently, flexible versions of the planar arrays are being developed, including those that can be folded, rolled, or stacked into three-dimensional configurations (Wellman et al., 2018); as well as those that contain active recording sites on multiple sides of individual recording shanks (Seymour et al.,

2011). Currently, MI-style microelectrodes are being used solely in animal models such as NHPs (Jorfi et al., 2015).

Microelectrode Arrays

Microelectrode arrays (MEAs) such as the UEA are a cornerstone for intracortical studies in human participants. MEAs consist of multiple silicon shanks, 1-1.5 millimeters in length. The electrode shanks protrude from a flat platform, which enables a one-step pneumatic insertion of all recording channels into the cortical tissue (Lu et al., 2012). The original UEA consisted of 100 platinum-tipped silicon shanks that protruded from a flat,

4x4 millimeter platform (Campbell et al., 1991; Rousche and Normann, 1998). Recent modifications have aimed to improve the insulation of the shanks (Hsu et al., 2009), to improve charge transfer via swapping out different metals (Negi et al., 2010), and to improve electrical isolation of individual channels via wafer scale fabrication (Bhandari et al., 2010). Additionally, various array configurations and structures have been

developed, including slant arrays (Branner et al., 2001), arrays with ultra-high aspect ratios (Bhandari et al., 2010), and high-density arrays (Wark et al., 2013).

Limitations in Electrode Longevity and Durability

Advances in penetrating intracortical array technologies have enabled studies to advance understanding of single unit activity and have enabled humans to cortically control BMI systems for years after initial array implant (Simeral et al., 2011). Despite the capabilities of current electrode technologies, the acute and chronic stability of neural recordings is limited. In the acute setting, recorded neural waveform amplitudes often vary from day to day, or even within the same day (Chestek et al., 2009; Chestek et al.,

2011; Gilja et al., 2011). This variation is thought to arise from micromotion of electrodes relative to neural tissue (Bayly et al., 2005; Suner et al., 2005; Gilletti and Muthuswamy,

2006; Santhanam et al., 2007), changes in extracellular impedance (Moffitt and

McIntyre, 2005), and participant-specific issues such as behavioral shifts (Chestek et al.,

2007) and learning (Jackson et al., 2006; Ganguly and Carmena, 2009). Acute waveform changes could potentially be mitigated with a number of strategies, including the implementation of adaptive algorithms that statistically track and account for waveform changes (Vetter et al., 2003; Gilja and Moore, 2007; Dickey et al., 2009;

Hwang and Andersen, 2009; Gilja et al., 2011; Li et al., 2011), daily recalibration (Lu et al., 2012), and subtle adaptations in behavior (Chase et al., 2009; Nuyujukian et al.,

2011).

Chronic instabilities in neural recordings require more permanent solutions. Over a period of months to years, recorded single unit amplitudes gradually decline until the waveforms eventually fade into the noise floor (Liu et al., 2006; Ludwig et al., 2006;

Freire et al., 2011; Gilja et al., 2011). This progressive loss of neural signal is partially due to engineering and material failures (Jorfi et al., 2015), such as mechanical damage

to electrodes during or after insertion (Ward et al., 2009; Prasad and Sanchez, 2012;

Barrese et al., 2013), accelerated corrosion of metal contacts due to reactive oxidative species in vitro (Merrill, 2010; Patrick et al., 2011; Potter et al., 2013; Potter-Baker et al., 2014; Prasad and Sanchez, 2012; Prasad et al., 2014), and the eventual corrosion of biocompatible electrode coatings due to exposure to electrolytic environments (Sharma et al., 2010). However, the underlying cause of chronic recording instabilities is the neuroinflammatory response to implanted intracortical devices (Lu et al., 2012; Jorfi et al., 2015; Kozai et al., 2015). Briefly, following the initial injury due to intracortical device implantation, the coagulation cascade induces wound closure and recruits inflammatory cells to the injury site. These inflammatory cells, which include resident microglia and blood-borne macrophages, phagocytose cellular debris, adsorb onto the implant, and release inflammatory and cytotoxic factors that damage healthy bystander cells. Gliosis follows, in which astrocytes proliferate at the insertion site to form a sheath that encapsulates the electrode (Jorfi et al., 2015). This entire process creates a barrier between neurons and the active recording sites on electrodes, increases tissue impedance, and leads to neuronal loss, dysfunction, and even migration away from the recording sites (Rousche and Normann, 1998; Kozai et al., 2015). Furthermore, the neuroinflammatory response compromises the integrity of the intracortical implant by increasing reactive oxidative species near the electrodes (Potter et al., 2013; Potter-

Baker et al., 2014). In summary, the neuroinflammatory response compromises the electrode-tissue interface, which in turn limits the functionality of chronically implanted intracortical electrodes months to years after implantation into neural tissue.

Efforts to improve intracortical electrode longevity

The long-term viability of iBMI systems relies on chronically stable neural recordings from a large population of neurons, ideally over a period of multiple years to

decades. A number of research groups are currently investigating various strategies to improve signal longevity to either circumvent or counteract the effects of the neuroinflammatory response to implanted electrodes. Examples of improvements that circumvent the immune response include the use of coatings to lower or alter the impedance of recording electrodes (Keefer et al., 2008; Alba et al., 2015; Jorfi et al.,

2015; Kozai et al., 2016; Atmaramani et al., 2019) and the development of high-density electrode arrays to increase initial unit counts (Lehew and Nicolelis, 2008; Gillis et al.,

2018). However, the vast majority of technological developments have sought to incorporate strategies to minimize, modify, or mitigate the immune response to intracortical implants (Lu et al., 2012). These include pharmacological immunomodulatory approaches, modification of electrode geometry and modulus to minimize electrode micromotion, modification of electrode insertion techniques to minimize insertion trauma, integration of the electrode surface with surrounding neurons to counteract immune-mediated neuronal loss, and modification of electrode materials to improve resistance to immune-mediated electrode corrosion (Jorfi et al., 2015; Bedell and Capadona, 2018).

Immunomodulatory Approaches

Immunomodulatory approaches to prolong intracortical signal longevity involve pharmacological strategies that directly counteract the immune response. This is achieved either by administering antibiotics and anti-inflammatory drugs or by targeting specific cellular processes that promote neuroinflammation (Bedell and Capadona,

2018). Antibiotics and anti-inflammatory drugs can either be administered systemically via peripheral injection or ingestion, or locally via direct delivery to the site of implantation. In animal models, systemic administration of steroids and antibiotics can decrease astrocytosis around the electrode site; however, these pharmaceuticals

produce medical complications when chronically administered (Shain et al., 2003;

Rennaker et al., 2007; Girbovan et al., 2012). Therefore, a number of studies have devised technologies to locally administer antibiotics and anti-inflammatories to the electrode-tissue interface. These technologies include the incorporation of pharmaceuticals into electrode coatings, drug delivery via sustained release mechanisms, and drug delivery via microfluidic devices (Bedell and Capadona, 2018).

The results of multiple in vivo studies with these specialized electrodes suggest that chronic neuroinflammation still occurs, regardless of reduced inflammatory cell adhesion to electrode surfaces (Jorfi et al., 2015). Therefore, rather than preventing inflammatory cell adhesion altogether, another approach is to control cell phenotype near the implant, and in particular to promote the adhesion of neurons rather than inflammatory cells on the implant surface. This control has been achieved by altering the roughness of electrode surfaces to promote neuron adhesion and by presenting biological motifs to encourage neurons to adhere to synthetic electrode materials (Jorfi et al., 2015). A more detailed discussion of current efforts to improve the integration of inserted electrodes with the biological milieu can be found in the review by Jorfi and colleagues.

Modification of Electrode Stiffness

A number of groups have investigated how the stiffness of intracortical electrodes affects post-surgical electrode micromotion, which is the motion of implanted electrode shanks relative to the neural tissue due to breathing, blood flow, and other factors

(Goldstein and Salcman, 1973; Szarowski et al., 2003; Subbaroyan et al., 2005; Harris et al., 2011; Ware et al., 2012; Stiller et al., 2018). Electrode micromotion compromises signal stability by altering the number of neurons sampled at individual electrode sites over time (Gilja et al., 2011) and by inducing chronic motion-related tissue damage that

propagates the neuroinflammatory response near the implant (Jorfi et al., 2015).

Previous work has demonstrated that the stiffness of electrode shanks is positively correlated with micromotion-induced tissue damage and neuroinflammation (Stiller et al.,

2018). Unfortunately, traditional, silicon-based microelectrodes such as those found in

Utah arrays are extremely stiff compared to the surrounding neural tissue. While their high stiffness does facilitate the insertion of traditional microelectrodes into the cortical tissue (Bjornsson et al., 2006), it also makes the electrode shanks particularly susceptible to micromotion (Goldstein and Salcman, 1973; Sridharan et al., 2013;

Sridharan et al., 2015). Therefore, to reduce chronic neuroinflammation induced by electrode micromotion, recent studies have attempted to decrease the stiffness of implanted electrodes. Since device stiffness is a function of electrode geometry and material modulus (Karumbaiah et al., 2013; Stiller et al., 2018), many groups have opted to develop microelectrodes with smaller shanks or with more compliant materials (Lu et al., 2012; Jorfi et al., 2015).

Smaller intracortical microelectrodes reduce the mechanical strain on surrounding tissue and elicit less chronic neuroinflammation, as compared to larger implants [110,111]. Recent work suggests that shanks less than 25 microns in diameter elicit only a minimal immune response near the implant (Seymour and Kipke, 2007;

Kozai et al., 2010; Skousen et al., 2011). Accordingly, the Chestek and Kipke groups at the University of Michigan have developed 16-channel arrays of ultra-small, flexible, carbon fiber electrodes, each of which is only micrometers in diameter (Kozai et al.,

2012; Patel et al., 2015; Patel et al., 2016). Here, carbon fibers were used in lieu of glass, metal, or silicon, due to the constraints in sizing, flexibility, strength, and biocompatibility posed by these more traditional materials (Kozai et al., 2012). Existing work has demonstrated that carbon fiber electrodes elicit a reduced chronic tissue

response as compared to silicon probes (Kozai et al., 2012; Patel et al., 2016). Thus far, the chronic recording stability of these electrodes has been validated in animal models for up to 16 weeks, though further work is needed to assess their viability for BMI applications (Patel et al., 2016).

In addition to reducing electrode size, several research groups have attempted to create electrodes that are more mechanically compliant, and thus more similar in modulus to the surrounding brain tissue (Jorfi et al., 2015). Broadly, these newer electrode designs fall under two categories: those made from “off-the-shelf,” compliant polymeric materials with constant mechanical modulus, and those constructed with mechanically-adaptive substances that are initially stiff but that soften in vivo. While microelectrodes constructed with “off-the-shelf” compliant materials do reduce micromotion-induced neuroinflammation, they are prone to buckling during insertion into the cortical tissue due to their reduced stiffness. To prevent buckling, a number of temporarily and permanently implanted insertion aids for compliant microelectrodes have been developed (Jorfi et al., 2015). Temporarily implanted insertion aids, such as shuttles that are detached after implantation (Kozai and Kipke, 2009) and shuttles constructed from biodegradable materials (Jorfi et al., 2015), pose advantages over permanent insertion aids because they provide needed support and stiffness during implantation without compromising the mechanical benefits that the compliant electrodes achieve in vivo.

A number of groups have attempted to circumvent the need for delivery aids entirely, through the development of mechanically-adaptive electrodes that are stiff during insertion but soften in vivo. One such electrode was recently evaluated in vivo during a 16 week implantation period, after which inflammatory cell activation was nearly completely attenuated and no neuronal loss was observed (Nguyen et al., 2014). Further

studies are needed to quantify the degree to which changing device stiffness affects electrode recording quality. For additional details on upcoming progress related to electrode design, readers are referred to reviews by Jorfi et al. (2015) and by Wellman et al. (2018).

Modification of Electrode Insertion Technique

The insertion of any electrode into cortex results in surgical trauma. This trauma stems both from the compression of cortex in response to electrode insertion, which can lead to traumatic brain injury, low electrode yields, and decreased recording longevity

(Rennaker et al., 2005); and disruption of the neurovasculature, which perpetuates the chronic neuroinflammatory response (Kozai et al., 2010). Several factors have been shown to influence insertion-induced trauma, including insertion speed (Normann et al.,

1999; Bjornsson et al., 2006; Johnson et al., 2007), technique (Rousche and Normann,

1992; Rennaker et al., 2005), and location (Bjornsson et al., 2006; Kozai et al., 2010). In particular, since the brain is a viscoelastic material, low insertion speeds typically lead to increased cortical compression (Normann et al., 1999) and vascular damage (Bjornsson et al., 2006), while high insertion speeds enable the brain to behave more like a rigid object and thus resist deformation-induced tissue damage. Accordingly, when compared to manual electrode insertion, pneumatic insertion of multielectrode arrays has been shown to enhance electrode recording longevity because this automated technique is capable of rapid electrode insertion, on the order of 10 meters per second (Rousche and

Normann, 1992; Normann et al., 1999; Rennaker et al., 2005). Successful implantation has also been shown to depend on insertion location relative to subsurface neurovascular structures. Insertions at least 49 microns away from major surface vessels reduce the chances of piercing subsurface neurovascular structures (Kozai et al., 2010).


Thus, the accepted procedure for implanting microelectrode arrays into cortex involves high-speed, pneumatic insertion of electrodes in cortical locations that are visibly removed from surface vasculature. Since these strategies mitigate most chronic inflammatory responses stemming from surgical implantation, modifications to electrode insertion techniques have not been a major research focus. However, one group has pursued the use of two-photon microscopy in vivo to obtain pre-surgical, three- dimensional images of sub-surface neurovasculature in order to minimize blood vessel disruption below the surface of the brain. The selection of preferred electrode insertion points based on these three-dimensional images reduced neurovascular damage by

82.8 +/- 14.3% (Kozai et al., 2010).

Modification of Electrode Materials to Improve Corrosion Resistance

Due to chronic exposure to the biological environment, both the metal contacts and the insulating coatings of implanted microelectrodes can degrade over time (Lu et al., 2012; Jorfi et al., 2015). While metal recording contacts corrode much more slowly than stimulating electrodes, their rate of corrosion has been shown to increase due to the presence of reactive oxidative species in vitro (Potter et al., 2013; Potter-Baker et al.,

2014), which provides a critical link to the neuroinflammatory response (Patrick et al.,

2011). Corrosion rates are also material-dependent; for example, tungsten microwires corrode more quickly than electrodes constructed from platinum/iridium or titanium (Jorfi et al., 2015). Accordingly, the development of titanium-based Michigan-style electrodes is underway (McCarthy et al., 2011).

However, most work to improve corrosion resistance has focused on improving the integrity of insulating electrode coatings. Traditionally, intracortical electrodes were coated and sealed with biocompatible polymers such as varnish, epoxy, parylene, glass, teflon, polyimide, and silicones (Lu et al., 2012). Since these materials were initially

intended to be used as dielectrics in dry, non-corrosive environments (Jorfi et al., 2015), insulating electrode coatings do eventually degrade in vivo. This has prompted the

Cogan group at the University of Texas at Dallas to develop silicon carbide coatings that are more resistant to degradation (Cogan et al., 2003; Deku et al., 2018a; Deku et al.,

2018b; Joshi-Imre et al., 2019).

Targeted Brain Areas and Neural Information Content

Motor Cortex

The most common location of implantation for intracortical recordings is primary motor cortex (M1). Primary motor cortex has somatotopic organization where different areas of the body are represented in various degrees and locations along M1, more commonly known as the motor homunculus. One of the first studies that explored the use of M1 utilized the signals to help a locked-in patient communicate (Kennedy and

Bakay, 1998). Here, fMRI was used to locate the areas of activity while the participant imagined making hand movements. This data was used to guide implantation of a single neurotrophic electrode in the hand area of the right motor cortex. The participant was then able to volitionally increase her neural activity which was visually fed back to her.

Multi-unit recordings showed distinct on and off periods, indicating that the participant was successfully able to modulate her neural activity in a binary fashion (Kennedy and

Bakay, 1998). This premise has been built upon so that more complex information can be decoded from motor cortex, which allows recording and stimulation of motor cortex to have many applications (discussed further in section 3.3). Most recently, Willett et al. (2020) explored how different areas of the body are represented in the hand knob area of motor cortex at the single neuron level. Using microelectrode arrays, they found that face, head, arm, and leg movements were all represented in the hand knob area of motor cortex, indicating that movement across the whole body could be decoded from a single area of motor cortex.

Other cortical targets for iBMI investigations

In addition to motor cortex, there are a few other key areas of the brain which have been used in iBMI investigations: the anterior intraparietal area (AIP), ventral premotor area (PMv), supramarginal gyrus (SMG), area 5d, and primary somatosensory cortex (S1).

These areas offer possible alternate sources of neural information that could be utilized for more complex movement restoration.

Anterior intraparietal area (AIP)

AIP is a key component of the cortical grasping network (Davare et al., 2011).

AIP’s main function in this network is to extract and integrate grasp-relevant information received from the dorsal and ventral streams of the cortical visual system, or the “where” and “what” pathways, respectively. Lesions in AIP result in deficits in hand pre-shaping and mismatch of hand orientation to the object being grasped (Davare et al., 2011).

Neurons in AIP are characterized by their high selectivity to hand manipulations, and their motor-dominant, visual-dominant, or visual-motor preference (Sakata et al., 1997).

Motor dominant neurons are active whenever a movement is performed, independent of whether the movement is performed in the light or the dark. Visual dominant neurons are only active when a movement is performed in the light. Visual-motor neurons are less active in the dark than in the light. Many visual- dominant and visual-motor neurons are preferentially activated by particular objects (based on shape, size, and orientation of the object) (Murata et al., 2000). Increases in cortical grey matter in AIP have been observed in NHPs after learning to use a new tool (Quallo et al., 2009).


Several research groups have leveraged this information from AIP for the applications of movement restoration, learning, and memory. In a 2015 study, researchers implanted microelectrode arrays into the general posterior parietal cortex

(PPC) in putative AIP and Brodmann’s area 5 to decode non-grasp related hand shapes from human participants (Klaes et al., 2015). They were able to discriminate between visual and motor aspects of hand shapes from the neural recordings, which could be used for the application of neuroprosthetic devices. One investigation explored the motor capacity of AIP via microelectrode implantation and found that AIP neurons are tuned to intended movement direction and have selectivity for either the right limb, left limb, or both

(Aflalo et al., 2015). Another investigation further explored how different motor parameters such as body side, body part, and cognitive strategy were represented in

AIP (Zhang et al., 2017). They found that at the single neuron level, neurons that encode for different body parts are functionally segregated within AIP (with some areas of overlap where neurons can have mixed selectivity). Body side and cognitive strategy were determined to be dependent on the body part, and neural tuning was determined by the specific body part that was being engaged (Zhang et al., 2017). Most recently,

AIP was used to explore learning strategies for movement tasks where single neurons were tuned to reach a target goal (Sakellaridi et al., 2019). By employing a strategy in which the signals were used to guide a cursor to the intended and unintended direction, they found that the underlying neural mechanism for learning in AIP neurons comes from leveraging existing neuronal connections from previously learned tasks (Sakellaridi et al., 2019). AIP neurons also tune to memory tasks in two ways: memory and confidence

(Rutishauser et al., 2018). Memory neurons were selective to new or old stimuli whereas confidence neurons were selective to whether the participant was certain that an image presented to them had fallen into the new or old stimuli category (Rutishauser et al.,

2018).


Area 5d

An area that is closely located to AIP is area 5d, or the dorsal subdivision of

Brodmann’s Area 5 in the PPC. Area 5 in general has been known to be associated with sensorimotor integration with the specific role of encoding target related information.

Area 5 has also been shown to have distinct connections to the parietal reach region (PRR), and it is thought that the information being sent from the PRR transforms gaze-related information into limb orientation information. This can transform the frame of reference when reaching for an object in the non-human primate brain (Andersen and Buneo,

2002; Bremner and Andersen, 2012). Further recordings in NHP have shown that area 5 integrates information from many sensory modalities such as vision and proprioception

(McGuire and Sabes, 2011). Therefore, area 5 is specifically important for integrating various types of sensory information to transform the frame of reference in the brain from an eye centered view to an effector centered view.

While area 5d, or the superior parietal lobule (SPL) in humans, appears to contain useful movement related information, several studies have tried to record from this area in clinical studies in conjunction with AIP with limited results (Zhang et al.,

2017; Rutishauser et al., 2018). For example, one study disregarded the data from its implant in area 5d due to low neural yield, noting that AIP yielded four times as many recorded units as 5d (Klaes et al., 2015).

Somatosensory cortex (S1)

Two studies have reported recording from primary somatosensory cortex (S1)

(Frot et al., 2013; Downey et al., 2016). S1 is a well-researched area because it is the main brain area that receives somatosensory information from the body. While various studies have explored stimulating in S1, few have explored recording from this area.

One study recorded from S1 with the goal of using the data for a vision-guided robotic

assistance and BMI system to restore upper extremity movement (Downey et al., 2016).

They found that the signals from S1 were not sufficient to control a robotic arm alone, but the participant was able to have sufficient control with movement assistance.

However, performance overall was less than that of the participant with an implant in M1.

The second study explored nociceptive signals to S1 in patients with epilepsy, demonstrating that nociceptive stimuli do have cortical representation in S1 (Frot et al., 2013). Somatomotor and somatosensory integration is discussed further later in this review (see Section 7: Bidirectional iBMI).

Supramarginal gyrus (SMG) and ventral premotor area (PMv)

The supramarginal gyrus (SMG) and ventral premotor area (PMv) have also been implanted in a clinical investigation (Armenta Salas et al., 2018), although the report was focused on intracortical microstimulation in S1 and no reports of recording from the other implanted areas have been published at this time. However, previous literature indicates that these two areas provide useful somatosensory information

(Rizzolatti et al., 1996; Rushworth et al., 2001; Russ et al., 2003). Neurons in PMv are particularly interesting because they display mirror properties in NHP studies. Mirror neurons discharge not only when the task is being performed, but also when the task is observed (Rizzolatti et al., 1996). Currently, there are no studies that use intracortical electrodes to explore the functional role of SMG in an iBMI application, but there have been several studies implicating SMG’s role in memory (Russ et al., 2003) and motor attention (Rushworth et al., 2001). Both SMG and PMv have the potential to be new targets in future investigations for clinical iBMI systems based on current exploration in

NHPs and could play a crucial role in a bi-directional iBMI system given that a new study participant has recording electrodes implanted in these areas (Armenta Salas et al.,

2018).


Future translation of pre-clinical investigations

Several other areas have been studied via intracortical recordings in NHPs that offer interesting avenues for future clinical iBMI investigation. For example, AIP sends object information extracted from the visual system to motor cortical area F5, where a movement plan is selected and forwarded to M1 for execution (Menz et al., 2015). The recurrent feedback in the AIP-F5 circuit allows for online fine-tuning of grasp (Davare et al., 2011). Studies comparing decoding performance between AIP and F5 found higher decoding accuracy of grip types from F5 (Townsend et al., 2011; Lehmann and

Scherberger, 2013). However, decoding of grip type together with object orientation or position in space was higher in AIP. Additionally, one study decoded 27 degrees of freedom (DOF) of the arm and hand from AIP, F5, and M1, showing the highest decoding accuracy from M1, followed by F5, then AIP (Menz et al., 2015).

Signal information content from targeted brain areas

Most commonly, iBMI systems translate neural modulation in the primary motor cortex (M1) into kinematic information to control various end-effectors (Hochberg et al.,

2006; Hochberg et al., 2012; Collinger et al., 2013b; Ajiboye et al., 2017; Stavisky et al.,

2018; Vargas-Irwin et al., 2018). Previous studies have shown that neurons recorded from the hand and arm area of M1 tune to movement, direction, and position. However, there is debate as to whether the neural tuning is due to kinematic information or visual feedback of the end-effector’s position and how tuning is different for able-bodied participants versus participants with paralysis. In able-bodied study participants, neural tuning is most likely due to arm movement and proprioception as seen in a comparison of NHPs with and without restricted arm movement (Stavisky et al., 2018).

Kinetic information is also important for dexterous hand control in order to adjust grasp force for object manipulation. Typically, an iBMI system incorporates pre-

programmed grasp movements to allow the user to pick up and move objects (Ethier et al.,

2012), but would need additional neural information to decode different grasp forces.

Early investigations determined that pyramidal tract neurons located at Horsley-Clarke coordinates A12 and L17 increased their discharge frequency with heavier loads in

NHPs (Evarts, 1968). Another study used an iBMI system to classify the grasp forces needed to handle four similarly shaped objects with a robotic arm (Downey et al., 2018). Clinical iBMI investigations are beginning to incorporate force decoding for physical end-effectors (Friedenberg et al., 2017).

Lastly, another parameter that is important to the evolution of BMI systems is volitional state. The neural tuning differences during observing, imagining, and attempting movement can affect how movement is classified by a decoder (Vargas-Irwin et al., 2018). Movement rehearsal and motor imagery for goal, trajectory, and movement type can be decoded from neural signals from the PPC in humans (Aflalo et al., 2015).

Volitional state and distinct limbs on specific sides of the body have also been decoded from AIP in humans (Zhang et al., 2017). Incorporating kinetic variables, volitional state, and limb representation into future iBMI systems could allow users to interact with their environment using physical end-effectors for object manipulation with both hands in a naturalistic way.

iBMI Signal Pre-Processing and Feature Extraction

In order for an iBMI system to utilize the information encoded within different brain areas, neural activity from these brain areas must be recorded, and then intended control parameters must be extracted from the recorded neural data. As described previously, current microelectrode arrays such as the Utah array are capable of recording extracellular action potentials from groups of individual neurons, termed single units. Single unit action potentials range from tens to hundreds of microvolts in amplitude

and are typically less than two milliseconds in duration (Schwartz et al., 2006; Obien et al., 2014). Prior to use in an iBMI, the neural activity recorded on each electrode channel is amplified, sampled at a high frequency (typically 30,000 Hz), and either high-pass or band-pass filtered (typically between 250-5,000 Hz) to capture the high-frequency components that are inherent to action potentials (Hochberg et al., 2012; Aflalo et al.,

2015; Gilja et al., 2015; Bouton et al., 2016; Ajiboye et al., 2017; Pandarinath et al.,

2017). The neural signals are also often spatially filtered using a common average reference or Laplacian filter to reduce environmental noise (Schwartz et al., 2006; 2012).

Signal amplification and digitization are typically achieved in hardware, while additional processing steps are typically executed in software.
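For illustration, a minimal offline sketch of these pre-processing steps (band-pass filtering in the typical 250-5,000 Hz range and common average referencing) is shown below, assuming Python with NumPy/SciPy; real-time systems implement equivalent causal operations in hardware or dedicated software, so this is not a reproduction of any particular system.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 30_000  # Hz; typical intracortical sampling rate quoted above

def preprocess(raw, low_hz=250.0, high_hz=5000.0):
    """Band-pass filter and common-average-reference multichannel voltage data.

    raw: array of shape (n_channels, n_samples), already amplified and digitized.
    """
    # Band-pass to retain the high-frequency components inherent to action potentials.
    b, a = butter(4, [low_hz / (FS / 2), high_hz / (FS / 2)], btype="bandpass")
    filtered = filtfilt(b, a, raw, axis=1)
    # Common average reference: remove noise shared across all channels.
    return filtered - filtered.mean(axis=0, keepdims=True)
```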

The following sections define the features extracted from neural data, discuss the advantages of certain feature types for closed-loop iBMI applications, and describe how signal characterization using dimensionality reduction techniques such as principal component analysis (PCA) and neural network modeling can further the understanding of how motor control can be achieved using an intracortical interface.

Extraction of Neural Features

Subsequent to signal pre-processing, neural features are extracted from the recorded neural activity. These neural features broadly fall under three categories: sorted single units, multi-unit features, and local field potentials (LFPs), as summarized in Table 2-2.

History and Overview of Extracted Neural Features

Sorted single units, the first type of neural feature to be derived from neural recordings (Andersen et al., 2004), are considered the gold standard against which all other extracted neural features are compared. Classically, single sorted units were


“extracted” from individual channels by visually inspecting the channel’s recorded multi-unit voltage waveform and manually setting amplitude and width criteria to assign detected voltage “spikes” to individual neurons (Homer et al., 2013; Lefebvre et al., 2016). Early single unit feature data extracted in this manner facilitated an understanding of how motor parameters were encoded within, and could be decoded from, neural signals, both in non-human primates (Evarts, 1966, 1968; Bizzi and Schiller, 1970; Humphrey, 1970;

Georgopoulos et al., 1982; Georgopoulos et al., 1986; Amirikian and Georgopoulos,

2000) and in human participants (Kennedy and Bakay, 1998; Hochberg et al., 2006).

More recently, as electrode technology progressed towards enabling large-scale, simultaneous, multi-electrode recordings, classical spike sorting approaches gave way to a number of automated and semi-automated spike sorting algorithms to improve the consistency, accuracy, and speed of single unit detection (Lewicki, 1998; Rey et al.,

2015; Lefebvre et al., 2016).

Subsequently, features extracted for iBMI applications expanded to include lower-frequency (<300 Hz) LFPs, which encompass the summation of electrical activity from many neurons surrounding the recording site (Sanes and Donoghue, 1993;

Kennedy et al., 2004; Scherberger et al., 2005; Schwartz et al., 2006); and multi-unit features, which are derived from neural data that has not been spike-sorted (Homer et al., 2013). Though LFPs are a less specific signal than sorted single units due to their lower frequency content and spatial resolution, they nonetheless contain useful information content and have emerged as a potentially viable neural feature for use in iBMIs due to their increased signal stability over time (Andersen et al., 2004; Schwartz et al., 2006; Sharma et al., 2015). Multi-unit features arose as a middle ground between low-frequency, stable LFPs and high-frequency sorted single units with unstable amplitudes (Homer et al., 2013). Like sorted single units, multi-unit features are

extracted from high-frequency neural data, typically within time bins ranging from 10-100 ms (Hochberg et al., 2012; Wang et al., 2013; Aflalo et al., 2015; Gilja et al., 2015;

Wodlinger et al., 2015; Bouton et al., 2016; Ajiboye et al., 2017; Pandarinath et al.,

2017). While many kinds of multi-unit features exist, those extracted for use in human- operated iBMI systems typically fall into three categories: thresholded multi-unit activity, spike band power, and mean wavelet power. Thresholded multi-unit activity, which refers to the number of times a channel’s recorded voltage crosses a predefined noise threshold, will be described in more detail in the following section. Spike band power, which refers to the power contained within the frequency band encompassing multi-unit activity (250-5000 Hz), exhibits enhanced signal stability over sorted single units and less redundancy across multiple recording channels when compared to LFPs (Stark and

Abeles, 2007). Both of these features are utilized extensively in human-operated iBMIs, as shown in Tables 2-3 – 2-5.
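A minimal sketch of how these two multi-unit features could be computed from a pre-processed (spike-band) voltage trace is given below; the -4.5 × RMS threshold and 20 ms bin width are illustrative values within the ranges discussed elsewhere in this review, not parameters from any specific study.

```python
import numpy as np

def threshold_crossings(x, fs=30_000, bin_ms=20, k=-4.5):
    """Thresholded multi-unit activity: noise-threshold crossings per time bin.

    The threshold is set to k times the channel's RMS voltage (one common
    convention); each downward crossing of that threshold counts as one event.
    """
    thresh = k * np.sqrt(np.mean(x ** 2))
    crossings = (x[1:] < thresh) & (x[:-1] >= thresh)
    bin_len = int(fs * bin_ms / 1000)
    n_bins = crossings.size // bin_len
    return crossings[: n_bins * bin_len].reshape(n_bins, bin_len).sum(axis=1)

def spike_band_power(x, fs=30_000, bin_ms=20):
    """Spike band power: mean power of the band-pass filtered signal per time bin."""
    bin_len = int(fs * bin_ms / 1000)
    n_bins = x.size // bin_len
    return (x[: n_bins * bin_len].reshape(n_bins, bin_len) ** 2).mean(axis=1)
```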

The mean wavelet power (MWP) feature is a fairly new addition to human iBMI systems (Bouton et al., 2016). Briefly, this feature is derived by decomposing a pre- processed voltage trace into wavelets, which are transient, wave-like signals with zero mean (Mallat, 1999). Wavelets provide both frequency and temporal information about the underlying data and can represent information from a large range of frequency bands -- including those that encompass single unit, multi-unit, and LFP features -- without spike sorting (Sharma et al., 2015). MWP features are specifically computed by normalizing and averaging the wavelet coefficients from each channel of recorded neural data. Recently, MWP features derived from wavelets corresponding to the multi-unit frequency band were used in an iBMI system that restored finger movements in a human participant with tetraplegia (Bouton et al., 2016; Friedenberg et al., 2017;

Colachis et al., 2018; Skomrock et al., 2018).
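One plausible reading of this computation is sketched below using PyWavelets; the choice of wavelet and of which detail levels correspond to the multi-unit band depends on the sampling rate, so the values here are illustrative assumptions rather than a reproduction of the published method.

```python
import numpy as np
import pywt  # PyWavelets

def mean_wavelet_power(x, wavelet="db4", detail_levels=(3, 4, 5)):
    """Illustrative mean wavelet power for one channel of pre-processed voltage.

    The trace is decomposed into wavelet coefficients, and the squared
    coefficients at the chosen detail levels (a stand-in for the multi-unit
    band) are averaged; per-channel normalization would follow.
    """
    coeffs = pywt.wavedec(x, wavelet, level=max(detail_levels))
    # wavedec returns [approximation, detail_N, ..., detail_1].
    selected = [coeffs[len(coeffs) - lvl] for lvl in detail_levels]
    return float(np.mean([np.mean(c ** 2) for c in selected]))
```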


While each neural feature subtype poses advantages and disadvantages for human-operated iBMI systems, which features yield the best iBMI performance remains an open question and an active area of research (Ajiboye et al., 2012; Bansal et al., 2012;

Sharma et al., 2015). The pros and cons of sorted single units, thresholded multi-unit activity, and local field potential features are described in more detail below.

The advantages and disadvantages of single unit and multi-unit features for iBMI decoding

As previously described, recorded and pre-processed neural signals are actually a superposition of action potentials generated from multiple neurons. During spike sorting, recorded spikes from an individual channel are either manually or automatically grouped into different clusters based on their morphologies, and each cluster is assigned to a single unit (Rey et al., 2015). While spike sorting is necessary for investigations of single neuron responses, the use of sorted single units in iBMI systems poses a number of disadvantages (Todorova et al., 2014; Trautmann et al., 2019). First, traditional manual spike sorting methods are subjective and time-intensive. While automated spike sorting algorithms mitigate some of these challenges, these algorithms are computationally intensive and require large amounts of data to produce accurate results.

Finally, due to the gradual degradation of signals recorded by microelectrode arrays, many chronically implanted electrodes detect real neural activity that cannot be assigned to an individual neuron during spike sorting (Todorova et al., 2014; Pandarinath et al.,

2015; Trautmann et al., 2019). These disadvantages make sorted single units problematic for incorporation into real-time iBMI systems.

In contrast, the extraction of thresholded multi-unit activity and other multi-unit neural features is fast, easy, and all-inclusive of neural activity that cannot be sorted

(Todorova et al., 2014). Perhaps for these reasons, a number of iBMI studies in human

participants have made use of extracted multi-unit features, as shown in Tables 2-3 – 2-5. In fact, most real-time iBMIs that control end-effectors do not use spike sorting at all and use multi-unit features exclusively (Hochberg et al., 2012; Gilja et al., 2015;

Wodlinger et al., 2015; Bouton et al., 2016; Ajiboye et al., 2017; Pandarinath et al., 2017;

Nuyujukian et al., 2018; Stavisky et al., 2018; Young et al., 2019). Despite these successes, the use of multi-unit features over single sorted units is still debated. Since multi-unit feature extraction fails to account for the fact that individual channels record from multiple neurons, and that these neurons may exhibit different responses to behavioral stimuli, information loss can occur when spike sorting is omitted (Todorova et al., 2014; Christie et al., 2015; Rey et al., 2015).

Recently, a number of groups have attempted to quantify the degree of information loss that occurs when thresholded multi-unit activity is used in lieu of sorted single units, and in particular, how these losses affect iBMI performance (Chestek et al.,

2011; Todorova et al., 2014; Christie et al., 2015; Trautmann et al., 2019). With the exception of Todorova’s 2014 study in non-human primates, all studies conducted by these groups exhibited little to no information loss and comparable iBMI performance between the two feature modalities. Specifically, Todorova’s study found that thresholded multi-unit activity yielded relatively poor iBMI performance as compared to sorted single units. However, these results were soon refuted by Christie and colleagues, who attributed the information loss reported in Todorova’s study to a non- optimal noise threshold. According to Christie’s study, a systematic sweep through various noise thresholds typically yielded a range of thresholds (typically -3 to -4.5 x

RMS, as opposed to Todorova’s -6.5 x RMS) in which thresholded multi-unit activity performed approximately as well as sorted spikes (Todorova et al., 2014; Christie et al.,

2015). Even more recently, Trautmann and colleagues found that population-level

dynamics derived from multi-unit thresholded activity were comparable to those derived from sorted single units, and that both sets of features yielded the same scientific conclusions (Trautmann et al., 2019). Given that thresholded multi-unit activity contains almost the same information as sorted single units, and given that multi-unit features are much easier to compute in real time, it may be more realistic for iBMI systems to continue relying exclusively on these features.

Open-Loop Signal Characterization

The question of which types of features are most effective for decoder performance and the use of neural signals for end-effector control can be addressed with iBMI investigations in closed-loop control. However, the intracortical interface allows for the exploration of questions from a neuroscience perspective by analyzing signal modulation in open-loop (meaning the signal decoding happens offline in post- processing). There remains considerable debate regarding how the motor cortex coordinates the muscle activity required for movement. Traditionally, single neuron activity in MC has been viewed as representing high level movement parameters such as reach direction, velocity, or endpoint (Churchland et al., 2012). In other words, individual neural firing rates are thought to be a function of one or more kinematic or kinetic parameters. However, recent studies have uncovered patterns of activity spanning populations of neurons that may be better explained using a dynamical system model (Churchland et al., 2010). In this view, neurons don’t function as individual units to encode movement information, but exist as part of vast networks whose behavior is driven by a relatively small number of emergent state variables (Churchland et al.,

2012). Preparatory activity in the cortex sets the initial states of the system, and movement results as the system evolves from those initial states (Churchland et al.,

2012). Observed “tuning” to high level parameters by single neurons is therefore a

consequence of each neuron being part of a system that drives movement directly (Churchland et al., 2012). An in-depth review of the dynamical system interpretation of neural population activity can be found in a review by Pandarinath and colleagues (Pandarinath et al.,

2018).

Recordings from high density microelectrode arrays, coupled with newly developed statistical analysis methods, have aided researchers in studying neural activity at the population level. The foundation of many of these statistical methods is principal component analysis (PCA). This tool finds an orthogonal basis whose axes best capture the variance in the population data (Lever et al., 2017). Each axis, or principal component (PC), is a linear combination of the features used to describe the input data. The first PC captures the most variance in the data, the second PC the second most, and so on. Patterns of behavior inherent to the population often become apparent when viewed in a dimensionally reduced space composed of the first few PCs. jPCA was developed as an extension of PCA to find an orthogonal basis that best captures rotational dynamics in the population, a behavior commonly seen in movement related activity (Churchland et al., 2012). Demixed-PCA (dPCA) is another extension of

PCA that incorporates information about task parameters to find a basis that best de-mixes, or separates, the observed patterns of the population based on their correlation to the task parameters (Kobak et al., 2016). Spike Train SIMilarity Space (SSIMS) combines a spike train similarity matrix with dimensionality reduction to provide a tool for quantifying similarity between population responses (Vargas-Irwin et al., 2015).
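As a concrete illustration of the basic approach, the sketch below performs PCA on a matrix of binned population activity using NumPy; jPCA, dPCA, and SSIMS each build on this idea of projecting the population response into a low-dimensional space. The function and its arguments are illustrative only.

```python
import numpy as np

def principal_components(X, n_pc=3):
    """Project population activity (n_features x n_timepoints) onto its top PCs."""
    Xc = X - X.mean(axis=1, keepdims=True)          # center each feature
    U, S, _ = np.linalg.svd(Xc, full_matrices=False)
    axes = U[:, :n_pc]                              # orthogonal axes capturing the most variance
    trajectories = axes.T @ Xc                      # population activity in the reduced space
    explained = S[:n_pc] ** 2 / np.sum(S ** 2)      # fraction of variance per component
    return axes, trajectories, explained
```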

Real-time Signal Decoding

The very first human iBMI was implemented using a cone electrode that could detect neural activity from a population of neurons (Kennedy and Bakay, 1998). Control was achieved by mapping differentiated firing rates to movement variables. For example,

an increase in firing rate from one electrode corresponded to cursor movement in one direction, whereas a decreasing or constant firing rate would cause the cursor to stop moving (Kennedy et al., 2000). Based on successful pre-clinical studies, the first clinical microelectrode iBMI study utilized a decoder to transform the user’s neural activity into cursor movements (Hochberg et al., 2006). Generally, in order to map neural activity to kinematics for closed-loop control, a decoder is calibrated with neural data recorded for a task with known kinematics while the participant imagines or attempts the movements.

Currently, a common approach to decoding is the use of a Kalman filter for Cartesian endpoint control (Brandman et al., 2017). However, many iBMI investigations focus on optimizing decoder performance and stability by exploring various calibration techniques, filters, classifiers, and regression techniques (listed in Table 2-4).

Decoding methods in early clinical iBMI applications

A specialized linear decoder developed with pre-clinical studies in non-human primates was used to control two-dimensional cursor movements from the first microelectrode recordings in a human (Hochberg et al., 2006). This linear decoder utilized a Wiener filter, which was calibrated using an open-loop approach, in which single unit activity was mapped to cursor movements based on the participants' imagining themselves controlling the cursor (Kao et al., 2014). The Wiener filter is different from a simple linear regression because it incorporates time history in the error minimization (Kao et al., 2014). Later iBMI investigations improved upon the linear decoder with the Kalman filter. The Kalman filter models kinematic variables such as position, velocity, and higher order parameters as hidden states, and then estimates the values of these variables using recursive Bayesian inference (Wu et al., 2006; Kim et al., 2011). The Kalman filter decoder can be trained with a few minutes of data and is well-suited as a kinematic decoder because it can take advantage of the sorted single units being tuned to preferred directions (Brandman et al., 2017). The decoder has also been shown to produce smoother, more accurate, and faster cursor movements than the linear filter and requires less training time from each participant (Kim et al., 2008). Therefore, the Kalman filter is the most commonly used decoder in iBMI applications.
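
For reference, the Kalman filter used in these studies is typically written as a linear-Gaussian state-space model; the notation below is generic rather than drawn from any single cited implementation.

\[
x_t = A x_{t-1} + w_t, \quad w_t \sim \mathcal{N}(0, W); \qquad
y_t = C x_t + q_t, \quad q_t \sim \mathcal{N}(0, Q)
\]

Here \(x_t\) is the kinematic state (e.g., cursor position and velocity), \(y_t\) is the vector of neural features, and \(A\), \(C\), \(W\), and \(Q\) are fit from calibration data. At run time, the filter recursively combines the kinematic prediction \(A\hat{x}_{t-1}\) with the incoming neural observation \(y_t\) to produce the decoded state \(\hat{x}_t\).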

Decoders employed in iBMI systems

The Kalman Filter

The Kalman filter is used in two-dimensional iBMI applications such as cursor control and virtual typing (Perge et al., 2014; Bacher et al., 2015; Gilja et al., 2015). Additionally, this decoder can be combined with a state classifier to increase the dimensionality of control, allowing the user to control a mouse click (Kim et al., 2011; Simeral et al., 2011; Jarosiewicz et al., 2016) or select a hand grasp (Hochberg et al., 2012). Combining the Kalman filter with Fisher linear discriminant analysis enables the decoder to continuously switch between discrete states such as movement and click (Kim et al., 2011). Training can still be done using open-loop calibration with the user imagining a specific movement for the "click" state, such as squeezing their hand closed (Kim et al., 2011; Simeral et al., 2011). In the first implementations with sorted single units, it was determined that the neuronal firing rates were tuned to the kinematic states in four ways: click only; movement velocity only; both states, decreasing with click; and both states, increasing with click (Kim et al., 2011). The hybrid velocity/state filter has also been used with thresholded multi-unit activity (Hochberg et al., 2012; Jarosiewicz et al., 2016) and spike band power (Jarosiewicz et al., 2016).
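
A minimal sketch of the hybrid continuous/discrete idea follows; ridge regression stands in for the velocity (Kalman) decoder, and an LDA classifier detects the discrete "click" state. The data, labels, and names are synthetic assumptions, not the cited decoders' actual implementations.

```python
# Sketch of a hybrid kinematic/state decoder: continuous velocity decoding plus LDA click detection.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 96))            # binned neural features (time x channels)
vel = rng.normal(size=(1000, 2))           # known 2D cursor velocities (calibration task)
click = rng.integers(0, 2, size=1000)      # 0 = move, 1 = attempted "click" (e.g., hand squeeze)

vel_decoder = Ridge(alpha=1.0).fit(X, vel)
state_clf = LinearDiscriminantAnalysis().fit(X, click)

x_new = X[:1]                              # one new bin of features
decoded_vel = vel_decoder.predict(x_new)   # drives cursor movement
decoded_state = state_clf.predict(x_new)   # a "1" triggers a click instead of movement
```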

Decoders for three-dimensional movement

New decoders have been developed in order to better control three-dimensional movements using positional and velocity information from neural signals. These decoders use optimal linear estimation (OLE), which can independently decode three-dimensional hand position and velocity, improving end-point control. Due to the nature of neural activity in the motor cortex, there is a delay between the cortical representation of a movement and the actual movement; thus, the first OLE implementation was combined with the traditional population vector algorithm (Georgopoulos et al., 1986) to account for this delay (Wang et al., 2007). For the traditional population vector algorithm, each sorted single unit is assumed to have a preferred direction, increasing its firing rate when the user imagines moving in that direction. Therefore, the neural features translate directly to kinematic variables in the decoder (Kao et al., 2014). The indirect OLE decoder has been used in clinical iBMI investigations with robotic limb control, allowing users to achieve multi-dimensional control by moving the arm, orienting the hand, and selecting grasp states (Collinger et al., 2013b; Wodlinger et al., 2015; Downey et al., 2016; Downey et al., 2017). A full OLE decoder has also been implemented using first-order smoothing, allowing for three-dimensional control of electrically stimulated arm movements (Ajiboye et al., 2017; Young et al., 2018; Young et al., 2019).
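
The encoding model behind both approaches can be summarized compactly; the notation here is generic and intended only as illustration, not the formulation of any single cited study. Each unit \(i\) is modeled as cosine-tuned to its preferred direction \(\mathbf{d}_i\):

\[
f_i = b_i + m_i\,(\mathbf{d}_i \cdot \mathbf{v}),
\]

where \(f_i\) is the firing rate, \(b_i\) the baseline, \(m_i\) the modulation depth, and \(\mathbf{v}\) the intended velocity. The population vector algorithm estimates velocity by summing preferred directions weighted by normalized rates,

\[
\hat{\mathbf{v}}_{PV} \propto \sum_i \frac{f_i - b_i}{m_i}\,\mathbf{d}_i,
\]

whereas OLE solves the least-squares inverse of the encoding model,

\[
\hat{\mathbf{v}}_{OLE} = (D^{\top} D)^{-1} D^{\top} (\mathbf{f} - \mathbf{b}),
\]

where the rows of \(D\) are the fitted tuning vectors \(m_i \mathbf{d}_i^{\top}\). The least-squares inverse corrects for preferred directions that are not uniformly distributed across the recorded population.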

With the new application of intracortical brain control to movement restoration with functional electrical stimulation (FES), one group of investigators has implemented a support vector machine (SVM) classifier (Humber et al., 2010; Bouton et al., 2016; Friedenberg et al., 2017; Colachis et al., 2018; Skomrock et al., 2018). This classifier maps neural signals or features to specific imagined movements that correspond to stimulated movements. For example, in one study, the imagined movements were wrist, whole hand, and thumb flexion corresponding to the FES movements, and the SVM classified mean wavelet power on each channel (Bouton et al., 2016).
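
A minimal sketch of this classification step is shown below, with synthetic data standing in for the mean wavelet power features and hypothetical movement labels; it is an illustration of the approach, not the cited groups' pipeline.

```python
# Sketch: classify imagined movements from mean-wavelet-power features with a non-linear SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
mwp = rng.normal(size=(600, 96))                    # mean wavelet power per channel, per time bin
labels = rng.choice(["rest", "wrist_flex", "hand_flex", "thumb_flex"], size=600)

clf = SVC(kernel="rbf").fit(mwp, labels)            # non-linear SVM classifier
predicted = clf.predict(mwp[:5])                    # each prediction would select an FES pattern
print(predicted)
```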


Decoder optimization

Calibration improvements

Decoder calibration is a crucial aspect of the brain-machine interface. Online performance is largely dependent on how well the decoder maps neural signals to intended movements. The first decoders were calibrated in open-loop and then used in closed-loop control. However, closed-loop calibration yields a better-performing decoder [133], and it quickly became common practice to start with an initial decoder built with open-loop data and incrementally recalibrate using trial blocks of closed-loop control (Kim et al., 2008; Kao et al., 2014; Bacher et al., 2015; Jarosiewicz et al., 2015; Bouton et al., 2016; Jarosiewicz et al., 2016; Nuyujukian et al., 2018; Stavisky et al., 2018; Milekovic et al., 2019). It was also determined in a pre-clinical iBMI investigation that adding error attenuation to the closed-loop portion of calibration is useful for high-dimensional control (Velliste et al., 2008). This technique is now commonly used in clinical iBMI applications, allowing the user to ease into full control by decreasing the amount of error attenuation with each iteration (Hochberg et al., 2012; Collinger et al., 2013b; Wodlinger et al., 2015; Willett et al., 2017a; Young et al., 2018; Young et al., 2019). A new advancement in decoder calibration is the use of a fast, automated, closed-loop calibration method. This decoder, called the Gaussian process regression discriminative Kalman filter (GP-DKF), allows for a non-linear relationship between the decoded state variables and the neural signals (Burkhart et al., 2016). This architecture allows the decoder to be calibrated in closed-loop without error attenuation, updating every 2-5 seconds and leading to a fully trained decoder in as little as three minutes for multiple iBMI study participants (Brandman et al., 2018b).
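
One simple way to picture error attenuation during closed-loop calibration is to blend the decoded velocity with an "ideal" vector pointing at the target and to shrink the assistance level with each calibration block. The sketch below uses assumed variable names and a simple blending rule; the cited studies' implementations differ in detail.

```python
# Sketch of error attenuation: blend decoded velocity with the straight-to-target direction;
# the assistance weight is reduced block by block until the user has full control.
import numpy as np

def assisted_velocity(decoded_vel, cursor_pos, target_pos, assistance):
    """assistance = 1.0 -> fully computer-guided; 0.0 -> fully user-controlled."""
    to_target = target_pos - cursor_pos
    norm = np.linalg.norm(to_target)
    ideal_vel = to_target / norm * np.linalg.norm(decoded_vel) if norm > 0 else 0 * decoded_vel
    return (1 - assistance) * decoded_vel + assistance * ideal_vel

for block, assistance in enumerate([0.9, 0.6, 0.3, 0.0]):   # decreasing with each iteration
    v = assisted_velocity(np.array([1.0, 0.2]), np.array([0.0, 0.0]),
                          np.array([5.0, 5.0]), assistance)
    print(f"block {block}: assistance={assistance}, velocity={v}")
```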

iBMI user control policies and intention estimation

How the neural features are interpreted during calibration depends on how the user changes their control signal in response to feedback. Part of the decoder is the internal model used to estimate how the neural activity maps to the control vector, reflecting the user's intent (Kao et al., 2014). Specifically, there are two aspects of this model that can improve online performance: the estimation of the user's intent during calibration and the user's feedback control policy. Intention estimation is used during closed-loop calibration to modify the decoded kinematics to better represent the user's intent, and the feedback control policy describes how the user integrates visual feedback into their modulation of the control vector (Kao et al., 2014). Based on these concepts, one group developed the recalibrated feedback intention-trained Kalman filter (ReFIT-KF) to improve closed-loop cursor control (Gilja et al., 2012b; Gilja et al., 2015). For cursor control, the internal model is based on the control policy the user is assumed to employ, using visual feedback of cursor position and velocity at each time point (Willett et al., 2017b). The ReFIT-KF makes certain assumptions during closed-loop recalibration based on the fact that the online control policy is distinctly different from the neural signal modulation used to build a decoder offline (Gilja et al., 2012b; Andersen et al., 2014; Kao et al., 2014). Therefore, during closed-loop calibration, there is no uncertainty term for the cursor position, all decoded velocity is assumed to point towards the target (effectively rotating decoded velocities towards the target), and the control vector is assumed to be zero on top of the target (Gilja et al., 2012b; Fan et al., 2014; Gilja et al., 2015; Willett et al., 2018). In the first clinical translation of the ReFIT-KF, users were reported to have the highest performance for iBMI cursor control (Gilja et al., 2015).
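
The intention-estimation step of this style of recalibration can be sketched as relabeling the training kinematics before refitting the decoder. The variable names and the relabeling rule below are illustrative assumptions; the published implementations differ in detail.

```python
# Sketch of ReFIT-style intention estimation: rotate each decoded velocity to point at the
# target (keeping its speed) and set it to zero when the cursor is on the target.
import numpy as np

def refit_relabel(decoded_vels, cursor_pos, target_pos, on_target_radius=0.5):
    relabeled = np.zeros_like(decoded_vels)
    for t, v in enumerate(decoded_vels):
        to_target = target_pos - cursor_pos[t]
        dist = np.linalg.norm(to_target)
        if dist < on_target_radius:
            continue                                  # assumed zero intent on top of the target
        speed = np.linalg.norm(v)
        relabeled[t] = speed * to_target / dist       # same speed, redirected at the target
    return relabeled                                  # used as "intended" kinematics for refitting

vels = np.array([[1.0, 0.0], [0.5, 0.5]])
positions = np.array([[0.0, 0.0], [4.8, 4.9]])
print(refit_relabel(vels, positions, np.array([5.0, 5.0])))
```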


A recent study compared previously proposed models of the user's control policy for cursor control (Willett et al., 2017b). In this study, the commonly used steady-state Kalman filter was separated into two parts: the projection of neural activity onto a two-dimensional control vector, and the dynamical system that moves the cursor in response to the control vector. This study made two major conclusions based on modeling the control policies employed by three study participants during closed-loop control. First, the magnitude of the control vector declines non-linearly as the cursor approaches the target: very slowly at first and then rapidly in close proximity to the target. Previous assumptions were that this decline was linear (Taylor et al., 2002b; Lagang and Srinivasan, 2013; Shanechi et al., 2013; Gowda et al., 2014; Benyamini and Zacksenhouse, 2015; Shanechi et al., 2016). Second, the decoded velocity is not zero on top of the target; in fact, the user constantly makes feedback corrections in order to dwell on the target (Willett et al., 2017b). The conclusion that the control vector is non-zero and rapidly changing when the cursor is on the target directly contradicts the control policy assumed for several iBMI cursor control applications that utilize the ReFIT-KF. In fact, many other decoders with various intention estimations still yield high iBMI performance results (Willett et al., 2018). It was determined in a second comparison study (Willett et al., 2018) that intention estimation has little effect on decoder performance for cursor control when a piecewise linear model (PLM) of the control policy is used to optimize gain (speed scaling) and temporal smoothing parameters for the cursor dynamics (Willett et al., 2019). The various intention estimations lead to different optimal decoder parameters but otherwise similar decoded control vectors (Willett et al., 2018).
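
The gain and temporal smoothing parameters referred to here enter a simple cursor-dynamics rule; the following form is illustrative, with notation assumed rather than taken from the cited studies:

\[
\mathbf{v}_t = \alpha\,\mathbf{v}_{t-1} + (1-\alpha)\,g\,\mathbf{u}_t, \qquad
\mathbf{p}_t = \mathbf{p}_{t-1} + \Delta t\,\mathbf{v}_t,
\]

where \(\mathbf{u}_t\) is the control vector decoded from neural activity, \(g\) is the gain (speed scaling), and \(\alpha \in [0, 1)\) sets the temporal smoothing. In the comparison studies, tuning \(g\) and \(\alpha\) well mattered more for online performance than the particular intention-estimation rule used during decoder calibration.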


Increasing decoder stability

One of the major limitations of current brain-machine interfaces is that, due to drifting neural signals, the decoder must be recalibrated each day it is used. Therefore, a major research thrust for iBMI applications is to create decoders that can adapt to signal nonstationarities and do not require traditional recalibration over days of use. First, it has been noted that using multi-unit activity rather than sorted single units increases robustness to units dropping out and therefore improves stability over time and longevity as signal recording quality degrades (Gilja et al., 2012b). Second, it has been shown that nonstationarities are sparse (occurring on only a few features) and sudden (Homer et al., 2014). This concept has been expanded into three different approaches to creating a more stable Kalman filter. First, the steady-state Kalman filter was altered with a multiple offset correction algorithm (MOCA) to account for sudden changes. This corrective algorithm updates an offset term in the decoder at each time step with the assumption that signal changes will be sparse and sudden. Implementing the MOCA significantly improved performance for those instances when the decoder was previously inaccurate, showing that the MOCA is useful for adapting the Kalman filter decoder to nonstationarities without recalibration (Homer et al., 2014). Another set of innovations to mitigate nonstationarities has allowed for long periods of iBMI self-paced typing without a decline in performance (Jarosiewicz et al., 2015). The standard Kalman velocity filter can be automatically recalibrated using retrospective target inferencing, meaning that the decoder parameters are updated with past recordings and selected targets. Additional stability can be provided by tracking the baseline drift in neural features during pauses in neural control and then applying a bias correction to the decoded velocity during use (Jarosiewicz et al., 2015). Most recently, the quickly and easily calibrated GP-DKF was updated to be robust against nonstationarities by adding principled kernel selection, which sums features across multiple channels and thus ignores the sparse nonstationarities that occur on a few channels (Brandman et al., 2018a).
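
A minimal sketch of the bias-correction idea follows; the variable names and update rule are assumptions for illustration, and the published method infers pauses and intended targets more carefully.

```python
# Sketch of velocity bias correction: estimate the slowly drifting baseline of the decoded
# velocity during pauses in neural control, then subtract it during active use.
import numpy as np

class BiasCorrector:
    def __init__(self, learning_rate=0.01):
        self.bias = np.zeros(2)
        self.lr = learning_rate

    def update_during_pause(self, decoded_vel):
        # During a pause the intended velocity is zero, so any decoded output is drift.
        self.bias += self.lr * (decoded_vel - self.bias)

    def correct(self, decoded_vel):
        return decoded_vel - self.bias

bc = BiasCorrector()
for _ in range(100):
    bc.update_during_pause(np.array([0.2, -0.1]) + np.random.normal(0, 0.05, 2))
print(bc.correct(np.array([1.0, 0.5])))        # drift-corrected velocity during use
```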

As of now, there have been two reports of decoders remaining stable over weeks of use. One decoder was employed in an iBMI system that classified hand and wrist movements from mean wavelet power. This classifier is a deep neural network (DNN) incorporating a long short-term memory layer that is capable of learning long-term dependencies. The network was trained offline with 40 sessions of open-loop data collected while the participant imagined six different cued movements (hand open/close, wrist flex/extend, and index flex/extend) (Skomrock et al., 2018). The DNN was then used in closed-loop, retrained with one block, used in closed-loop again, and retrained with five more blocks before being fixed. The fixed DNN classifier was then used over the course of 50 days with stable performance in classifying two movements (hand open/close) and four movements (wrist and index flex/extend) (Skomrock et al., 2018). The longest reported use of a stable decoder is 138 days by a study participant with tetraplegia due to progressing ALS (another participant with locked-in syndrome used a stable decoder for 76 days) (Milekovic et al., 2018). Both participants used an iBMI system with the FlashSpeller to communicate and type messages at home without recalibration or technical intervention. The FlashSpeller utilizes a regularized linear discriminant analysis (rLDA) classifier that determines whether the user wants to make a selection as items are sequentially highlighted. The stable decoder utilized features from local field potentials separated into three frequency bands (see Table 2-3) and was calibrated with data previously collected over 42 days. Each data collection session consisted of three tasks: open-loop calibration, free-spelling with the historical decoder, and double-blinded, interleaved spelling blocks using either the newly calibrated decoder or the historical decoder. Each task set began with a block of normalization, where the participant watched the FlashSpeller without interacting. The historical decoder was then updated with the collected data (Milekovic et al., 2018).

iBMI Applications: the End-effector

The first implementations of the intracortical brain-machine interface in humans included a computer cursor as the end-effector (Kennedy et al., 2000; Hochberg et al., 2006). Investigational iBMIs have since evolved to include four main types of end-effectors: computer cursors, movement in virtual three-dimensional (3D) environments, robotic arms, and reanimated paralyzed limbs. Currently, iBMIs are being optimized for cursor control to complete functional tasks such as typing to speak or using applications on a tablet. Computer interfaces are particularly important for people affected by progressive diseases or stroke that lead to anarthria (the inability to speak). Two-dimensional (2D) cursor control gave rise to 3D end-point control, which allows for investigations of brain-controlled movement in a virtual environment. Optimizing virtual 3D end-point control is an essential step in transitioning to physical end-effectors. One of these physical end-effectors is a robotic arm, which can be equipped with various types of hand attachments. Robotic end-effectors are the key to pushing the limits of brain-machine interfaces beyond 3D end-point translation into simultaneous control of high-dimensional arm and hand movements. Finally, the newest area of end-effector research involves the adaptation of an established technology known as functional electrical stimulation (FES) for movement restoration. FES technology has been used for decades to restore movement to those with paralysis. The devices are limited to those with incomplete injuries or injuries below the C5 level due to the constraints of stimulation control. Brain-machine interfaces allow FES technology to be used by those with high-level SCI. Current studies investigate how brain control can be optimized to combine with stimulation of paralyzed musculature in order to reanimate the user's arm and hand. Table 2-5 summarizes the iBMI investigations focused on these four types of end-effectors.

Computer cursor control for typing and tablet use

The first intracortical brain-machine interfaces allowed human participants with anarthria to communicate by using a computer cursor to select letters on a keyboard (Kennedy et al., 2000; Hochberg et al., 2006). Providing the ability to communicate for those who have lost the ability to speak due to ALS or brainstem stroke is still the driving clinical motivation for many research groups looking to optimize brain-machine interfaces. In addition to improving recording and signal processing, research is focused on optimizing communication software as an end-effector. This involves redesigning the keyboard layout or creating new ways to quickly type using brain control. The traditional QWERTY keyboard layout is not optimal for cursor-based typing, which is why simply rearranging the letters in a compact layout can triple typing rates with brain control (Pandarinath et al., 2017). Many groups forgo the keyboard and utilize selection-based software optimized for faster word generation (Bacher et al., 2015; Milekovic et al., 2019). For example, one group investigating EEG-based BMI developed a radial, two-tiered, selection-based keyboard with groups of letters placed on six radial targets (Treder and Blankertz, 2010). Another group expanded on the concept of a radial keyboard and added predictive algorithms to create an ambiguous text entry system that greatly outperforms the traditional QWERTY keyboard in accuracy, speed, and ease of use (Bacher et al., 2015). In this system, several letters are included on each radial target, so that as the user makes selections, the algorithm updates a list of possible words. Once the intended word appears, the user can select it from the list of words presented in the radial targets.
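
The ambiguous text-entry logic is conceptually similar to T9 predictive typing; the toy sketch below, with an invented mini-dictionary and letter groupings, illustrates how a sequence of group selections narrows the candidate word list. It is not the cited system's actual software.

```python
# Toy sketch of ambiguous text entry: each radial target holds a group of letters, and a
# sequence of group selections is matched against a dictionary to propose candidate words.
groups = ["abcd", "efgh", "ijkl", "mnop", "qrst", "uvwxyz"]

def group_of(letter):
    return next(i for i, g in enumerate(groups) if letter in g)

def candidates(selections, dictionary):
    return [w for w in dictionary
            if len(w) == len(selections)
            and all(group_of(c) == s for c, s in zip(w, selections))]

dictionary = ["hello", "help", "gems", "felt"]
print(candidates([1, 1, 2, 3], dictionary))     # groups for h, e, l, p -> ["help"]
```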


Cursor control for tablet use is also an important innovation for those with tetraplegia and anarthria. Voice control is an essential tool for someone with tetraplegia to interact with their environment, such as controlling lights, bed movements, and computer applications. For those with anarthria, tablet control or the use of a control panel can restore the level of independence usually available through voice-controlled devices. Additionally, integrating brain control with a tablet is useful for all iBMI users, allowing them to interface with common applications such as email, instant messaging, news, and streaming entertainment (Hochberg et al., 2006; Nuyujukian et al., 2018).

Intracortical brain control in three-dimensional virtual environment

Beyond computer interfaces, brain-machine research investigations focus on restoring arm and hand function to those with tetraplegia. Whether the physical movements are made with a robotic limb or FES, control can be modeled and tested in a 3D virtual environment. End-point control in a 3D Cartesian workspace directly translates to end-point control of a robotic limb in physical space. This is useful for calibrating decoders and can even be coupled with virtual object interactions to properly train an online decoder (Wodlinger et al., 2015; Downey et al., 2017). Virtual control can also serve as a precursor to controlling FES movements without confounding variables such as stimulation dynamics. Before FES was implemented in a human participant, intracortical control of multiple arm joints was tested in a virtual environment (Ajiboye et al., 2012). The virtual end-effector was useful for optimizing joint control and determining which neural features were most appropriate for high-dimensional control (Ajiboye et al., 2012; Ajiboye et al., 2017). To further optimize FES control, one investigation compared joint control to Cartesian end-point control in a virtual environment, which allowed for a precise comparison of path efficiencies across four DOF. The investigators concluded that even with the addition of arm swivel as a fourth DOF, representing movements as Cartesian end-point velocities yielded improved performance over joint velocity control (Young et al., 2019).

Robotic arm control

Brain control of a robotic arm allows for interaction with real-world objects, offering the possibility of further independence for a person with tetraplegia. Furthermore, research studies involving intracortical brain-machine interfaces with robotic limbs are often focused on studying physical control capabilities for a complicated end-effector with many DOF. The robotic limb end-effector is ideal for optimizing intracortical brain control of increasing DOF, allowing for the incorporation of anthropomorphic prosthetic hands and real-world object interactions. Recent investigations have pushed the limits of functional brain control, showing that 3D endpoint control can be coupled with grasp state control to allow users to complete reach, grasp, and carry tasks (Hochberg et al., 2012; Collinger et al., 2013b; Wodlinger et al., 2015; Downey et al., 2016). The first report of robotic arm control with an iBMI showed more complex control than had previously been achieved in non-human primate studies, and even functional control in which a participant was able to reach for, grab, and drink from a cup (Hochberg et al., 2012). Later work investigated higher dimensions of control and evaluated iBMI robotic control with the Action Research Arm Test (ARAT), further demonstrating that intracortical neural control can be used to restore arm and hand function to a person with tetraplegia (Collinger et al., 2013b). So far, it has been shown that up to ten DOF can be simultaneously accounted for while the user modulates several simultaneous dimensions in order to sequentially translate the arm, orient the hand, and select a grasp state (Wodlinger et al., 2015).

Another important investigative aspect of robotic end-effectors is the opportunity to incorporate other technology for guidance or feedback, such as cameras, force sensors, or proximity sensors. Image guidance has been used to provide more efficient movements, select appropriate grasps based on the intended object interaction, and accurately place objects onto a surface (Downey et al., 2016). Control assistance through integration of sensors and intelligent guidance systems can greatly improve the iBMI experience for its user by making control less frustrating and allowing for accurate control even after neural signals begin to degrade and provide less information (Downey et al., 2016).

Brain controlled FES

FES for arm and hand movement restoration

The newest end-effector for brain-machine applications is the paralyzed arm of the user. Electrical stimulation of the neuromuscular system has been used to reanimate paralyzed musculature and restore function to patients for decades (Peckham, 1981). Coordinated electrical stimulation of intrinsic and extrinsic hand musculature can elicit functional hand grasps in individuals with mid-level cervical spinal cord injury (C5-C6 SCI) (Peckham et al., 1988; Kilgore et al., 1989). FES parameters have been well established and optimized for both nerve and muscle stimulation with surface, percutaneous, and implanted electrodes (Peckham and Knutson, 2005; Ho et al., 2014). Currently, myoelectrically controlled FES hand grasp systems supply the two essential grasps (lateral pinch and palmar) and can be programmed for patient-specific activity grasps (Kilgore et al., 2008; Kilgore et al., 2018). Whole-hand function is achieved with two voluntarily activated muscles via state control. In this way, one muscle with a consistent range of graded activation proportionally opens/closes the grasp, and another muscle acts as a switch to cycle between grasp patterns and to lock the hand position (Kilgore et al., 2018).
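
A toy sketch of this two-channel state-control scheme is shown below; the grasp names follow the text, but the class, signal names, and thresholds are assumptions for illustration rather than the clinical system's implementation.

```python
# Sketch of two-channel myoelectric state control for an FES hand-grasp system: one EMG
# channel proportionally opens/closes the current grasp, the other cycles grasp patterns.
GRASPS = ["lateral_pinch", "palmar"]

class GraspController:
    def __init__(self):
        self.grasp_index = 0

    def switch_pulse(self):
        # A brief contraction of the "switch" muscle cycles to the next grasp pattern
        # (the same channel can also be used to lock the current hand position).
        self.grasp_index = (self.grasp_index + 1) % len(GRASPS)

    def command(self, proportional_emg):
        # Graded activation (0-1) of the "command" muscle proportionally opens/closes the grasp.
        level = max(0.0, min(1.0, proportional_emg))
        return GRASPS[self.grasp_index], level

ctrl = GraspController()
print(ctrl.command(0.4))      # ('lateral_pinch', 0.4)
ctrl.switch_pulse()
print(ctrl.command(0.8))      # ('palmar', 0.8)
```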


First implementations of brain-controlled FES

The first implementation of implanted FES for restoration of arm and hand function in a person with high-level SCI (C4) utilized a brain-machine interface to control stimulation (Ajiboye et al., 2017). The use of brain control addressed many limitations of the current myoelectrically controlled FES system for hand function. The most notable limitation of state control is the inability to continuously move from one grasp to another, which is required for object manipulation. Continuous and dexterous hand movements cannot be achieved with a one-to-one mapping of myoelectric inputs to each DOF; there are not enough voluntarily controlled signal sources that are non-essential for other functions in a person with tetraplegia. In order for FES movement restoration applications to be expanded in the upper extremities (adding arm, individuated finger, and continuous movements), a new control scheme needs to be developed. There is also ongoing research into the development of brain-controlled FES with surface stimulation (Bouton et al., 2016; Friedenberg et al., 2017; Colachis et al., 2018; Skomrock et al., 2018). Given that as many as 10 DOF have been decoded simultaneously from intracortical brain recordings in robotic control applications (Wodlinger et al., 2015), the brain-machine interface appears to be the key to expanding the functionality of upper extremity FES movement restoration.

As of now, two individuals with tetraplegia have used intracortical recordings to control FES in their paralyzed arm and hand. Each participant has completed a variety of tasks using BMI. One participant with a C5/C6 level motor complete injury was able to form up to seven different functional grasps (Colachis et al., 2018; Skomrock et al., 2018), complete a functional task that involved pouring from a glass (Bouton et al., 2016), and control graded force output (Friedenberg et al., 2017) in a surface FES system where stimulation is provided through a high-density sleeve of electrodes (up to 130 contacts). Another participant with a C4 level motor complete injury was able to complete self-care tasks, such as drinking from a cup through a straw or eating from a plate with an adaptive utensil, using a percutaneous intramuscular FES system (Ajiboye et al., 2017). As these are the first implementations of brain-controlled FES, there are limitations to each system; however, the research groups responsible for these studies are continuing to improve them.

The surface stimulation system is limited to hand and wrist movements; therefore, it can currently only be implemented in SCI survivors who retain elbow and shoulder function (Colachis et al., 2018). The intramuscular FES is also limited in function due to the level of muscle activation achievable with percutaneous intramuscular electrodes and the amount of force needed to move the arm against gravity. The system included a mobile arm support to move the shoulder against gravity, although it was still controlled by neural signals. Both systems were limited in grasp formation due to insufficient thumb activation (Ajiboye et al., 2017; Colachis et al., 2018). Future iterations of brain-controlled FES systems could incorporate peripheral nerve stimulation to recruit more muscles, produce stronger contractions, and provide more finger control (Ajiboye et al., 2017).

Adding electrical stimulation to a brain recording interface also introduces a significant amount of noise into the low-amplitude neural signals. Several artifact reduction techniques are used during intracortical microstimulation, such as blanking, template subtraction, and linear regression (O'Shea and Shenoy, 2018; Weiss et al., 2019). Most FES implementations utilize blanking to remove the stimulation artifact, which allows for effective control (Bouton et al., 2016; Friedenberg et al., 2017; Colachis et al., 2018). Recently, one study compared blanking and common average referencing to linear regression referencing in order to use the neural information present during the blanking time periods (Young et al., 2018). This study established a reliable method for filtering the stimulation artifact in both surface and intracortical stimulation paradigms, allowing for better neural recordings and control performance comparable to that achieved without stimulation.
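
The blanking approach can be sketched in a few lines; the sampling rate, pulse timing, and window length below are assumptions for illustration, and real systems operate on streaming hardware buffers.

```python
# Sketch of stimulation-artifact blanking: samples in a short window after each stimulation
# pulse are discarded (here, replaced by the preceding value) before feature extraction.
import numpy as np

def blank_artifacts(signal, stim_sample_indices, blank_samples=30):
    cleaned = signal.copy()
    for idx in stim_sample_indices:
        start, stop = idx, min(idx + blank_samples, len(signal))
        fill_value = cleaned[start - 1] if start > 0 else 0.0
        cleaned[start:stop] = fill_value         # hold the last pre-pulse value through the window
    return cleaned

fs = 30000                                        # assumed 30 kHz sampling rate
signal = np.random.normal(0, 5, fs)               # 1 s of simulated broadband data (uV)
stim_times = np.arange(0, fs, 1500)               # simulated pulses at 20 Hz
print(blank_artifacts(signal, stim_times)[:5])
```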

Brain-controlled FES for dexterous hand function

Brain control may be the clear option for restoring arm and hand function to high-level SCI survivors; however, brain implants are highly invasive and are not ready for commercialization. Therefore, myoelectric FES systems remain the most plausible option for mid-level SCI survivors who only need the restoration of hand function. Future iterations of brain-controlled FES could provide naturalistic hand function to mid-level SCI survivors. Continuous and simultaneous control of multiple DOF through a brain-machine interface would allow users to make individuated finger movements, perform complicated grasp transitions, and produce graded force output. Furthermore, FES neuroprostheses are used to restore movement and function to lower-level SCI survivors. Trunk stabilization, standing, and walking systems are all being developed for those with thoracic-level or incomplete injuries (Cikajlo et al., 2005; Triolo et al., 2012; Ho et al., 2014; Audu et al., 2015; Chang et al., 2017; Odle et al., 2019). Using brain control offers the possibility of controlling more FES-restored functions in one person, thus allowing high-level SCI survivors to utilize these other neuroprosthetic systems. In fact, such a system is currently in development, in which multiple FES functions can be combined in a modular stimulation and recording system called the Networked Neural Prosthesis (Smith et al., 2005; Kilgore et al., 2007; Peckham and Kilgore, 2013).

Bi-directional iBMI with sensory feedback

Critical role of somatosensory feedback

Somatosensory feedback is critical for dexterous upper limb motor control. Somatosensation combines tactile and proprioceptive sensation. Tactile sensation is the feeling of contact forces exerted on the skin; proprioception is the awareness of relative position and movement. Both senses facilitate object grasping and manipulation. Proprioception enables humans to accurately reach for objects, while tactile information encodes when the hand initiates, maintains, and loses contact with an object (Johansson and Flanagan, 2009). The relative positions of fingers, the distribution of force, and force magnitudes on the skin provide information about object shape and size (Delhaye et al., 2016). Object identification is also enhanced by texture and weight information (Delhaye et al., 2016). Since tactile information includes friction and the direction of movement, humans can adjust their grip to increase stability when they notice the object slipping (Johansson and Flanagan, 2009; Delhaye et al., 2016).

Somatosensation is also crucial for neuroprosthesis acceptance, as demonstrated in sensory-enabled prostheses for people with upper limb amputations (Christie et al., 2019). Peripheral nerve stimulation evokes useful, naturalistic, and diverse sensations from upper limb prostheses. Users report that sensory feedback enhances prosthesis embodiment, social interactions, and self-sufficiency. Users also wore the prostheses for longer durations and reported that less focus and attention were necessary to execute tasks. Overall, somatosensory feedback improved acceptance outcomes (Christie et al., 2019). Similar increases in neuroprosthesis acceptance, in addition to improved, more intuitive control, are expected for intracortical brain-machine interfaces with sensory feedback.

Sensory restoration options

People with tetraplegia and sensory impairment have multiple options for limited sensory restoration. The least invasive approach is one in which skin with intact sensation is mechanically displaced when the end-effector moves or contacts an object (Antfolk et al., 2013). For example, the person's neck can be vibrated when the end-effector makes contact with a table. The main limitations are that stimulation locations are limited in a person with tetraplegia and the mapping is non-intuitive. The user needs training to interpret the input, increasing cognitive load. Peripheral nerve stimulation is a sensory restoration method used for people with amputations (Graczyk et al., 2016). Peripheral nerve stimulation can elicit sensations in different locations at different intensities. Since peripheral nerve stimulation activates the pathway starting below the spinal cord, the main limitation is that people with spinal cord damage to the dorsal column–medial lemniscus pathway (DCML) will not receive the entire sensory signal.

Intracortical microstimulation (ICMS) is a promising option for those with tetraplegia and severe sensory deficits who want dexterous upper limb motor control. Given the severity of damage to the spinal cord, moving the electrode interface to the brain ensures that somatosensory feedback is delivered upstream of the damaged pathway. People with moderate damage to the sensory pathway of the spinal cord can still benefit from ICMS, although to a lesser degree. For example, if FES electrodes are activating the muscles of a participant with a degree of intact sensation, the larger-amplitude electrical signal may overwhelm the smaller-amplitude somatosensory signal, and ICMS can amplify the somatosensory input. Intracortical sensory interfaces provide more localized signals than ECoG applications. ICMS can elicit percepts specific to areas on fingertips, while ECoG percepts are more general, such as an entire finger (Lee et al., 2018). Reports indicate that ICMS electrodes can damage neural tissue in NHPs, but chronic stimulation with parameters selected to mitigate tissue damage does not exhibit any detectable adverse effects nor impair fine motor control (Rajan et al., 2015). All four dimensions of tactile sensation (intensity (magnitude of force), location, timing, and quality (texture)) have been preliminarily investigated in NHPs. Investigation has shown that NHPs can discriminate between electrical stimuli with different amplitudes (corresponding to different intensities of applied pressure), discriminate between stimuli on different electrodes which map to different receptive fields, and detect stimulus duration (Tabot et al., 2013). Texture is represented by a combination of timing, frequency, amplitude, and pulse width of ICMS signals and can also be discriminated by NHPs (O'Doherty et al., 2012; O'Doherty et al., 2019). Since ICMS can elicit myriad distinctive sensations corresponding to tactile information by varying stimulation parameters, as shown by NHP discrimination studies, translation to clinical iBMIs could incorporate a degree of biomimetic feedback for humans.

Intracortical Microstimulation in the human somatosensory cortex

ICMS in the human somatosensory cortex has elicited meaningful percepts.

ICMS electrodes were safely implanted in the primary somatosensory cortex of human participants with tetraplegia with no reported adverse events (Armenta Salas and Helms Tillery, 2016; Flesher et al., 2016). The stimulation parameters were chosen to remain under the established threshold for tissue damage, and the participants did not report pain or discomfort (Rajan et al., 2015; Flesher et al., 2016). The ICMS electrode array contacts mapped somatotopically to regions in the hand, and the amplitudes of the stimulation pulses modulated the perceived intensity (Flesher et al., 2016). Flesher et al. found a linear relationship between amplitudes ranging from 10 to 100 µA and perceived sensation, with a just-noticeable difference of 15 µA. The elicited sensations were stable for at least 6 months, and the arrays continued to be recorded from for at least 9 months, suggesting long-term viability (Flesher et al., 2016; Weiss et al., 2019). The stimulation parameters also modulated the quality of the perceived sensation. Salas et al. reported a range of elicited cutaneous and proprioceptive sensations including squeeze, vibration, forward movement, and upward movement (Armenta Salas et al., 2018). This range of elicited sensations provides the foundation for the development of iBMIs with sensory feedback.

Neuroprostheses can interface with tetraplegic participants using ICMS. Flesher et al. used a modular, robotic prosthetic limb with force sensors on the fingers (Flesher et al., 2016). Electrode contacts mapped to individual fingers were stimulated with amplitudes modulated by the forces recorded by the robotic sensors when the researcher applied pressure. The participant was able to detect differences in the intensity of the applied pressure through ICMS (Flesher et al., 2016). To increase the clarity of the motor command signals for the robotic limb, Weiss et al. removed the stimulation artifact by blanking and then applying a first-order Butterworth filter; the approach was less computationally demanding, making it practical to optimize for online use (Weiss et al., 2019). The state-of-the-art technology of human somatosensory ICMS provides a promising outlook for the future advancement of iBMI systems.
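
The sensor-to-stimulation mapping described here can be pictured as a simple linear scaling of measured fingertip force into the perceptible amplitude range. In the sketch below, the 10-100 µA range follows the text, while the force limits and function name are hypothetical.

```python
# Sketch: map a robotic fingertip force reading to an ICMS pulse amplitude within an
# assumed perceptible range. Force limits are illustrative assumptions.
def force_to_amplitude_uA(force_N, force_max_N=10.0, amp_min=10.0, amp_max=100.0):
    if force_N <= 0.0:
        return 0.0                                 # no contact -> no stimulation
    scaled = amp_min + (amp_max - amp_min) * min(force_N, force_max_N) / force_max_N
    return scaled

for f in [0.0, 2.5, 5.0, 10.0]:
    print(f"{f:4.1f} N -> {force_to_amplitude_uA(f):5.1f} uA")
```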

Future work to develop bi-directional intracortical BMIs

In order to close the loop with ICMS in iBMI systems, the cortical encoding of somatosensation and the effects of ICMS on neural activity need to be explored in greater detail. It is necessary to better understand how tactile sensation is encoded throughout the sensory pathway in humans—from fingertips through the spine and into the brain (Delhaye et al., 2018). Additionally, we can explore the intracortical representation of all four dimensions of tactile sensation (intensity, location, timing, quality) (Graczyk et al., 2016). We can also delve in greater detail into how ICMS activates neural populations, and how neural signals adapt to applied electrical stimulation (Histed et al., 2009; Graczyk et al., 2018). Safe ranges of stimulation parameters such as pulse width, pulse amplitude, pulse frequency, duration, and periodicity all need to be established for clinical investigation and for long-term applications (O'Doherty et al., 2012; Rajan et al., 2015; Flesher et al., 2016; O'Doherty et al., 2019). Mapping sensor recordings from end-effectors to ICMS stimulation parameters is another critical step for neuroprosthesis integration. It remains to be established what sensations can and cannot be elicited by ICMS, as well as how the user adapts to the stimulated sensation. A broader and deeper understanding of tactile perception modulated by ICMS will inform the development of a biomimetic ICMS paradigm to evoke functionally relevant tactile percepts.

Integrating ICMS in the somatosensory cortex with electrodes in the motor cortex for motor control requires a more thorough understanding of cortico-cortical communication. We need to understand how signals initiated in the primary somatosensory cortex are propagated to other brain regions, especially the primary motor cortex, the premotor cortex, and AIP. We can also explore alternative options to remove stimulation artifacts from ICMS in intracortical recordings (Weiss et al., 2019). Bi-directional iBMIs have great potential to allow for dexterous arm and hand movement restoration with physical end-effectors, given the progress of state-of-the-art sensory interfacing technology.

Conclusion

The advances in iBMI technology presented in this review for each of the components of the iBMI system indicate that future systems will provide a wide variety of sophisticated functions to paralyzed individuals over long periods of time. Current research thrusts are pushing towards creating stable, consistently performing iBMI systems by improving electrode technology, optimizing signal processing and the viability of neural signal information, and improving the adaptability, robustness, and stability of the neural decoder. Functionality and ease of use are increasing with improved incorporation of sensory feedback, both to the controller for physical end-effectors and to the user via bi-directional iBMI systems. Although the focus of this review was to present the current state of iBMI investigations, further analysis is warranted to identify the essential advancements that will lead to widespread deployment of iBMI neuroprosthetic devices.


Tables

Table 2-1. iBMI clinical trial study participants. Updated and expanded table from (Bullard et al., 2019) that summarizes all chronically-implanted iBMI participants and their associated publications. Abbreviations: MEA = microelectrode array; PD = participant designation; R = recording; S = stimulating; MPA = microelectrodes per array; Pt = platinum; SIROF = sputtered iridium oxide film. iBMI Clinical Trial Study Participants

Research Group | Injury inclusion | PD | Sex | Implanted brain area | R/S | Implant Characteristics | Publication

ALS; locked-in motor cortex MH F R cone electrode Kennedy and Bakay, 1998 syndrome (hand area) brainstem stroke; motor cortex Georgia tetraplegia and JR M R cone electrode Kennedy et al, 2000; Kennedy et al, 2004 (hand area) Tech anarthria mitchochondrial motor cortex myopathy affecting TT M R cone electrode Kennedy et al, 2004 (hand area) muscle and CNS S1 Motor Cortex 4x4 mm platform MEA; 96 C4 SCI; AIS-A (formerly M R Hochberg et al, 2006 (hand/arm area) silicone MPA; 1.0 mm depth MN) Kim et al, 2008; Kim et al, 2011; Simeral et al, 2011; brainstem stroke; Motor Cortex 4x4 mm platform MEA; 96 Hochberg et al, 2012; Homer et al, 2014; Perge et al, tetraplegia and S3 F R 2014; Bacher et al, 2015; Jarosiewicz et al, 2015; BG2 (hand area) silicone MPA; 1.5 mm depth anarthria Malik et al, 2015; Masse et al, 2015; Vargas-Irwin et al, 2018 Brown Motor cortex 4x4 mm platform MEA; 96 Case ALS; tetraplegia A1 M R Kim et al, 2008; Kim et al, 2011 Stanford (arm area) silicone MPA 4x4 mm platform MEA; 96 ALS; tetraplegia T1 F Motor Cortex R Vargas-Irwin et al, 2018 MPA; 1.0 mm depth brainstem stroke; Motor Cortex 4x4 mm platform MEA; 96 Hochberg et al, 2012; Homer et al, 2014; tetraplegia and T2 M R Jarosiewicz et al, 2015; Perge et al, 2014; Milekovic (hand area) MPA; 1.5 mm depth anarthria et al, 2018; Masse et al, 2015; Milekovic et al, 2019


Pandarinath et al, 2017; Even-Chen et al, 2018; 2 arrays in Motor 4x4 mm platform MEA; 96 Pandarinath et al, 2018; Stavisky et al, 2018; Young C4 SCI; AIS-C T5 M Cortex R et al, 2019; Brandman et al, 2018; Nuyujukian et al, silicone MPA 1.5 mm depth (hand/arm area) 2019; Stavisky et al, 2018; Willett et al, 2019; Rastogi et al, 2020; Willett et al, 2020 Gilja et al, 2015; Jarosiewicz et al, 2015; ALS; ALSFRS-R 16, Pandarinath et al, 2015; Willett et al, 2017; Willett et Motor Cortex 4x4 mm platform MEA; 96 al, 2017; Pandarinath et al, 2017; Even-Chen et al, dexterous finger and T6 F R (hand area) silicone MPA; 1.0 mm depth 2018; Stavisky et al, 2018; Milekovic et al, 2018; wrist movement Willett et al, 2018; Milekovic et al, 2019; Nuyujukian et al, 2019; Willett et al, 2019; Gilja et al, 2015; Jarosiewicz et al, 2015; 2 arrays in Motor 4x4 mm platform MEA; 96 Pandarinath et al, 2015; Willett et al, 2017; ALS; ALSFRS-R 17 T7 M Cortex R Pandarinath et al, 2017; Pandarinath et al, 2018; silicone MPA; 1.5 mm depth (hand area) Willett et al, 2018; Milekovic et al, 2019; Willett et al, 2019 Ajiboye et al, 2017; Willett et al, 2017; Willett et al, 2 arrays in Motor 4x4 mm platform MEA; 96 2017; Brandman et al, 2018; Willett et al, 2018; C4 SCI; AIS-A T8 M R Cortex (hand area) MPA 1.5 mm depth Young et al, 2018; Young et al, 2019; Brandman et al, 2018; Willett et al, 2019; Rastogi et al, 2020 2 arrays in Motor 4x4 mm platform MEA; 96 Jarosiewicz et al, 2016; Willett et al, 2018; ALS; ALSFRS-R 8 T9 M Cortex (hand/arm R silicone MPA; 1.5 mm depth Nuyujukian et al, 2019; Rastogi et al, 2020 area) 2 arrays in Motor Cortex 4x4 mm platform MEA; 96 C4 SCI; AIS-A T10 M (precentral gyrus, R Brandman et al, 2018 silicone MPA 1.5 mm depth caudal middle frontal gyrus)

C5-C6 SCI; AIS-A zone of partial Battelle / preservation for Motor Cortex 4x4 mm platform MEA; 96 Bouton et al, 2016; Friedenberg et al, 2017; Colachis M R OSU motor function to C6 (arm area) MPA; 1.5 mm depth et al, 2018; Skomrock et al, 2018 some proprioception in right upper limb

spinocerebellar 2 arrays in Motor 4x4 mm platform MEA; 96 Collinger et al, 2013; Wodlinger et al, 2015; Downey Pittsburgh degeneration; F Cortex R et al, 2016; Downey et al, 2017; Downey et al, 2018; MPA; 1.5 mm depth tetraplegia (hand area) Downey et al, 2020


4x4 mm platform 100 Pt-tipped MEs (88 MEs 2 arrays in S1 R wired to external connector); 1.5 mm depth Downey et al, 2016

2.4 x 4 mm platform 2 arrays in posterior 60 SIROF-tipped MEs (32 MEs S C5-C6 SCI; AIS-B M parietal cortex wired to external connector); 1.5 mm depth 4x4 mm platform 100 Pt-tipped MEs (88 MEs 2 arrays in M1 R wired to external connector) 1.5 mm depth Flesher et al, 2016; Downey et al, 2017; Downey et al, 2018; Downey et al, 2018; Weiss et al, 2019; 2.4 x 4 mm platform Downey et al, 2020 60 SIROF-tipped MEs (32 MEs 2 arrays in S1 S wired to external connector) 1.5 mm electrode depth reach area on superior parietal lobule (5d) C3-C4 SCI; 96 Pt-tipped MPA; 1.0/1.5 mm NS F grasp area at junction R Zhang et al, 2017; Rutishauser et al, 2018; tetraplegia depth of intraparietal and Sakellaridi et al, 2019 postcentral sulci (AIP) reach area on superior parietal lobule (5d) C3-C4 SCI; 96 Pt-tipped MPA; 1.5 mm Aflalo et al, 2015; Klaes et al, 2015; Rutishauser et CalTech EGS M grasp area at junction R tetraplegia depth al, 2018 of intraparietal and postcentral sulci (AIP) C5 SCI; tetraplegia 1 array in motor complete Supramarginal Gyrus sensory incomplete FG M (SMG) R 96 MPA with Pt-coated tips Armenta Salas et al, 2018 (residual sensation 1 array in ventral in anterior-radial premotor cortex (PMv)


section of upper arm and posteror-radial sensation of upper 2 arrays in S1 (area 1) S 48 SIROF-tipped MEs arm and forearm)

2 in posterior parietal 96 Pt-tipped MPA; 1.0-1.5 mm tetraplegia U1 M R Saif-Ur-Rehman et al, 2019 cortex depth 2 in posterior parietal 96 Pt-tipped MPA; 1.0-1.5 mm tetraplegia U2 M R Saif-Ur-Rehman et al, 2019 cortex depth 2 in contralateral M1; 1 4 mm platform MEA; 96 Pt- R Johns in ipsilateral M1 tipped MPA C6 SCI M Thomas et al, 2020 Hopkins 2 in contralateral S1; 1 S 32 SIROF-tipped MPA in ipsilateral S1


Table 2-2. Neural signal feature types.

Feature | Category | Definition
Sorted single units | Sorted single units | Recorded action potentials from individual neurons
Thresholded multi-unit activity | Multi-unit feature | Recorded neural activity that crosses a predefined noise threshold, often 3-4.5 x RMS (Christie et al., 2015). This is also known as threshold crossing rate.
Spike band power | Multi-unit feature | The spectral power contained within a high-frequency band, typically 250-5000 Hz
Mean wavelet power | Multi-unit feature | Spectral power contained within multiple wavelets. MWP is calculated by decomposing a neural signal into wavelets representing multiple frequency bands, and then normalizing and averaging the resulting wavelet coefficients (Bouton et al., 2016)
Local field potential | Local field potential | The sum of excitatory and inhibitory potentials from many neurons surrounding an electrode (Sharma et al., 2015). Traditional LFPs range up to 300 Hz, while high-frequency LFPs range between 150-400 Hz (Scherberger et al., 2005; Schwartz et al., 2006)


Table 2-3. Neural signal and feature characterization studies. This table includes iBMI investigations that characterize signals collected in areas outside of motor cortex and/or decode non-kinematic variables, such as force or speech. Feature types: SU = sorted single units; TC = thresholded multi-unit activity; SBP = spike band power; LFP = local field potential; MWP = mean wavelet power

iBMI Investigations: Data and Feature Characterization

Extracted Study Brain Areas Decoded Variables Decoder/Classifier/Model Calibration Features principal component analysis and Gaussian Downey et al. 2 in M1 (hand/arm online: TC N/A mixture model expectation-maximization closed-loop calibration 2018 area); 2 in S1 offline: SU algorithm to discriminate between single units 2 in hand/arm area Downey et al. 2018 of M1; 2 in S1 (not TC Force naive Bayes classification (offline) N/A used in this study) Off-line analysis, TC; multiple Stavisky et al. 2 in motor cortex classifier trained with features from LFP speech sounds support vector machine classifier 2018 (hand area) repeated leave-one-out power cross-validation tuning of memory and confidence selective neurons: Off-line analysis, Rutihauser et left PPC (memory SU 1. novel vs. familiar stimuli regularized least squares decoder decoder trained with al, 2018 recall) 2. high vs. low confidence in 60% of all correct trials memory Spike train similarity space analysis (SSIMS) to compare neural activity during three No decoder calibration: Vargas-Irwin volitional states: Motor cortex SU N/A Open-loop study with et al, 2018 1. watching a movement offline analysis 2. imagining performing a movement 3. attempting to perform a movement Wrist flexion/extension target Friedenberg Motor cortex (hand beta regression; non-linear support vector et al. MWP angles (a proxy for discrete open-loop calibration area) regression 2017 imagined and executed forces) 1 in reach area on represented body part: linear regression cross- Zhang et al, superior parietal SU 1. left or right shoulder or hand linear classifier validated between each 2017 lobule (5d) 2. speech, "left" or "right" condition 1 in grasp area at


junction of intraparietal and postcentral sulci (AIP) open-loop calibration Downey et al. 2 in motor cortex 3D cartesian endpoint velocity, indirect optimal linear estimation with ridge then computer assited TC 2017 (hand area) wrist roll, grasp regression closed-loop recalibration Ajiboye et al. Motor cortex (hand SU; TC; LFP; and imagined single-joint linear impulse response filter using system offline model training 2012 area) variations movements identification and validation action potentials Kennedy et Motor cortex (hand from groups of al. on/off signal Differentiator N/A 1998 area) neurons converted to pulse trains


Table 2-4. Decoder design and optimization studies. These studies either introduce new decoders, characterize the properties of existing decoders, or improve the performance of existing decoders. Feature types: SU = sorted single units; TC = thresholded multi- unit activity; SBP = spike band power; LFP = local field potential; MWP = mean wavelet power

iBMI Investigations: Decoder Design and Optimization Decoded Study Brain Areas Extracted Features Decoder Calibration Variables Re-parameterized version of the Kalman Willett et al. 2D Cartesian end- 1-2 in motor cortex TC; SBP filter with multiple gain and smoothing 2019 point velocity parameter values classification SpikeDeeptector: deep neural network Saif-Ur- between spikes and Rehman et al, 2 in PPC TC trained to determine whether a recording N/A artifacts in data 2019 channel contains neural spikes or noise from new subjects LFPs in low (<6Hz), regularized linear discriminant analysis Milekovic et al. Motor cortex medium (15-45Hz), FlashSpeller (rLDA) classifier; comparison of same-day open-loop calibration 2018 (hand area) and high (50-350Hz) selection decoder to a previously calibrated and frequency bands unchanged decoder 2 in Motor Cortex MK-DKF (principled kernel selection and rapid-update (2-5s) Brandman et (precentral gyrus, 2D Cartesian end- al. TC; SBP Gaussian process regression with the closed-loop calibration caudal middle frontal point velocity 2018 discriminative Kalman filter); Kalman filter with computer assistance gyrus) 2 in motor cortex steady-state Kalman filter; Gaussian process rapid-update (2-5s) Brandman et (precentral gyrus, Cartesian end-point al, TC; SBP regression with the discriminative Kalman closed-loop calibration caudal middle frontal velocity 2018 filter with computer assistance gyrus) principal component analysis and Gaussian Downey et al. 2 in M1 (hand and arm online: TC N/A mixture model expectation-maximization closed-loop calibration 2018 area); 2 in S1 offline: SU algorithm to discriminate between SUs PCA on the difference between neural TC in all participants; activity recorded on each electrode during Even-Chen et Motor cortex (hand and outcome error LFPs (150-450Hz) in successful trials and failed trials, then LDA N/A al, 2018 arm areas) signal one participant of the projection of successful and failed trials within PCA space


for SVM: open-loop the closed-loop recalibration Skomrock et Motor cortex discrete hand non-linear support vector machine; for DNN: off-line training al. MWP 2018 (hand area) postures deep neural network decoder with 80 blocks from 40 sessions, then used without recalibration Pandarinath et NHP M1 and PMd; latent factor analysis via dynamical systems Many Many N/A al, 2018 Human M1 (LFADS) optimal linear estimation with stimulation open-loop calibration then Young et al, 2 in motor cortex (hand 3D Cartesian end- artifact reduction using either: blanking, TC; SBP computer assisted closed- 2018 area) point velocity common average referencing, or linear loop recalibration regression referencing Re-parameterized version of the Kalman open-loop calibration then Willett et al. 1-2 in motor cortex 2D Cartesian end- TC; SBP filter with multiple gain and smoothing computer assisted closed- 2017 (hand area) point velocity parameter values loop recalibration Jarosiewicz et 2 in motor cortex Cartesian endpoint Hybrid kinematic/state filter (Kalman filter + open-loop then closed- al. TC; SBP 2016 (hand area) velocity, click linear discriminant state classifier) loop recalibration retrospective target inference-based Jarosiewicz et 1-2 in motor cortex Cartesian endpoint open-loop then closed- al. TC; SBP decoder calibration, Kalman filter velocity (hand area) velocity, click loop recalibration 2015 control Homer et al. motor cortex (hand 2D Cartesian end- multiple offset correction algorithm (MOCA) TC; SBP open-loop calibration 2014 area) point velocity to adapt Kalman filter during use Perge et al. motor cortex (hand 2D Cartesian end- SU; LFP Kalman Filter open-loop calibration 2014 area) point velocity Cartesian end-point Kim et al. motor cortex (hand Hybrid kinematic/state filter (Kalman filter + SU velocity open-loop calibration 2011 area) linear discriminant state classifier) Click Kim et al. motor cortex (hand 2D Cartesian end- open-loop then closed- SU linear filter compared to Kalman filter 2008 area) point velocity loop recalibration


Table 2-5. iBMI end-effector studies. These studies include the first-reported use of each type of end effector and studies that increase or evaluate functionality. Feature types: SU = sorted single units; TC = thresholded multi-unit activity; SBP = spike band power; LFP = local field potential; MWP = mean wavelet power. Calibration types: O = open-loop only; O then C = open-loop calibration then closed-loop calibration; O comp C = open-loop calibration, then with computer-assisted closed-loop recalibration; M = motor control based calibration. M was used in a study in which the participants had limited voluntary finger function.

iBMI Investigations: End-effectors

Decoded Study End-effector DOF Brain Areas Decoded Variables Decoder Calibration Features 1 in reach area on superior parietal lobule (5d) Sakellaridi et 2 DOF 1 in grasp area at Cartesian endpoint linear discriminant al. cursor control SU; TC O 2019 simultaneous junction of velocity analysis classifier intraparietal and postcentral sulci (AIP) 3D Cartesian end-point velocity + swivel angle Optimal linear estimation Young et al. virtual arm, end-point 3-4 DOF 2 in motor cortex TC; SBP velocity (OLE) + first-order O comp C 2019 control simultaneous (hand area) Joint angle velocity + smoothing swivel angle velocity FlashSpeller, selection- Milekovic et motor cortex LFP (40-400 al. based BCI 1 DOF 1D cursor position linear decoder O then C (hand area) Hz) 2019 communication ReFIT Kalman Filter + Tablet control with Hidden Markov Model 3 DOF multiple applications: state classifier (in two simultaneous: 2D Nuyujukian et email, chat, motor cortex 2D Cartesian end-point participants) cursor velocity + TC; SBP O then C al, 2019 entertainment (hand area) velocity + 1D click A cumulative closed-loop state classifier streaming, decoder + LDA state (movement + click) news/weather classifier (in one participant) TC; high- Stavisky et 2 DOF 1-2 in motor cortex 2D Cartesian end-point Kalman filter cursor control frequency O then C al. 2018 simultaneous (hand/arm area) velocity ReFIT Kalman filter LFP


Colachis et al. 2018 | FES surface stimulation for hand grasp (high-density electrode sleeve on forearm) | 7 grasp states | motor cortex (hand area) | TC; MWP | discrete hand postures | non-linear support vector machine | O
Pandarinath et al. 2017 | cursor control for QWERTY and BCI-optimized keyboards | 3 DOF simultaneous: 2D cursor velocity + state classifier (movement vs. click) | 1-2 in motor cortex (hand/arm area) | TC; high-frequency LFP | Cartesian end-point velocity; click | ReFIT Kalman filter (all participants); HMM-based state classifier | O
Ajiboye et al. 2017 | FES: percutaneous intramuscular stimulation for arm movement and hand grasp, with shoulder positioning via a BCI-controlled mobile arm support; virtual arm, endpoint control | single joint (1 DOF); multi-joint end-point (up to 3 DOF) | 2 in motor cortex (hand area) | TC; SBP | for FES: rate of change of the activation percentage of joint stimulation patterns; for virtual arm: joint angle velocities | full optimal linear estimation (Full OLE) + first-order smoothing | O
Bouton et al. 2016 | FES surface stimulation for hand grasp (high-density electrode sleeve on forearm) | 6 movement states | motor cortex (hand area) | SU; MWP | discrete hand postures | non-linear support vector machine | O then C
Downey et al. 2016 | robotic arm with three-fingered hand, used with and without intelligent, vision-guided assistance for grasping and placing objects | 4 DOF simultaneous | 2 in motor cortex (hand area); 2 in S1; 2 in parietal cortex | TC | 3D Cartesian end-point velocity, 1D grasp velocity | indirect optimal linear estimation with ridge regression | O comp C
Aflalo et al. 2015 | cursor control | 2 DOF simultaneous; 4 or 6 target states (classification) | 1 in reach area on superior parietal lobule (5d); 1 in grasp area at the junction of the intraparietal and postcentral sulci (AIP) | SU; TC | Cartesian end-point velocity; target goal | linear discriminant analysis classifier; L1-constrained linear least squares regression | O


Gilja et al. 2015 | cursor control | 2 DOF simultaneous | 1-2 in motor cortex (hand area) | SBP; high-frequency LFP | 2D Cartesian end-point velocity | Kalman filter | O, M
Wodlinger et al. 2015 | robotic arm with anthropomorphic hand | 10 DOF simultaneous | 2 in motor cortex (hand area) | SU; TC | translation: end-point velocity; orientation: degrees of roll, pitch, and yaw; grasp: 4 states | indirect optimal linear estimation with ridge regression | O comp C
Bacher et al. 2015 | point-and-click virtual typing with QWERTY keyboard or radial communication | 2 DOF simultaneous | motor cortex (hand area) | SU | 2D Cartesian end-point velocity | Kalman filter | O then C
Collinger et al. 2013 | robotic arm with anthropomorphic hand | 4 DOF simultaneous; 7 DOF simultaneous | 2 in motor cortex (hand area) | SU | three-dimensional Cartesian end-point velocity, three-dimensional orientation velocity (of the wrist), and one-dimensional grasping velocity | indirect optimal linear estimation with ridge regression | O comp C
Hochberg et al. 2012 | robotic arm with anthropomorphic hand (DEKA arm, DLR Light-Weight Robot III) | 3-4 DOF simultaneous | motor cortex (hand area) | TC | Cartesian end-point velocity, grasp velocity | hybrid kinematic/state filter (Kalman filter + linear discriminant state classifier) | O comp C
Simeral et al. 2011 | cursor control | 3 DOF simultaneous: 2D cursor velocity + state classifier (movement vs. click) | motor cortex (hand area) | SU | Cartesian end-point velocity; click | hybrid kinematic/state filter (Kalman filter + linear discriminant state classifier) | O
Hochberg et al. 2006 | cursor control | 2 DOF simultaneous | motor cortex (hand knob) | SU | 2D Cartesian end-point velocity | linear filter | O
Kennedy et al. 2004 | cursor control; virtual hand | 2 DOF cursor control; 1 DOF virtual hand | motor cortex (hand area) | LFP | 2D Cartesian end-point velocity; finger movements on virtual hand (1D flexion/extension) | differentiator | N/A
Kennedy et al. 2000 | cursor control | 1 DOF | motor cortex (hand area) | SU | increase or decrease in firing rate | differentiator | N/A


Chapter 3: Neural representation of observed, imagined, and attempted grasping force in motor cortex of individuals with chronic tetraplegia

Note that this chapter has been published in an open access peer-reviewed journal:

Rastogi, A., Vargas-Irwin, C.E., Willett, F.R., Abreu, J., Crowder, D.C., Murphy, B.A., Memberg, W.D., Miller, J.P., Sweet, J.A., Walter, B.L., Cash, S.S., Rezaii, P.G., Franco, B., Saab, J., Stavisky, S.D., Shenoy, K.V., Henderson, J.M., Hochberg, L.R., Kirsch, R.F., Ajiboye, A.B. Neural representation of observed, imagined, and attempted grasping force in motor cortex of individuals with chronic tetraplegia. Sci Rep 10, 1429 (2020). https://doi.org/10.1038/s41598-020-58097-1

The article is licensed under a Creative Commons Attribution 4.0 International License

(http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution, and reproduction in any medium or format. Therefore, we reproduce the article below. The article has been re-formatted to adhere to the requirements of this dissertation.


Abstract

Hybrid kinetic and kinematic intracortical brain-computer interfaces (iBCIs) have the potential to restore functional grasping and object interaction capabilities in individuals with tetraplegia. This requires an understanding of how kinetic information is represented in neural activity, and how this representation is affected by non-motor parameters such as volitional state (VoS), namely, whether one observes, imagines, or attempts an action. To this end, this work investigates how motor cortical neural activity changes when three human participants with tetraplegia observe, imagine, and attempt to produce three discrete hand grasping forces with the dominant hand. We show that force representation follows the same VoS-related trends as previously shown for directional arm movements; namely, that attempted force production recruits more neural activity compared to observed or imagined force production. Additionally, VoS modulated neural activity to a greater extent than grasping force. Neural representation of forces was lower than expected, possibly due to compromised somatosensory pathways in individuals with tetraplegia, which have been shown to influence motor cortical activity. Nevertheless, attempted forces (but not always observed or imagined forces) could be decoded significantly above chance, thereby potentially providing relevant information towards the development of a hybrid kinetic and kinematic iBCI.

Introduction

Intracortical brain-computer interfaces (iBCIs) have the potential to restore lost or compromised function to individuals with tetraplegia. iBCIs typically detect neural activity from motor cortex, which encodes kinetic and kinematic information in rhesus macaques (Evarts, 1968; Humphrey, 1970; Fetz and


Cheney, 1980; Georgopoulos et al., 1982; Evarts et al., 1983; Georgopoulos et al., 1986;

Kakei et al., 1999; Carmena et al., 2003; Morrow and Miller, 2003; Sergio and Kalaska,

2003; Pohlmeyer et al., 2007; Oby et al., 2010; Vargas-Irwin et al., 2010; Zhuang et al.,

2010). iBCIs that extract kinematic parameters have allowed individuals to command one- and two-dimensional computer cursors (Wolpaw et al., 2002; Leuthardt et al., 2004;

Kubler et al., 2005; Hochberg et al., 2006; Kim et al., 2008; Schalk et al., 2008; Hermes et al., 2011; Kim et al., 2011; Simeral et al., 2011; Gilja et al., 2015; Jarosiewicz et al.,

2015; Pandarinath et al., 2017), prosthetics (Hochberg et al., 2012; Collinger et al.,

2013b; Wodlinger et al., 2015), and functional electrical stimulation of paralyzed muscles

(Bouton et al., 2016; Ajiboye et al., 2017). Additional work has characterized closed-loop kinetic control in nonhuman primates (Moritz et al., 2008; Pohlmeyer et al., 2009; Ethier et al., 2012) and open-loop force modulation in human participants (Flint et al., 2014;

Downey et al., 2018). These studies could potentially move iBCI technology towards restoring functional tasks requiring both kinetic and kinematic control.

Motor cortex can exhibit activity in the absence of motor output, such as during mental rehearsal or movement observation (Sanes and Donoghue, 2000; Hatsopoulos and Suminski, 2011). Furthermore, rather than representing fixed motor parameters, M1 may adapt to achieve the task at hand (Kalaska, 2009). A body of work has investigated how motor cortex modulates to different volitional states, including passive observation, imagined action, and attempted or executed action, mostly in the context of kinematic outputs. For example, kinematic imagery produces similar neural activity patterns as movement execution – or attempted movement, in persons with tetraplegia – but more weakly and in a smaller subset of motor areas (Porro et al., 1996; Grezes and Decety,

2001; Filimon et al., 2007; Miller et al., 2010). Furthermore, in individuals with tetraplegia, observed, imagined, and attempted arm reaches recruit shared neural populations, but nonetheless yield unique patterns of activity (Vargas-Irwin et al., 2018).


This supports the existence of a “core” network that modulates to all volitional states, and the recruitment of additional neural circuitry during the progression from passive observation to attempted movement (Jeannerod, 2001; Page et al., 2001; Page et al.,

2007; Mukamel et al., 2010; Vargas-Irwin et al., 2018).

The effects of volitional state on non-kinematic representation, including that of grasping forces, remain uncertain. In able-bodied participants implanted with sEEG electrodes, executed forces produce stronger neural signals than imagined forces

(Murphy et al., 2016). This supports fMRI findings in able-bodied individuals and participants with spinal cord injury, who exhibited weaker, less widespread BOLD activations when imagining (vs. attempting) forces (Cramer et al., 2005). However, while this trend was readily apparent during a standard two-tailed analysis of all able-bodied subjects, it only appeared in the SCI group when within-subject comparisons were implemented.

To our knowledge, neural modulation to observed, imagined, and attempted forces has not been evaluated at the resolution of intracortical activity in humans.

Furthermore, while open-loop force decoding was achieved in a single individual with tetraplegia (Downey et al., 2018), the extent to which force is neurally represented in this patient population, or how volitional state may affect this representation, is unresolved.

Therefore, the present study evaluates how volitional state (observe, imagine, and attempt) during kinetic behavior (grasp force production) influences neural activity in motor cortex. Specifically, we characterize the topography of the neural space representing force and volitional state in three individuals with tetraplegia, both at the level of single neural features extracted from multiunit intracortical activity, and at the level of the neural population. We show that 1) volitional state affects how neural activity modulates to force; namely, that attempted forces generate stronger cortical modulation than observed and imagined forces; 2) grasping forces are reliably decoded when

attempted, but not always when observed or imagined, and 3) volitional state is represented to a greater degree than grasping forces in the motor cortex.

Results

Characterization of Individual Features

A major goal of this study was to determine whether force-related tuning was present at the level of single neural features extracted from multiunit intracortical activity (Figure 3-1A), and to assess the extent to which volitional state affected this tuning. These neural features were extracted during a force-matching behavioral task, as illustrated in

Figure 3-1B, in which participants T8, T5, and T9 observed, imagined, and attempted producing three discrete forces using either a power or a pincer grasp.

Supplemental Table 3-S1 shows a full list of sessions and their associated parameters.

In all participants, two neural features were extracted from each of the 192 recording electrodes implanted in motor cortex. Threshold crossing (TC) features, defined as the number of times the neural activity crossed a pre-defined, channel-specific noise threshold, are numbered from 1–192 according to the recording electrode from which they originate. Corresponding spike band power (SBP) features, defined as the root mean square of the signal in the spike band (250–5000 Hz) of each channel, are numbered from 193–384.
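To make these feature definitions concrete, the following illustrative Python sketch computes TC and SBP features for one channel of raw data in non-overlapping 20 ms bins. It is not the real-time Simulink/xPC implementation used in the study; the filter design and the noise-threshold multiplier (a multiple of the spike-band RMS) are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 30000      # sampling rate of the neural recordings (Hz)
BIN_MS = 20     # non-overlapping feature bin width (ms)


def extract_tc_sbp(raw, thresh_mult=-4.5):
    """Compute threshold crossing (TC) and spike band power (SBP) features
    for one channel of raw voltage data.  The -4.5 x RMS noise threshold and
    the 4th-order Butterworth filter are illustrative assumptions."""
    # Band-pass the raw signal to the spike band (250-5000 Hz)
    b, a = butter(4, [250, 5000], btype="bandpass", fs=FS)
    spike_band = filtfilt(b, a, raw)

    # Channel-specific noise threshold (a multiple of the spike-band RMS)
    threshold = thresh_mult * np.sqrt(np.mean(spike_band ** 2))

    samples_per_bin = FS * BIN_MS // 1000
    n_bins = len(spike_band) // samples_per_bin
    bins = spike_band[: n_bins * samples_per_bin].reshape(n_bins, samples_per_bin)

    # TC: number of downward threshold crossings in each 20 ms bin
    below = bins < threshold
    tc = np.sum(below[:, 1:] & ~below[:, :-1], axis=1)

    # SBP: root mean square of the spike-band signal in each 20 ms bin
    sbp = np.sqrt(np.mean(bins ** 2, axis=1))
    return tc, sbp
```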


Figure 3-1. A. Experimental setup. Prior to the current study, participants were implanted with two 96-channel microelectrode arrays in motor cortex as a part of the BrainGate2 Pilot Clinical Trial. The microelectrode arrays recorded neural activity while participants completed a force task. Two neural features (threshold crossings, spike band powers) were extracted from each channel in the arrays. B. Experimental session architecture. Each session consisted of 12–21 blocks, each of which contained ~20 trials (see Supplemental Table 3-S1). In each trial, participants observed, imagined, or attempted to generate one of three cued forces with either a power grasp or a closed pincer grasp; to perform a wiggling finger movement; or to rest. Trial types were presented in a pseudorandom order. Each trial contained a preparatory (prep) phase, a go phase where forces were actively embodied, and a stop phase where neural activity was allowed to return to baseline. Participants were prompted with both audio and visual cues, in which a researcher squeezed an object associated with each force level. Visual cues were presented with a third person, frontal view, in which the researcher faced the participants while squeezing the objects. Lateral views are shown here for visual clarity, but were not displayed as such to the participants.


Figure 3-2 shows five representative TC and SBP features in participant T8, each tuned to one of the following, as evaluated with 2-way Welch-ANOVA: force only, volitional state only, neither force nor volitional state, both force and volitional state independently, or an interaction between force and volitional state.

Supplemental Figures 3-S6 and 3-S7 show corresponding feature traces for participants

T5 and T9, respectively. Additionally, Supplemental Figures 3-S8 – 3-S11 contain rasters for exemplary single sorted units that are tuned to these factors in participant T8.

For each neural feature, peristimulus time histograms (PSTHs) averaged within individual forces (Column 1) and within individual volitional states (Column 2) are illustrated in Figure 3-2. SBP Feature 272 was tuned to force only (row 2); as such, it showed go-phase differentiation across multiple force levels that were statistically discriminable (corrected p < 0.05, 2-way Welch-ANOVA, Benjamini-Hochberg method).

In contrast, TC feature 182 was tuned to volitional state only (row 1), and, therefore, did not exhibit force discriminability but was discriminable across multiple volitional states

(corrected p < 0.05, 2-way Welch-ANOVA). TC Feature 79, which was tuned independently to both force and volitional state, was statistically discriminable for both parameters.

Column 3 of Figure 3-2 graphically represents the simple main effects of the 2- way Welch-ANOVA analysis, as represented by mean go-phase neural deviations from baseline activity for each force level within each volitional state. In these panels, features that are tuned independently to force exhibit a similar pattern of modulation to light, medium, and hard grasping forces, regardless of the volitional state used in the trial.

These include SBP Feature 272 (row 2), which is tuned to force only, and TC Feature 79

(row 4), which is tuned independently to both force and volitional state. In contrast, SBP

Feature 257 (row 5) exhibits a statistically significant interaction between force and volitional state (p < 0.05). For this feature, the mean neural deviations attributed to each

force level are affected by the volitional states used to emulate them. This suggests that volitional state could affect the degree to which interacting features are tuned to force in participant T8.

Figure 3-3 summarizes the tuning of all 384 TC and SBP features in each participant across both power and pincer grasps, during the active “go” phase of the behavioral task. In participant T8, these data are averaged over multiple experimental sessions. Figure 3-3A shows the average fraction of features in the neural population that were tuned to the four marginalizations of interest (corrected p < 0.05, 2-way Welch-

ANOVA). In all participants, a substantial proportion of modulating features are tuned to volitional state only. A second population of features – denoted by blue and red bars – exhibit independent go-phase tuning to force (27.0% in T8 power, 24.3% in T8 pincer,

3.6% in T5 power, 8.3% in T5 pincer, and 2.3% in T9 pincer). A majority of these force-tuned features exhibit “mixed selectivity”, in that they are also independently tuned to volitional state. A final subset of features exhibits a statistically significant interaction (corrected p < 0.05, 2-way Welch-ANOVA) between force-related and volitional-state-related modulation. Figure 3-3B further subdivides these interacting features into those that are tuned to force within each individual volitional state (corrected p < 0.05, 1-way

Welch ANOVA). Note that in participant T9, no interacting features were detected. In participants T8 and T5, a larger proportion of interacting features are recruited by attempted forces than by observed and imagined forces. Therefore, in these two participants, an overall greater proportion of neural features are tuned to force during the attempted volitional state.



Figure 3-2. Single features are tuned to force and volitional state. Rows: Average per-condition activity (PSTH) of five exemplary TC and SBP features tuned to force only (session 1), volitional state (VoS) only (session 4), neither factor (session 1), both factors (session 4), and an interaction between both factors (session 4) in participant T8 (2-way Welch ANOVA, corrected p < 0.05, Benjamini-Hochberg method). Neural activity was smoothed with a 100-ms Gaussian kernel prior to trial averaging to aid in visualization. Statistically significant p-values for force modulation, VoS modulation, and interaction are indicated with asterisks. Neural activity in Column 1 is averaged over all volitional states, such that observable differences in modulation are due to force alone (~50–90 trials per force level, depending on session number). Similarly, Column 2 depicts the activity of individual features during distinct volitional states, averaged over all force levels (~50–90 trials per volitional state, depending on session number). Simple main effects are represented graphically in Column 3 via normalized mean neural deviations from baseline activity during force trials within each of the three volitional states. Modulation depths were computed over the go phase of each trial, and then averaged within each force-VoS pair. Error bars indicate 95% confidence intervals.

Figure 3-3. Overall tuning of neural features. A. Fraction of neural features significantly tuned (2-way Welch-ANOVA, corrected p < 0.05) to force and/or volitional state during the go phase of force production. For participant T8, results are averaged across multiple sessions. Error bars indicate standard deviation. B. Fraction of total features exhibiting a statistically significant interaction (2-way Welch-ANOVA, p < 0.05) between force and volitional state, subdivided into force-tuned features during observation (O), imagination (I), and attempt (A). Force tuning within each volitional state was determined via one-way Welch-ANOVA (corrected p < 0.05). Error bars indicate standard deviation. Results show that features with an interaction between force and VoS are more likely to have force tuning in the attempt condition than in the observed or imagined conditions.


Neural Population Analyses

The next goal was to examine how volitional state affected force-related tuning at the neural population level. To this end, feature vectors representing individual trials were projected into low dimensional “similarity spaces” using continuous similarity

(CSIM) analysis, adapted from Spike-Train Similarity (SSIM) analysis (Vargas-Irwin et al., 2015), as described in the Methods. Briefly, CSIM makes pair-wise comparisons between patterns in neural features and then projects these pair-wise comparisons into a low-dimensional representation using t-distributed Stochastic Neighbor Embedding (t-

SNE) (van der Maaten and Hinton, 2008). In the low-dimensional representation, feature activity during an individual trial is represented as a single point, and the distance between points denotes the degree of similarity between trials.
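A minimal sketch of this projection step is given below, assuming trial-by-feature-by-time arrays. The correlation-based distance is an illustrative stand-in for the CSIM similarity metric (described in the Methods and Supplemental Information), and the scikit-learn t-SNE implementation stands in for the original MATLAB analysis.

```python
import numpy as np
from sklearn.manifold import TSNE


def csim_projection(trial_features, n_components=2, perplexity=30, seed=0):
    """Project single-trial activity patterns into a low-dimensional
    "similarity space".  trial_features has shape (n_trials, n_features,
    n_timebins).  A (1 - Pearson correlation) distance is used here purely
    for illustration of the pipeline."""
    n_trials = trial_features.shape[0]
    flat = trial_features.reshape(n_trials, -1)

    # Pair-wise dissimilarity between trials
    dist = 1.0 - np.corrcoef(flat)
    np.fill_diagonal(dist, 0.0)
    dist = np.clip(dist, 0.0, None)   # guard against tiny negative values

    # t-SNE on the precomputed distance matrix preserves nearest-neighbor
    # relationships in the low-dimensional embedding
    tsne = TSNE(n_components=n_components, metric="precomputed",
                init="random", perplexity=perplexity, random_state=seed)
    return tsne.fit_transform(dist)
```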

Figure 3-4A shows the relationship between single-trial activity patterns using two-dimensional CSIM plots for a representative session from each participant-grasp pair. In concordance with the single feature data, clustering of trials according to volitional state is evident in all participants during both power and pincer grasping

(Kruskal-Wallis p < 0.00001). In contrast, force-related clustering of trials occurred only during Sessions 1, 7, and 11 and was most apparent during attempted force production

(Kruskal-Wallis, p < 0.0001). These trends are further illustrated in Figure 3-4B, which shows the distribution of pairwise distances between trials within and between volitional states in the upper left panel, as well as within and between forces within each volitional state. The medians corresponding to the within-condition and between-condition distributions are farthest apart for volitional state, are closest together for observed and imagined forces, and are separated by an intermediate amount for attempted forces.

This indicates that volitional state has a stronger influence on trial similarity than even

attempted forces. In other words, trials cluster more readily according to volitional state than to force in the CSIM space.
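The within- versus between-condition distance comparison underlying these statistics can be sketched as follows, assuming a precomputed pairwise distance matrix and per-trial condition labels; this illustrates the analysis logic rather than the exact implementation used here.

```python
import numpy as np
from scipy.stats import kruskal


def within_between_distances(dist, labels):
    """Compare within-condition and between-condition pairwise trial
    distances with a Kruskal-Wallis test.  dist is an (n_trials, n_trials)
    distance matrix; labels holds one condition label per trial (e.g.,
    volitional state, or force level within one volitional state)."""
    labels = np.asarray(labels)
    i, j = np.triu_indices(len(labels), k=1)     # unique trial pairs
    same = labels[i] == labels[j]
    within, between = dist[i, j][same], dist[i, j][~same]
    _, p_value = kruskal(within, between)
    return within, between, p_value
```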


Figure 3-4. Feature population activity patterns. A. Two-dimensional CSIM plots for a representative session from each participant-grasp pair. Each point represents the activity of the entire population of simultaneously-extracted features during a single trial. The distance between points indicates the degree of similarity between single trials. Clustering of similar symbols denotes similarity between trials with the same intended force level, while clustering of similar colors indicates similarity of trials within the same volitional state. In all panels, the distribution of distances for pairs of trials within the same VoS displayed a significantly smaller median than distances between trials in different VoS categories (Kruskal-Wallis p < 0.00001). Analyzing pair-wise distances for trials within and between force conditions produced different results across sessions. Asterisks in the top left corner denote sessions with significantly smaller within-force than across-force distances (*p < 0.05, **p < 0.01, and ***p < 0.0001) within each VoS, as indicated by the color of the asterisks. B. Distribution of pairwise distances within and between categories for VoS (upper left) and observed, imagined, and attempted force. Distances were normalized and pooled across all sessions shown. Triangles on the X-axis denote medians for each distribution. Overall, VoS had a stronger effect on trial similarity than even the attempted force condition.

In order to further quantify the degree to which volitional state and force were separable at the population level, VoS and force classification was performed using an

LDA classifier operating on 10-dimensional CSIM space projections. To determine the time resolution at which force- and VoS-related information could be decoded, the LDA was applied to 10-dimensional CSIM data of varying window lengths, beginning at the start of the go phase, as described in the Methods and as shown in Figure 3-5. Further, to determine how force and volitional state were represented in the neural space as a function of time, time-dependent classification accuracies were determined by applying the LDA to CSIM data with a 400 millisecond sliding window stepped in 100 millisecond increments (Figure 3-6). For both of these analyses, chance performance was estimated by applying the LDA to the neural data during 10,000 random shuffles of the trial labels.

The mean of the empirical chance distribution (averaged across participants) was

33.9%, with 95% of samples between 26.6 and 40.3%. Both Figures 3-5 and 3-6 show that, in agreement with the structure observed in the CSIM neural space in Figure 3-4,

VoS is decoded with greater accuracy than force for all participants and grasps.

However, force-related information also appears to be present, and is decoded above

chance throughout the go phase of attempted force production in all participants (but not always across other VoS conditions). In other words, volitional state appears to affect the degree to which force is represented at the level of the neural population.
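A simplified offline version of this decoding analysis is sketched below, assuming that the 10-dimensional CSIM coordinates and per-trial labels are already available. The scikit-learn LDA and cross-validation utilities stand in for the original MATLAB implementation, and a smaller number of label shuffles may be practical for quick checks.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score


def decode_with_chance(csim_coords, labels, n_shuffles=10000, seed=0):
    """10-fold cross-validated LDA accuracy in the (here 10-dimensional)
    CSIM space, plus an empirical chance distribution obtained by shuffling
    the trial labels.  n_shuffles=10000 matches the analysis described in
    the text but is slow to run."""
    rng = np.random.default_rng(seed)
    lda = LinearDiscriminantAnalysis()
    accuracy = cross_val_score(lda, csim_coords, labels, cv=10).mean()

    chance = np.empty(n_shuffles)
    for k in range(n_shuffles):
        chance[k] = cross_val_score(lda, csim_coords,
                                    rng.permutation(labels), cv=10).mean()

    # Upper boundary of the 95% empirical confidence interval of chance
    return accuracy, chance.mean(), np.percentile(chance, 97.5)
```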

Figure 3-5. Feature ensemble CSIM force and volitional state decoding accuracies as a function of window length. Offline decoding accuracies were computed using an LDA classifier implemented within a 10-dimensional CSIM representation of the neural feature data, using 10-fold cross-validation. 10-dimensional CSIM data of window lengths ranging from 100 ms to 3000 ms were passed to the LDA, as described in the Methods. Each window began at the start of the go phase and ended at the time point indicated on the x-axis. Each panel shows decoding performance for one participant-grasp pair; for participant T8, performances were session-averaged. The T8 power and pincer panels were averaged over 5 and 3 sessions, respectively. Standard deviations across T8 sessions are indicated by the dotted lines. The gray line indicates the upper boundary of the 95% empirical confidence interval of the chance distribution, estimated using 10,000 random shuffles of the trial labels.


Figure 3-6. Time-dependent feature ensemble CSIM force and volitional state decoding accuracies. Offline decoding accuracies were computed using an LDA classifier implemented within a 10-dimensional CSIM representation of the neural feature data, using 10-fold cross-validation. The LDA was applied to a 400 ms sliding window, stepped in 100 ms increments. Each panel shows decoding performance for a representative session from each participant-grasp pair, where time = 0 indicates the start of the active “go” phase of the trial. The gray line indicates the upper boundary of the 95% empirical confidence interval of the chance distribution, estimated using 10,000 random shuffles of the trial labels.

To further elucidate the extent to which individual volitional states were represented in the neural space, classification accuracies of individual volitional states were computed, averaged over the go phase of the behavioral task, and then compiled into confusion matrices. Figure 3-7 shows that, while all volitional states are classified at above-chance accuracy, observation and attempt appear to be classified with greater accuracy than imagery in multiple datasets. In particular, attempt is classified with high accuracy across all sessions, while observation is classified with high accuracy during Sessions 1, 7, and

11. During Sessions 9 and 10, observation and imagery are classified with similar accuracy and tend to be confused with each other more often than they are confused with attempt. These results suggest that attempt (and possibly observation) drives volitional state representation to a greater degree than imagery.
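For reference, the confusion matrices can be assembled from the pooled go-phase predictions with a short sketch such as the one below, using the scikit-learn confusion_matrix utility; the label names and row normalization are assumptions for illustration.

```python
import numpy as np
from sklearn.metrics import confusion_matrix


def vos_confusion(true_states, predicted_states, classes=("O", "I", "A")):
    """Row-normalized confusion matrix of true vs. predicted volitional
    states, pooled over go-phase time windows.  Label names are placeholders."""
    cm = confusion_matrix(true_states, predicted_states, labels=list(classes))
    row_sums = np.maximum(cm.sum(axis=1, keepdims=True), 1)  # avoid divide-by-zero
    return cm / row_sums
```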


Figure 3-7. Feature ensemble volitional state go-phase confusion matrices. Offline decoding accuracies were computed using an LDA classifier implemented within a 10-dimensional CSIM representation of the neural feature data, using 10-fold cross-validation, over a 400 ms sliding window stepped in 100 ms increments. Classification accuracies for individual volitional states were averaged over all time points within the go phase of the trial, resulting in confusion matrices of true vs. predicted (P) observed (O), imagined (I), and attempted (A) volitional states. Note that the attempted volitional state is classified with a high accuracy rate across all sessions, while observed trials are classified with high accuracy during Sessions 1, 7, and 11.

Discussion

Force representation persists in motor cortex after tetraplegia

The primary study goal was to characterize how volitional state affects neural representation of force in human motor cortex at the feature-ensemble level. We found that force-related activity persists in motor cortex, even after tetraplegia. This validates fMRI findings in individuals with spinal cord injury, who exhibited BOLD activity that modulated to imagined force (Cramer et al., 2005); as well as intracortical findings in an individual with tetraplegia (Downey et al., 2018). The present work expands upon these

studies by demonstrating force-related activity in multiple people with paralysis within single features (Figures 3-2 – 3-3) and the neural population (Figures 3-4 – 3-6).

The present work also shows quantitative, electrophysiological evidence that force modulation depends on volitional state. Briefly, attempted forces recruit more single features (Figure 3-3), separate more distinctly in the CSIM neural space (Figure

3-4), and yield higher classification accuracies (Figures 3-5 – 3-6) than observed and imagined forces. This validates previous kinematic (Porro et al., 1996; Grezes and

Decety, 2001; Filimon et al., 2007; Miller et al., 2010; Vargas-Irwin et al., 2018) and kinetic (Cramer et al., 2005; Murphy et al., 2016) volitional state studies, in which attempted actions were more readily decoded and yielded stronger BOLD activations than other volitional states.

Volitional state modulates neural activity to a greater extent than force

The present work demonstrates that volitional state is represented robustly within the neural space in individuals with tetraplegia (Figures 3-4 – 3-7). In particular, individual volitional states recruit an overlapping population of neural features (Figure 3-

3), but nonetheless result in unique activity patterns (Figure 3-4). This supports the theory of overlapping yet distinct neural populations corresponding to each volitional state (Vargas-Irwin et al., 2018). Additionally, while individual volitional states are represented at above-chance levels in all participants, the attempted state recruits the most neural features (Figure 3-3) and is classified with high accuracy (Figure 3-7), compared to the other volitional states. In contrast, the imagined volitional state was classified least accurately and was often confused with observation. These results appear to be consistent with previous intracortical volitional state investigations in humans, in which the attempted state was shown to recruit more single units and result

in higher firing rates, while the imagined state recruited the fewest single units in a human participant with the highest neural volitional state representation (Vargas-

Irwin et al., 2018). Taken together, the current study results suggest that volitional states are represented similarly in multiple individuals with tetraplegia, regardless of whether these states were used to emulate kinematic vs. kinetic tasks.

Here, however, force representation was weaker than volitional state representation. Specifically, while many features modulated primarily to volitional state, few exhibited force tuning; and most of these were also tuned to volitional state (Figure

3-3). Additionally, volitional state was discriminated more reliably from feature ensemble activity than even attempted forces (Figures 3-4 – 3-6). This relative deficiency of force information, which contrasts with previous work in nonhuman primates (Evarts, 1968;

Evarts et al., 1983; Kalaska et al., 1989; Carmena et al., 2003; Sergio and Kalaska,

2003) and able-bodied humans (Rearick et al., 2001; Cramer et al., 2005; Keisker et al.,

2009; Neely et al., 2013; Flint et al., 2014; Murphy et al., 2016; Wang et al., 2017), could have potentially resulted from several factors. Here, we discuss the effects of two such factors, which include the visual cues used to prompt the force task and the effects of deafferentation in individuals with tetraplegia.

Effects of visual cues on neural activity

During most experimental sessions presented in this study, visual cues were used to prompt the observation, imagery, and attempt of forces. Briefly, a researcher lifted one of six graspable objects corresponding to light, medium, and hard power and pincer grasping forces to indicate the preparatory phase; squeezed the object during the active “go” phase; and ultimately released the object at the beginning of the stop phase of the trial. In response to these visual cues, participants were instructed to

observe, imagine, or attempt emulating sufficient force to crush the object squeezed by the researcher.

As discussed, this visually-cued behavioral task yielded three separate yet overlapping populations of neural features corresponding to each volitional state.

Previous studies in non-human primates have identified a class of cells within motor cortex, termed “mirror neurons”, that exhibit changes in activity in response to the observed and executed volitional states (Dushanova and Donoghue, 2010; Vigneswaran et al., 2013; Kraskov et al., 2014; Mazurek et al., 2018). The presence of these neurons may account for the degree of overlap between feature populations tuned to observation, imagery, and attempt. Additional studies have demonstrated that the activity of neurons responding to observed and executed actions depends on characteristics of how the motor actions are observed. For example, neural activity has been shown to depend on whether the observed action is static versus dynamic

(Lanzilotto et al., 2019), whether it occurs within versus beyond the participant’s reach

(Umilta et al., 2001; Caggiano et al., 2009), and whether it is presented within an allocentric versus an egocentric reference frame (Caggiano et al., 2011; Maranesi et al.,

2017).

The visual cues in this study were static during the active “go” phase of the task, occurred within the participants’ extrapersonal space, and were presented using a third-person perspective. Since some of these characteristics are associated with weaker neural activity during action observation (Caggiano et al., 2009; Caggiano et al., 2011; Lanzilotto et al., 2019), the visual cues included in this study may partially account for the weak force representation demonstrated here. However, many of these same non-human primate studies suggest that static, third-person, and extrapersonal visual cues can still elicit robust neural responses to observed actions in able-bodied individuals. For example, one study demonstrated force encoding during the reach phase of an

observed motor task – before contact had been made with the object, such that the only force-related information available to the monkey was from previous knowledge of the object’s weight (Alaerts et al., 2012). In light of these findings, it is likely that robust force-related representation can be elicited even in response to relatively “weak” visual stimuli, such as the static squeezing of graspable objects. Furthermore, while separate yet overlapping populations of neurons respond to first-person versus third-person views of motor acts, the number of neurons within each of these populations is relatively similar, albeit larger for first-person views (Caggiano et al., 2011). Similar results were found for neurons recruited during the observation of motor acts within the peripersonal versus the extrapersonal space (Caggiano et al., 2009). Therefore, while it is still possible that the current study’s visual cues contributed to the weak force representation observed here, this contribution was likely minor.

Effects of deafferentation in individuals with tetraplegia

An additional explanation for the deficiency in neural force representation could include the effects of deafferentation-induced cortical reorganization in tetraplegia

(Green et al., 1999; Lacourse et al., 1999). Notably, these effects were absent in the previously cited non-human primate literature, which predicts robust representation of observed and executed motor actions in able-bodied individuals (Caggiano et al., 2009;

Caggiano et al., 2011; Alaerts et al., 2012). In individuals with spinal cord injury, force-related BOLD activity has been shown to overlap minimally with able-bodied force activity (Cramer et al., 2005), suggesting that altered cortical networks in tetraplegia could affect the extent of force representation in motor cortex. However, kinematic representation is well preserved in tetraplegia, as evidenced by high decoding accuracies in kinematic iBCIs (Hochberg et al., 2006; Kim et al., 2008; Kim et al., 2011;

Simeral et al., 2011; Hochberg et al., 2012; Collinger et al., 2013b; Wodlinger et al.,


2015; Ajiboye et al., 2017; Vargas-Irwin et al., 2018). Therefore, a discrepancy exists between kinematic and kinetic representation in tetraplegia.

This discrepancy may result from sensory feedback differences between the able-bodied and paralyzed states – and indeed, between kinematic and kinetic tasks attempted by individuals with tetraplegia. Without sensory feedback, motor performance is significantly compromised (Sainburg et al., 1993; Ghez et al., 1995; Gordon et al.,

1995); while reintroducing visual (Ghez et al., 1995; Rearick et al., 2001), auditory

(Wang et al., 2017), tactile (Tan et al., 2014), or proprioceptive (Ramos-Murguialday et al., 2012) feedback enhances motor-related neural modulation and BCI control.

Moreover, in an individual with tetraplegia who had intact sensation, motor cortical neurons modulated to both passive joint manipulation and attempted arm actions

(Shaikhouni et al., 2013). Taken together, these studies suggest that multiple sensory inputs influence motor cortical activity, and that diminished sensory feedback compromises modulation to motor parameters.

Of all sensory modalities, tactile and proprioceptive feedback are the most relevant to fine control of grasping forces (Tan et al., 2014; Tabot et al., 2015; Schiefer et al., 2018). This is because kinetic tasks are largely controlled through feedback via somatosensory pathways (Brochier et al., 1999; Carteron et al., 2016), which are profoundly altered in tetraplegia (Solstrand Dahlberg et al., 2018). Participants T8, T5, and T9 were deafferented and received no feedback regarding their own force output. In contrast, kinematic studies in people with paralysis were performed in the context of directional movements that rely on visual feedback, which remains intact after tetraplegia. After complete sensorimotor deafferentation, parameters relying on visual pathways could be preserved in motor cortex, while those relying on somatosensory feedback could diminish. This could explain the relatively weak force tuning observed here, as opposed to the robust force modulation seen in able-bodied literature.


Limitations of open-loop task

In this study, participants observed, imagined, and attempted discrete grasping forces, with the understanding that they could not execute these forces. This open-loop investigation is a key first step towards elucidating neural force representation; however, it came with inherent limitations regarding how participants engaged in the task. For example, volitional states may be challenging for some individuals with paralysis to differentiate cognitively, though previous work suggests this is generally achievable

(Vargas-Irwin et al., 2018). Additionally, since participants received no sensory feedback, the study findings depended on each participant’s understanding of the task and their ability to kinesthetically observe/imagine/attempt discrete force levels despite their chronic tetraplegia. Furthermore, while the use of visual cues enhanced this understanding by increasing the applicability of the force task to participants, the various objects and the hand shapes used to squeeze them may have been reflected in the neural data along with forces and volitional states. Finally, while participants were instructed to vary only the amount of force needed to crush these objects, there was no way to measure their intended kinematic and kinetic outputs, which could have differed somewhat between force levels.

To address and mitigate these challenges, participants completed a Kinesthetic

Force Imagery Questionnaire (KFIQ), adapted from the Kinesthetic and Visual Imagery

Questionnaire (Malouin et al., 2007), in order to qualitatively assess the degree to which they emulated various force levels during each volitional state. In addition, participants reported in their own words how they differentiated between observed, imagined, and attempted forces. For example, participant T8 reported conceiving of a virtual arm performing the task during force imagery, whereas he emulated forces with his own arm

during the attempt state. In all participants, KFIQ scores rose with volitional state in concordance with the neural data, as shown in Supplemental Figure 3-S5.

Furthermore, prior to completing the sessions presented in this main text, participant T8 completed a series of additional sessions, summarized in Supplemental

Table 3-S2, in which he was asked to observe, imagine, and attempt producing forces both when visual cues were included and when visual cues were omitted. Supplemental

Figures 3-S1 and 3-S2 show that the visual cues do introduce extraneous preparatory and stop phase neural activity within force trials, which is in agreement with previous non-human primate studies of neural activity occurring immediately before and after observed force tasks (Fabbri-Destro and Rizzolatti, 2008; Mazurek et al., 2018).

However, Supplemental Figures 3-S2 and 3-S3 suggest that the visual cues do not influence neural activity during the active “go” phase of the trial.

Finally, participants performed kinematic control trials where they embodied finger wiggling movements. Supplemental Figure 3-S4 shows that force trials were more correlated with each other than with kinematic trials, especially during attempted forces.

Additionally, the participants’ KFIQ scores, which reflect embodiment of forces as opposed to kinematics, were often lower for finger wiggling than for force (Supplemental

Figure 3-S5). Though only a closed-loop study could guarantee that participants were emulating forces, the KFIQ scores, and their correlation with the neural data, indicate that this was likely the case.

Force representation differences across participants

Relative to other participants, T8’s motor cortical activity exhibited the most force-tuned features, the clearest separation between attempted forces in CSIM space, and the highest force decoding accuracy. In contrast, T5’s force modulation was more robust during pincer grasping (Session 11) than during power grasping (Session 10), and T9’s

force modulation appeared to be relatively weak overall. The study’s open-loop nature partially explains these discrepancies: T5 had difficulty distinguishing light, medium, and hard forces during Session 10, but improved during Session 11. Additionally, differences in pathology may account for T9’s relatively weak force representation. While T8 and T5 were otherwise neurologically healthy participants with cervical spinal cord injury, T9 had advanced ALS, which may have comparatively affected his motor-related cortical activity. Future work with a greater number of participants, with different paralysis etiologies, will be needed to more comprehensively investigate these inter-participant differences.

Implications for iBCI development

A long-term motivation behind this study is to investigate the feasibility of incorporating force control into closed-loop iBCIs. This work validates the presence of kinetic information in motor cortex and shows that volitional state influences force modulation. Therefore, while decoding intended forces in real time appears feasible, future closed-loop force iBCIs will need to take non-motor parameters into account. For example, the data suggest that iBCI force decoders should be trained on neural activity generated during attempted (as opposed to imagined) force production.

More broadly, accurate force decoding will likely depend on increasing neural force representation in individuals with tetraplegia. Since closed-loop decoding performance often exceeds open-loop performance (Chase et al., 2009; Koyama et al.,

2010; Jarosiewicz et al., 2013), simply providing visual feedback about intended forces could improve neural force representation. In addition, somatosensory feedback has been shown to elicit motor cortical activity in a tetraplegic individual with intact sensation

(Shaikhouni et al., 2013). This supports the possibility of enhancing force-related neural representation with somatosensory feedback. In completely deafferented individuals,

graded tactile percepts have been evoked on the hand via microstimulation of somatosensory cortex (Tabot et al., 2015; Flesher et al., 2016), indicating that it is possible to provide closed-loop sensory feedback about intended grasping forces in this population. More work is needed to determine the extent to which sensory feedback affects motor cortical force representation, and how this representation translates into closed-loop iBCI force decoding performance. Nevertheless, this study shows promise for informing future closed-loop iBCI design and provides further insight into the complexity of motor cortex.

Methods

Study permissions and participants

Study procedures were approved by the US Food and Drug Administration

(Investigational Device Exemption #G090003) and the Institutional Review Boards of

University Hospitals Case Medical Center (protocol #04-12-17), Massachusetts General

Hospital (2011P001036), the Providence VA Medical Center (2011-009), Brown

University (0809992560), and Stanford University (protocol #20804). Participants were enrolled in the BrainGate2 Pilot Clinical Trial (ClinicalTrials.gov number NCT00912041).

All research was performed in accordance with relevant guidelines and regulations.

Informed consent, including consent to publish, was obtained from the participants prior to their enrollment in the study.

This study includes data from three participants with chronic tetraplegia. All participants were implanted in the hand and arm area on the precentral gyrus (Yousry et al., 1997) of dominant motor cortex with two 96-channel microelectrode intracortical arrays (1.5 mm electrode length, Blackrock Microsystems, Salt Lake City, UT).


Participant T8 was a 53-year-old right-handed male with C4-level AIS-A spinal cord injury that occurred 8 years prior to implant; T5 was a 63-year-old right-handed male with C4-level AIS-C spinal cord injury; and T9 was a 52-year-old right-handed male with tetraplegia due to ALS. Surgical details can be found in (Gilja et al., 2015; Ajiboye et al.,

2017; Nuyujukian et al., 2018) for each respective participant.

Neural recordings and feature extraction

In all participants, broad-band neural recordings were sampled at 30 kHz and pre-processed in Simulink using the xPC real-time operating system (The Mathworks

Inc., Natick, MA, US; RRID: SCR_014744). From each pre-processed channel, extraction of two neural features from non-overlapping 20 millisecond time bins was performed in real time, as illustrated in Figure 3-1A. These included 192 unsorted threshold crossing (TC) and 192 spike band power (SBP) features. Here, we evaluated the neural space by characterizing TC and SBP features as opposed to sorted single units, in order for our results to be directly applicable to iBCI systems, and because iBCI performance has been shown to be comparable when derived from thresholded data as opposed to spike-sorted data (Fraser et al., 2009; Chestek et al., 2011; Christie et al.,

2015; Trautmann et al., 2019). Supplemental Figures 3-S8 – 3-S11 illustrate a visual comparison of neural activity from threshold crossings and sorted single units extracted from identical channels. Unless otherwise stated, all offline analyses were performed using MATLAB software (The Mathworks Inc., Natick, MA, US; RRID: SCR_001622).

Neural recordings, multiunit feature extraction methods, and single unit sorting methods for this study are described in more detail within the Supplemental Information.


Behavioral task

During data collection, all participants observed, imagined, and attempted producing three discrete forces (light < medium < hard) with the dominant hand, using either a power or a pincer grasp, over multiple data collection sessions. T8 completed eight sessions between trial days 511–963 relative to the date of his microelectrode array implant surgery; T9 completed one session on trial day 736; and T5 completed two sessions on trial days 365 and 396. Supplemental Table 3-S1 lists all relevant sessions and their associated task parameters.

Each session consisted of multiple 4-minute data collection blocks, as illustrated in Figure 3-1B. During each block, one volitional state (observe, imagine, or attempt) and one hand grasp type (power or pincer) were presented. Blocks were arranged in a pseudorandom order, in which volitional states were assigned randomly to each set of three blocks chronologically. This ensured an approximately equal number of blocks per volitional state, distributed evenly across the entire session. All blocks consisted of approximately 20 trials. Participants were encouraged to take an optional break between experimental blocks.

During each trial, participants were prompted to use kinesthetic imagery

(Stevens, 2005; Mizuguchi et al., 2017) to internally emulate a) a “light” squeezing force, b) “medium” force, c) “hard” force, d) a finger wiggling movement, or e) rest, with the dominant hand. Finger wiggling trials served as a kinematic control for the force trials.

Trials were presented in a pseudorandom order by repeatedly cycling through a complete, randomized set of five trial types until the end of the block.
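This pseudorandom ordering can be illustrated with the short sketch below; the trial-type names and block length are placeholders, and the actual task software differed in implementation.

```python
import random

TRIAL_TYPES = ["light", "medium", "hard", "wiggle", "rest"]   # placeholder names


def block_trial_sequence(n_trials=20, seed=None):
    """Build one block's trial order by repeatedly cycling through a freshly
    shuffled copy of the five trial types until n_trials is reached."""
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_trials:
        cycle = TRIAL_TYPES[:]
        rng.shuffle(cycle)
        sequence.extend(cycle)
    return sequence[:n_trials]
```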

Participants received audio cues indicating which force to produce (prep phase), when to produce it (go phase), and when to relax (stop phase). Participants also observed a researcher squeezing one of six graspable objects corresponding to light,

medium, and hard forces (no object was squeezed during “rest” trials). These objects were grouped into two sets of three, corresponding to forces embodied using a power grasp (sponge = light, stress ball = medium, tennis ball = hard) and a pincer grasp (cotton ball = light, nasal aspirator tip = medium, eraser = hard), as shown in Figure 3-1B. At the start of each trial, the researcher presented an object indicating the force level to be observed, imagined, or attempted. At the end of the prep phase (at the beginning of the go phase), the researcher squeezed the object. The prep phase lasted between 2.7 and

3.3 seconds. The variability in the prep phase time reduced confounding effects from anticipatory activity. The researcher squeezed the object during the go phase (3–

5 seconds), and then released the object at the beginning of the stop phase (3–

5 seconds). These visual cues were presented in a third-person, frontal perspective, in which the researcher faced the participants while squeezing the objects. Visual cues were presented during the majority of force trials; however, during two experimental sessions, visual cues were omitted during some trials to determine whether force-related information resulted from the presence of objects.

Supplemental Figures 3-S1 and 3-S2 visually compare neural feature activity recorded during these trials with feature activity recorded when both audio and visual cues were presented.

During force observation blocks, participants simply observed these actions without engaging in any activity. During force imagery and attempt blocks, participants imagined generating or attempted to generate sufficient force to “crush” the object squeezed by the researcher. For force imagery blocks, participants were instructed to mentally rehearse the forces needed to crush the various objects, without actually attempting to generate these forces.

To assess the extent to which participants embodied discrete forces during each volitional state, participants completed a Kinesthetic Force Imagery Questionnaire


(KFIQ), adapted from the Kinesthetic and Visual Imagery Questionnaire (KVIQ) (Malouin et al., 2007) after each experimental block. Briefly, participants rated on a scale of 0–10 how vividly they kinesthetically emulated various force levels during light, medium, hard, and wiggling force trials, where 0 = no force embodiment, and 10 = embodiment as intense as able-bodied force execution. The KFIQ is described in more detail within the

Supplemental Information.

Effects of audio vs. audiovisual cues

During all experimental sessions presented in Figures 3-2 – 3-7, participants received audio cues indicating which forces to produce and when to embody these forces. Participants also received visual cues during most sessions, in which they observed a researcher squeezing objects corresponding to the forces they were asked to observe, imagine, or attempt to produce. These visual cues were presented in order to encourage participants to emulate light, medium, and hard forces within the framework of a real-world setting. The goal was to elicit appropriate neural responses to the force task by providing participants with meaningful visual information about which forces they were emulating. More specifically, the visual cues were implemented to make the physical concept of light, medium, and hard forces seem less abstract to the participants after several years of deafferentation.

To determine the extent of any resulting confounds of object size, hand grasp shape, and other factors within the neuronal responses to the force task, additional supplemental data was collected from participant T8, who was cued to observe, imagine, or attempt producing forces using either only audio cues or both audio and visual cues.

These additional data, which are summarized in Supplemental Table 3-S2, are solely presented within the Supplemental Information and do not appear in the main text.

Within each supplemental session, correlation coefficients were computed between pairs

of trial-averaged feature time courses that resulted from audio-only cues (a) and audiovisual cues (av). These correlations were computed within each volitional state, and within each phase of the task (prep, go, stop), in order to discern whether neuronal responses to the visual cues varied by volitional state or trial phase. For each individual session, this correlational analysis was performed on 120 neural features with the highest signal-to-noise ratio. Additional methods can be found in the Supplemental

Materials.

Assessment of kinetic vs. kinematic activity

Prior to determining the effects of volitional state on neural force representation in the main dataset, we performed an initial analysis to determine whether force-related modulation of neural feature activity was distinct from modulation to kinematic activity.

Briefly, correlation coefficients were computed between pairs of trial-averaged go-phase feature modulation time courses during force (light, medium, hard) and finger-wiggling trials, within each volitional state. For each participant, this analysis was performed on

120 neural features with the highest signal to noise ratio (SNR), as described and presented in the Supplemental Information.
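A sketch of this correlational analysis is shown below. The SNR definition used to rank features (go-phase modulation range divided by rest-trial standard deviation) is an illustrative assumption; the exact SNR computation is given in the Supplemental Information.

```python
import numpy as np


def top_snr_features(go_activity, rest_activity, n_keep=120):
    """Rank features by a simple SNR (trial-averaged go-phase modulation
    range over rest-trial standard deviation) and return the indices of the
    n_keep highest.  go_activity and rest_activity have shape
    (n_trials, n_features, n_timebins)."""
    mean_trace = go_activity.mean(axis=0)                       # features x time
    signal = mean_trace.max(axis=1) - mean_trace.min(axis=1)
    noise = rest_activity.std(axis=(0, 2)) + 1e-12
    return np.argsort(signal / noise)[::-1][:n_keep]


def trace_correlation(trace_a, trace_b):
    """Pearson correlation between two trial-averaged feature time courses
    (e.g., a force trial average vs. a finger-wiggling trial average)."""
    return np.corrcoef(trace_a, trace_b)[0, 1]
```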

Characterization of individual features

The first goal of this study was to characterize the tuning properties of individual features. The three volitional states (observe, imagine, attempt), crossed with the three discrete forces (light, medium, hard), yielded nine conditions of interest. For each condition, each neural feature’s peristimulus time histogram (PSTH) was computed by averaging the neural activity over go-phase-aligned trials, which were temporally smoothed with a Gaussian kernel (100-ms standard deviation) to aid in visualization.
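For illustration, a minimal PSTH computation consistent with this description might look as follows; the 20 ms bin width matches the feature extraction described above, and the smoothing kernel width is a parameter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

BIN_MS = 20   # feature bin width (ms), as in the feature extraction above


def psth(aligned_trials, smooth_ms=100):
    """Peristimulus time histogram for one feature and one condition.
    aligned_trials has shape (n_trials, n_timebins) and is go-phase aligned;
    each trial is smoothed with a Gaussian kernel (smooth_ms standard
    deviation) before averaging."""
    sigma_bins = smooth_ms / BIN_MS
    smoothed = gaussian_filter1d(aligned_trials.astype(float),
                                 sigma=sigma_bins, axis=1)
    return smoothed.mean(axis=0)
```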


Additionally, the effect of discrete force levels and volitional states on the activity of individual features was assessed. This analysis consisted of feature pre-processing performed in MATLAB, as well as statistical analysis implemented within the WRS2 library in the R programming language (Wilcox, 2017). Features were pre-processed to calculate each neural feature’s mean deviation from baseline during the go phase of each trial. For each feature, baseline activity was computed by averaging neural activity across multiple “rest” trials. Next, trial-averaged baseline activity was subtracted from neural activity that occurred during individual experimental trials. Finally, the resulting activity traces were averaged across multiple time points spanning each trial’s go phase, which yielded a collection of go-phase neural deviations from baseline.
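
A minimal sketch of this pre-processing step is shown below, written in Python rather than the MATLAB used in the study. The function name and array shapes are illustrative assumptions; the logic follows the baseline subtraction and go-phase averaging described above.

```python
import numpy as np

def go_phase_deviations(task_trials, rest_trials, go_mask):
    """Mean go-phase deviation from baseline for one neural feature.

    task_trials: (n_trials, n_bins) feature activity on experimental trials
    rest_trials: (n_rest_trials, n_bins) feature activity on "rest" trials
    go_mask:     boolean (n_bins,) marking the go-phase time bins
    Returns one baseline-subtracted go-phase value per trial.
    """
    baseline = rest_trials.mean(axis=0)        # trial-averaged rest activity
    centered = task_trials - baseline          # subtract baseline trace
    return centered[:, go_mask].mean(axis=1)   # average over go-phase bins
```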

In R, it was determined that the distribution of go-phase neural deviations passed normality tests (analysis of Q-Q plots and Shapiro-Wilk test, p < 0.05) but was heteroskedastic (Levene’s test, p < 0.05). Thus, a robust 2-way Welch ANOVA on the untrimmed means was implemented to evaluate the main effects (force and VoS), as well as their interaction (p < 0.05). Features that demonstrated a significant interaction between force and VoS were further separated into individual VoS conditions (observe, imagine, attempt). Within each VoS condition, a 1-way Welch ANOVA was implemented on go-phase neural deviations to find features that had significant force tuning (p < 0.05).

All p-values were corrected for multiple comparisons using the Benjamini–Hochberg procedure (Benjamini and Hochberg, 1995). Taken together, the results of the 1-way and 2-way Welch ANOVA constituted the tuning properties for each individual feature during a given experimental session.
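
The Benjamini–Hochberg correction can be written compactly, as sketched below. This is an illustrative numpy stand-in for the correction applied in R (which offers it via p.adjust with method = "BH", although the text does not state which implementation was used), and the function name is hypothetical.

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (false discovery rate control)."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)                                # ascending p-values
    ranked = p[order] * m / np.arange(1, m + 1)          # raw BH adjustment
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]   # enforce monotonicity
    adjusted = np.empty_like(ranked)
    adjusted[order] = np.clip(ranked, 0, 1)              # back to input order
    return adjusted

# Example: features with adjusted p < 0.05 would be considered tuned
pvals = [0.001, 0.02, 0.03, 0.04, 0.20]
print(benjamini_hochberg(pvals) < 0.05)
```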

Neural population analysis and decoding

In addition to characterizing individual features, this work sought to elucidate how much volitional state affects the neural representation of force at the population level. To this end, similarity analysis was used to examine the intrinsic relationship between the activity patterns observed across conditions (Shepard and Chipman, 1970; Kriegeskorte et al., 2008; Vargas-Irwin et al., 2015). This approach is based on pair-wise comparisons between neural activity patterns and makes no a priori assumptions about the structure of the data. T-distributed Stochastic Neighbor Embedding (t-SNE) (van der Maaten and Hinton, 2008) was used to project these pair-wise measurements into a low-dimensional representation, which facilitated statistical analysis as well as data visualization while still capturing the relationship between individual trials. This dimensionality reduction technique is well suited to similarity analysis because it explicitly attempts to preserve nearest-neighbor relationships in the data by minimizing the KL-divergence between local neighborhood probability functions in the high- and low-dimensional spaces. In other words, it preserves relationships between data points that are close together in the high-dimensional space, which makes it appropriate for neural datasets that lie on or close to a nonlinear manifold. In the low-dimensional "similarity space", a neural activity pattern is represented by a single point, and the distance between points denotes the degree of similarity between the activity patterns they represent. Two identical activity patterns correspond to the same point in this space; the more different they are, the farther apart they lie in the similarity space projection.

We have previously used this method to compare spike trains using point process distance metrics (Vargas-Irwin et al., 2015). In the present study, the algorithm was adjusted to operate on continuous (binned) data, in order to include threshold crossing (TC) and spike band power (SBP) extracted features. This modified technique is referred to as continuous similarity (CSIM) analysis. Similarity between feature vectors was evaluated as one minus the cosine of the angle between them. Note that this metric is not affected by the magnitude of the vectors; this property made the analysis more robust to non-stationarities in the data. All features were binned using non-overlapping 20-ms windows, and then smoothed with a 20-bin (400 ms) Gaussian kernel. Two-dimensional CSIM projections were derived over the entire duration of the go phase.
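
The core of this analysis can be sketched as follows. This is an illustrative Python approximation of the CSIM pipeline rather than the released SSIMS toolbox code; the function names, the choice of sklearn's t-SNE implementation, the perplexity value, and the treatment of the 20-bin kernel as the Gaussian standard deviation are all assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from sklearn.manifold import TSNE
from sklearn.metrics import pairwise_distances

def smooth_features(binned, kernel_bins=20):
    """Smooth binned (20-ms) feature traces along the time axis."""
    return gaussian_filter1d(binned, sigma=kernel_bins, axis=-1)

def csim_projection(trial_vectors, n_components=2, perplexity=20, seed=0):
    """Project pairwise cosine dissimilarities into a low-dimensional space.

    trial_vectors: (n_trials, n_values) array; each row concatenates the
    smoothed go-phase activity of all features for one trial.
    """
    # 1 - cosine of the angle between trial vectors (magnitude-invariant)
    dissimilarity = pairwise_distances(trial_vectors, metric="cosine")
    tsne = TSNE(n_components=n_components, metric="precomputed",
                init="random", perplexity=perplexity, random_state=seed)
    return tsne.fit_transform(dissimilarity)
```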

After CSIM was applied, all trials were labeled post-hoc with their associated forces and volitional states, in order to visualize how these experimental parameters affected the degree of similarity between neural activity patterns. Here, the goal was to visualize how force and volitional state affected the activity of the neural population, as represented by the low-dimensional CSIM manifold.

The force- and VoS-related information content within this manifold was quantified in two ways. First, force- and VoS-related clustering of trials was evaluated by comparing the distributions of CSIM distances within and between conditions using a Kruskal-Wallis test. The neural space was identified as representing volitional state when the median CSIM distances between trials with the same volitional states were smaller than the distances between trials with different volitional states. Similarly, when the median CSIM distances between trials with the same force were smaller than the distances between trials with different forces, the neural space was identified as representing force.
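
One way to implement this within-condition versus between-condition comparison is sketched below. It is a simplified illustration under assumed variable and function names, not the authors' analysis code.

```python
import numpy as np
from scipy.stats import kruskal
from scipy.spatial.distance import pdist, squareform

def within_vs_between(points, labels, alpha=0.05):
    """Compare CSIM distances within vs. between conditions for one label set.

    points: (n_trials, n_dims) low-dimensional CSIM coordinates
    labels: (n_trials,) condition label per trial (e.g., volitional state)
    """
    labels = np.asarray(labels)
    dist = squareform(pdist(points))            # pairwise trial distances
    same = labels[:, None] == labels[None, :]
    iu = np.triu_indices(len(labels), k=1)      # each trial pair counted once
    within, between = dist[iu][same[iu]], dist[iu][~same[iu]]
    stat, p = kruskal(within, between)          # nonparametric comparison
    # "represented" if same-condition trials are closer together
    return (np.median(within) < np.median(between)) and (p < alpha), p
```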

Second, CSIM was also used to generate discrete force and volitional state classifiers. TC and SBP features were projected onto a 10-dimensional CSIM representation using data from a 400-ms sliding window, stepped in 100-ms increments.

For each window, an LDA classifier was used to generate time-dependent force and volitional state decoding accuracies, using 10-fold cross-validation. Additionally, the effect of the analysis window size on classification accuracy was examined, starting with a 100-ms window following the go cue and increasing the window length in 100-ms increments up to three seconds. For both sets of analyses, force decoding accuracies were determined within each volitional state, in order to determine how volitional state affected the degree to which force-related information was represented within the neural space. These decoding accuracies were compared to the empirical chance distribution of decoding accuracies, estimated using 10,000 random shuffles of trial labels.
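
A minimal sketch of the sliding-window decoding and shuffle-based chance estimate is given below. Function and variable names are hypothetical, and the sketch assumes the 10-dimensional CSIM projections have already been computed for each window; it is not the study's implementation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def windowed_decoding(csim_by_window, labels, n_shuffles=1000, seed=0):
    """Time-resolved LDA decoding plus an empirical chance distribution.

    csim_by_window: list of (n_trials, 10) CSIM projections, one per 400-ms
    window stepped in 100-ms increments.
    labels: (n_trials,) force or volitional state label per trial.
    Note: the study used 10,000 label shuffles for the chance distribution.
    """
    rng = np.random.default_rng(seed)
    accuracy = np.array([
        cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=10).mean()
        for X in csim_by_window])                # 10-fold CV per time window

    X0 = csim_by_window[0]                       # chance: decode shuffled labels
    chance = np.array([
        cross_val_score(LinearDiscriminantAnalysis(), X0,
                        rng.permutation(labels), cv=10).mean()
        for _ in range(n_shuffles)])
    return accuracy, chance
```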

Data availability

The data can be made available upon reasonable request by contacting the lead or senior authors.

Code availability

Source code for the CSIM algorithm is publicly available at https://donoghuelab.github.io/SSIMS-Analysis-Toolbox/.

Acknowledgements

The authors would like to thank the BrainGate participants and their families for their contributions to this research. They also thank Glynis Schumacher for her artistic expertise during the creation of Fig. 1. Support was provided by NICHD-NCMRR (R01HD077220, F30HD090890); NIDCD (R01DC009899, R01DC014034); NIH NINDS (UH2NS095548, 5U01NS098968-02); NIH Institutional Training Grants (5 TL1 TR 441-7, 5T32EB004314-15); Department of Veterans Affairs Office of Research and Development Rehabilitation R&D Service (N2864C, N9228C, A2295R, B6453R, A6779I, A2654R); Howard Hughes Medical Institute; MGH-Deane Institute; Katie Samson Foundation; the Executive Committee on Research (ECOR) of Massachusetts General Hospital; the Wu Tsai Institute; the ALS Association Milton Safenowitz Postdoctoral Fellowship; the A.P. Giannini Foundation Postdoctoral Fellowship; the Wu Tsai Neurosciences Institute Interdisciplinary Scholars Fellowship; the Larry and Pamela Garlick Foundation; the Simons Foundation Collaboration on the Global Brain (543045); and Samuel and Betsy Reeves. The contents do not represent the views of the Dept. of Veterans Affairs or the US Government. CAUTION: Investigational Device. Limited by Federal Law to Investigational Use. Since the dates of their initial contributions to the study, B. Murphy, B. Franco, and J. Saab are no longer affiliated with Case Western and Brown Universities, respectively. Additionally, B. Walter has moved to the Department of Neurology & Neurosurgery at the Cleveland Clinic in Cleveland, OH.

Author Contributions

A.R. conceived the study, performed the experiments, analyzed the data, wrote the main manuscript text, and compiled the Supplemental Information. C.V. designed and implemented the CSIM algorithm, assisted in writing the CSIM portions of the manuscript, and prepared Figs. 4–7. F.W. assisted in conceiving the study and performing data analysis. J.A. and D.C.C. implemented normality, heteroskedasticity, and 2-way Welch-ANOVA analysis on the neural feature data and assisted in preparing Fig. 3. F.W., B.M., W.M., P.R., B.F., J.S., and S.S. assisted in data collection. K.S., J.H., L.H., R.K. and A.B.A. supervised data collection. J.M., J.S., B.W., and J.H. planned and performed the microelectrode array surgeries. R.K., A.B.A., and S.C. supervised and guided the project. L.H. is the sponsor-investigator of the BrainGate2 Pilot trial and oversaw the clinical trial, along with B.W. and J.H. at their respective sites. All authors reviewed and edited the manuscript.

Competing Interests

The MGH Translational Research Center has a clinical research support agreement with , Paradromics, and Synchron, for which L.H. provides consultative input. K.S. is a consultant to Neuralink Corp. and on the Scientific Advisory Boards of CTRL-Labs, Inc., MIND-X Inc., Inscopix Inc., and Heal, Inc. J.H. is a consultant for Neuralink, Proteus Biomedical and Boston Scientific, and serves on the Medical Advisory Board of Enspire DBS. This work was independent of and in no way influenced or supported by these commercial entities. All other authors declare no competing interests.

Supplemental Materials

Supplemental Methods

Pre-processing of Neural Data

In all participants, each intracortical microelectrode array was attached to a percutaneous pedestal connector on the head. Patient cables connected the pedestals to amplifiers (Blackrock Microsystems, Salt Lake City, UT) that bandpass filtered (0.3 Hz – 7.5 kHz) and digitized (30 kHz) the neural signals from each channel on the microelectrode array. These digitized signals were pre-processed in Simulink using the xPC real-time operating system (The Mathworks Inc., Natick, MA, US; RRID: SCR_014744). Each channel was bandpass filtered (250-5000 Hz), common average referenced (CAR), and down-sampled to 15 kHz in real time. CAR was implemented by selecting 60 channels from each microelectrode array that exhibited the lowest variance, and subsequently averaging these channels to yield an array-specific common average reference. This reference signal was subtracted from all individual channels on each of the arrays.
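
The CAR step described above amounts to the following. This is an illustrative offline Python rendering (the study performed this step in real time in Simulink/xPC), and the function name and array layout are assumptions.

```python
import numpy as np

def common_average_reference(signals, n_ref=60):
    """Array-specific common average referencing.

    signals: (n_channels, n_samples) band-pass-filtered data from one
    96-channel array. The reference is the mean of the n_ref channels
    with the lowest variance, subtracted from every channel.
    """
    variances = signals.var(axis=1)
    ref_idx = np.argsort(variances)[:n_ref]     # 60 lowest-variance channels
    reference = signals[ref_idx].mean(axis=0)   # common average reference
    return signals - reference                  # subtract from all channels
```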

Extraction of Neural Features

From each filtered, referenced channel, two multiunit neural features were extracted in real time. These included unsorted threshold crossing (TC) and spike band power (SBP) features, from non-overlapping 20 millisecond time bins, as illustrated in Figure 1A. TC features were defined as the number of times the voltage on each channel crossed a predefined noise threshold within each time bin (-4.5 x RMS for T8; -4.5 x RMS for T5; -3.5 x RMS for T9). These thresholds were hand selected to accept the most action potential signals and to reject the most non-neural noise (Fraser et al., 2009; Christie et al., 2015). RMS was calculated from one minute of neural data recorded at the beginning of each experimental session. SBP features were defined as the root mean square of the signal in the spike band (250-5000 Hz) of each channel, within each time bin. Prior to offline analysis, these features were normalized by subtracting the mean of each feature within each recording block, in order to minimize non-stationarities in the data.
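
The two multiunit features can be sketched as follows. This is a simplified offline illustration, not the real-time xPC implementation; the function name, array shapes, and handling of bin edges are assumptions, and the per-participant threshold multipliers from the text would be supplied via thresh_mult.

```python
import numpy as np

def extract_features(filtered, fs=15000, bin_s=0.02, thresh_mult=-4.5, rms_ref=None):
    """Threshold crossing (TC) counts and spike band power (SBP) per bin.

    filtered: (n_channels, n_samples) band-pass-filtered (250-5000 Hz),
    CAR-referenced data. rms_ref: per-channel RMS from a reference recording
    (estimated from the data itself if not provided).
    """
    if rms_ref is None:
        rms_ref = np.sqrt((filtered ** 2).mean(axis=1))
    thresholds = thresh_mult * rms_ref                  # e.g., -4.5 x RMS
    spb = int(round(bin_s * fs))                        # samples per 20-ms bin
    n_bins = filtered.shape[1] // spb
    x = filtered[:, :n_bins * spb].reshape(filtered.shape[0], n_bins, spb)

    # TC: count downward threshold crossings within each bin
    below = x < thresholds[:, None, None]
    crossings = below[:, :, 1:] & ~below[:, :, :-1]
    tc = crossings.sum(axis=2)

    # SBP: root mean square of the spike-band signal within each bin
    sbp = np.sqrt((x ** 2).mean(axis=2))
    return tc, sbp
```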

Extraction of Sorted Single Units

In addition to the multiunit features extracted and analyzed in the main text, sorted single units were extracted offline from each channel of neural activity for participant T8. Spike sorting was performed using the Wave_clus algorithm in MATLAB (Quiroga et al., 2004). Here, noise thresholds for single unit spike detection were equivalent to those implemented for TC multiunit features on corresponding recording channels. Next, the effect of discrete force levels and volitional states on the activity of single units was assessed by performing robust 2-way Welch ANOVA analysis on single unit activity. This analysis was implemented as described in the main text Methods section.

Kinesthetic Force Imagery Questionnaire (KFIQ)

Participants completed the KFIQ, adapted from the Kinesthetic and Visual Imagery Questionnaire (KVIQ) (Malouin et al., 2007), after each experimental block. The KFIQ was implemented to assess how intensely participants embodied the act of producing grasping forces during each volitional state, and during each force level. (During Session 11, participant T5 completed the KFIQ once at the end of the session rather than after every block, in order to minimize cognitive fatigue.) For each of the three volitional states, participants rated on a scale of 0-10 how vividly they were able to kinesthetically emulate various force levels, where a score of zero indicated no embodiment, and a score of ten indicated embodiment as intense as able-bodied execution of the force. Participants were specifically instructed to rate how intensely they felt themselves generating forces with the dominant hand, rather than rating how intensely they visualized producing them, in order to best capture the kinesthetic aspect of the task. KFIQ scores during the observed, imagined, and attempted force trials were then correlated with neural activity during the three volitional states. Additionally, participants were asked to rate the degree to which they kinesthetically emulated force production during finger wiggling trials, which, in theory, required minimal kinetic and maximal kinematic output.

Assessment of Neural Activity due to Audio vs. Audiovisual Cues

As described in the main text, force trials were prompted with both audio and visual cues during most sessions. Specifically, a researcher squeezed one of six graspable objects corresponding to light, medium, and hard forces produced via power and pincer grasping. These visual cues, which gave participants real-world context for the forces they were instructed to emulate, nonetheless posed a risk of introducing extraneous activity in the neural force data due to hand aperture and the presence of graspable objects of various sizes and shapes. Therefore, five supplemental datasets, plus one dataset from the main text, were analyzed to determine the extent to which neural modulation was influenced by the presence or absence of visual cues. During these sessions, participants observed, imagined, or attempted forces with and without visual cues (see Supplemental Table 3-S2). Trials prompted with both audio and visual cues are referred to as audiovisual (av) trials, while audio-only (a) trials were prompted solely with audio cues.

In this analysis, correlations were computed between neural features during a vs. av trials. Here, the analysis was performed on 120 features with the highest signal to noise ratio (SNR), which were isolated via the following pre-processing steps. Neural activity resulting from the three volitional states (observe, imagine, attempt) and the three discrete forces (light, medium, hard) resulted in nine conditions of interest. For each condition, each neural feature’s SNR was computed by averaging the neural activity over go-phase-aligned trials, dividing this mean activity by the standard deviation of the go-phase-aligned activity, and subsequently averaging this result over all time points of the trial. This resulted in nine SNR values per feature, which were averaged to obtain one SNR value per feature spanning all forces and volitional states.
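
The SNR-based feature selection can be sketched as follows. The function name, the dictionary-of-conditions data layout, and the small constant added to avoid division by zero are illustrative assumptions; the logic follows the per-condition SNR averaging described above.

```python
import numpy as np

def top_snr_features(data_by_condition, n_keep=120):
    """Select the features with the highest signal-to-noise ratio (SNR).

    data_by_condition: dict mapping each of the nine force x volitional
    state conditions to an array of shape (n_trials, n_features, n_bins).
    """
    snr_per_condition = []
    for trials in data_by_condition.values():
        mean_t = trials.mean(axis=0)                    # trial-averaged activity
        std_t = trials.std(axis=0)                      # across-trial variability
        snr = (mean_t / (std_t + 1e-12)).mean(axis=1)   # average over time points
        snr_per_condition.append(snr)
    overall = np.mean(snr_per_condition, axis=0)        # one SNR per feature
    return np.argsort(overall)[::-1][:n_keep]           # indices of top features
```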

For each of the 120 features with highest SNR, correlation coefficients were computed between pairs of trial-averaged PSTHs during audiovisual (av) and audio-only (a) trials, within each volitional state (observe, imagine, attempt) and trial phase (prep, go, stop). Multi-session distributions of feature correlation coefficients between a-a, av-av, and a-av trials were generated for each volitional state and trial phase. Finally, Kruskal-Wallis tests were implemented on each set of correlation coefficient distributions for each volitional state and trial phase, the Tukey Method was used to correct for multiple comparisons, and the resulting p values were further corrected using the Benjamini-Hochberg method (Benjamini and Hochberg, 1995) across all three volitional states and all three trial phases. It was expected that no statistically significant differences would exist between go-phase a-av versus av-av or a-a distributions, which would suggest that go-phase neural activity was not affected by the presence of visual cues.


Assessment of Kinetic vs. Kinematic Activity

Prior to determining how force and volitional state modulated neural activity, an initial analysis was performed to determine whether neural modulation to force was distinct from modulation to kinematic activity. In order to visualize the extent to which force and wiggle trials were correlated across volitional states, heat maps of the absolute values of correlation coefficients averaged over 120 features were computed. (The 120 features were chosen to have the highest SNR across forces and volitional states, as described in the "Assessment of Neural Activity due to Audio vs. Audiovisual Cues" section of the Supplemental Information.) Distributions of feature correlation coefficients between force-force and force-wiggle trials were also plotted for each volitional state, to further visualize the extent to which these distributions differed from one another. To quantify the degree of correlation between force and wiggle trials, t-tests were implemented on all pairs of correlation coefficient distributions within each volitional state, as well as across volitional states, and were then corrected using the Benjamini-Hochberg method across 15 pairs and three volitional states. It was expected that correlations between pairs of force trials would be significantly different from correlations between force and wiggle trials (p<0.05), indicating that neural activity generated during force trials was distinct from activity during kinematic wiggling.
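
A minimal sketch of the correlation comparison is given below. Function names and array shapes are illustrative assumptions; absolute values are taken to match the heat maps in the supplemental figure, and the resulting p-values would still need Benjamini-Hochberg correction as described above.

```python
import numpy as np
from scipy.stats import ttest_ind

def trial_type_correlations(psths_a, psths_b):
    """Per-feature correlations between two trial types' averaged time courses.

    psths_a, psths_b: (n_features, n_bins) trial-averaged go-phase activity
    for two trial types (e.g., light vs. hard force, or force vs. wiggle).
    """
    corr = [np.corrcoef(a, b)[0, 1] for a, b in zip(psths_a, psths_b)]
    return np.abs(np.array(corr))

def compare_distributions(force_force_corrs, force_wiggle_corrs):
    """t-test comparing force-force against force-wiggle correlation values."""
    stat, p = ttest_ind(force_force_corrs, force_wiggle_corrs)
    return p   # correct across comparisons with Benjamini-Hochberg afterwards
```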

Supplemental Results

Description of Experimental Sessions and Audiovisual Cues

Description of Experimental Sessions in Main Text

Supplemental Table 3-S1 displays session information for each participant. T8 completed four sessions between trial days 511-536 relative to the date of his implant surgery, in which he emulated discrete forces using a power grasp, and three sessions between trial days 732-963 in which he performed the same force matching task using a pincer grasp. Additionally, T5 completed one session each of power (trial day 365) and pincer (trial day 396) grasping force, and T9 completed one session of pincer grasping forces on trial day 369.

Session No. | Participant, Grasp Type | Post-Implant Day | Audio Listen | Static Observe | Observe | Imagine | Attempt
1  | T8, power  | Day 511 | -  | 4 | 4  | 4  | 4
2  | T8, power  | Day 518 | -  | - | 7* | 7* | 7*
3  | T8, power  | Day 525 | 4* | 4 | 4  | 4  | 4
4  | T8, power  | Day 536 | -  | - | 7  | 7  | 7
5  | T8, power  | Day 550 | -  | - | 7  | 7  | 8
6  | T8, pincer | Day 732 | -  | - | 7  | 7  | 7
7  | T8, pincer | Day 737 | -  | - | 7  | 7  | 7
8  | T8, pincer | Day 963 | -  | - | 5  | 5  | 5
9  | T9, pincer | Day 736 | -  | - | 4  | 4  | 4
10 | T5, power  | Day 365 | -  | - | 4  | 4  | 4
11 | T5, pincer | Day 396 | -  | - | 4  | 4  | 4

Supplemental Table 3-S1. Session information. Session information for participants T8, T9, and T5, including the number of experimental blocks per volitional state. Each block contained approximately 20 force trials (see Figure 1). Visual cues were provided to participants during all sessions except session 2, in which the participant only received audio cues, as indicated by the asterisks. During session 1, T8 completed four blocks in which force-related objects were presented but not squeezed by the researcher ("static observe"), in addition to the usual observe, imagine, and attempt blocks. During session 3, T8 completed four blocks with solely audio cues ("audio listen"); four "static observe" blocks; and four of each of the usual observe, imagine, and attempt blocks. "Audio listen" and "static observe" blocks from Sessions 1 and 3 were excluded from the single-feature and CSIM analyses presented in the main text.


During most sessions, participants received audio and visual cues indicating which force to produce, as described in the main text. However, during some trials of Sessions 2 and 3, visual cues were omitted to determine the extent to which force-related information resulted from the presentation and manipulation of objects by the researcher. During Session 2, the participant solely received audio cues that indicated which force to produce (prep phase), when to produce it (go phase), and when to cease force production (stop phase). During Session 3, the participant received three types of cues during force observation blocks, as indicated in Supplemental Figure 3-S1: 1) audio cues only (listen only), 2) audio cues plus the presentation of an object without any subsequent object manipulation (static observe), and 3) audio cues plus the presentation and squeezing of an object, as described in the main text (observe). During imagine and attempt blocks, participants received both audio and visual cues as indicated in the main text. Here, we treat the listen only and static observe conditions as two additional volitional states, along with the aforementioned observe, imagine, and attempt conditions.

Supplemental Figure 3-S1 visually compares the activity of 25 features with the highest SNR from each session in participant T8, during several volitional states. For each session, these 25 features were isolated as described in the Assessment of Kinetic vs. Kinematic Activity portion of the Supplemental Methods and z-scored to aid in feature-to-feature comparisons. Supplemental Figure 3-S1A displays neural activity during Sessions 2 and 3, in which solely audio cues were presented for at least four blocks of experimental trials. In contrast, Supplemental Figures 3-S1B and 3-S1C show neural activity produced during the standard presentation of visual and audio cues described in the main text, during embodied power grasping (Supplemental Figure 3-S1B) and pincer grasping (Supplemental Figure 3-S1C) forces. A visual comparison between Supplemental Figures 3-S1A, 3-S1B, and 3-S1C shows that the introduction of visual cues often resulted in increased neural activity during the preparatory and stop phases of the experimental trial. These peaks in prep- and stop-phase activity were particularly prominent during observed forces. Additionally, prep- and stop-phase activity peaks tended to become less apparent during the progression from passive observation to attempted force production in participant T8. However, go-phase neural activity – during which T8 actively observed, imagined, or attempted force production – remained comparable whether or not the participant received visual cues. We opted to present both visual and auditory cues to participants during all trials subsequent to Session 3, in order to encourage cognitive engagement with the force matching task.

Effects of “Audio Only” vs. “Audiovisual” Cues

During all experimental sessions presented in the main text, participants received audio cues indicating which force to produce and when to produce it. During a subset of sessions indicated in Supplemental Figure 3-S1, participants also received visual cues, in which they observed a researcher squeezing objects corresponding to the forces they were asked to observe, imagine, or attempt, as shown in Figure 1 of the main text.

These visual cues were presented in order to provide concrete, real-world context to the participants about force production, as discussed in the main text. This approach, while valuable from a functional standpoint, posed a risk of producing confounds in the neural data that reflect object size, grasp aperture, kinematic neural responses, or additional variables other than force and volitional state. Therefore, prior to implementing visual cues during experimental sessions presented in the main text, supplemental data were collected in order to compare the neural responses to force trials with and without visual cues in participant T8. These datasets are summarized in Supplemental Table 3-S2.


Supplemental Figure 3-S1. Feature activity by volitional state and cue in participant T8. A. Go-phase-aligned, normalized feature activity during audio listen, audio imagine, and audio attempt trials (Session 2) and audio listen, static observe, observe, imagine, and attempt blocks (Session 3). Note that in Session 2, visual cues were omitted during all trials. B. Feature activity during observed, imagined, and attempted power grasping force production. C. Feature activity during observed, imagined, and attempted pincer grasping force production. During all sessions, the magnitude of neural activity rises during the progression from passive observation to active attempt of forces, regardless of the presence or absence of visual cues. Note the presence of prep- and stop-phase peaks in panels B and C.


Session No. | Post-Implant Day | No. Blocks Per Volitional State and Cue Type
S1 | Day 451 | 5, 2
S2 | Day 459 | 8, 3
S3 | Day 499 | 3, 3, 3
3  | Day 525 | 4, 4, 4, 4
S4 | Day 539 | 7, 7
S5 | Day 554 | 6, 6

Supplemental Table 3-S2. Supplemental session information. Supplemental grasping force session information for participant T8, including the number of experimental blocks per volitional state (observe, imagine, attempt) and cue type (audio only or audiovisual/AV). During audio blocks, visual cues were omitted and only audio cues were used to prompt the participant to observe, imagine, or attempt force production. During audiovisual (AV) blocks, participants received both audio and visual cues, in which a researcher squeezed an object associated with each force level. Note that except for Session 3, all sessions listed in this table are supplemental and do not appear in the main text. All sessions in this table were used to assess the influence of visual cues on observed (Sessions S4, S5), imagined (Sessions S1, S2, S3), and attempted (Session 3) force production.

To elucidate the influence of visual cues on the neural data, correlational analysis was performed on force trials that were prompted with audio cues only (a) and with both audio and visual cues (av). Specifically, each panel of Supplemental Figure 3-S2 shows the mean correlation between pairs of audio-only trials (a-a), pairs of audiovisual trials (av-av), and pairs of audio-only and audiovisual trials (a-av). Here, prep- and stop-phase correlations between a-a, a-av, and av-av trials appear to differ statistically during observed and imagined force production (p < 0.05, Tukey method of multiple comparisons, Benjamini-Hochberg correction). This suggests that the inclusion of visual cues introduces changes in the neural activity during the preparatory and stop phases of observed and imagined force trials. These changes take the form of prep- and stop-phase peaks in neural activity while visual cues are included, as shown in Supplemental Figure 3-S1.


Supplemental Figure 3-S2. Mean correlations between audio-only (a) and audiovisual (av) trials. Mean feature correlations between pairs of a-a, av-av, and a-av trials are depicted for each volitional state (rows) and trial phase (columns), averaged over multiple sessions by volitional state. Asterisks indicate significance at the p<0.05 (*), p<0.01 (**), and p<0.001(***) level. P values were adjusted for multiple comparisons (Tukey Method) and corrected across volitional states and trial phases (Benjamini Hochberg method). Error bars indicate 95% confidence intervals.

However, with the exception of imagined force trials, correlations between a-a, a-av, and av-av trials do not differ statistically during the active "go" phase of the experimental trial. These results suggest that the introduction of visual cues does not introduce confounds in go-phase trial activity. Furthermore, Supplemental Figure 3-S3, which illustrates population-level go-phase activity during a session in which visual cues were omitted, shows that the neural activity follows the same trends as presented in the main text. Specifically, volitional state is represented to a greater extent than force, and attempted forces are more discriminable than observed and imagined forces (Figures 4-6). Therefore, while preparatory and stop-phase activity was affected, the introduction of visual cues did not seem to influence the go-phase neural activity, from which the major results presented in the main text are derived.

Supplemental Figure 3-S3. CSIM neural population data for Session 2 (participant T8). A. Two-dimensional CSIM plot for Session 2. Each point represents the activity of the entire population of simultaneously-extracted features during a single trial. The distance between points indicates the degree of similarity between trials. Clustering of similar symbols denotes similarity between trials with the same intended force level, while clustering of similar colors denotes similarity between trials with the same volitional state. LDA classification accuracies for volitional state and observed (O), imagined (I), and attempted (A) forces from the CSIM space are indicated in the panel title. B. Feature ensemble volitional state confusion matrix for Session 2. Offline decoding accuracies were computed using an LDA classifier implemented within a 10-dimensional CSIM representation of the neural data, using 10-fold cross-validation, over a 400-ms sliding window stepped in 100-ms increments. Classification accuracies for individual volitional states were averaged over all time points within the go phase of the behavioral task. Note that the mean of the empirical chance distribution (averaged across participants) was 33%, with 95% of samples between 26.6 and 40.3%.

In addition to the results of this correlational analysis, Supplemental Figure 3-S3 shows population-level neural activity during Session 2, during which visual cues were entirely omitted from the behavioral task. Supplemental Figure 3-S3A exhibits clustering of experimental trials by volitional state, as well as force-related clustering that is most evident during attempted force production. Furthermore, as indicated by the CSIM space classification accuracies for volitional state (76%), observed force (33%), imagined force (55%), and attempted force (67%), volitional state representation in the neural space is stronger than force representation. Finally, as predicted by the results in the main text, attempted forces are more discriminable than observed and imagined forces.

Supplemental Figure 3-S3B further illustrates the representation of individual volitional states. Critically, the trends exhibited within this supplemental figure are nearly identical to the findings outlined in the main text. Therefore, taken together, Supplemental Figures 3-S2 and 3-S3 suggest that the introduction of visual cues does not influence the major trends identified in the main study.

Characterization of Individual Features

Assessment of Kinetic Versus Kinematic Activity

To determine whether neural modulation to force was distinct from modulation to kinematic activity, an initial correlational analysis was performed between neural data collected during force trials and neural data collected during finger wiggling trials.

Specifically, Supplemental Figure 3-S4A shows heat maps of neural feature correlation coefficients between specific pairs of force trials and finger wiggling trials for each volitional state, averaged over 120 neural features with the highest signal-to-noise ratio, for five exemplary sessions and for all sessions averaged together. Correlation coefficients between pairs of force and wiggle trials were compared across volitional states, as well as within volitional states, to determine whether force-force and force-wiggle correlations were significantly different, and whether this difference was affected by volitional state. For all participant-grasp pairs except T8-power, correlation coefficients between force trials were greater during attempt than during observed and imagined force production (corrected p<0.05, t-test, Benjamini-Hochberg method).


Additionally, within volitional states, correlation coefficients between pairs of force trials were often greater than correlation coefficients between force and wiggling trials, especially for attempted forces (corrected p<0.05, t-test, Benjamini-Hochberg method).

These trends are further illustrated in Supplemental Figure 3-S4B, which shows distributions of force-force and force-wiggle correlation coefficients from all sessions for each volitional state. Here, both force-force and force-wiggle correlations rise as volitional state progresses from passive observation to active attempt. However, this change is significantly more pronounced for force-force correlations than for force-wiggle correlations (corrected p<0.05, Benjamini-Hochberg method). In other words, even though neural activity during force trials becomes increasingly correlated during attempt, force- and kinematic-related neural activity remains relatively less correlated across multiple volitional states.

Additionally, Supplemental Figure 3-S5 shows session-averaged force embodiment (KFIQ) scores for all participants (0 = no feeling, 10 = feeling as intense as able-bodied execution of grasping force). Here, KFIQ scores vary with volitional state (attempt > imagine > observe, 2-way ANOVA). Additionally, while KFIQ scores are almost identical across forces within each volitional state, they are significantly lower during finger wiggling than they are for attempted forces in several participant-grasp pairs (p<0.05, paired t-test, Benjamini-Hochberg correction). In other words, the degree to which participants kinesthetically emulate forces is strongest during attempted force, and is distinct from the degree to which they emulate forces during kinematic finger wiggling. Taken together, these results support the idea that neural activity during force trials encompassed modulation to kinetic parameters, rather than solely representing modulation to kinematic factors such as changes in hand posture.


Supplemental Figure 3-S4. Correlation between kinetic and kinematic activity. A. Heat maps of correlation coefficients between pairs of trial-averaged feature time courses, averaged over 120 features with highest signal-to-noise ratio for a representative session from each participant-grasp pair. Note that the last column shows session-averaged correlation coefficients. For nearly all sessions, correlations between attempted force and finger wiggle trials are smaller than correlations between two types of attempted force trials. B. Distributions of correlation coefficients between pairs of force trials (blue) and pairs of force and wiggle trials (orange) across all sessions. Note that during attempted trials, force trials are more correlated to each other than to finger wiggling trials.


Supplemental Figure 3-S5. Session-averaged kinesthetic force imagery questionnaire (KFIQ) scores. Participants rated on a scale of 0-10 how intensely they felt themselves producing light, medium, and hard forces during each volitional state and each grasp type (0 = no feeling, 10 = feeling as intense as able-bodied execution). They were also asked to rate how intensely they emulated forces during finger wiggling trials. Scores were collected at the end of each experimental block in all sessions except Session 11 (T5 pincer), in which scores were recorded solely at the end of the session. Due to user error, participant T8 only reported accurate scores during sessions 5-8, so only these sessions are included in column 1. Also note that the absence of bars (T8 power, T5 pincer) indicates KFIQ scores of 0. Error bars indicate 95% confidence intervals. Asterisks indicate significant differences in KFIQ scores between trial types within a volitional state (p<0.05, paired t-test, Benjamini-Hochberg correction). In all participants, force embodiment increases with volitional state. Additionally, for several participants and grasps, KFIQ scores during finger wiggling are significantly lower than KFIQ scores during attempted force production.

Assessment of Tuning to Force and Volitional State

Supplemental Figures 3-S6 and 3-S7 show average per-condition activity (PSTH) of hand-selected threshold crossing (TC) and spike band power (SBP) features in participants T5 and T9, respectively. As in participant T8 (Figure 2), these features were tuned to one of four marginalizations as evaluated by 2-way Welch-ANOVA analysis implemented on go-phase neural activity: force only, volitional state only, both force and volitional state, and an interaction between force and volitional state. Note that Welch-ANOVA detected no features with a significant (corrected p<0.05) interaction between force and volitional state in participant T9.

In addition, Supplemental Figures 3-S8 through 3-S11 depict single unit and multiunit features extracted from four representative channels in participant T8. The multiunit features from these four channels, which include threshold crossing rate and spike band power, were tuned to force (S8, S10), volitional state (S9), both factors (S10, S11), and an interaction (S9, S11) between force and volitional state. Note that the first detected unit from each channel was often a "hash" unit (Todorova et al., 2014), which encompassed neural activity that could not be assigned to a "true" unit but nonetheless contained useful force or volitional state information. In the four representative channels depicted here, the first sorted unit from each channel closely matched the activity of its corresponding threshold crossing feature both visually and statistically. Additional sorted units also exhibited statistical trends that were consistent with their associated threshold crossing and spike band power features. For example, Supplemental Figure 3-S11 shows that threshold crossing and spike band power features extracted from Channel 65 reflected the underlying activity of three single units. Specifically, the hash unit and Unit 2 were statistically tuned to an interaction between force and volitional state, as reflected in the activity of the threshold crossing feature; while Unit 1 was independently tuned to force and volitional state, as reflected by the activity of the spike band power feature. These data suggest that sorted single units and extracted multiunit features contained similar information with regard to force and volitional state, which in turn suggests that an analysis of sorted single units would yield similar results to those identified in the main text.


Supplemental Figure 3-S6. Single features are tuned to force and volitional state in participant T5. Rows show average PSTHs of five exemplary TC and SBP features that are tuned to force only (session 10), volitional state (VoS) only (session 10), neither factor (session 11), both factors (session 10), and an interaction between force and volitional state (session 10) (2-way Welch ANOVA, corrected p<0.05, Benjamini-Hochberg method). Columns show neural activity averaged over all forces (column 1), neural activity averaged over all volitional states (column 2), and normalized mean go-phase neural deviations from baseline activity during force trials within each force-VoS pair (column 3). Statistically significant p-values for force modulation, VoS modulation, and interaction are indicated with asterisks. Error bars indicate 95% confidence intervals.


Supplemental Figure 3-S7. Single features are tuned to force and volitional state in participant T9. Rows show average PSTHs of four exemplary TC and SBP features from Session 9 that are tuned to force only, volitional state (VoS) only, neither factor, and both factors (2-way Welch ANOVA, corrected p<0.05, Benjamini-Hochberg method). Note that 2-way Welch-ANOVA detected no features with a statistically significant interaction between force and volitional state. Columns show neural activity averaged over all forces (column 1), neural activity averaged over all volitional states (column 2), and normalized mean neural deviations from baseline activity during force trials within each force-VoS pair (column 3). Statistically significant p-values for force modulation, VoS modulation, and interaction are indicated with asterisks. Error bars indicate 95% confidence intervals.


Supplemental Figure 3-S8. Comparison of spike band power (SBP), threshold crossing (TC), and single unit (SU) features from a force-tuned channel. A. SBP data, TC rasters, and SU rasters extracted from channel 135 during Session 1, for individual volitional states (observe, imagine, attempt) and force levels (light, medium, hard). Within the last plot within each panel, activity across multiple single units was summed and visually compared to normalized, trial-averaged threshold crossing and spike band power activity for each volitional state and force level. Here, the active “go” phase of the trial occurred between the vertical lines. Note that the hash unit closely matches TC activity, while Unit 1 closely matches SBP activity. B. 2-way Welch ANOVA p values (p<0.05 in bold) for extracted SBP, TC, and single unit features. Note that the hash unit appears to exhibit force-related activity that approaches (but does not exceed) statistical significance.


Supplemental Figure 3-S9. Comparison of spike band power (SBP), threshold crossing (TC), and single unit (SU) features from a channel tuned to volitional state and an interaction between force and volitional state. A. Each panel depicts single unit and multiunit features extracted from channel 177 during Session 4, for individual volitional states (observe, imagine, attempt) and force levels (light, medium, hard). Within the last plot within each panel, activity across multiple single units was summed and visually compared to normalized, trial-averaged threshold crossing and spike band power activity for each volitional state and force level. Here, the active “go” phase of the trial occurred between the vertical lines. SBP data was tuned to an interaction between force and volitional state, while TC, Unit 1, and Unit 2 were tuned to volitional state alone. B. 2-way Welch ANOVA p values (p<0.05 in bold) for extracted SBP, TC, and single unit features.


Supplemental Figure 3-S10. Comparison of spike band power (SBP), threshold crossing (TC), and single unit (SU) features from a channel tuned independently to both force and volitional state. Each panel depicts the activity of single unit and multiunit features extracted from channel 77 during Session 4, for individual volitional states (observe, imagine, attempt) and force levels (light, medium, hard). Within the last plot within each panel, activity across multiple single units was summed and visually compared to normalized, trial-averaged threshold crossing and spike band power activity for each volitional state and force level. Here, the active “go” phase of the trial occurred between the vertical lines. The SBP feature was tuned to force, while TC, hash, and Unit 1 were independently tuned to both force and volitional state. B. 2-way Welch ANOVA p values (p<0.05 in bold) for extracted SBP, TC, and single unit features. Note that the SBP feature exhibits statistically significant tuning only to force; however, its volitional state tuning also approaches (but does not exceed) statistical significance.


Supplemental Figure 3-S11. Comparison of spike band power (SBP), threshold crossing (TC), and single unit (SU) features from a channel tuned to force, volitional state, and an interaction between these parameters. Spike band power (SBP) data, threshold crossing (TC) rasters, and single unit rasters extracted from channel 65 during Session 4, for individual volitional states (observe, imagine, attempt) and force levels (light, medium, hard). Within the last plot within each panel, activity across multiple single units was summed (black trace) and visually compared to normalized, trial-averaged threshold crossing and spike band power activity for each volitional state and force level. Here, the active “go” phase of the trial occurred between the vertical lines. B. 2-way Welch ANOVA p values (p<0.05 in bold) for extracted SBP, TC, and single unit features. Here, the SBP feature is tuned to the interaction between force and volitional state, which appears to capture the activity of Unit 1. Likewise, the TC feature is independently tuned to force and volitional state and appears to reflect the trends exhibited by the hash unit and Unit 2.


Chapter 4: The neural representation of force across grasp types in motor cortex of humans with tetraplegia

Note that this chapter has been posted as a preprint to bioRxiv and is currently under review by the peer-reviewed journal eNeuro:

Rastogi, A., Willett, F.R., Abreu, J., Crowder, D.C., Murphy, B.A., Memberg, W.D., Vargas-Irwin, C.E., Miller, J.P., Sweet, J.A., Walter, B.L., Rezaii, P.G., Stavisky, S.D., Hochberg, L.R., Shenoy, K.V., Henderson, J.M., Kirsch, R.F., Ajiboye, A.B. The neural representation of force across grasp types in motor cortex of humans with tetraplegia. bioRxiv 2020.06.01.126755; doi: https://doi.org/10.1101/2020.06.01.126755

We reproduce the article below. The article has been re-formatted to adhere to the requirements of this dissertation.

Abstract

Intracortical brain-computer interfaces (iBCIs) have the potential to restore hand grasping and object interaction to individuals with tetraplegia. Optimal grasping and object interaction require simultaneous production of both force and grasp outputs. However, since overlapping neural populations are modulated by both parameters, grasp type could affect how well forces are decoded from motor cortex in a closed-loop force iBCI. Therefore, this work quantified the neural representation and offline decoding performance of discrete hand grasps and force levels in two participants with tetraplegia. Participants attempted to produce three discrete forces (light, medium, hard) using up to five hand grasp configurations. A two-way Welch ANOVA was implemented on multiunit neural features to assess their modulation to force and grasp. Demixed principal component analysis was used to assess for population-level tuning to force and grasp and to predict these parameters from neural activity. Three major findings emerged from this work: 1) Force information was neurally represented and could be decoded across multiple hand grasps (and, in one participant, across attempted elbow extension as well); 2) Grasp type affected force representation within multi-unit neural features and offline force classification accuracy; and 3) Grasp was classified more accurately and had greater population-level representation than force. These findings suggest that force and grasp have both independent and interacting representations within cortex, and that incorporating force control into real-time iBCI systems is feasible across multiple hand grasps if the decoder also accounts for grasp type.

Significance Statement

Intracortical brain-computer interfaces (iBCIs) have emerged as a promising technology to potentially restore hand grasping and object interaction in people with tetraplegia. This study is among the first to quantify the degree to which hand grasp affects force-related – or kinetic – neural activity and decoding performance in individuals with tetraplegia. The study results enhance our overall understanding of how the brain encodes kinetic parameters across varying kinematic behaviors -- and in particular, the degree to which these parameters have independent versus interacting neural representations. Such investigations are a critical first step to incorporating force control into human-operated iBCI systems, which would move the technology towards restoring more functional and naturalistic tasks.


Introduction

Intracortical brain-computer interfaces (iBCIs) have emerged as a promising technology to restore upper limb function to individuals with paralysis. Traditionally, iBCIs decode kinematic parameters from motor cortex to control the position and velocity of end effectors. These iBCIs evolved from the seminal work of Georgopoulos and colleagues, who proposed that motor cortex encodes high-level kinematics, such as continuous movement directions and three-dimensional hand positions, in a global coordinate frame (Georgopoulos et al., 1982; Georgopoulos et al., 1986). Kinematic iBCIs have successfully achieved control of one- and two-dimensional computer cursors (Wolpaw et al., 2002; Leuthardt et al., 2004; Kubler et al., 2005; Hochberg et al., 2006; Kim et al., 2008; Schalk et al., 2008; Hermes et al., 2011; Kim et al., 2011; Simeral et al., 2011); prosthetic limbs (Hochberg et al., 2012; Collinger et al., 2013b; Wodlinger et al., 2015); and paralyzed arm and hand muscles (Bouton et al., 2016; Ajiboye et al., 2017).

While kinematic iBCIs can restore basic reaching and grasping movements, restoring the ability to grasp and interact with objects requires both kinematic and kinetic (force-related) information (Chib et al., 2009; Flint et al., 2014; Casadio et al., 2015). Specifically, sufficient contact force is required to prevent object slippage; however, excessive force may cause mechanical damage to graspable objects (Westling and Johansson, 1984). Therefore, introducing force calibration capabilities during grasp control would enable iBCI users to perform more functional tasks.

Early work by Evarts and others, which showed correlations between cortical activity and force output (Evarts, 1968; Humphrey, 1970; Fetz and Cheney, 1980; Evarts et al., 1983; Kalaska et al., 1989), and later work, which directly decoded muscle activations from neurons in primary motor cortex (M1) (Morrow and Miller, 2003; Sergio and Kalaska, 2003; Pohlmeyer et al., 2007; Oby et al., 2010), suggest that cortex encodes low-level dynamics of movement along with kinematics (Kakei et al., 1999; Carmena et al., 2003; Branco et al., 2019). However, explorations of kinetic information as a control signal for iBCIs have only just begun. The majority have characterized neural modulation to executed kinetic tasks in primates and able-bodied humans (Filimon et al., 2007; Moritz et al., 2008; Pohlmeyer et al., 2009; Ethier et al., 2012; Flint et al., 2012; Flint et al., 2014; Flint et al., 2017; Schwarz et al., 2018). Small subsets of M1 neurons have been used to command muscle activations through FES to restore one-dimensional wrist control and whole-hand grasping in non-human primates with temporary motor paralysis (Moritz et al., 2008; Pohlmeyer et al., 2009; Ethier et al., 2012). More recent intracortical studies demonstrated that force representation is preserved in individuals with chronic tetraplegia (Downey et al., 2018; Rastogi et al., 2020).

Intended forces are usually produced in the context of task-related factors, including grasp postures used to generate forces (Murphy et al., 2016). The representation and decoding of grasps – independent of forces – has been studied extensively in non-human primates (Stark and Abeles, 2007; Stark et al., 2007a; Vargas-Irwin et al., 2010; Carpaneto et al., 2011; Townsend et al., 2011; Hao et al., 2014; Schaffelhofer et al., 2015) and humans (Pistohl et al., 2012; Chestek et al., 2013; Bleichner et al., 2014; Klaes et al., 2015; Bleichner et al., 2016; Leo et al., 2016; Branco et al., 2017). Importantly, previous studies suggest that force and grasp are encoded by overlapping populations of neural activity (Sergio and Kalaska, 1998; Carmena et al., 2003; Sergio et al., 2005; Sburlea and Muller-Putz, 2018). While some studies suggest that force is encoded at a high level independent of motion and grasp (Chib et al., 2009; Hendrix et al., 2009; Pistohl et al., 2012; Intveld et al., 2018), others suggest that it is encoded at a low level intertwined with grasp (Hepp-Reymond et al., 1999; Degenhart et al., 2011). Thus, the degree to which intended hand grasps and forces interact within the neural space, and how such interactions affect force decoding performance, remain unclear. To our knowledge, these scientific questions have not been explored in individuals with tetraplegia, who constitute a target population for iBCI technologies.

To answer these questions, we characterized the extent to which three discrete, attempted forces were neurally represented and offline-decoded across up to five hand grasp configurations in two individuals with tetraplegia. Our results suggest that force has both grasp-independent and grasp-dependent (interacting) representation in motor cortex. Additionally, while this study demonstrates the feasibility of incorporating discrete force control into human-operated iBCIs, these systems will likely need to incorporate grasp and other task parameters to achieve optimal performance.

Materials and Methods

Study permissions and participants

Study procedures were approved by the US Food and Drug Administration (Investigational Device Exemption #G090003) and the Institutional Review Boards of University Hospitals Case Medical Center (protocol #04-12-17), Stanford University (protocol #20804), and Massachusetts General Hospital (2011P001036). The present study collected neural recordings from participants enrolled in the BrainGate2 Pilot Clinical Trial (ClinicalTrials.gov number NCT00912041). The current work utilized the recording opportunity afforded by the BrainGate2 Pilot Clinical Trial but reports no clinical trial outcomes or endpoints. Informed consent, including consent to publish, was obtained from the participants prior to their enrollment in the study.

The present study includes data from two participants with chronic tetraplegia. Both participants had two 96-channel intracortical microelectrode arrays (1.5 mm electrode length, Blackrock Microsystems, Salt Lake City, UT) implanted in the hand and arm area ("hand knob") (Yousry et al., 1997) of dominant motor cortex. Participant T8 was a 53-year-old right-handed male with C4-level AIS-A spinal cord injury 8 years prior to implant; and T5 was a 63-year-old right-handed male with C4-level AIS-C spinal cord injury. More surgical details can be found in (Ajiboye et al., 2017) for T8 and (Pandarinath et al., 2017) for T5.

Participant Task

The goal of this study was to measure the degree to which various hand grasps affect decoding of grasp force from motor cortical spiking activity. To this end, participants T8 and T5 took part in several research sessions in which they attempted to produce three discrete forces (light, medium, hard) using one of four designated hand grasps (closed pinch, open pinch, ring pinch, power). T8 completed six sessions between trial days 735-956 relative to the date of his microelectrode array placement surgery, and T5 completed one session on trial day 390. During Session 5, participant T8 completed five additional blocks in which he attempted to produce discrete forces during elbow extension. Table 4-1 lists all relevant sessions and their associated task parameters.

Each research session consisted of multiple 4-minute data collection blocks, which were each assigned to a particular hand grasp, as illustrated in Figure 4-1B. Blocks were presented in a pseudorandom order, in which hand grasps were assigned randomly to each set of two (Session 1), four (Sessions 2-4, 6-7), or five (Session 5) blocks. This allowed for an equal number of blocks per hand grasp, distributed evenly across the entire research session.
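
This blocked randomization scheme can be illustrated with the short sketch below. It is not the experimental control software; the function name and the example grasp list are assumptions, and the sketch simply shuffles the grasp set once per repetition so that each grasp appears equally often and is distributed evenly across the session.

```python
import random

def block_order(grasps, n_sets):
    """Pseudorandom block order: each grasp appears once per set of blocks.

    grasps: e.g., ["power", "closed pinch", "open pinch", "ring pinch"]
    n_sets: number of times each grasp is presented in the session
    """
    order = []
    for _ in range(n_sets):
        shuffled = grasps[:]          # randomize grasps within each set
        random.shuffle(shuffled)
        order.extend(shuffled)
    return order                      # equal block counts per grasp

print(block_order(["power", "closed pinch", "open pinch", "ring pinch"], 5))
```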


Session No. | Participant | Post-Implant Day | No. Blocks Per Grasp
1 | T8 | Day 735 | 11, 10
2 | T8 | Day 771 | 5, 5, 5, 5
3 | T8 | Day 774 | 6, 5, 5, 5
4 | T8 | Day 788 | 5, 5, 5, 5
5 | T8 | Day 802 | 4, 4, 4, 4, 5
6 | T8 | Day 956 | 4, 4, 4, 4
7 | T5 | Day 390 | 4, 4, 4, 4

Table 4-1. Session information. Session information for participants T8 and T5, including the number of blocks per grasp type.

All blocks consisted of approximately 20 trials, which were presented in a pseudorandom order by repeatedly cycling through a complete, randomized set of force levels until the end of the block. During each trial, participants used kinesthetic imagery

(Stevens, 2005; Mizuguchi et al., 2017) to internally emulate one of three discrete force levels, or rest, with the dominant hand. Participants received simultaneous audio and visual cues indicating which force to produce, when to produce it, and when to relax.

Participants were visually cued by observing a researcher squeeze one of nine graspable objects corresponding to light, medium, and hard forces (no object was squeezed during “rest” trials), as shown in Figure 4-1B. The participants were asked to

“follow along” and attempt the same movements that the researcher was demonstrating.

The graspable objects were grouped into three sets of three, corresponding to forces embodied using a power grasp (sponge = light, stress ball = medium, tennis ball = hard); a pincer grasp (cotton ball = light, nasal aspirator tip = medium, eraser = hard); or elbow extension (5-lb dumbbell = light, 10-lb dumbbell = medium, 15-lb dumbbell = hard).


During the prep phase, which lasted a pseudo-randomly determined period between 2.7 and 3.3 seconds to reduce confounding effects from anticipatory activity, the researcher presented an object indicating the force level to be attempted. The researcher then squeezed the object (or lifted the object, in the case of elbow extension) during the go phase (3-5 seconds), and finally released the object at the beginning of the stop phase

(5 seconds).

Neural Recordings

Pre-processing

In both participants, each intracortical microelectrode array was attached to a percutaneous pedestal connector on the head. Cables connected the pedestals to amplifiers (Blackrock Microsystems, Salt Lake City, UT) that bandpass filtered (0.3 Hz –

7.5 kHz) and digitized (30 kHz) the neural signals from each channel on the microelectrode array. These digitized signals were pre-processed in Simulink using the xPC real-time operating system (The Mathworks Inc., Natick, MA, US). Each channel was bandpass filtered (250-5000 Hz), common average referenced (CAR), and downsampled to 15 kHz in real time. CAR was implemented by selecting 60 channels from each microelectrode array that exhibited the lowest variance, and then averaging these channels together to yield an array-specific common average reference. This reference signal was subtracted from the signals from all channels within each of the arrays.
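To make this referencing step concrete, a minimal MATLAB sketch is given below. It assumes a matrix raw of band-pass-filtered samples (rows) by channels (columns) from a single array; the variable names are illustrative and do not correspond to the actual xPC implementation.

```matlab
% Minimal sketch of the lowest-variance common average reference (CAR),
% assuming raw is a [samples x 96] matrix of band-pass filtered data
% from one microelectrode array (variable names are illustrative).
nRef      = 60;                               % channels used to build the reference
chanVar   = var(raw, 0, 1);                   % variance of each channel over time
[~, idx]  = sort(chanVar, 'ascend');          % rank channels by variance
refChans  = idx(1:nRef);                      % 60 lowest-variance channels
commonRef = mean(raw(:, refChans), 2);        % array-specific common average reference
carData   = raw - commonRef;                  % subtract the reference from every channel
```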

Extraction of Neural Features

From each filtered, CAR channel, two neural features were extracted in real time using the xPC operating system: unsorted threshold crossing (TC) and spike band power

(SBP) features, from non-overlapping 20 millisecond time bins, as illustrated in Figure 4-

1A. TC features were defined as the number of times the voltage on each channel

crossed a predefined noise threshold (-4.5 x root mean square voltage; Christie et al., 2015) within each time bin. Root mean square (RMS) voltage was calculated from one minute of neural data recorded at the beginning of each research session. SBP features were defined as the RMS of the signal in the spike band (250-5000 Hz) of each channel, time-averaged within each time bin. These calculations yielded 384 neural features per participant, which were used for offline analysis without spike sorting (Trautmann et al.,

2019). All features were normalized by subtracting the block-specific mean activity of the features within each recording block, in order to minimize non-stationarities in the data.
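For illustration, the binned feature extraction described above can be sketched in MATLAB as follows. The sketch assumes carData is a [samples x channels] matrix sampled at 15 kHz after filtering and common average referencing, and that at least one minute of data is available for threshold estimation; it is not the real-time xPC code itself.

```matlab
% Minimal sketch of threshold crossing (TC) and spike band power (SBP)
% feature extraction, assuming carData is a [samples x nChan] matrix
% sampled at 15 kHz after band-pass filtering and CAR.
fs     = 15000;                                        % sampling rate (Hz)
binLen = round(0.020 * fs);                            % 20 ms bins (300 samples)
nBins  = floor(size(carData, 1) / binLen);
nChan  = size(carData, 2);

% per-channel threshold: -4.5 x RMS voltage from the first minute of data
thresh = -4.5 * sqrt(mean(carData(1:60*fs, :).^2, 1));

tc  = zeros(nBins, nChan);                             % threshold crossing counts
sbp = zeros(nBins, nChan);                             % spike band power (RMS per bin)
for b = 1:nBins
    seg = carData((b-1)*binLen + (1:binLen), :);
    % count downward crossings of the negative noise threshold
    tc(b, :)  = sum(seg(1:end-1, :) >= thresh & seg(2:end, :) < thresh, 1);
    sbp(b, :) = sqrt(mean(seg.^2, 1));                 % RMS of the spike-band signal
end

% subtract block-specific means to reduce non-stationarities
tc  = tc  - mean(tc, 1);
sbp = sbp - mean(sbp, 1);
```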

Unless otherwise stated, all subsequent offline analyses of neural data were performed using MATLAB software.

Characterization of Individual Neural Feature Tuning

The first goal of this study was to determine the degree to which force- and grasp-related information are represented within individual TC and SBP neural features.

Specifically, the combination of three discrete forces with two (Session 1), four (Sessions 2-4, 6-7), or five (Session 5) grasps resulted in 6, 12, or 15 conditions of interest per session. See Table 1 for a list of grasps included in each individual research session. To visualize individual feature responses to force and grasp, each feature’s peristimulus time histogram (PSTH) was computed for each of these conditions by averaging the neural activity over go cue-aligned trials. These trial-averaged traces were temporally smoothed with a Gaussian kernel (100-ms standard deviation) to aid in visualization.
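As an example of this smoothing step, the following MATLAB sketch computes a smoothed PSTH for one feature and one condition; trials is assumed to be an [nTrials x nBins] matrix of go-cue-aligned activity in 20 ms bins, and the variable names are illustrative.

```matlab
% Minimal sketch of a go-cue-aligned PSTH smoothed with a Gaussian kernel
% (100 ms standard deviation), assuming trials is [nTrials x nBins].
binSec   = 0.020;                               % bin width (s)
sigmaBin = 0.100 / binSec;                      % kernel SD in bins (= 5)
t        = -ceil(4*sigmaBin):ceil(4*sigmaBin);  % kernel support (+/- 4 SD)
kernel   = exp(-t.^2 / (2*sigmaBin^2));
kernel   = kernel / sum(kernel);                % unit-area Gaussian

psth       = mean(trials, 1);                   % trial-averaged activity
psthSmooth = conv(psth, kernel, 'same');        % smoothed trace for plotting
```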

To determine how many of these individual features were tuned to force and/or grasp, statistical analyses were implemented in MATLAB and with the WRS2 library in the R programming language (Wilcox, 2017). Briefly, features were pre-processed in

MATLAB to compute each feature’s mean go-phase deviation from baseline during each

trial. Baseline activity was computed by averaging neural activity across multiple rest trials.

In R, the distribution of go-phase neural deviations was found to be normal (assessed via Q-Q plots and Shapiro-Wilk tests) but heteroskedastic (Levene’s test, p < 0.05), necessitating a two-way Welch ANOVA to determine neural tuning to force, grasp, and their interaction (p < 0.05). Features exhibiting an interaction between force and grasp were further separated into individual grasp conditions (closed pinch, open pinch, ring pinch, power, elbow), within which one-way Welch-ANOVA tests were implemented to find interacting features that were tuned to force. All p values were corrected for multiple comparisons using the Benjamini-Hochberg procedure (Benjamini and Hochberg, 1995).
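The Benjamini-Hochberg adjustment referenced throughout this chapter can be implemented directly. The following MATLAB function is one illustrative implementation, not the exact code used in this study; it returns adjusted p values for a vector of raw p values, which can then be compared to the significance threshold.

```matlab
function pAdj = benjaminiHochberg(p)
% Illustrative Benjamini-Hochberg correction: returns adjusted p values
% that can be compared against the desired false discovery rate (e.g., 0.05).
    p = p(:);
    m = numel(p);
    [pSorted, order] = sort(p, 'ascend');
    adj = pSorted .* m ./ (1:m)';          % p_(k) * m / k
    adj = flipud(cummin(flipud(adj)));     % enforce monotonicity from largest rank down
    adj = min(adj, 1);                     % adjusted p values cannot exceed 1
    pAdj = zeros(m, 1);
    pAdj(order) = adj;                     % restore the original ordering
end
```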

Neural Population Analysis and Decoding

The second goal of this study was to determine the degree to which force and grasp are represented within – and can be decoded from – the level of the neural population. Here, the neural population was represented using both traditional and demixed principal component analysis (PCA).

Visualizing Force Representation with PCA

In order to visualize how consistently forces were represented across different grasps, neural activity collected during Sessions 5 and 7 was graphically represented within a low-dimensional space found using PCA. Notably, during Session 5, participant

T8 attempted to produce three discrete forces not only with several grasps, but also with an elbow extension movement. Therefore, two sets of PCA analyses were implemented on the data. The first, which was applied to both sessions, performed PCA on all force and grasp conditions within the session. In the second analysis specific to Session 5 only, PCA was applied solely on power grasping and elbow extension trials in order to

elucidate whether forces were represented in a consistent way across the entire upper limb. For both analyses, the PCA algorithm was applied to neural feature activity that was averaged over multiple trials and across the go phase of the task.

The results of each decomposition were plotted in a low-dimensional space defined by the first two principal components. The force axis within this space, given by

Equation 1, was estimated by applying multi-class linear discriminant analysis (LDA)

(Juric, 2020) to the centered, force-labelled PCA data, and then using the largest LDA eigenvector as the multi-dimensional slope m of the force axis. Here, PCscore is the principal component score or representation of the neural data in PCA space and f is the intended force level. A consistent force axis across multiple grasps within PCA space would suggest that forces are represented in an abstract (and thus grasp-independent) manner.

PCscore = m · f     (1)
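To illustrate how the force axis was estimated, the sketch below applies PCA (using MATLAB's pca from the Statistics and Machine Learning Toolbox) followed by a two-dimensional multi-class LDA, and takes the leading LDA eigenvector as the slope m in Equation 1. It assumes X is an [nConditions x nFeatures] matrix of trial- and go-phase-averaged activity with intended force labels forceLabel; the variable names are illustrative.

```matlab
% Minimal sketch of estimating the force axis in PCA space (Equation 1),
% assuming X is [nConditions x nFeatures] trial- and go-phase-averaged
% activity and forceLabel gives each condition's intended force level.
[~, score] = pca(X);                 % principal component scores (already centered)
Z = score(:, 1:2);                   % first two principal components

% multi-class LDA: the leading eigenvector of pinv(Sw)*Sb defines the force axis
classes = unique(forceLabel);
Sw = zeros(2);  Sb = zeros(2);
for c = classes(:)'
    Zc = Z(forceLabel == c, :);
    mu = mean(Zc, 1);
    Sw = Sw + (Zc - mu)' * (Zc - mu);        % within-class scatter
    Sb = Sb + size(Zc, 1) * (mu' * mu);      % between-class scatter (data are centered)
end
[V, D]    = eig(pinv(Sw) * Sb);
[~, iMax] = max(real(diag(D)));
m = real(V(:, iMax));                        % multi-dimensional slope of the force axis
```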

Demixed Principal Component Analysis

The remainder of the population-level analysis was implemented using demixed principal component analysis (dPCA). dPCA is a dimensionality reduction technique that, similarly to traditional PCA, decomposes neural population activity into a few components that capture the majority of variance in the source data (Kobak et al., 2016).

Unlike traditional PCA, which yields principal components (PCs) that capture signal variance due to multiple parameters of interest, dPCA performs an ANOVA-like decomposition of data into task-dependent dimensions of neural activity. Briefly, the matrix X of neural data is decomposed into trial-averaged neural activity explained by time (t), various task parameters (P1, P2), their interaction (P1P2), and noise, according to

Equation 2. Next, dPCA finds separate decoder (D) and encoder (E) matrices for each marginalization M by minimizing the loss function L exhibited in Equation 3.


X = Xt + XP1 + XP2 + XP1P2 + Xnoise = ΣM XM + Xnoise     (2)

L = ΣM ‖XM − EM DM X‖²     (3)

The resulting demixed principal components (dPCs), obtained by multiplying the neural data X by the rows of each decoder matrix DM, are, in theory, de-mixed, in that the variance explained by each component is due to a single, specific task parameter M.

These dimensions of neural activity not only reveal population-level trends in neural data, but they can also be used to decode task parameters of interest (Kobak et al.,

2016).

Single dPCA Component Implementation

In the present study, the task parameters of interest were force and grasp. Here, one goal was to use variance as a metric to quantify the degree to which force and grasp were represented within the neural population as a whole. Therefore, for each research session listed in Table 1, the neural data X was temporally smoothed using a Gaussian filter (100 millisecond standard deviation) and decomposed into neural activity that varied with four marginalizations XM, as per Equation 2: time (condition independent), force, grasp, and an interaction between force and grasp. The variance that each marginalization accounted for was computed as the sum of squares of the mean-centered neural data contained within the marginalization.
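A simplified version of this marginalization and variance accounting is sketched below, assuming the trial-averaged, smoothed data have been arranged as a four-dimensional array X of size [nFeatures x nForces x nGrasps x nTime]; the full dPCA implementation of Kobak et al. (2016) handles additional details such as noise estimation and regularization that are omitted here.

```matlab
% Minimal sketch of the marginalization in Equation 2 and its variance
% accounting, assuming X is [nFeatures x nForces x nGrasps x nTime].
Xc = X - mean(X, [2 3 4]);                % mean-center each feature

Xt     = mean(Xc, [2 3]);                 % time (condition-independent) marginalization
Xforce = mean(Xc, 3) - Xt;                % force marginalization (incl. force x time)
Xgrasp = mean(Xc, 2) - Xt;                % grasp marginalization (incl. grasp x time)
Xint   = Xc - Xt - Xforce - Xgrasp;       % force-grasp interaction marginalization

% fraction of total variance attributed to each marginalization (sums of squares,
% scaled by the number of conditions each collapsed term is replicated across)
[~, nFo, nGr, ~] = size(Xc);
ssTotal = sum(Xc(:).^2);
ssFrac  = [sum(Xt(:).^2)     * nFo * nGr, ...
           sum(Xforce(:).^2) * nGr, ...
           sum(Xgrasp(:).^2) * nFo, ...
           sum(Xint(:).^2)] / ssTotal;    % [time, force, grasp, interaction]
```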

An additional goal was to isolate neural components that contained useful information about force and grasp, i.e., components that would enable discrimination between individual force levels and grasp types. First, dPCA was used to reduce each of the four, 384-dimensional, mean-centered marginalizations XM into 20 dPCs, as described by Equation 3. This yielded 80 dPCs across all four marginalizations. Second, the variances accounted for by each of the 80 components were computed as the sum

of squares. Third, the top 20 out of 80 components with the highest variance were selected as representing the majority of variance in the neural dataset and were assembled into a decoder matrix D. Finally, each of these top 20 components was assigned to one of the four marginalizations of interest according to the marginalization from which it was extracted. For example, dPCs that were extracted from the force marginalization Xforce were deemed as force-tuned dPCs; those extracted from the grasp marginalization Xgrasp were deemed as grasp-tuned dPCs; and those extracted from the marginalization XF/G representing an interaction between force and grasp were deemed as interacting dPCs.

Each dPC’s information content was further quantified in two ways. First, in order to assess the degree to which dPCs were demixed, each dPC’s variance was subdivided into four sources of variance corresponding to each of the four marginalizations of interest, as per Equation 2. Second, the decoder axis associated with each dPC was used as a linear classifier to decode intended parameters of interest.

Specifically, each force-tuned dPC was used to decode force at every time point of the behavioral task, while each grasp-tuned dPC was used to decode grasp, but not force.

Likewise, components that exhibited an interaction between force and grasp were used to decode force-grasp pairs. Condition-independent dPCs, which were tuned to time, were not used to decode force or grasp from the neural activity.

Linear classification was implemented using 100 iterations of stratified Monte

Carlo leave-group-out cross-validation (Kobak et al., 2016). During each iteration, one random group of F x G test “pseudo-trials,” each corresponding to one of the several force-grasp conditions, was set aside during each time point (F = number of intended forces, G = number of intended grasps). Next, dPCA was implemented on the remaining trials, and the decoder axes of the resulting dPCs were used to predict the intended forces or intended grasps indicated by the test set of pseudo-trials at each time point.


This was accomplished by first computing mean dPC values for each force-grasp condition, separately for each time point; projecting the F x G “pseudo-trials” onto the decoder axes of the dPCs at each time point; and then classifying the pseudo-trials according to the closest class mean (Kobak et al., 2016). The proportion of F x G pseudo-trials correctly classified across 100 iterations at each time point constituted a time-dependent classification accuracy. Chance performance was computed by performing 100 shuffles of all available trials, randomly assigning force or grasp conditions to the shuffled data, and then performing the same cross-validated classification procedure within each of the 100 shuffles. Classification accuracies that exceeded the upper range of chance performance were deemed significant.
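One iteration of this procedure, at a single time point and for a single force-tuned decoder axis, can be sketched as follows. The sketch assumes trialsByCond is a cell array of [nTrials x nFeatures] matrices (one per force-grasp condition) and d is a fixed dPC decoder axis; in the full analysis, dPCA is re-fit on the training trials within every iteration, which is omitted here for brevity.

```matlab
% Minimal sketch of one Monte Carlo leave-group-out iteration at one time
% point, classifying forces by the closest class mean along one decoder axis d.
[nForces, nGrasps] = size(trialsByCond);
classMean = zeros(nForces, nGrasps);      % mean dPC value per condition (training trials)
testProj  = zeros(nForces, nGrasps);      % projections of the held-out pseudo-trials
for f = 1:nForces
    for g = 1:nGrasps
        trialsFG = trialsByCond{f, g};
        holdOut  = randi(size(trialsFG, 1));             % one random test pseudo-trial
        trainIdx = setdiff(1:size(trialsFG, 1), holdOut);
        classMean(f, g) = mean(trialsFG(trainIdx, :) * d);
        testProj(f, g)  = trialsFG(holdOut, :) * d;
    end
end

forceMean = mean(classMean, 2);           % mean dPC value per force level
correct = 0;
for f = 1:nForces
    for g = 1:nGrasps
        [~, pred] = min(abs(testProj(f, g) - forceMean));   % closest force-class mean
        correct   = correct + (pred == f);
    end
end
accuracy = correct / (nForces * nGrasps); % fraction of F x G pseudo-trials correct
```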

Force and Grasp Decoding Using Multiple dPCs

Two additional goals of this study were to determine whether intended forces could be accurately predicted from neural population data and whether these predictions depended on hand grasp configuration. To this end, dPCs that were tuned to force, grasp, and an interaction between force and grasp were used to construct multi-dimensional force and grasp decoders within each session. Specifically, the force decoder was constructed by combining the decoding axes of force-tuned and interacting components into a single, multi-dimensional decoder DF; likewise, the grasp decoder DG was constructed by combining the decoding axes of grasp-tuned and interacting components.

Each of these decoders was used to perform 40 runs of linear force and grasp classification for each of S research sessions per participant, implemented using the aforementioned stratified Monte Carlo leave-group-out cross-validation procedure (S = 6 for T8; S = 1 for T5). As in the single component implementation (Kobak et al., 2016), each run was accomplished in multiple steps. First, the mean values of all dPCs

included within the multi-dimensional decoder were computed for each force-grasp condition, separately for each time point. Second, at each time point, the F x G “pseudo-trials” were projected onto the multi-dimensional decoder axis and classified according to the closest class mean. The proportion of test trials correctly classified at each time point over 100 iterations constituted a time-dependent force or grasp classification accuracy.

The aforementioned computations yielded 40 x S time-dependent force and grasp classification accuracies per participant. Session-averaged, time-dependent force and grasp classification accuracies were computed by averaging the performance over

240 session-runs for participant T8 (40 runs x 6 sessions) and 40 session-runs for participant T5 (40 runs x 1 session). These averages were compared to chance performance, which was computed by performing 100 shuffles of all trials, randomly assigning force or grasp conditions to the shuffled data, and then performing force and grasp classification on each of the shuffled datasets using the multidimensional decoders DF and DG. Time points in which force or grasp classification exceeded the upper bound of chance were deemed to contain significant force-related or grasp-related information.
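The shuffle-based chance estimate can be sketched as below; classifyForce is a hypothetical wrapper around the cross-validated classification procedure described above, and labels holds the true force labels for all trials. Both names are illustrative, not part of the actual analysis code.

```matlab
% Minimal sketch of the shuffle-based chance distribution. classifyForce is a
% hypothetical function handle wrapping the cross-validated classification step.
nShuffles = 100;
chanceAcc = zeros(nShuffles, 1);
for s = 1:nShuffles
    shuffled     = labels(randperm(numel(labels)));   % randomly reassign force labels
    chanceAcc(s) = classifyForce(trialData, shuffled);
end
chanceUpper = max(chanceAcc);   % upper bound of the empirical chance distribution
% accuracies exceeding chanceUpper are deemed significant
```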

To visualize the degree to which individual forces and grasps could be discriminated, confusion matrices were computed over go-phase time windows during which the neural population contained significant force- and grasp-related information.

The time window began when session-averaged, time-dependent classification accuracy exceeded 90% of maximum achieved performance within the go phase, and ended at the end of the go phase. First, classification accuracies for each of the S x 40 session-runs were approximated by averaging classification performance across the pre-specified go-phase time window. These time-averaged accuracies, which are henceforth referred to as mean force and grasp accuracies, were next averaged over all S x 40 session-runs to yield confusion matrix data. In this way, confusion matrices were

computed to visualize force-related discriminability across all trials, force-related discriminability within individual grasp types, and grasp-related discriminability across all trials.

Classification performances for individual forces and individual grasps were statistically compared using parametric tests implemented on mean force and grasp accuracies. Specifically, for each participant, mean classification accuracies for light, medium, and hard forces were compared by implementing one-way ANOVA across mean force accuracies from all S x 40 session runs. The resulting p values were corrected for multiple comparisons using the Benjamini-Hochberg procedure (Benjamini and Hochberg, 1995). Likewise, mean classification accuracies for closed pinch, open pinch, ring pinch, power, and elbow “grasps” were compared by implementing one-way

ANOVA across all mean grasp accuracies. These comparisons were implemented to determine whether offline force and grasp decoding yielded similar versus different classification results across multiple forces and multiple grasps.

Statistical analysis was also used to determine the degree to which grasp affected force decoding accuracy. This was achieved by implementing two-way ANOVA on mean force accuracies that were labelled with the grasps that were used to emulate these forces. The results of the two-way ANOVA showed a statistically significant interaction between force and grasp. Therefore, the presence of simple main effects was assessed within each force level and within each grasp type. Specifically, one-way

ANOVA was implemented on mean accuracies within individual force levels to determine whether light forces, for example, were classified with similar degrees of accuracy across all grasp types. Similarly, one-way ANOVA was implemented on mean accuracies within individual grasps to determine whether intended force level affected force classification accuracy within each grasp. P values resulting from these analyses were corrected for multiple comparisons using the Benjamini-Hochberg procedure.


Finally, this study evaluated how well dPCA force decoders could generalize to novel grasp datasets in T8 Session 5 and T5 Session 7. Specifically, within each session, a multi-dimensional force decoder DF was trained on neural data generated during all but one grasp type, and then its performance was evaluated on the attempted forces emulated using the left-out “novel” grasp. To establish the generalizability of force decoding performance across many novel grasps, this analysis cycled through all available grasps attempted during Session 5 (closed pinch, open pinch, ring pinch, power, elbow extension) and Session 7 (closed pinch, open pinch, ring pinch, power).

For each novel grasp, the trained decoder DF was used to perform 40 runs of stratified

Monte Carlo leave-group-out cross-validated linear force classification on two sets of test data: the “initial grasp” dataset, which originated from the grasps on which the force decoder was trained; and the “novel grasp” dataset, which originated from the left-out test grasp. The resulting time-dependent “initial grasp” and “novel grasp” decoding performances from the go-phase time window during above-90% classification accuracy were averaged over 40 runs and then compared using a standard t-test. P values resulting from the statistical analysis were corrected for multiple comparisons across forces and test grasps using the Benjamini-Hochberg procedure.
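The generalization analysis follows the leave-one-grasp-out scheme sketched below; trainForceDecoder and testForceDecoder are hypothetical wrappers around the dPCA decoder construction and the cross-validated classification steps described earlier, and the variable names are illustrative.

```matlab
% Minimal sketch of leave-one-grasp-out force decoding. trainForceDecoder and
% testForceDecoder are hypothetical wrappers around the dPCA-based decoder.
grasps     = unique(graspLabel);
accInitial = zeros(numel(grasps), 1);     % performance on the training ("initial") grasps
accNovel   = zeros(numel(grasps), 1);     % performance on the left-out ("novel") grasp
for k = 1:numel(grasps)
    novelIdx   = (graspLabel == grasps(k));
    initialIdx = ~novelIdx;
    DF = trainForceDecoder(trialData(initialIdx, :), forceLabel(initialIdx));
    accInitial(k) = testForceDecoder(DF, trialData(initialIdx, :), forceLabel(initialIdx));
    accNovel(k)   = testForceDecoder(DF, trialData(novelIdx, :),   forceLabel(novelIdx));
end
% accInitial and accNovel are then compared with a t-test for each left-out grasp
```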

Results

Characterization of Individual Neural Features

Figure 4-2 shows the activity of four exemplary features from Session 5, chosen to illustrate tuning to force, grasp, both force and grasp independently, and an interaction between force and grasp, as evaluated with 2-way Welch-ANOVA (corrected p < 0.05,

Benjamini-Hochberg procedure). These features demonstrate neural modulation to

forces that T8 attempted to produce using all five grasp conditions: closed pinch, open pinch, ring pinch, power grasp, and elbow extension. Supplemental Figure 4-2.1 shows the activity of four additional features from participant T5. TC features are labelled from

1-192 according to the recording electrodes from which they were extracted.

Corresponding SBP features are labelled from 193-384.

For each feature, column 1 shows neural activity that was averaged across grasp types (within force levels), resulting in trial-averaged feature traces whose differences in modulation were due to force alone. Similarly, Column 2 shows neural activity averaged within individual hand grasps. Here, SBP feature 302 exhibits modulation to force only

(row 1), as indicated by statistically significant go-phase differentiation in activity across multiple force levels, but not across multiple grasp levels. This force-only tuning is what might be expected for a “high-level” coding of force that is independent of grasp type. In contrast, TC feature 190 is statistically tuned to grasp only, in that it exhibits go-phase differentiation across multiple grasps, but not across multiple forces. SBP feature 201, in which multiple forces and multiple grasps are statistically discriminable, is tuned to both force and grasp.

Column 3 of Figure 4-2 displays a graphical representation of the simple main effects of the 2-way Welch-ANOVA analysis, as shown by mean go-phase neural deviations from baseline feature activity during the production of each individual force level using each individual grasp type. Here, SBP features 302 and 201, which were both tuned to force independent of grasp, showed similar patterns in modulation to light, medium, and hard forces within individual grasp types. In contrast, TC feature 83 was tuned to an interaction between force and grasp; accordingly, its modulation to light, medium, and hard forces varied according to which grasp type the participant used to emulate these forces. This type of interaction is what might be expected for a more

“motoric” encoding of force and grasp type. If each grasp requires a different set of

muscles and joints to be active, then a motoric encoding of joint or muscle motion would end up representing force differently depending on the grasp.

Figure 4-1. Data collection scheme for research sessions. A. Experimental setup (Reproduced from (Rastogi et al., 2020)). Participants had two 96-channel microelectrode arrays placed chronically in motor cortex, which recorded neural activity while participants completed a force task. Threshold crossing (TC) and spike band power (SBP) features were extracted from these recordings. B. Research session architecture. Each session consisted of 12-21 blocks, each of which contained ~20 trials (see Table 1). In each trial, participants attempted to generate one of three visually-cued forces with one of four grasps: power, closed pincer (c-pinch), open pinch (o-pinch), ring pinch (r-pinch). During session 5, participant T8 also attempted force production using elbow extension. Each trial contained a preparatory (prep) phase, a go phase where forces were actively embodied, and a stop phase where neural activity was allowed to return to baseline. Participants were prompted with both audio and visual cues, in which a researcher squeezed or lifted an object associated with each force level.


Figure 4-2. Exemplary threshold crossing (TC) and spike band power (SBP) features tuned to task parameters of interest in participant T8. (TC and SBP features in participant T5 are illustrated in Supplemental Figure 4-2.1.) Rows indicate average per-condition activity (PSTH) of four exemplary features tuned to force, grasp, both factors, and an interaction between force and grasp, recorded during session 5 from participant T8 (2-way Welch-ANOVA, corrected p < 0.05, Benjamini-Hochberg method). Neural activity was normalized by subtracting block-specific mean feature activity within each recording block, and then smoothed with a 100-millisecond Gaussian kernel to aid in visualization. Column 1 contains PSTHs averaged within individual force levels (across multiple grasps), such that observable differences between data traces are due to force alone. Similarly, column 2 shows PSTHs averaged within individual grasps (across multiple forces). Column 3 shows a graphical representation of the simple main effects as normalized mean neural deviations from baseline activity during force trials within each of the five grasps. Mean neural deviations were computed over the go phase of each trial and subsequently averaged within each force-grasp pair. Error bars indicate 95% confidence intervals.


Figure 4-3 summarizes the tuning properties of all 384 TC and SBP neural features in participants T8 and T5, as evaluated with robust 2-way Welch-ANOVA.

Specifically, Figure 4-3A shows the fraction of neural features tuned to force, grasp, both force and grasp, and an interaction between force and grasp. Features belonging to the former three groups (i.e., those that exhibited no interactions between force and grasp tuning) were deemed as independently tuned to force and/or grasp. As shown in row 1, the proportion of features belonging to each of these groups varied considerably across experimental sessions. However, during all sessions in both participants, a substantial proportion of features (ranging from 15.4-54.7% of the feature population across sessions) were tuned to force, independent of grasp. In other words, a substantial portion of the measured neural population represented force and grasp independently.

Figure 4-3. Summary of neural feature population tuning to force and grasp. Row 1: fraction of neural features significantly tuned to force, grasp, both force and grasp, and an interaction between force and grasp in participants T8 and T5 (2-way Welch- ANOVA, corrected p < 0.05). Row 2: Fraction of neural features significantly tuned to an interaction between force and grasp, subdivided into force-tuned features within each individual grasp (c-pinch = closed pinch, o-pinch = open pinch, r-pinch = ring pinch). Note that the number of grasp types differed between sessions (see Table 1).


A smaller subset of features exhibited an interaction between force and grasp in both T8 (5.2 +/- 4.2%) and T5 (13.8%). Row 2 of Figure 4-3 further separates these interacting features into those that exhibited force tuning within each individual grasp type, as evaluated by one-way Welch-ANOVA (corrected p < 0.05). Here, the proportion of interacting features tuned to force appeared to depend on grasp type, particularly during sessions 2, 4, 5, 6, and 7, in a session-specific manner. In other words, within a small contingent of the neural feature population, force representation showed some dependence on intended grasp. Taken together, Figure 4-3 suggests that force and grasp are represented both independently and dependently within motor cortex at the level of individual neural features.

Neural Population Analysis and Decoding

Simulated Force Encoding Models

The goal of this study was to clarify the degree to which hand grasps affect neural force representation and decoding performance, in light of conflicting evidence of grasp-independent (Chib et al., 2009; Hendrix et al., 2009; Pistohl et al., 2012; Intveld et al., 2018) versus grasp-dependent (Hepp-Reymond et al., 1999; Degenhart et al., 2011) force representation in the literature. Prior to visualizing population-level representation of force, we first illustrate these differing hypotheses with a toy example of expected grasp-independent versus grasp-dependent (interacting) representations of force within the neural space. Figure 4-4 simulates grasp-independent force encoding with an additive model, given by Equation 4, and grasp-dependent force encoding with a scalar model, given by Equation 5.

xij = gi + f · sj     (4)

xij = sj · gi     (5)


In these equations, xij is a vector of trial-averaged activity from 100 simulated neural features, generated during a particular grasp i and force j. Here, gi is a 100 x 1 vector of normalized baseline feature activity during the grasp i, f is a 100 x 1 vector of normalized baseline neural feature activity during force generation, and sj is a discrete, scalar force level (1, 2 or 5). The vectors gi and f contained values drawn from the standard normal distribution.

Within the additive model in Equation 4, the overall neural activity xij is represented as a summation of independent force- and grasp-related contributions.

Thus, Equation 4 models independent neural force representation, in which force is represented at a high level independent of grasp. In contrast, the scalar encoding model in Equation 5 models the neural activity xij as resulting from a multiplication of the force level sj and the baseline grasping activity gi. Such an effect might be expected if force were encoded as low-level tuning to muscle activity. In this case, different force levels would result in the same pattern of muscle activity being activated to a lesser or greater degree, thus scaling the neural activity associated with that grasp, resulting in a coupling between force and grasp. Therefore, Equation 5 models an interacting (grasp-dependent) neural force representation.
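The two toy models can be simulated in a few lines of MATLAB; the sketch below follows Equations 4 and 5 with 100 features, five grasps, and force levels s = 1, 2, and 5, and the variable names are illustrative.

```matlab
% Minimal sketch of the simulated encoding models in Equations 4 and 5.
nFeat = 100;  nGrasp = 5;  s = [1 2 5];       % force levels
f     = randn(nFeat, 1);                      % force-related feature activity
xAdd  = zeros(nFeat, nGrasp, numel(s));       % additive (grasp-independent) model
xScal = zeros(nFeat, nGrasp, numel(s));       % scalar (interacting) model
for i = 1:nGrasp
    g = randn(nFeat, 1);                      % baseline activity for grasp i
    for j = 1:numel(s)
        xAdd(:, i, j)  = g + f * s(j);        % Equation 4: independent encoding
        xScal(:, i, j) = s(j) * g;            % Equation 5: interacting encoding
    end
end
```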

Row 2 of Figure 4-4 shows simulated neural activity resulting from the independent and interacting encoding models within two-dimensional PCA space. In the independent model, force is represented in a consistent way across multiple simulated grasps, as indicated by the force axis. In contrast, within the interacting model, force representation differs according to grasp. These differences are further highlighted in Row

3 of Figure 4-4, in which dPCA was applied to the simulated neural data (over 20 simulated trials) resulting from each model. While the additive model exhibited no interaction-related neural variance, the scalar model yielded a substantial proportion of force, grasp, and interaction-related variance. Note that within these toy models, the

simulated neural activity did not vary over its time course and, thus, exhibited no condition-independent (time-related) variance.

Figure 4-4. Simulated models of independent and interacting (grasp-dependent) neural representations of force. Row 1: Equations corresponding to the independent and interacting models of force representation. Here, xij represents neural feature activity generated during a particular grasp i and force j, gi represents baseline feature activity during grasp i, f represents force-related neural feature activity, and sj is a discrete force level. Row 2: Simulated population neural activity projected into a two- dimensional PCA space. Estimated force axes within the low-dimensional spaces are shown as blue lines. Row 3: Summary of variances accounted for by the top 20 demixed principal components extracted from the simulated neural data from each model. Here, the variance of each individual component is separated by marginalization (force, grasp, and interaction between force and grasp). Pie charts indicate the percentage of total signal variance due to these marginalizations.

Neural Population Analysis

Figure 4-5 shows neural population-level activity patterns during two sessions from participants T8 and T5. In the first two columns, dPCA and traditional PCA were applied to all force-grasp conditions in both participants. In the third column, these dimensionality reduction techniques were applied solely to force trials attempted using

power grasping and elbow extension, in order to further quantify force representation across the entire upper limb.


Figure 4-5. Neural population-level activity patterns. A. Demixed principal components (dPCs) isolated from all force-grasp conditions from T8 Session 5, all force-grasp conditions from T5 Session 7, and power versus elbow conditions from T8 Session 5 neural data. (Additional sessions are shown in Supplemental Figure 4-5.1). For columns 1 and 2, dPCs were tuned to four marginalizations of interest: time (condition-independent tuning), force, grasp, and an interaction between force and grasp (f/grasp tuning). For column 3, dPCs were tuned to time, force, movement, and an interaction between force and movement (f/movement tuning). dPCs that account for the highest amount of variance in the per-marginalization neural activity are shown here. These variances are included in brackets next to each component number. Vertical bars indicate the start and end of the go phase. Horizontal bars indicate time points at which the decoder axes of the pictured components classified forces (row 2), grasps/movements (row 3), or force-grasp/force-movement pairs (row 4) significantly above chance. B. Summary of variances accounted for by the top 20 dPCs and PCs from each session. Here, the variance accounted for by the dPCs approaches the variance accounted for by traditional PCs. Horizontal dashed lines indicate total signal variance, excluding noise. Row 2 shows the variance of each individual component, separated by marginalization. C. Go-phase activity within a two-dimensional PCA space. Estimated force axes within the low-dimensional PCA spaces are shown as blue lines.

The twelve dPCs shown in Figure 4-5A explain the highest amount of variance within each of the four marginalizations of interest, for each participant. For example, participant T8’s Component #4 (row 2, column 1) is the largest force-tuned component in the dataset and explains 3.3% of the neural data’s overall variance. Similarly, T8’s

Component #2 (row 3, column 1), which captures grasp-related activity, explains 8.1% of neural variance. Horizontal black bars on each panel indicate time points at which individual dPC decoding axes predict intended forces (row 2), grasps (row 3), and force- grasp pairs (row 4) more accurately than chance performance. In both participants, single components were able to offline-decode intended forces at above-chance levels solely during the active “go” phase of the trial, indicated by the vertical gray lines.

However, grasp-tuned components were able to accurately predict intended grasps at nearly all time points during the trial, including the prep and stop phases. These trends were observed when dPCA was applied across all force-grasp conditions (Columns 1 and 2) and across solely power and elbow trials in participant T8 (Column 3).


Figure 4-5B summarizes the variance accounted for by the entire set of dPCs extracted from each dataset. Specifically, the first row shows the cumulative variance captured by the dPCs (red), as compared to components extracted with traditional PCA

(black). Here, dPCs extracted from different marginalizations were not necessarily orthogonal, and accounted for less cumulative variance than traditional PCs because the axes were optimized for demixing in addition to just capturing maximum variance.

However, the cumulative dPC variance approached total signal variance, as indicated by the dashed horizontal lines in each panel, and was thus deemed a faithful representation of the neural population data.

The second row of Figure 4-5B further subdivides the variances of individual dPCs into per-marginalization variances. Here, most of the variance in each extracted component can be attributed to one primary marginalization, indicating that the extracted components are fairly well demixed. Pie charts indicate the percentage of total signal variance (excluding noise) due to force, grasp, force/grasp interactions, and condition- independent signal components. In both participants, condition-independent components accounted for the highest amount of neural signal variance, followed by grasp, then force, then force-grasp interactions. In other words, more variance could be attributed to putative grasp representation than force representation at the level of the neural population. Additionally, force-grasp and force-movement interactions only accounted for a small amount of neural variance, even when dPCA was applied solely across power grasping and elbow extension trials (Column 3). Similar results were found when analyzing additional sessions, as shown in Supplemental Figure 4-5.1.

Finally, Figure 4-5C visualizes the trial-averaged, go-phase-averaged neural activity from each dataset within two-dimensional PCA space. Within these plots, each data point represents the average neural activity corresponding to an individual force-grasp condition. In all panels, light, medium, and hard forces, represented as different

shapes within PCA space, aligned to a consistent force axis (shown in blue) across multiple grasps – and also across power grasping and elbow extension movements.

The findings exhibited within Figures 4-5B and 4-5C closely resemble the simulation results from the additive force encoding model (Figure 4-4, Equation 4), which would be expected for grasp-independent force representation. However, these results differ slightly from those expected from a purely additive model, in that some amount of interaction-related variance was present in Figure 4-5B, and that the force activity patterns in Figure

4-5C deviated to a small degree from the force axis. These small deviations somewhat resemble the scalar encoding model (Figure 4-4, Equation 5), which would be expected for interacting force and grasp representations.

Time-Dependent Decoding Performance

Figure 4-6 summarizes the degree to which intended forces and grasps could be predicted from the neural activity using the aforementioned dPCs. Here, offline force decoding accuracies were computed by using a force decoder DF – created by assembling the decoding axes of multiple force-tuned and interacting components – to classify light, medium, and hard forces over multiple session-runs of a 100-fold, stratified, leave-group-out Monte Carlo cross-validation scheme, as described in the

Methods. Similarly, grasp decoding accuracies in row 3 were computed using a grasp decoder DG, created by assembling the decoding axes of grasp-tuned and interacting dPCs. Row 1 of Figure 4-6 shows time-dependent force decoding results, averaged over

S x 40 session-runs in participants T8 (S = 6) and T5 (S = 1). Row 2 further subdivides the results of Row 1 into force decoding accuracies achieved during individual hand grasps. Finally, Row 3 shows time-dependent grasp decoding results for both participants.


Figure 4-6. Time-dependent classification accuracies for force (rows 1-2) and grasp (row 3). Data traces were smoothed with a 100 millisecond boxcar filter to aid in visualization. Shaded areas surrounding each data trace indicate the standard deviation across 240 session-runs for most trials in participant T8, 40 session-runs for elbow extension trials in participant T8, and 40 session-runs in participant T5. Gray shaded areas indicate the upper and lower bounds of chance performance over S x 100 shuffles of trial data, where S is the number of sessions per participant. Time points at which force or grasp is decoded above the upper bound of chance are deemed to contain significant force- or grasp-related information. Blue shaded regions indicate the time points used to compute go-phase confusion matrices in Figure 4-7. Time-dependent classification accuracies for individual force levels and grasp types are shown in Supplemental Figure 4-6.1. Grasp classification accuracies, separated by number of attempted grasp types, are presented in Supplemental Figure 4-6.2.

Here, intended forces were decoded at levels exceeding the upper bound of chance solely during the go phase, regardless of the grasp used to emulate the force.

The exception to this trend occurred during elbow extension trials, in which intended forces were decoded above chance during the stop phase. In contrast, intended grasps

were decoded above chance during all trial phases, regardless of the number of grasps from which the decoder discriminated (Supplemental Figure 4-6.2) – though go-phase grasp decoding accuracies tended to exceed those achieved during other trial phases.

In summary, both intended forces and grasps were decoded above chance during time periods when participants intended to produce these forces and grasps – and in some cases, during preparatory and stop periods. Time-dependent decoding accuracies for individual force levels and individual grasp types are displayed in

Supplemental Figure 4-6.1.

Go-Phase Decoding Performance

Figure 4-7 summarizes go-phase force and grasp decoding accuracies as confusion matrices. Here, time-dependent classification accuracies for each force level and each grasp type were averaged over go-phase time windows (see Figure 4-6) that commenced when overall classification performance exceeded 90% of its maximum, and ended with the end of the go phase. This time period was selected in order to exclude the rise time in classification accuracy at the beginning of the go phase, so that the resulting mean trial accuracies reflected stable values. The mean trial accuracies were then averaged over all session-runs in each participant to yield confusion matrices of true versus predicted forces and grasps. Figure 4-7B further subdivides overall three-force classification accuracies into force classification accuracies achieved during each individual grasp type (columns) in both participants (rows). The confusion matrices in

Figure 4-7 represent cumulative data across multiple sessions in participant T8, and one session in participant T5. Supplemental Figures 4-7.1, 4-7.2 and 4-7.3 statistically compare decoding accuracies between individual force levels and grasp types within each individual session.


In Figure 4-6A, Figure 4-7A, and Supplemental Figure 4-6.1, overall three-force classification accuracies exceeded the upper limit of chance in both participants.

However, the decoding accuracies of individual force levels were statistically different.

For almost all sessions, hard forces were classified more accurately than light forces

(with the exception of Session 4, during which light and hard force classification accuracy was statistically similar); and both light and hard forces were always classified more accurately than medium forces. More specifically, hard and light forces were decoded above chance across all sessions, while medium force classification accuracies often failed to exceed chance in both participants.

In contrast, both overall and individual grasp decoding accuracies always exceeded the upper limit of chance. According to Figure 4-7A and Supplemental Figure

4-7.1B, certain grasps were decoded more accurately than others. Specifically, in participant T8, the power and ring pincer grasps were often classified more accurately than the open and closed pincer grasps across multiple sessions (corrected p << 0.05, one-way ANOVA). Elbow extension, which required the participant to attempt force production in the upper limb in addition to the hand, was classified more accurately than any of the hand grasps during Session 5 (corrected p << 0.05). In participant T5, grasp classification accuracies, in order from greatest to least, were ring pincer > open pincer > power > closed pincer. Regardless, grasp decoding performance always exceeded force decoding performance in both participants, as seen in Figures 4-6 and

4-7.

In Figure 4-7 and Supplemental Figure 4-7.3, overall and individual force classification accuracies varied depending on the hand grasps used to attempt these forces. Specifically, classification accuracies for forces attempted with different grasps were, with few exceptions, statistically different (corrected p << 0.05, one-way ANOVA).

For example, in Figure 4-7B and Supplemental Figure 4-7.3, hard forces attempted

using the open pincer grasp were always classified more accurately than hard forces attempted using the ring pincer grasp in both participants. In other words, grasp type affected how accurately forces were decoded.

Figure 4-7. Go-phase confusion matrices. A. Time-dependent classification accuracies (shown in Figure 4-6) were averaged over go-phase time windows that commenced when performance exceeded 90% of maximum, and ended with the end of the go phase. These yielded mean trial accuracies, which were then averaged over all session-runs in each participant. Overall force and grasp classification accuracies are indicated above each confusion matrix. Standard deviations across multiple session-runs are indicated next to mean accuracies (cp = closed pinch, op = open pinch, rp = ring pinch, pow = power, elb = elbow extension). Statistical comparisons between the achieved classification accuracies are shown in Supplemental Figure 4-7.1. B. Confusion matrices now separated by the grasps that participants T8 (row 1) and T5 (row 2) used to attempt producing forces. Statistical comparisons between the achieved force accuracies are shown in Supplemental Figures 4-7.2 and 4-7.3.


Figure 4-8. Go-phase force classification accuracy for novel (test) grasps. Within each session (rows), dPCA force decoders were trained on neural data generated during all grasps, excluding a single leave-out grasp type (columns). The force decoder was then evaluated over the set of training grasps (gray bars), as well as the novel leave-out grasp type (white bars). The horizontal dotted line in each panel indicates the upper bound of the empirical chance distribution for force classification.

Discussion

The current study sought to determine how human motor cortex encodes hand grasps and discrete forces, the degree to which these representations overlapped and interacted, and how well forces and grasps could be decoded. Three major findings emerged from this work. First, force information was present in – and could be decoded from – intracortical neural activity in a consistent way across multiple hand grasps. This suggests that force is, to some extent, represented at a high level, independent of motion and grasp. However, as a second finding, grasp affected force representation and classification accuracy. This suggests that there is a simultaneous, low-level, motoric representation of force. Finally, hand grasps were classified more accurately and explained more neural variance than forces. These three findings and their implications for future online force decoding efforts are discussed here.


Force information persists across multiple hand grasps in individuals with tetraplegia.

Overall Force Representation

Force was represented in a consistent way across multiple hand grasps within the neural activity. In particular, a substantial contingent of neural features was tuned to force independent of grasp (Figure 4-3); force-tuned components explained more population-level variance than components tuned to force-grasp interactions (Figure 4-

5); intended forces were accurately predicted from population-level activity across multiple grasps (Figures 4-6 – 4-7); and force decoding performance generalized to novel grasps (Figure 4-8). The study results suggest that, to a large extent, force is represented at a high level within motor cortex, distinct from grasp, in accordance with the independent force encoding model described by Equation 4 (Figure 4-4). This conclusion agrees with previous motor control studies (Mason et al., 2004; Chib et al.,

2009; Casadio et al., 2015), which suggest that at the macroscopic level, force and motion may be represented independently. In particular, Chib and colleagues showed that descending commands pertaining to force and motion could be independently disrupted via TMS, and that these commands obeyed simple linear superposition laws when force and motion tasks were combined.

Furthermore, intracortical non-human primate studies (Mason et al., 2006;

Hendrix et al., 2009; Intveld et al., 2018) suggest that forces are encoded largely independently of the grasps used to produce them. However, in those studies and within the present work, the hand grasps used to produce forces likely recruited overlapping sets of muscle activations. Thus, the relatively low degree of interactions observed here and in the literature could actually be due to overlapping muscle activations rather than truly grasp-independent force representation. For this reason, participant T8 emulated

forces using elbow extension in addition to the other hand grasps during Session 5. In

Column 3 of Figure 4-5, dPCA was implemented solely on force trials emulated using elbow extension and power grasping, which involved sets of muscles that operated relatively independently. The resulting dPCA decomposition yielded a slightly larger amount of variance due to interaction (4%) that was nonetheless smaller than that attributed to force (~12%) or grasp (~35%). Furthermore, discrete force data, when represented within two-dimensional PCA space, aligned closely with a force axis that was conserved over both power grasping and elbow extension movements. These data provide further evidence that force may be encoded independently of movements and grasps.

Representation of Discrete Forces

While overall force accuracies exceeded chance performance (Figure 4-6), hard and light forces were classified more accurately than medium forces across all hand grasps, sessions and participants. In fact, medium forces often failed to exceed chance classification performance (Figure 4-7A, Supplemental Figure 4-6.1B).

Notably, classification performance likely depended on the participants’ ability to kinesthetically attempt various force levels and grasps without feedback, despite having tetraplegia for several years prior to study enrollment. Anecdotally, participant T8 reported that light and hard forces were easier to attempt than medium forces, because they fell at the extremes of the force spectrum and could thus be reproduced consistently. Though his confidence with reproducing all forces improved with training, it is conceivable that, without sensory feedback, medium forces were simply more difficult to emulate cognitively, and thus yielded neural activity patterns that were more inconsistent and difficult to discriminate.


Additionally, the present and prior studies suggest that neural activity increases monotonically with increasing force magnitude (Evarts, 1969; Thach, 1978; Cheney and

Fetz, 1980; Wannier et al., 1991; Ashe, 1997; Cramer et al., 2002). As a result, medium forces, by virtue of being intermediate to light and hard forces, may occupy an intermediate region of the neural space, and may thus be more easily confused with other forces during classification. In the present work, population-level activity associated with medium and light forces appeared similar (Figure 4-5,

Supplemental Figure 4-5.1). Likewise, medium forces were most often confused with light forces during offline force classification (Figure 4-7). The decoding results are consistent with previous studies (Murphy et al., 2016; Downey et al., 2018), in which intermediate force levels were more difficult to discriminate than forces at the extremes of the range evaluated.

Hand posture affects force representation and force classification accuracy.

Single-Feature Versus Population Interactions between Force and Grasp

As previously stated, force information was neurally represented, and could be decoded, across multiple hand grasps (Figures 4-3, 4-5 - 4-8). However, hand grasp also appears to influence how force information is represented within and decoded from motor cortex. For example, grasp affected how accurately light, medium, and hard forces were predicted from neural activity (Figure 4-7B, Supplemental Figure 4-7.3).

Furthermore, despite small force-grasp interaction population-level variance (Figure 4-

5B, Supplemental Figure 4-5.1B), as many as 12.0% and 13.8% of neural features exhibited tuning to these interaction effects in participants T8 and T5, respectively

(Figure 4-3), providing further evidence that force and grasp representations are not entirely independent.


When considering the relatively large number of interacting features and the small population-level interaction variance, one might initially conclude that a discrepancy exists between feature- and population-level representation of forces and grasps. However, we note that the amount of variance explained by a parameter of interest does not always correspond directly to the percentage of features tuned to this parameter. Here, the interaction effects detected within individual features likely reached statistical significance with small effect size. In other words, while real interaction effects were present, as shown in the feature data (Figure 4-3), the overall effect was small, as exhibited within the population activity (Figure 4-5). From this perspective, the seemingly incongruous feature- and population-level results actually complement one another and inform our understanding of how forces are represented in motor cortex.

Force and Grasp Have Both Independent and Interacting Representations in Cortex

Thus far, studies of force versus grasp representation have largely fallen into two opposing groups. The first proposes that motor parameters are represented independently (Carmena et al., 2003; Mason et al., 2006; Hendrix et al., 2009; Intveld et al., 2018). Such representation implies that the motor cortex encodes an action separately from its intensity, then combines these two events downstream in order to compute the EMG patterns necessary to realize actions in physical space.

In contrast, the second group suggests that force, grasp, and other motor parameters interact within the neural space (Hepp-Reymond et al., 1999; Degenhart et al., 2011). They propose that motor parameters cannot be fully de-coupled (Kalaska,

2009; Branco et al., 2019), and that it may be more effective to utilize the entire motor output to develop a comprehensive mechanical model, rather than trying to extract single parameters such as force and grasp (Ebner et al., 2009).


The current study presents evidence supporting both independent and interacting representations of force and grasp. These seemingly contradictory results actually agree with a previous non-human primate study that recorded from motor areas during six combinations of forces and grasps (Intveld et al., 2018). Intveld and colleagues found that, while force-grasp interactions explained only 0-3% of population variance, roughly

10-20% of recorded neurons exhibited such interactions. These results are highly consistent with those outlined in the present study (Figures 4-3, 4-5). Thus, the neural space could consist of two contingents: one that encodes force at a high level independent of grasp and motion, and another that encodes force as low-level tuning to muscle activity, resulting in interactions between force and grasp. The second contingent, however small, significantly impacts how accurately forces and grasps are decoded (Figure 4-7B,

Supplemental Figure 4-7.3), and should thus not be discounted.

Hand grasp is represented to a greater degree than force at the level of the neural population

Go-Phase Grasp Representation

In the present datasets, grasps were decoded more accurately (Figures 4-6 – 4-

7, Supplemental Figure 4-6.1B) and explained more signal variance (Figure 4-5B,

Supplemental Figure 4-5.1B) than forces. This suggests that within the sampled region of motor cortex, grasp is represented to a greater degree than force, which agrees with prior literature (Hendrix et al., 2009; Intveld et al., 2018).

Previous studies suggest several reasons why force may be represented to a lesser degree than grasp in the current work. First, force information may have stronger representation in caudal M1, particularly on the banks of the central sulcus (Kalaska and

Hyde, 1985; Sergio et al., 2005; Hendrix et al., 2009). Second, force-tuned neurons in motor cortex respond more to the direction of applied force than to its magnitude (Kalaska

184 and Hyde, 1985; Kalaska et al., 1989; Taira et al., 1996). Finally, intracortical non-human primate studies (Georgopoulos et al., 1983; Georgopoulos et al., 1992) and fMRI studies in humans (Branco et al., 2019) suggest that motor cortical neurons respond more to the dynamics of force than to static force tasks. The present work, which recorded from rostral motor cortex and studied the representation of static, non-directional forces, may therefore have detected weaker force-related representation than would have been possible from more caudally-placed recording arrays during a dynamic, functional force task.

Additionally, both study participants were paralyzed and deafferented and received no sensory feedback regarding the forces and grasps they attempted. Previous work suggests that in individuals with tetraplegia, discrepancies exist between the representation of kinematic parameters such as grasp – which remain relatively intact due to their reliance on visual feedback – and kinetic parameters such as force (Rastogi et al., 2020). Specifically, since force-related representation relies heavily on proprioceptive and tactile feedback (Tan et al., 2014; Tabot et al., 2015; Schiefer et al., 2018), whose neural pathways are altered during tetraplegia (Solstrand Dahlberg et al., 2018), the current study may have yielded weaker force-related representation than if this feedback had been included. Therefore, further investigations of force representation are needed in individuals with tetraplegia during naturalistic, dynamic tasks that incorporate sensory feedback – either from intact sensation or from intracortical microstimulation (Flesher et al., 2016) – in order to determine the full extent of motor cortical force representation and to maximize force decoding performance.

Grasp Representation during Prep and Stop Phases

Unlike forces, which were represented primarily during the active “go” phase of the trial, grasps were represented throughout the entire task (Figures 4-5 – 4-6), even during the preparatory and stop phases. The ubiquitous representation of grasp observed here could be partially explained by the behavioral task. As described in the Methods, research sessions consisted of multiple data collection blocks, each of which was assigned to a particular hand grasp, and cycled through three attempted force levels within each block (Figure 4-1B). Thus, while attempted force varied from trial to trial, attempted hand grasps were constant over each block and known by participants in advance. When individuals have prior knowledge of one task parameter but not another, information about the known parameter can appear within the baseline activity (Vargas-Irwin et al., 2018). Therefore, grasp-related information may have been represented within the neural space during non-active phases of the trial, simply by virtue of being known in advance.

Additionally, the placement of the recording arrays could have influenced grasp representation in this study. As described in the Methods, two microelectrode arrays were placed within the “hand knob” of motor cortex in each participant (Yousry et al., 1997). These arrays may have recorded from “visuomotor neurons,” which modulate both to grasp execution and to the presence of graspable objects prior to active grasp (Carpaneto et al., 2011), or from neurons that are involved with motor planning of grasp (Schaffelhofer et al., 2015). These neurons have typically been attributed to area F5, a homologue of premotor cortex in non-human primates. Indeed, recent work in a human participant indicates that the precentral gyrus, rather than belonging to primary motor cortex, is actually part of premotor cortex (Willett et al., 2020). Thus, the arrays in this study likely recorded from premotor neurons, which modulate to grasp during both visuomotor planning and grasp execution, as was observed here.


Implications for Force Decoding

Hand Grasp Affects Force Decoding Performance

Our decoding results demonstrate that, in individuals with tetraplegia, forces can be decoded offline from neural activity across multiple hand grasps (Figures 4-6 – 4-8). These results agree with the largely independent representation of force and grasp within single features (Figure 4-3) and the neural population (Figure 4-5). From a functional standpoint, this supports the feasibility of incorporating force control into real-time iBCI applications. On the other hand, grasp affects how accurately discrete forces are predicted from neural data (Figure 4-7B, Supplemental Figure 4-7.3). Therefore, future robust force decoders may need to account for additional motor parameters, including hand grasp, in order to maximize performance.

Decoding Motor Parameters with Dynamic Neural Representation

The present study decoded intended forces from population activity at multiple time points, with the hope that force representation and decoding performance would be preserved throughout the go phase of the task. We found that force-related activity at the single feature (Figure 4-2, Supplemental Figure 4-2.1) and population levels (Figure 4-5, Supplemental Figure 4-5.1) exhibited both tonic and dynamic characteristics. That is, when study participants attempted to produce static forces, neural modulation to force varied with time to some degree.

The observed dynamic characteristics are consistent with previous results in humans (Murphy et al., 2016; Downey et al., 2018; Rastogi et al., 2020). In particular, Downey and colleagues found that force decoding during a virtual, open-loop, grasp-and-transport task was above chance during the grasp phase of the task, but no greater than chance during static attempted force production during the transport phase. These results support the idea that motor cortex encodes changes in force, rather than (or in addition to) discrete force levels themselves (Smith et al., 1975; Georgopoulos et al., 1983; Wannier et al., 1991; Georgopoulos et al., 1992; Picard and Smith, 1992; Boudreau and Smith, 2001; Paek et al., 2015).

However, the presence of tonic elements agrees with intracortical studies (Smith et al., 1975; Wannier et al., 1991), which demonstrated both tonic and dynamic neural responses to executed forces; and fMRI studies (Branco et al., 2019), which demonstrated a monotonic relationship between the BOLD response and static force magnitudes. Moreover, despite the presence of dynamic response elements, offline force classification performance remained relatively stable throughout the go phase (Figure 4-6, Supplemental Figure 4-6.1), suggesting that the tonic elements could allow for adequate real-time force decoding using linear techniques alone. This may be especially true when decoding forces during dynamic functional tasks, which have been shown to elicit stronger, more consistent neural responses within motor cortex (Georgopoulos et al., 1983; Georgopoulos et al., 1992; Branco et al., 2019).

Nonetheless, real-time force decoding would likely benefit from an exploration of a wider range of encoding models. For example, an encoding model that includes the derivative of force, implemented within an online iBCI decoder, could prove useful.

Decoding of Discrete Versus Continuous Forces

The present work continues previous efforts to characterize discrete force representation in individuals with paralysis (Cramer et al., 2005; Downey et al., 2018; Rastogi et al., 2020) by accurately classifying these forces across multiple hand grasps – especially when performing light versus hard force classification (Figure 4-7). This supports the feasibility of enabling discrete (“state”) control of force magnitudes across multiple grasps within iBCI systems, which would allow the end iBCI user to perform functional grasping tasks requiring varied yet precise force outputs. Perhaps because discrete force control alone would enhance iBCI functionality, relatively few studies have attempted to predict forces along a continuous range of magnitudes. Thus far, continuous force control has been achieved in non-human primates (Carmena et al., 2003; Chen et al., 2014a) and able-bodied humans (Pistohl et al., 2012; Flint et al., 2014), but not in individuals with tetraplegia. If successfully implemented, continuous force control could restore more nuanced grasping and object interaction capabilities to individuals with motor disabilities.

However, during the present work (Figure 4-7, Supplemental Figure 4-6.1) and additional discrete force studies (Murphy et al., 2016; Downey et al., 2018), intermediate force levels were often confused with their neighbors, and thus more difficult to decode. Therefore, implementing continuous force control may pose challenges in individuals with tetraplegia. Enhancing force-related representation in these individuals via the aforementioned techniques – including the introduction of dynamic force tasks, closed-loop sensory feedback, and derivative force encoding models – may boost overall performance sufficiently to enable continuous force decoding. Regardless, more investigations are needed to determine the extent to which continuous force control is possible in iBCI systems for individuals with tetraplegia.

Concluding Remarks

This study found that, while force information was neurally represented and could be decoded across multiple hand grasps in a consistent way, grasp type had a significant impact on force classification accuracy. From a neuroscientific standpoint, these results suggest that force has both grasp-independent and grasp-dependent (interacting) representations within motor cortex in individuals with tetraplegia. From a functional standpoint, they imply that in order to incorporate force as a control signal in human iBCIs, closed-loop force decoders should ideally account for interactions between force and other motor parameters to maximize performance.

Supplemental Materials

Supplemental Figure 4-2.1. Exemplary threshold crossing (TC) and spike band power (SBP) features tuned to task parameters of interest in participant T5, presented as in Figure 4-2. Note the presence of sharp activity peaks during the prep and stop phases of the trial, which were due to the presence of visual cues (Rastogi et al., 2020).


Supplemental Figure 4-5.1. Neural population-level activity patterns for all sessions, presented as in Figure 4-5. A. Demixed principal components (dPCs) isolated from all individual sessions of neural data. B. Summary of variances accounted for by the top 20 dPCs from each exemplary session.


Supplemental Figure 4-6.1. Time-dependent classification accuracies for individual force levels and grasp types. A. Time-dependent classification accuracies for force (row 1) and grasp (row 2), separated by force class and grasp class, respectively. Data traces were smoothed with a 100 millisecond boxcar filter to aid in visualization. Shaded areas surrounding each data trace indicate the standard deviation across 240 session-runs during most trials in participant T8, 40 session-runs during elbow extension trials in participant T8, and 40 session-runs in participant T5. Gray shaded regions indicate the upper and lower bounds of chance performance over S x 100 shuffles of trial data, where S is the number of sessions per participant. Blue shaded regions indicate the time points used to compute go-phase confusion matrices. B. Time-dependent force classification accuracies during individual grasps in participants T8 (row 1) and T5 (row 2). Blue shaded regions indicate the time points used to compute go-phase confusion matrices. Decoding performances were averaged over S x 40 session-runs, where S is the number of sessions per participant.
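For reference, shuffle-based chance bounds of the kind shown in the gray shaded regions can be obtained by repeatedly permuting trial labels and re-running the classification. The sketch below (Python) is illustrative only; the linear discriminant classifier and 5-fold cross-validation are assumptions rather than the exact pipeline used here:

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    def chance_bounds(features, labels, n_shuffles=100, seed=0):
        """Estimate chance-level decoding accuracy by permuting trial labels."""
        rng = np.random.default_rng(seed)
        accuracies = []
        for _ in range(n_shuffles):
            shuffled = rng.permutation(labels)            # break the feature-label pairing
            clf = LinearDiscriminantAnalysis()
            accuracies.append(cross_val_score(clf, features, shuffled, cv=5).mean())
        return np.percentile(accuracies, [2.5, 97.5])     # lower and upper chance bounds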


Supplemental Figure 4-6.2. Time-dependent grasp classification accuracies by number of grasps attempted per session in participant T8. Data traces were smoothed with a 100 millisecond boxcar filter to aid in visualization. Shaded areas surrounding each data trace indicate the standard deviation across 40 runs during each session in participant T8. Gray shaded regions indicate the upper and lower bounds of chance performance over 100 shuffles of trial data per session. Intended grasp is classified above chance performance at all trial time points, regardless of the number of grasps to be decoded.


Supplemental Figure 4-7.1. Statistics for go-phase force and grasp classification accuracies. A. Force classification accuracy histograms (row 1) and corrected p values (row 2). Hard and light forces are classified significantly more accurately than medium forces across all sessions (p < 0.05). B. Grasp classification accuracy histograms (row 1) and corrected p values (row 2). Decoding performance differed significantly between grasps across all sessions.


Supplemental Figure 4-7.2. Statistics for go-phase force classification accuracies within individual grasp types. A one-way ANOVA was implemented on force classification accuracies achieved during different grasp types. A. Force classification accuracy histograms. B. P values between force pairs, corrected for multiple comparisons across grasps and sessions using the Benjamini-Hochberg procedure. Within each grasp, hard and light forces were classified more accurately than medium forces across all sessions (p < 0.05).


Supplemental Figure 4-7.3. Statistics for go-phase force classification accuracies within individual force levels. A one-way ANOVA was implemented on the force classification accuracies achieved during different grasp types. A. Force classification accuracy histograms, color-coded by the grasp type used to produce each force level. B. P values between pairs of grasps used to produce each individual force level, corrected for multiple comparisons across forces and sessions using the Benjamini-Hochberg procedure. The decoding performance for each discrete force level was significantly different across grasps (p < 0.05), indicating that grasp type affected force decoding performance.
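For reference, the Benjamini-Hochberg correction applied to the p values in Supplemental Figures 4-7.1 through 4-7.3 can be implemented in a few lines. The sketch below (Python, illustrative only; equivalent routines exist in common statistics libraries) returns which hypotheses survive false discovery rate control:

    import numpy as np

    def benjamini_hochberg(p_values, alpha=0.05):
        """Return a boolean mask of hypotheses rejected under BH false discovery rate control."""
        p = np.asarray(p_values, dtype=float)
        m = p.size
        order = np.argsort(p)                        # rank p-values from smallest to largest
        thresholds = alpha * (np.arange(1, m + 1) / m)
        below = p[order] <= thresholds
        reject = np.zeros(m, dtype=bool)
        if below.any():
            k = np.max(np.where(below)[0])           # largest rank i with p_(i) <= alpha * i / m
            reject[order[:k + 1]] = True             # reject all hypotheses up to that rank
        return reject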


Chapter 5: Discussion and Conclusions

Here, we outline contributions of the research presented in this dissertation and suggest neuroscientific and neurorehabilitative future directions based on our findings.

Research Contributions

General Contributions

This study expanded upon previous work demonstrating the neural representation of executed forces in able-bodied individuals (Dettmers et al., 1995; Thickbroom et al., 1998; Cramer et al., 2002; Ehrsson et al., 2002; Kuhtz-Buschbeck et al., 2008; Ward et al., 2008; Degenhart et al., 2011; Paek et al., 2015; Murphy et al., 2016; Wang et al., 2017) and imagined forces in individuals with paralysis (Cramer et al., 2005; Downey et al., 2018). Cramer and colleagues showed that imagined forces were represented in multiple individuals with spinal cord injury using non-invasive neuroimaging techniques, while Downey and colleagues successfully decoded four imagined force levels from intracortical signals generated by a single individual with tetraplegia.

To our knowledge, the present study was the first to confirm that this force-related neural activity is preserved in multiple individuals with tetraplegia, years after initial injury, at the resolution of intracortical activity. Force-related intracortical activity in individuals with tetraplegia appears to resemble activity patterns demonstrated in non-human primates and able-bodied humans. Namely, static forces have a monotonic relationship with neural activity (Evarts, 1969; Hepp-Reymond et al., 1978; Thach, 1978; Cheney and Fetz, 1980; Evarts et al., 1983; Dettmers et al., 1995; Thickbroom et al., 1998; Cramer et al., 2002; Ehrsson et al., 2002; Kuhtz-Buschbeck et al., 2008; Ward et al., 2008), and single features have varying morphologies composed of phasic and tonic components (Kalaska et al., 1989; Wannier et al., 1991; Maier et al., 1993). Broadly, the presence of these force-modulating neural signals supports the feasibility of incorporating force as a control signal in future closed-loop iBCIs to restore functional tasks to individuals with tetraplegia.

Force and Volitional State

In Chapter 3, we investigated how force-related neural representation in individuals with tetraplegia was influenced by observed, imagined, and attempted force production. We found that force-related representation was heavily influenced by volitional state: forces were decoded above chance when attempted by individuals with tetraplegia, but not always when observed or imagined. These results are in agreement with previous kinematic studies, which exhibited stronger, more widespread neural activation patterns during executed and attempted movements versus during observed and imagined movements (Grezes and Decety, 2001; Filimon et al., 2007; Miller et al., 2010; Filimon et al., 2015; Vargas-Irwin et al., 2018). Thus, the current work shows that volitional state affects kinematic and kinetic parameters in similar ways.

Additionally, we found that volitional state was neurally represented to a greater extent than force in individuals with tetraplegia. This result deviated from what was expected from previous kinematic studies, which showed that movements were neurally represented to an extent similar to, or greater than, volitional state in individuals with tetraplegia (Vargas-Irwin et al., 2018). More broadly, since the motor cortex in non-human primates and able-bodied humans has been shown to represent both kinematic and kinetic parameters (Kalaska et al., 1989; Carmena et al., 2003; Sergio et al., 2005; Hendrix et al., 2009; Degenhart et al., 2011; Suminski et al., 2011; Paek et al., 2015), and since kinematic iBCIs in individuals with tetraplegia have achieved multi-dimensional kinematic control of end-effectors (Hochberg et al., 2012; Collinger et al., 2013b; Wodlinger et al., 2015; Ajiboye et al., 2017; Pandarinath et al., 2017), we initially expected forces to be neurally represented to a similar degree as kinematic parameters, and thus decoded from offline intracortical neural data with equal or greater accuracy than volitional state. However, in all three participants evaluated within the present work, volitional states were decoded from offline neural data more accurately than even attempted forces. Thus, this study presents a novel insight about how force is represented in individuals with tetraplegia. Specifically, tetraplegia may disproportionately alter the degree to which kinetic (versus kinematic) parameters are represented in motor cortex.

Force and Grasp

In Chapter 4, we investigated the degree to which attempted force representation was affected by hand grasp postures. This study, which was among the first to quantify how kinematic behaviors affect force-related neural representation, was motivated by conflicting evidence that suggested, on the one hand, that forces and grasps had independent cortical representations (Carmena et al., 2003; Hendrix et al., 2009; Intveld et al., 2018) and that, on the other hand, these representations exhibited interactions (Hepp-Reymond et al., 1999; Degenhart et al., 2011).

Two main contributions emerged from this work. First, we found that grasps were neurally represented to a greater degree than static forces, in that they were decoded more accurately from the neural activity and explained more population-level signal variance. While these results do agree with previous studies in non-human primates (Hendrix et al., 2009; Intveld et al., 2018), they may also support the idea that in individuals with tetraplegia, kinematic parameters are represented more robustly than kinetic parameters in motor cortex, as was suggested in Chapter 3.


Second, we found that force has both grasp-independent and grasp-dependent representations within the motor cortex in individuals with tetraplegia. More specifically, while forces were largely represented at a high level, independent of grasp – as evidenced by above-chance force decoding accuracies across multiple hand and arm postures and large population-level variances attributed to non-interacting motor parameters – a small amount of population-level variance was explained by interactions between force and grasp. Furthermore, these interactions affected decoding performance to a statistically significant degree in two participants with tetraplegia.

These results inform our understanding of how kinetic variables are represented across a range of kinematic behaviors in individuals with tetraplegia. Here, we proposed that the neural space could consist of two populations: one that encodes force at a high level independent of grasp and motion, and another that encodes tuning to muscle activity and, therefore, causes interactions between force and grasp. This proposition incorporates both aspects of the scientific debate regarding independent versus interacting neural representations of motor parameters.

Future Directions for Neuroscience: Enhancing Force-Related Representation in Motor Cortex

As demonstrated in Chapters 3 and 4, the neural representation of force was superseded by other variables, including volitional state and grasp. This relative deficiency of force information contrasts with previous work in non-human primates and able-bodied humans, which exhibited robust neural activation patterns in response to executed forces (Evarts, 1968, 1969; Humphrey, 1970; Smith et al., 1975; Hepp-Reymond et al., 1978; Georgopoulos et al., 1983; Georgopoulos et al., 1992; Dettmers et al., 1995; Thickbroom et al., 1998; Cramer et al., 2002; Murphy et al., 2016; Wang et al., 2017). Critically, this deficiency in representation impacted our ability to predict forces from neural activity: While force decoding performances often exceeded chance, they were always significantly lower than those attributed to volitional state and grasp.

Therefore, to improve the feasibility of incorporating force control into closed-loop iBCIs, further studies are needed to characterize additional aspects of kinetic representation, with the ultimate goal of enhancing this representation and, by extension, force decoding performance. Here we propose some possible lines of future investigation to address this goal.

Characterizing Deafferentation-Induced Changes in Force Representation

The current study investigated how observed, imagined, and attempted grasping forces were represented in individuals with tetraplegia, with the understanding that the study participants could not execute these actions. Notably, most participants retained voluntary motor control above the level of their spinal cord injuries, which presents a unique opportunity to investigate the intracortical representation of executed forces, such as of the head and neck. While iBCI studies have traditionally decoded motor parameters of the upper limb (Bensmaia and Miller, 2014), recent evidence indicates that the “hand knob” of motor cortex (Yousry et al., 1997) actually encodes kinematics for many different body parts, including the head, face, arm, hand, leg, and foot (Willett et al., 2020). Since kinematic and kinetic parameters have overlapping representations in motor cortex (Thach, 1978; Kalaska et al., 1989; Carmena et al., 2003; Sergio et al., 2005; Suminski et al., 2011; Chhatbar and Francis, 2013; Milekovic et al., 2015; Intveld et al., 2018), the motor cortex may also encode kinetic parameters pertaining to these same body parts. Therefore, the study by Willett and colleagues raises the possibility of comparing kinetic representation of forces executed with body parts above the level of injury (e.g., head and neck) and forces attempted or imagined with body parts below the injury level (e.g., arm and hand). These investigations could serve to inform our understanding of how deafferentation affects kinetic representation.

In particular, these investigations would allow us to elucidate how kinetic representation is affected by intact versus compromised somatosensory feedback. As described in Chapter 3, feedback through somatosensory pathways plays a critical role in able-bodied kinetic control (Brochier et al., 1999; Tan et al., 2014; Tabot et al., 2015; Carteron et al., 2016; Schiefer et al., 2018); however, these pathways are profoundly altered during tetraplegia (Solstrand Dahlberg et al., 2018), which may explain the relative deficiency in kinetic representation observed within the present work. This is in contrast to kinematic control, which relies largely on visual feedback that remains intact after tetraplegia. Therefore, a second line of inquiry could investigate the degree to which the reintroduction of somatosensory feedback, either through intracortical microstimulation (ICMS) of the somatosensory cortex (Flesher et al., 2016) or by other means (Antfolk et al., 2013), affects kinetic representation in motor cortex. Somatosensation has been shown to elicit responses in the motor cortex, both in able-bodied individuals (Hatsopoulos and Suminski, 2011) and in an individual with tetraplegia (Shaikhouni et al., 2013). Therefore, it is possible that reintroducing somatosensory feedback would enhance kinetic representation in future studies, thus increasing our ability to predict intended forces from neural activity.

Notably, regardless of our ability to decode kinetic parameters from the cortex, the scope of iBCIs – in terms of restoring the ability to grasp, manipulate, and exert fine motor control over objects – will remain limited without incorporating somatosensory feedback into the system (Bensmaia and Miller, 2014). The ideal iBCI system should not only restore both kinetic and kinematic control to users, but also naturalistic feedback to indicate how well users can volitionally control these parameters. In able-bodied individuals, tactile and proprioceptive feedback during grasping and object manipulation conveys information about the location of contact with an object, the amount of force that the object exerts on the hand, and the dynamics of the hand and arm. Recent work has successfully restored some degree of sensation, including tactile and vibratory percepts, to individuals with tetraplegia via ICMS of the somatosensory cortex (Flesher et al., 2016; Armenta Salas et al., 2018; Weiss et al., 2019). Though further studies are needed to characterize the effects of ICMS on perception, and how these effects in turn impact motor representation, the preliminary efforts outlined here are a promising start to the integration of somatosensory feedback within closed-loop iBCIs.

Characterizing the Representation of Additional Kinetic Parameters

Though deafferentation-induced changes in somatosensory pathways likely contribute to the reduced neural representation of static forces observed in the current study, it is entirely possible that other kinetic parameters retain a greater degree of representation in individuals with tetraplegia. Previous studies in non-human primates indicate that the motor cortex encodes not only the magnitude of static forces (Evarts, 1968, 1969; Humphrey, 1970; Smith et al., 1975; Hepp-Reymond et al., 1978; Thach, 1978), but also the direction of applied force against a load (Kalaska and Hyde, 1985; Kalaska et al., 1989), the rate of change of force production during dynamic force tasks (Evarts, 1968, 1969; Hepp-Reymond et al., 1978; Georgopoulos et al., 1983; Georgopoulos et al., 1992), joint torques (Fagg et al., 2009), and muscle activity (Morrow and Miller, 2003; Santucci et al., 2005; Pohlmeyer et al., 2007). Furthermore, many of these studies indicate that the motor cortex is preferentially involved in the directional and dynamic aspects of kinetic control, and that these aspects take precedence over the control of static force magnitudes when dynamic and static conditions are superimposed (Branco et al., 2019).

The representation of these dynamic aspects of kinetic control has remained largely unexplored in individuals with tetraplegia, as most such investigations, including the current work, have solely investigated the representation of unidirectional, static forces in the motor cortex (Downey et al., 2018). However, functional tasks such as grasping and object interaction likely involve these other aspects to an equal or greater degree than the static force magnitudes. Thus, further open-loop investigations are warranted to fully characterize the complexity of kinetic representation in motor cortex during functional, dynamic force tasks. An initial approach could be to adapt the behavioral tasks of previous NHP studies into those that could be performed by individuals with tetraplegia, and then characterize how the neural activity is modulated by these tasks. For example, individuals with tetraplegia could conceivably execute head and neck forces that compensate for inertial loads in multiple directions, as in Kalaska et al. (1989), in order to determine whether neural activity in tetraplegia is modulated by the directionality of kinetic tasks.

Such open-loop studies would benefit from an exploration of various encoding models that account for these additional kinetic parameters, and how they interact with additional parameters such as grasp and volitional state. In Chapter 4, for example, we proposed an additive model that simulated neural activity as a linear combination of intended force level and hand grasp posture, and a scalar model that simulated neural activity as the result of an interaction between these two parameters. From our study results, and from the results of future characterization studies, a more comprehensive encoding model of neural activity could encompass a weighted sum of multiple kinetic terms, including force magnitude and its interaction with grasp, the preferred directional “load axis” (Kalaska et al., 1989) of the neural feature being modeled, and a derivative term that models the neural activity, to some degree, as a function of the rate of change of force (Evarts, 1969). The exploration of such encoding models, and their implementation within a closed-loop iBCI decoder, would be of potential utility.
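As an illustration of what such a weighted-sum encoding model might look like, the minimal sketch below (Python) simulates a single feature's activity as a baseline plus force-magnitude, force-grasp interaction, load-axis, and force-derivative terms. All coefficients, variable names, and values are hypothetical and are not fit to the data in this dissertation:

    import numpy as np

    def simulated_feature_rate(force, dforce_dt, grasp_onehot, load_direction, params):
        """Illustrative encoding model: a weighted sum of kinetic terms for one neural feature."""
        rate = params["baseline"]
        rate += params["b_force"] * force                                      # static force magnitude
        rate += params["b_dforce"] * dforce_dt                                 # force-derivative (dynamic) term
        rate += float(grasp_onehot @ params["b_grasp"])                        # grasp main effect
        rate += force * float(grasp_onehot @ params["b_force_x_grasp"])        # force-grasp interaction
        rate += params["b_dir"] * float(load_direction @ params["load_axis"])  # preferred load axis
        return rate

    # Example with made-up coefficients for a single feature
    params = dict(baseline=20.0, b_force=0.8, b_dforce=1.5,
                  b_grasp=np.array([1.0, -0.5, 0.2]),
                  b_force_x_grasp=np.array([0.3, 0.0, -0.2]),
                  b_dir=2.0, load_axis=np.array([0.0, 0.7, 0.7]))
    print(simulated_feature_rate(force=6.0, dforce_dt=0.5,
                                 grasp_onehot=np.array([0, 1, 0]),
                                 load_direction=np.array([0.0, 1.0, 0.0]),
                                 params=params))

In practice, the per-feature coefficients would be fit to recorded data (for example, by regularized regression) and compared against simpler additive or scalar alternatives.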

Future Directions for Neurorehabilitation: Implications for Closed-Loop Force Decoding

Chapters 3 and 4 of this dissertation investigated how well intended forces could be predicted offline from neural activity during various volitional states and hand grasps. In Chapter 4, we discussed some of the decoding implications of this work, including the effects of decoding static motor parameters from dynamic neural responses, and the possibility of decoding continuous as opposed to discrete force levels from neural activity. Here, we expand on the implications of accounting for the effects of multiple parameters – including but not limited to grasp and volitional state – during closed-loop iBCI control of force.

In this study, we found that the offline decoding performance of attempted forces exceeded chance across multiple hand grasps, which supports the feasibility of incorporating force as a parameter of control within closed-loop iBCIs. However, we also found that force predictions were influenced both by volitional state – in that attempted forces were decoded more accurately than observed and imagined forces – and by hand grasp. Therefore, future closed-loop force iBCIs will likely need to account for these additional parameters to maximize performance. While accounting for volitional state could be as simple as training closed-loop iBCI decoders on attempted (as opposed to observed or imagined) force production, accounting for the effects of hand grasp (and other motor parameters) on force output will likely be more complex. For example, a future iBCI system may conceivably need to decode intended forces and grasps simultaneously, and then alter its force-related predictions based on the intended grasps used to produce those forces. Though the aforementioned example is but one possibility of how a closed-loop force decoder could operate, the ability to control forces in real time will likely require iBCI systems to extract multiple kinetic and kinematic parameters from the neural data simultaneously, and then account for how these parameters may interact to influence the motor output.
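One hedged sketch of such a system is a hierarchical decoder that first predicts the intended grasp and then applies a force classifier trained only on trials of that grasp. The Python sketch below is illustrative only; the linear discriminant classifiers and the grasp-then-force structure are assumptions, not the decoder used in this dissertation:

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    class GraspConditionedForceDecoder:
        """Decode grasp first, then predict force with a classifier trained for that grasp."""

        def fit(self, features, grasp_labels, force_labels):
            self.grasp_clf = LinearDiscriminantAnalysis().fit(features, grasp_labels)
            self.force_clfs = {}
            for g in np.unique(grasp_labels):
                mask = grasp_labels == g
                self.force_clfs[g] = LinearDiscriminantAnalysis().fit(features[mask], force_labels[mask])
            return self

        def predict(self, features):
            grasps = self.grasp_clf.predict(features)
            forces = np.array([self.force_clfs[g].predict(x[None, :])[0]
                               for g, x in zip(grasps, features)])
            return grasps, forces

An alternative would be a single joint classifier over force-grasp pairs; which structure generalizes better during closed-loop use remains an empirical question.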

This likelihood leads to a larger question of whether the parametrization of the motor output into various kinematic and kinetic outputs is the best approach. While the current study emerges from a long line of work that investigated isolated and specific motor parameters such as force (Evarts, 1968, 1969), a recent body of work suggests that these parameters cannot be fully de-coupled during naturalistic movements due to biological and physical constraints (Ebner et al., 2009; Kalaska, 2009; Reimer and Hatsopoulos, 2009; Branco et al., 2019). Furthermore, these studies have demonstrated that motor parameters are often spatially and temporally correlated, and that individual neurons often encode different motor parameters to varying degrees over time depending on the task at hand (Ebner et al., 2009; Reimer and Hatsopoulos, 2009). From this work, some groups have proposed that, rather than decoding separate motor parameters from the cortex, it may be more effective to extract the entire motor output and model all of the involved parameters as part of a single motor action (Branco et al., 2019).


Appendix: Electrode Impedance and Signal Quality

Traditionally, electrophysiological recording studies have attempted to maximize the signal-to-noise ratio of recorded neural data by reducing the impedance of the electrode-tissue interface. Such efforts have stemmed from the idea that the higher the electrode impedance, the greater the noise in the signal. In this dissertation, two arrays of 96 microelectrodes were placed in the brain parenchyma of three study participants. Due to immune-modulated electrode encapsulation that occurs after electrode array placement, the impedances of the individual electrode shafts are prone to rise over time, which leads to an attenuation of the signal and increased noise. Figure A-1 shows the impedance profiles of the electrode arrays placed in participant T8, 511 days post-implant, during recording session 1 of Chapter 3 (see Supplemental Table 3-S1). Many of these electrode impedances are well over 800 kΩ, particularly within Array 2.

Here, we sought to determine the degree to which impedance-induced noise influenced the strength of the detected spike band power (SBP) features extracted from the neural data. To this end, we compared the root mean square (RMS) values of the SBP signal and the impedance-induced noise, each averaged over 60 seconds within each electrode. Recall from Chapters 3 and 4 that the SBP feature is computed as the RMS of the power within the 250-5000 Hz frequency band. The RMS of the broadband signal and the noise were computed using Equation A-1, where V denotes the broadband voltage at a single sample, BW denotes the bandwidth of the broadband signal (0.3-7500 Hz), N denotes the number of time samples within 60 seconds, and R denotes the impedance of the electrode. The SNR for each electrode was calculated according to Equation A-2.


$$\mathrm{RMS}_{signal} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} V_i^{2}}, \qquad \mathrm{RMS}_{noise} = \sqrt{4\,k_B\,T\,R\cdot BW} \qquad (\textbf{Equation A-1})$$

$$\mathrm{SNR} = 10\log_{10}\!\left(\frac{\mathrm{RMS}_{signal}}{\mathrm{RMS}_{noise}}\right) \qquad (\textbf{Equation A-2})$$

where $k_B$ denotes Boltzmann's constant and $T$ the absolute temperature of the tissue.
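A minimal sketch of this calculation is shown below (Python), under the assumption that the impedance-induced noise is modeled as thermal (Johnson) noise at body temperature; the constants, variable names, and example values are illustrative rather than taken from the recorded data:

    import numpy as np

    K_B = 1.380649e-23      # Boltzmann constant (J/K)
    T_TISSUE = 310.0        # approximate tissue temperature (K), an assumption
    BW = 7500.0 - 0.3       # broadband bandwidth (Hz), per the text

    def signal_rms(voltage_samples):
        """RMS of the recorded voltage over the 60-second analysis window."""
        v = np.asarray(voltage_samples, dtype=float)
        return np.sqrt(np.mean(v ** 2))

    def impedance_noise_rms(impedance_ohms, bandwidth_hz=BW, temperature_k=T_TISSUE):
        """Thermal-noise RMS voltage implied by the electrode impedance."""
        return np.sqrt(4.0 * K_B * temperature_k * impedance_ohms * bandwidth_hz)

    def snr_db(rms_signal, rms_noise):
        """Signal-to-noise ratio in decibels, as in Equation A-2."""
        return 10.0 * np.log10(rms_signal / rms_noise)

    # Example with made-up values: a 60 s voltage trace (30 kS/s) and an 800 kOhm electrode
    example_trace = 20e-6 * np.random.default_rng(0).standard_normal(30000 * 60)
    print(snr_db(signal_rms(example_trace), impedance_noise_rms(800e3)))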



Figure A-1. Electrode impedances for participant T8, Session 1. A. Lateral array impedances. B. Medial array impedances.


Figure A-2. Electrode signal characteristics. This plot compares the spike band power RMS and the impedance-induced noise.

Figure A-3. Electrode signal to noise ratios.


In Figure A-2, the RMS noise due to electrode impedance is generally smaller than the RMS of the SBP signal for most electrodes. The corresponding SNR, according to Figure A-3, is always above 12 decibels. These results indicate that for many channels, the signal strength is much greater than the impedance-induced noise. For some channels, the noise approaches the signal strength, indicating that the voltage detected by these channels is highly influenced by noise. However, the signal quality is acceptable across a majority of electrodes.


References

(2012) Brain-Computer Interfaces: Principles and Practice. New York, NY: Oxford University Press. Abiri R, Borhani S, Sellers EW, Jiang Y, Zhao X (2019) A comprehensive review of EEG-based brain-computer interface paradigms. J Neural Eng 16:011001. Abrams GM, Ganguly K (2015) Management of chronic spinal cord dysfunction. Continuum (Minneap Minn) 21:188-200. Aflalo T, Kellis S, Klaes C, Lee B, Shi Y, Pejsa K, Shanfield K, Hayes-Jackson S, Aisen M, Heck C, Liu C, Andersen RA (2015) Neurophysiology. Decoding motor imagery from the posterior parietal cortex of a tetraplegic human. Science 348:906-910. Aggarwal V, Mollazadeh M, Davidson AG, Schieber MH, Thakor NV (2013) State-based decoding of hand and finger kinematics using neuronal ensemble and LFP activity during dexterous reach-to-grasp movements. J Neurophysiol 109:3067- 3081. Aghagolzadeh M, Truccolo W (2016) Inference and Decoding of Motor Cortex Low- Dimensional Dynamics via Latent State-Space Models. IEEE Trans Neural Syst Rehabil Eng 24:272-282. Ajiboye AB, Simeral JD, Donoghue JP, Hochberg LR, Kirsch RF (2012) Prediction of imagined single-joint movements in a person with high-level tetraplegia. IEEE Trans Biomed Eng 59:2755-2765. Ajiboye AB, Willett FR, Young DR, Memberg WD, Murphy BA, Miller JP, Walter BL, Sweet JA, Hoyen HA, Keith MW, Peckham PH, Simeral JD, Donoghue JP, Hochberg LR, Kirsch RF (2017) Restoration of reaching and grasping movements through brain-controlled muscle stimulation in a person with tetraplegia: a proof-of-concept demonstration. The Lancet. Alaerts K, de Beukelaar TT, Swinnen SP, Wenderoth N (2012) Observing how others lift light or heavy objects: time-dependent encoding of grip force in the primary motor cortex. Psychol Res 76:503-513. Alba NA, Du ZJ, Catt KA, Kozai TD, Cui XT (2015) In Vivo Electrochemical Analysis of a PEDOT/MWCNT Neural Electrode Coating. Biosensors 5:618-646. Amirikian B, Georgopoulos AP (2000) Directional tuning profiles of motor cortical cells. Neurosci Res 36:73-79. Andersen RA, Buneo CA (2002) Intentional maps in posterior parietal cortex. Annu Rev Neurosci 25:189-220. Andersen RA, Musallam S, Pesaran B (2004) Selecting the signals for a brain-machine interface. Curr Opin Neurobiol 14:720-726. Andersen RA, Kellis S, Klaes C, Aflalo T (2014) Toward more versatile and intuitive cortical brain-machine interfaces. Curr Biol 24:R885-R897. Antfolk C, D'Alonzo M, Rosen B, Lundborg G, Sebelius F, Cipriani C (2013) Sensory feedback in upper limb prosthetics. Expert Rev Med Devices 10:45-54. Armenta Salas M, Helms Tillery SI (2016) Uniform and Non-uniform Perturbations in Brain-Machine Interface Task Elicit Similar Neural Strategies. Front Syst Neurosci 10:70. Armenta Salas M, Bashford L, Kellis S, Jafari M, Jo H, Kramer D, Shanfield K, Pejsa K, Lee B, Liu CY, Andersen RA (2018) Proprioceptive and cutaneous sensations in humans elicited by intracortical microstimulation. Elife 7. Ashe J (1997) Force and the motor cortex. Behav Brain Res 87:255-269.


Atmaramani R, Black BJ, Lam KH, Sheth VM, Pancrazio JJ, Schmidtke DW, Alsmadi NZ (2019) The Effect of Microfluidic Geometry on Myoblast Migration. Micromachines (Basel) 10. Audu ML, Lombardo LM, Schnellenberger JR, Foglyano KM, Miller ME, Triolo RJ (2015) A neuroprosthesis for control of seated balance after spinal cord injury. J Neuroeng Rehabil 12:8. Bacher D, Jarosiewicz B, Masse NY, Stavisky SD, Simeral JD, Newell K, Oakley EM, Cash SS, Friehs G, Hochberg LR (2015) Neural Point-and-Click Communication by a Person With Incomplete Locked-In Syndrome. Neurorehabil Neural Repair 29:462-471. Bansal AK, Vargas-Irwin CE, Truccolo W, Donoghue JP (2011) Relationships among low-frequency local field potentials, spiking activity, and three-dimensional reach and grasp kinematics in primary motor and ventral premotor cortices. J Neurophysiol 105:1603-1619. Bansal AK, Truccolo W, Vargas-Irwin CE, Donoghue JP (2012) Decoding 3D reach and grasp from hybrid signals in motor and premotor cortices: spikes, multiunit activity, and local field potentials. J Neurophysiol 107:1337-1355. Barrese JC, Rao N, Paroo K, Triebwasser C, Vargas-Irwin C, Franquemont L, Donoghue JP (2013) Failure mode analysis of silicon-based intracortical microelectrode arrays in non-human primates. J Neural Eng 10:066014. Bayly PV, Cohen TS, Leister EP, Ajo D, Leuthardt EC, Genin GM (2005) Deformation of the human brain induced by mild acceleration. J Neurotrauma 22:845-856. Bedell HW, Capadona JR (2018) Anti-inflammatory Approaches to Mitigate the Neuroinflammatory Response to Brain-Dwelling Intracortical Microelectrodes. J Immunol Sci 2:15-21. BeMent SL, Wise KD, Anderson DJ, Najafi K, Drake KL (1986) Solid-state electrodes for multichannel multiplexed intracortical neuronal recording. IEEE Trans Biomed Eng 33:230-241. Benjamini Y, Hochberg Y (1995) Controlling the False Discovery Rate - a Practical and Powerful Approach to Multiple Testing. J R Stat Soc B 57:289-300. Bensmaia SJ, Miller LE (2014) Restoring sensorimotor function through intracortical interfaces: progress and looming challenges. Nat Rev Neurosci 15:313-325. Benyamini M, Zacksenhouse M (2015) Optimal feedback control successfully explains changes in neural modulations during experiments with brain-machine interfaces. Front Syst Neurosci 9:71. Bhandari R, Negi S, Solzbacher F (2010) Wafer-scale fabrication of penetrating neural microelectrode arrays. Biomed Microdevices 12:797-807. Bizzi E, Schiller PH (1970) Single unit activity in the frontal eye fields of unanesthetized monkeys during eye and head movement. Exp Brain Res 10:150-158. Bjornsson CS, Oh SJ, Al-Kofahi YA, Lim YJ, Smith KL, Turner JN, De S, Roysam B, Shain W, Kim SJ (2006) Effects of insertion conditions on tissue strain and vascular damage during neuroprosthetic device insertion. J Neural Eng 3:196- 207. Bleichner MG, Jansma JM, Sellmeijer J, Raemaekers M, Ramsey NF (2014) Give me a sign: decoding complex coordinated hand movements using high-field fMRI. Brain Topogr 27:248-257. Bleichner MG, Freudenburg ZV, Jansma JM, Aarnoutse EJ, Vansteensel MJ, Ramsey NF (2016) Give me a sign: decoding four complex hand gestures based on high- density ECoG. Brain Struct Funct 221:203-216.


Bockbrader M (2019) Upper limb sensorimotor restoration through brain–computer interface technology in tetraparesis. Current Opinion in Biomedical Engineering 11:85-101. Boline J, Ashe J (2005) On the relations between single cell activity in the motor cortex and the direction and magnitude of three-dimensional dynamic isometric force. Exp Brain Res 167:148-159. Boudreau MJ, Smith AM (2001) Activity in rostral motor cortex in response to predictable force-pulse perturbations in a precision grip task. J Neurophysiol 86:1079-1085. Bouton CE, Shaikhouni A, Annetta NV, Bockbrader MA, Friedenberg DA, Nielson DM, Sharma G, Sederberg PB, Glenn BC, Mysiw WJ, Morgan AG, Deogaonkar M, Rezai AR (2016) Restoring cortical control of functional movement in a human with quadriplegia. Nature 533:247-250. Branco MP, de Boer LM, Ramsey NF, Vansteensel MJ (2019) Encoding of kinetic and kinematic movement parameters in the sensorimotor cortex: A Brain-Computer Interface perspective. Eur J Neurosci 50:2755-2772. Branco MP, Freudenburg ZV, Aarnoutse EJ, Bleichner MG, Vansteensel MJ, Ramsey NF (2017) Decoding hand gestures from primary somatosensory cortex using high-density ECoG. Neuroimage 147:130-142. Brandman DM, Cash SS, Hochberg LR (2017) Review: Human Intracortical Recording and Neural Decoding for Brain-Computer Interfaces. IEEE Trans Neural Syst Rehabil Eng 25:1687-1696. Brandman DM, Burkhart MC, Kelemen J, Franco B, Harrison MT, Hochberg LR (2018a) Robust Closed-Loop Control of a Cursor in a Person with Tetraplegia using Gaussian Process Regression. Neural Comput 30:2986-3008. Brandman DM et al. (2018b) Rapid calibration of an intracortical brain-computer interface for people with tetraplegia. J Neural Eng 15:026007. Branner A, Stein RB, Normann RA (2001) Selective stimulation of cat sciatic nerve using an array of varying-length microelectrodes. J Neurophysiol 85:1585-1594. Bremner LR, Andersen RA (2012) Coding of the reach vector in parietal area 5d. Neuron 75:342-351. Brochier T, Boudreau MJ, Pare M, Smith AM (1999) The effects of muscimol inactivation of small regions of motor and somatosensory cortex on independent finger movements and force control in the precision grip. Exp Brain Res 128:31-40. Bullard AJ, Hutchison BC, Lee J, Chestek CA, Patil PG (2019) Estimating Risk for Future Intracranial, Fully Implanted, Modular Neuroprosthetic Systems: A Systematic Review of Hardware Complications in Clinical Deep Brain Stimulation and Experimental Human Intracortical Arrays. Neuromodulation. Burkhart M, Brandman D, Vargas-Irwin C, Harrison M (2016) The discriminative Kalman filter for nonlinear and non-Gaussian sequential Bayesian filtering. Caggiano V, Fogassi L, Rizzolatti G, Thier P, Casile A (2009) Mirror neurons differentially encode the peripersonal and extrapersonal space of monkeys. Science 324:403-406. Caggiano V, Fogassi L, Rizzolatti G, Pomper JK, Thier P, Giese MA, Casile A (2011) View-based encoding of actions in mirror neurons of area f5 in macaque premotor cortex. Curr Biol 21:144-148. Campbell PK, Jones KE, Huber RJ, Horch KW, Normann RA (1991) A silicon-based, three-dimensional neural interface: manufacturing processes for an intracortical electrode array. IEEE Trans Biomed Eng 38:758-768. Carmena JM, Lebedev MA, Crist RE, O'Doherty JE, Santucci DM, Dimitrov DF, Patil PG, Henriquez CS, Nicolelis MA (2003) Learning to control a brain-machine interface for reaching and grasping by primates. PLoS Biol 1:E42.


Carpaneto J, Umilta MA, Fogassi L, Murata A, Gallese V, Micera S, Raos V (2011) Decoding the activity of grasping neurons recorded from the ventral premotor area F5 of the macaque monkey. Neuroscience 188:80-94. Carpaneto J, Raos V, Umilta MA, Fogassi L, Murata A, Gallese V, Micera S (2012) Continuous decoding of grasping tasks for a prospective implantable cortical neuroprosthesis. J Neuroeng Rehabil 9:84. Carteron A, McPartlan K, Gioeli C, Reid E, Turturro M, Hahn B, Benson C, Zhang W (2016) Temporary Nerve Block at Selected Digits Revealed Hand Motor Deficits in Grasping Tasks. Front Hum Neurosci 10:596. Casadio M, Pressman A, Mussa-Ivaldi FA (2015) Learning to push and learning to move: the adaptive control of contact forces. Front Comput Neurosci 9:118. Chang SR, Nandor MJ, Li L, Kobetic R, Foglyano KM, Schnellenberger JR, Audu ML, Pinault G, Quinn RD, Triolo RJ (2017) A muscle-driven approach to restore stepping with an exoskeleton for individuals with paraplegia. J Neuroeng Rehabil 14:48. Chapin JK, Moxon KA, Markowitz RS, Nicolelis MA (1999) Real-time control of a robot arm using simultaneously recorded neurons in the motor cortex. Nat Neurosci 2:664-670. Chase SM, Schwartz AB, Kass RE (2009) Bias, optimal linear estimation, and the differences between open-loop simulation and closed-loop performance of spiking-based brain-computer interface algorithms. Neural Netw 22:1203-1213. Chen C, Shin D, Watanabe H, Nakanishi Y, Kambara H, Yoshimura N, Nambu A, Isa T, Nishimura Y, Koike Y (2014a) Decoding grasp force profile from electrocorticography signals in non-human primate sensorimotor cortex. Neurosci Res 83:1-7. Chen W, Liu X, Litt B (2014b) Logistic-weighted regression improves decoding of finger flexion from electrocorticographic signals. Conf Proc IEEE Eng Med Biol Soc 2014:2629-2632. Cheney PD, Fetz EE (1980) Functional classes of primate corticomotoneuronal cells and their relation to active force. J Neurophysiol 44:773-791. Chestek CA, Gilja V, Blabe CH, Foster BL, Shenoy KV, Parvizi J, Henderson JM (2013) Hand posture classification using electrocorticography signals in the gamma band over human sensorimotor brain areas. J Neural Eng 10:026002. Chestek CA, Gilja V, Nuyujukian P, Kier RJ, Solzbacher F, Ryu SI, Harrison RR, Shenoy KV (2009) HermesC: low-power wireless neural recording system for freely moving primates. IEEE Trans Neural Syst Rehabil Eng 17:330-338. Chestek CA, Batista AP, Santhanam G, Yu BM, Afshar A, Cunningham JP, Gilja V, Ryu SI, Churchland MM, Shenoy KV (2007) Single-neuron stability during repeated reaching in macaque premotor cortex. J Neurosci 27:10742-10750. Chestek CA, Gilja V, Nuyujukian P, Foster JD, Fan JM, Kaufman MT, Churchland MM, Rivera-Alvidrez Z, Cunningham JP, Ryu SI, Shenoy KV (2011) Long-term stability of neural prosthetic control signals from silicon cortical arrays in rhesus macaque motor cortex. J Neural Eng 8:045005. Chhatbar PY, Francis JT (2013) Towards a naturalistic brain-machine interface: hybrid torque and position control allows generalization to novel dynamics. PLoS One 8:e52286. Chib VS, Krutky MA, Lynch KM, Mussa-Ivaldi FA (2009) The separate neural control of hand movements and contact forces. J Neurosci 29:3939-3947. Christie BP, Graczyk EL, Charkhkar H, Tyler DJ, Triolo RJ (2019) Visuotactile synchrony of stimulation-induced sensation and natural somatosensation. J Neural Eng 16:036025.


Christie BP, Tat DM, Irwin ZT, Gilja V, Nuyujukian P, Foster JD, Ryu SI, Shenoy KV, Thompson DE, Chestek CA (2015) Comparison of spike sorting and thresholding of voltage waveforms for intracortical brain-machine interface performance. J Neural Eng 12:016009. Churchland MM, Cunningham JP, Kaufman MT, Ryu SI, Shenoy KV (2010) Cortical preparatory activity: representation of movement or first cog in a dynamical machine? Neuron 68:387-400. Churchland MM, Cunningham JP, Kaufman MT, Foster JD, Nuyujukian P, Ryu SI, Shenoy KV (2012) Neural population dynamics during reaching. Nature 487:51- 56. Cikajlo I, Matjacic Z, Bajd T, Futami R (2005) Sensory supported FES control in gait training of incomplete spinal cord injury persons. Artif Organs 29:459-461. Cogan SF, Edell DJ, Guzelian AA, Ping Liu Y, Edell R (2003) Plasma-enhanced chemical vapor deposited silicon carbide as an implantable dielectric coating. J Biomed Mater Res A 67:856-867. Colachis SCt, Bockbrader MA, Zhang M, Friedenberg DA, Annetta NV, Schwemmer MA, Skomrock ND, Mysiw WJ, Rezai AR, Bresler HS, Sharma G (2018) Dexterous Control of Seven Functional Hand Movements Using Cortically-Controlled Transcutaneous Muscle Stimulation in a Person With Tetraplegia. Front Neurosci 12:208. Collinger JL, Boninger ML, Bruns TM, Curley K, Wang W, Weber DJ (2013a) Functional priorities, assistive technology, and brain-computer interfaces after spinal cord injury. J Rehabil Res Dev 50:145-160. Collinger JL, Wodlinger B, Downey JE, Wang W, Tyler-Kabara EC, Weber DJ, McMorland AJ, Velliste M, Boninger ML, Schwartz AB (2013b) High-performance neuroprosthetic control by an individual with tetraplegia. Lancet 381:557-564. Cramer SC, Lastra L, Lacourse MG, Cohen MJ (2005) Brain motor system function after chronic, complete spinal cord injury. Brain 128:2941-2950. Cramer SC, Mark A, Barquist K, Nhan H, Stegbauer KC, Price R, Bell K, Odderson IR, Esselman P, Maravilla KR (2002) Motor cortex activation is preserved in patients with chronic hemiplegic stroke. Ann Neurol 52:607-616. Davare M, Kraskov A, Rothwell JC, Lemon RN (2011) Interactions between areas of the cortical grasping network. Curr Opin Neurobiol 21:565-570. Degenhart AD, Collinger JL, Vinjamuri R, Kelly JW, Tyler-Kabara EC, Wang W (2011) Classification of hand posture from electrocorticographic signals recorded during varying force conditions. Conf Proc IEEE Eng Med Biol Soc 2011:5782-5785. Degenhart AD, Hiremath SV, Yang Y, Foldes S, Collinger JL, Boninger M, Tyler-Kabara EC, Wang W (2018) Remapping cortical modulation for electrocorticographic brain-computer interfaces: a somatotopy-based approach in individuals with upper-limb paralysis. J Neural Eng 15:026021. Deku F, Cohen Y, Joshi-Imre A, Kanneganti A, Gardner TJ, Cogan SF (2018a) Amorphous silicon carbide ultramicroelectrode arrays for neural stimulation and recording. J Neural Eng 15:016007. Deku F, Frewin CL, Stiller A, Cohen Y, Aqeel S, Joshi-Imre A, Black B, Gardner TJ, Pancrazio JJ, Cogan SF (2018b) Amorphous Silicon Carbide Platform for Next Generation Penetrating Neural Interface Designs. Micromachines (Basel) 9. Delhaye BP, Saal HP, Bensmaia SJ (2016) Key considerations in designing a somatosensory neuroprosthesis. J Physiol Paris 110:402-408. Delhaye BP, Long KH, Bensmaia SJ (2018) Neural Basis of Touch and Proprioception in Primate Cortex. Compr Physiol 8:1575-1602.


Dettmers C, Fink GR, Lemon RN, Stephan KM, Passingham RE, Silbersweig D, Holmes A, Ridding MC, Brooks DJ, Frackowiak RS (1995) Relation between cerebral activity and force in the motor areas of the human brain. J Neurophysiol 74:802- 815. di Pellegrino G, Fadiga L, Fogassi L, Gallese V, Rizzolatti G (1992) Understanding motor events: a neurophysiological study. Exp Brain Res 91:176-180. Dickey AS, Suminski A, Amit Y, Hatsopoulos NG (2009) Single-unit stability using chronically implanted multielectrode arrays. J Neurophysiol 102:1331-1339. Doud AJ, Lucas JP, Pisansky MT, He B (2011) Continuous three-dimensional control of a virtual helicopter using a motor imagery based brain-computer interface. PLoS One 6:e26322. Downey JE, Brane L, Gaunt RA, Tyler-Kabara EC, Boninger ML, Collinger JL (2017) Motor cortical activity changes during neuroprosthetic-controlled object interaction. Sci Rep 7:16947. Downey JE, Weiss JM, Flesher SN, Thumser ZC, Marasco PD, Boninger ML, Gaunt RA, Collinger JL (2018) Implicit Grasp Force Representation in Human Motor Cortical Recordings. Front Neurosci 12:801. Downey JE, Weiss JM, Muelling K, Venkatraman A, Valois JS, Hebert M, Bagnell JA, Schwartz AB, Collinger JL (2016) Blending of brain-machine interface and vision- guided autonomous robotics improves neuroprosthetic arm performance during grasping. J Neuroeng Rehabil 13:28. Dushanova J, Donoghue J (2010) Neurons in primary motor cortex engaged during action observation. Eur J Neurosci 31:386-398. Ebner TJ, Hendrix CM, Pasalar S (2009) Past, present, and emerging principles in the neural encoding of movement. Adv Exp Med Biol 629:127-137. Egan J, Baker J, House PA, Greger B (2012) Decoding dexterous finger movements in a neural prosthesis model approaching real-world conditions. IEEE Trans Neural Syst Rehabil Eng 20:836-844. Ehrsson HH, Kuhtz-Buschbeck JP, Forssberg H (2002) Brain regions controlling nonsynergistic versus synergistic movement of the digits: a functional magnetic resonance imaging study. J Neurosci 22:5074-5080. Eliassen JC, Boespflug EL, Lamy M, Allendorfer J, Chu WJ, Szaflarski JP (2008) Brain- mapping techniques for evaluating poststroke recovery and rehabilitation: a review. Top Stroke Rehabil 15:427-450. Ethier C, Oby ER, Bauman MJ, Miller LE (2012) Restoration of grasp following paralysis through brain-controlled stimulation of muscles. Nature 485:368-371. Evarts EV (1966) Pyramidal tract activity associated with a conditioned hand movement in the monkey. J Neurophysiol 29:1011-1027. Evarts EV (1968) Relation of pyramidal tract activity to force exerted during voluntary movement. J Neurophysiol 31:14-27. Evarts EV (1969) Activity of pyramidal tract neurons during postural fixation. J Neurophysiol 32:375-385. Evarts EV, Fromm C, Kroller J, Jennings VA (1983) Motor Cortex control of finely graded forces. J Neurophysiol 49:1199-1215. Fabbri-Destro M, Rizzolatti G (2008) Mirror neurons and mirror systems in monkeys and humans. Physiology (Bethesda) 23:171-179. Fagg AH, Ojakangas GW, Miller LE, Hatsopoulos NG (2009) Kinetic trajectory decoding using motor cortical ensembles. IEEE Trans Neural Syst Rehabil Eng 17:487- 496. Fan JM, Nuyujukian P, Kao JC, Chestek CA, Ryu SI, Shenoy KV (2014) Intention estimation in brain-machine interfaces. J Neural Eng 11:016004.


Felton EA, Wilson JA, Williams JC, Garell PC (2007) Electrocorticographically controlled brain-computer interfaces using motor and sensory imagery in patients with temporary subdural electrode implants. Report of four cases. J Neurosurg 106:495-500. Fetz EE (1969) Operant conditioning of cortical unit activity. Science 163:955-958. Fetz EE, Cheney PD (1980) Postspike facilitation of forelimb muscle activity by primate corticomotoneuronal cells. J Neurophysiol 44:751-772. Filimon F, Nelson JD, Hagler DJ, Sereno MI (2007) Human cortical representations for reaching: mirror neurons for execution, observation, and imagery. Neuroimage 37:1315-1328. Filimon F, Rieth CA, Sereno MI, Cottrell GW (2015) Observed, Executed, and Imagined Action Representations can be Decoded From Ventral and Dorsal Areas. Cereb Cortex 25:3144-3158. Flesher SN, Collinger JL, Foldes ST, Weiss JM, Downey JE, Tyler-Kabara EC, Bensmaia SJ, Schwartz AB, Boninger ML, Gaunt RA (2016) Intracortical microstimulation of human somatosensory cortex. Sci Transl Med 8:361ra141. Flint RD, Wright ZA, Scheid MR, Slutzky MW (2013) Long term, stable brain machine interface performance using local field potentials and multiunit spikes. J Neural Eng 10:056005. Flint RD, Rosenow JM, Tate MC, Slutzky MW (2017) Continuous decoding of human grasp kinematics using epidural and subdural signals. J Neural Eng 14:016005. Flint RD, Ethier C, Oby ER, Miller LE, Slutzky MW (2012) Local field potentials allow accurate decoding of muscle activity. J Neurophysiol 108:18-24. Flint RD, Wang PT, Wright ZA, King CE, Krucoff MO, Schuele SU, Rosenow JM, Hsu FP, Liu CY, Lin JJ, Sazgar M, Millett DE, Shaw SJ, Nenadic Z, Do AH, Slutzky MW (2014) Extracting kinetic information from human motor cortical signals. Neuroimage 101:695-703. Fraser GW, Chase SM, Whitford A, Schwartz AB (2009) Control of a brain-computer interface without spike sorting. J Neural Eng 6:055004. Freire MA, Morya E, Faber J, Santos JR, Guimaraes JS, Lemos NA, Sameshima K, Pereira A, Ribeiro S, Nicolelis MA (2011) Comprehensive analysis of tissue preservation and recording quality from chronic multielectrode implants. PLoS One 6:e27554. Friedenberg DA, Schwemmer MA, Landgraf AJ, Annetta NV, Bockbrader MA, Bouton CE, Zhang M, Rezai AR, Mysiw WJ, Bresler HS, Sharma G (2017) Neuroprosthetic-enabled control of graded arm muscle contraction in a paralyzed human. Sci Rep 7:8386. Frot M, Magnin M, Mauguiere F, Garcia-Larrea L (2013) Cortical representation of pain in primary sensory-motor areas (S1/M1)--a study using intracortical recordings in humans. Hum Brain Mapp 34:2655-2668. Fu QG, Suarez JI, Ebner TJ (1993) Neuronal specification of direction and distance during reaching movements in the superior precentral premotor area and primary motor cortex of monkeys. J Neurophysiol 70:2097-2116. Fu QG, Flament D, Coltz JD, Ebner TJ (1995) Temporal encoding of movement kinematics in the discharge of primate primary motor and premotor neurons. J Neurophysiol 73:836-854. Gallese V, Fadiga L, Fogassi L, Rizzolatti G (1996) Action recognition in the premotor cortex. Brain 119 ( Pt 2):593-609. Ganguly K, Carmena JM (2009) Emergence of a stable cortical map for neuroprosthetic control. PLoS Biol 7:e1000153.

217

Georgopoulos AP, Caminiti R, Kalaska JF (1984) Static spatial effects in motor cortex and area 5: quantitative relations in a two-dimensional space. Exp Brain Res 54:446-454. Georgopoulos AP, Schwartz AB, Kettner RE (1986) Neuronal population coding of movement direction. Science 233:1416-1419. Georgopoulos AP, Kettner RE, Schwartz AB (1988) Primate motor cortex and free arm movements to visual targets in three-dimensional space. II. Coding of the direction of movement by a neuronal population. J Neurosci 8:2928-2937. Georgopoulos AP, Kalaska JF, Caminiti R, Massey JT (1982) On the relations between the direction of two-dimensional arm movements and cell discharge in primate motor cortex. J Neurosci 2:1527-1537. Georgopoulos AP, Kalaska JF, Caminiti R, Massey JT (1983) Interruption of motor cortical discharge subserving aimed arm movements. Exp Brain Res 49:327-340. Georgopoulos AP, Ashe J, Smyrnis N, Taira M (1992) The motor cortex and the coding of force. Science 256:1692-1695. Ghez C, Gordon J, Ghilardi MF (1995) Impairments of reaching movements in patients without proprioception. II. Effects of visual information on accuracy. J Neurophysiol 73:361-372. Gilja V, Moore T (2007) Electrical signals propagate unbiased in cortex. Neuron 55:684- 686. Gilja V, Chestek CA, Diester I, Henderson JM, Deisseroth K, Shenoy KV (2011) Challenges and opportunities for next-generation intracortically based neural prostheses. IEEE Trans Biomed Eng 58:1891-1899. Gilja V, Nuyujukian P, Chestek CA, Cunningham JP, Yu BM, Fan JM, Ryu SI, Shenoy KV (2012a) A brain machine interface control algorithm designed from a feedback control perspective. Conf Proc IEEE Eng Med Biol Soc 2012:1318- 1322. Gilja V, Nuyujukian P, Chestek CA, Cunningham JP, Yu BM, Fan JM, Churchland MM, Kaufman MT, Kao JC, Ryu SI, Shenoy KV (2012b) A high-performance neural prosthesis enabled by control algorithm design. Nat Neurosci 15:1752-1757. Gilja V, Pandarinath C, Blabe CH, Nuyujukian P, Simeral JD, Sarma AA, Sorice BL, Perge JA, Jarosiewicz B, Hochberg LR, Shenoy KV, Henderson JM (2015) Clinical translation of a high-performance neural prosthesis. Nat Med 21:1142- 1145. Gilletti A, Muthuswamy J (2006) Brain micromotion around implants in the rodent somatosensory cortex. J Neural Eng 3:189-195. Gillis WF, Lissandrello CA, Shen J, Pearre BW, Mertiri A, Deku F, Cogan S, Holinski BJ, Chew DJ, White AE, Otchy TM, Gardner TJ (2018) Carbon fiber on polyimide ultra-microelectrodes. J Neural Eng 15:016010. Girbovan C, Morin L, Plamondon H (2012) Repeated resveratrol administration confers lasting protection against neuronal damage but induces dose-related alterations of behavioral impairments after global ischemia. Behav Pharmacol 23:1-13. Goldstein SR, Salcman M (1973) Mechanical factors in the design of chronic recording intracortical microelectrodes. IEEE Trans Biomed Eng 20:260-269. Gordon J, Ghilardi MF, Ghez C (1995) Impairments of reaching movements in patients without proprioception. I. Spatial errors. J Neurophysiol 73:347-360. Gowda S, Orsborn AL, Overduin SA, Moorman HG, Carmena JM (2014) Designing dynamical properties of brain-machine interfaces to optimize task-specific performance. IEEE Trans Neural Syst Rehabil Eng 22:911-920.

218

Graczyk EL, Delhaye BP, Schiefer MA, Bensmaia SJ, Tyler DJ (2018) Sensory adaptation to electrical stimulation of the somatosensory nerves. J Neural Eng 15:046002. Graczyk EL, Schiefer MA, Saal HP, Delhaye BP, Bensmaia SJ, Tyler DJ (2016) The neural basis of perceived intensity in natural and artificial touch. Sci Transl Med 8:362ra142. Graimann B, Huggins JE, Schlogl A, Levine SP, Pfurtscheller G (2003) Detection of movement-related desynchronization patterns in ongoing single-channel electrocorticogram. IEEE Trans Neural Syst Rehabil Eng 11:276-281. Green JB, Sora E, Bialy Y, Ricamato A, Thatcher RW (1999) Cortical motor reorganization after paraplegia: an EEG study. Neurology 53:736-743. Grezes J, Decety J (2001) Functional anatomy of execution, mental simulation, observation, and verb generation of actions: a meta-analysis. Hum Brain Mapp 12:1-19. Grundfest H, Campbell B (1942) ORIGIN, CONDUCTION AND TERMINATION OF IMPULSES IN THE DORSAL SPINO-CEREBELLAR TRACT OF CATS. Journal of Neurophysiology 5:275-294. Grundfest H, Sengstaken RW, Oettinger WH, Gurry RW (1950) Stainless Steel Micro‐ Needle Electrodes Made by Electrolytic Pointing. Review of Scientific Instruments 21:360-361. Hao Y, Zhang Q, Zhang S, Zhao T, Wang Y, Chen W, Zheng X (2013) Decoding grasp movement from monkey premotor cortex for real-time prosthetic hand control. Chinese Science Bulletin 58:2512-2520. Hao Y, Zhang Q, Controzzi M, Cipriani C, Li Y, Li J, Zhang S, Wang Y, Chen W, Chiara Carrozza M, Zheng X (2014) Distinct neural patterns enable grasp types decoding in monkey dorsal premotor cortex. J Neural Eng 11:066011. Harris JP, Hess AE, Rowan SJ, Weder C, Zorman CA, Tyler DJ, Capadona JR (2011) In vivo deployment of mechanically adaptive nanocomposites for intracortical microelectrodes. J Neural Eng 8:046010. Hatsopoulos NG, Donoghue JP (2009) The science of neural interface systems. Annu Rev Neurosci 32:249-266. Hatsopoulos NG, Suminski AJ (2011) Sensing with the motor cortex. Neuron 72:477- 487. He B, Baxter B, Edelman BJ, Cline CC, Ye WW (2015) Noninvasive Brain-Computer Interfaces Based on Sensorimotor Rhythms. Proceedings of the IEEE 103:907- 925. Hendrix CM, Mason CR, Ebner TJ (2009) Signaling of grasp dimension and grasp force in dorsal premotor cortex and primary motor cortex neurons during reach to grasp in the monkey. J Neurophysiol 102:132-145. Hepp-Reymond M, Kirkpatrick-Tanner M, Gabernet L, Qi HX, Weber B (1999) Context- dependent force coding in motor and premotor cortical areas. Exp Brain Res 128:123-133. Hepp-Reymond MC, Wyss UR, Anner R (1978) Neuronal coding of static force in the primate motor cortex. J Physiol (Paris) 74:287-291. Hermes D, Vansteensel MJ, Albers AM, Bleichner MG, Benedictus MR, Mendez Orellana C, Aarnoutse EJ, Ramsey NF (2011) Functional MRI-based identification of brain areas involved in motor imagery for implantable brain- computer interfaces. J Neural Eng 8:025007. Hetke JF, Lund JL, Najafi K, Wise KD, Anderson DJ (1994) Silicon ribbon cables for chronically implantable microelectrode arrays. IEEE Trans Biomed Eng 41:314- 321.

219

Hinterberger T, Widman G, Lal TN, Hill J, Tangermann M, Rosenstiel W, Scholkopf B, Elger C, Birbaumer N (2008) Voluntary brain regulation and communication with electrocorticogram signals. Epilepsy Behav 13:300-306. Histed MH, Bonin V, Reid RC (2009) Direct activation of sparse, distributed populations of cortical neurons by electrical microstimulation. Neuron 63:508-522. Ho CH, Triolo RJ, Elias AL, Kilgore KL, DiMarco AF, Bogie K, Vette AH, Audu ML, Kobetic R, Chang SR, Chan KM, Dukelow S, Bourbeau DJ, Brose SW, Gustafson KJ, Kiss ZH, Mushahwar VK (2014) Functional electrical stimulation and spinal cord injury. Phys Med Rehabil Clin N Am 25:631-654, ix. Hochberg LR, Serruya MD, Friehs GM, Mukand JA, Saleh M, Caplan AH, Branner A, Chen D, Penn RD, Donoghue JP (2006) Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature 442:164-171. Hochberg LR, Bacher D, Jarosiewicz B, Masse NY, Simeral JD, Vogel J, Haddadin S, Liu J, Cash SS, van der Smagt P, Donoghue JP (2012) Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature 485:372- 375. Homer ML, Nurmikko AV, Donoghue JP, Hochberg LR (2013) Sensors and decoding for intracortical brain computer interfaces. Annu Rev Biomed Eng 15:383-405. Homer ML, Perge JA, Black MJ, Harrison MT, Cash SS, Hochberg LR (2014) Adaptive offset correction for intracortical brain-computer interfaces. IEEE Trans Neural Syst Rehabil Eng 22:239-248. Hsu JM, Rieth L, Normann RA, Tathireddy P, Solzbacher F (2009) Encapsulation of an integrated neural interface device with Parylene C. IEEE Trans Biomed Eng 56:23-29. Hubel DH (1957) Tungsten Microelectrode for Recording from Single Units. Science 125:549-550. Humber C, Ito K, Bouton C (2010) Nonsmooth Formulation of the Support Vector Machine for a Neural Decoding Problem. Humphrey DR (1970) A chronically implantable multiple micro-electrode system with independent control of electrode positions. Electroencephalogr Clin Neurophysiol 29:616-620. Hwang EJ, Andersen RA (2009) Brain control of movement execution onset using local field potentials in posterior parietal cortex. J Neurosci 29:14363-14370. Intveld RW, Dann B, Michaels JA, Scherberger H (2018) Neural coding of intended and executed grasp force in macaque areas AIP, F5, and M1. Sci Rep 8:17985. Jackson A, Mavoori J, Fetz EE (2006) Long-term motor cortex plasticity induced by an electronic neural implant. Nature 444:56-60. Jarosiewicz B, Sarma AA, Saab J, Franco B, Cash SS, Eskandar EN, Hochberg LR (2016) Retrospectively supervised click decoder calibration for self-calibrating point-and-click brain-computer interfaces. J Physiol Paris 110:382-391. Jarosiewicz B, Masse NY, Bacher D, Cash SS, Eskandar E, Friehs G, Donoghue JP, Hochberg LR (2013) Advantages of closed-loop calibration in intracortical brain- computer interfaces for people with tetraplegia. J Neural Eng 10:046012. Jarosiewicz B, Sarma AA, Bacher D, Masse NY, Simeral JD, Sorice B, Oakley EM, Blabe C, Pandarinath C, Gilja V, Cash SS, Eskandar EN, Friehs G, Henderson JM, Shenoy KV, Donoghue JP, Hochberg LR (2015) Virtual typing by people with tetraplegia using a self-calibrating intracortical brain-computer interface. Sci Transl Med 7:313ra179. Jeannerod M (2001) Neural simulation of action: a unifying mechanism for motor . Neuroimage 14:S103-109.

220

Johansson RS, Flanagan JR (2009) Coding and use of tactile signals from the fingertips in object manipulation tasks. Nat Rev Neurosci 10:345-359. Johnson MD, Kao OE, Kipke DR (2007) Spatiotemporal pH dynamics following insertion of neural microelectrode arrays. J Neurosci Methods 160:276-287. Jorfi M, Skousen JL, Weder C, Capadona JR (2015) Progress towards biocompatible intracortical microelectrodes for neural interfacing applications. J Neural Eng 12:011001. Joshi-Imre A, Black BJ, Abbott J, Kanneganti A, Rihani R, Chakraborty B, Danda VR, Maeng J, Sharma R, Rieth L, Negi S, Pancrazio JJ, Cogan SF (2019) Chronic recording and electrochemical performance of amorphous silicon carbide-coated Utah electrode arrays implanted in rat motor cortex. J Neural Eng 16:046006. Juric D (2020) MultiClass LDA. In: Matlab Central File Exchange. Kakei S, Hoffman DS, Strick PL (1999) Muscle and movement representations in the primary motor cortex. Science 285:2136-2139. Kalaska JF (2009) From intention to action: motor cortex and the control of reaching movements. Adv Exp Med Biol 629:139-178. Kalaska JF, Hyde ML (1985) Area 4 and area 5: differences between the load direction- dependent discharge variability of cells during active postural fixation. Exp Brain Res 59:197-202. Kalaska JF, Cohen DA, Hyde ML, Prud'homme M (1989) A comparison of movement direction-related versus load direction-related activity in primate motor cortex, using a two-dimensional reaching task. J Neurosci 9:2080-2102. Kao JC, Stavisky SD, Sussillo D, Nuyujukian P, Shenoy KV (2014) Information Systems Opportunities in Brain–Machine Interface Decoders. Proceedings of the IEEE 102:666-682. Karumbaiah L, Saxena T, Carlson D, Patil K, Patkar R, Gaupp EA, Betancur M, Stanley GB, Carin L, Bellamkonda RV (2013) Relationship between intracortical electrode design and chronic recording function. Biomaterials 34:8061-8074. Kaufman MT, Churchland MM, Ryu SI, Shenoy KV (2014) Cortical activity in the null space: permitting preparation without movement. Nat Neurosci 17:440-448. Keefer EW, Botterman BR, Romero MI, Rossi AF, Gross GW (2008) Carbon nanotube coating improves neuronal recordings. Nat Nanotechnol 3:434-439. Keisker B, Hepp-Reymond MC, Blickenstorfer A, Meyer M, Kollias SS (2009) Differential force scaling of fine-graded power grip force in the sensorimotor network. Hum Brain Mapp 30:2453-2465. Kennedy PR (1989) The cone electrode: a long-term electrode that records from neurites grown onto its recording surface. J Neurosci Methods 29:181-193. Kennedy PR, Bakay RA (1997) Activity of single action potentials in monkey motor cortex during long-term task learning. Brain Res 760:251-254. Kennedy PR, Bakay RA (1998) Restoration of neural output from a paralyzed patient by a direct brain connection. Neuroreport 9:1707-1711. Kennedy PR, Bakay RA, Sharpe SM (1992) Behavioral correlates of action potentials recorded chronically inside the Cone Electrode. Neuroreport 3:605-608. Kennedy PR, Bakay RA, Moore MM, Adams K, Goldwaithe J (2000) Direct control of a computer from the human central nervous system. IEEE Trans Rehabil Eng 8:198-202. Kennedy PR, Kirby MT, Moore MM, King B, Mallory A (2004) Computer control using human intracortical local field potentials. IEEE Trans Neural Syst Rehabil Eng 12:339-344. Keysers C, Kohler E, Umilta MA, Nanetti L, Fogassi L, Gallese V (2003) Audiovisual mirror neurons and action recognition. Exp Brain Res 153:628-636.

221

Kilgore KL, Peckham PH, Crish J, Smith B (2007) Implantable Networked Neural System. In. United States of America: Case Western Reserve University, Cleveland, OH (US) Kilgore KL, Peckham PH, Thrope GB, Keith MW, Gallaher-Stone KA (1989) Synthesis of hand grasp using functional neuromuscular stimulation. IEEE Trans Biomed Eng 36:761-770. Kilgore KL, Hoyen HA, Bryden AM, Hart RL, Keith MW, Peckham PH (2008) An implanted upper-extremity neuroprosthesis using myoelectric control. J Hand Surg Am 33:539-550. Kilgore KL, Bryden A, Keith MW, Hoyen HA, Hart RL, Nemunaitis GA, Peckham PH (2018) Evolution of Neuroprosthetic Approaches to Restoration of Upper Extremity Function in Spinal Cord Injury. Top Spinal Cord Inj Rehabil 24:252-264. Kim SP, Simeral JD, Hochberg LR, Donoghue JP, Black MJ (2008) Neural control of computer cursor velocity by decoding motor cortical spiking activity in humans with tetraplegia. J Neural Eng 5:455-476. Kim SP, Simeral JD, Hochberg LR, Donoghue JP, Friehs GM, Black MJ (2011) Point- and-click cursor control with an intracortical neural interface system by humans with tetraplegia. IEEE Trans Neural Syst Rehabil Eng 19:193-203. Kipke DR, Vetter RJ, Williams JC, Hetke JF (2003) Silicon-substrate intracortical microelectrode arrays for long-term recording of neuronal spike activity in cerebral cortex. IEEE Trans Neural Syst Rehabil Eng 11:151-155. Klaes C, Kellis S, Aflalo T, Lee B, Pejsa K, Shanfield K, Hayes-Jackson S, Aisen M, Heck C, Liu C, Andersen RA (2015) Hand Shape Representations in the Human Posterior Parietal Cortex. J Neurosci 35:15466-15476. Kobak D, Brendel W, Constantinidis C, Feierstein CE, Kepecs A, Mainen ZF, Qi XL, Romo R, Uchida N, Machens CK (2016) Demixed principal component analysis of neural population data. Elife 5. Kohler E, Keysers C, Umilta MA, Fogassi L, Gallese V, Rizzolatti G (2002) Hearing sounds, understanding actions: action representation in mirror neurons. Science 297:846-848. Koyama S, Chase SM, Whitford AS, Velliste M, Schwartz AB, Kass RE (2010) Comparison of brain-computer interface decoding algorithms in open-loop and closed-loop control. J Comput Neurosci 29:73-87. Kozai TD, Kipke DR (2009) Insertion shuttle with carboxyl terminated self-assembled monolayer coatings for implanting flexible polymer neural probes in the brain. J Neurosci Methods 184:199-205. Kozai TD, Marzullo TC, Hooi F, Langhals NB, Majewska AK, Brown EB, Kipke DR (2010) Reduction of neurovascular damage resulting from microelectrode insertion into the cerebral cortex using in vivo two-photon mapping. J Neural Eng 7:046011. Kozai TD, Catt K, Li X, Gugel ZV, Olafsson VT, Vazquez AL, Cui XT (2015) Mechanical failure modes of chronically implanted planar silicon-based neural probes for laminar recording. Biomaterials 37:25-39. Kozai TD, Langhals NB, Patel PR, Deng X, Zhang H, Smith KL, Lahann J, Kotov NA, Kipke DR (2012) Ultrasmall implantable composite microelectrodes with bioactive surfaces for chronic neural interfaces. Nat Mater 11:1065-1073. Kozai TDY (2018) The History and Horizons of Microscale Neural Interfaces. Micromachines (Basel) 9. Kozai TDY, Jaquins-Gerstl AS, Vazquez AL, Michael AC, Cui XT (2016) Dexamethasone retrodialysis attenuates microglial response to implanted probes in vivo. Biomaterials 87:157-169.

222

Kraskov A, Philipp R, Waldert S, Vigneswaran G, Quallo MM, Lemon RN (2014) Corticospinal mirror neurons. Philos Trans R Soc Lond B Biol Sci 369:20130174. Kriegeskorte N, Mur M, Bandettini P (2008) Representational similarity analysis - connecting the branches of systems neuroscience. Front Syst Neurosci 2:4. Kubler A, Nijboer F, Mellinger J, Vaughan TM, Pawelzik H, Schalk G, McFarland DJ, Birbaumer N, Wolpaw JR (2005) Patients with ALS can use sensorimotor rhythms to operate a brain-computer interface. Neurology 64:1775-1777. Kuhtz-Buschbeck JP, Gilster R, Wolff S, Ulmer S, Siebner H, Jansen O (2008) Brain activity is similar during precision and power gripping with light force: an fMRI study. Neuroimage 40:1469-1481. Lacourse MG, Cohen MJ, Lawrence KE, Romero DH (1999) Cortical potentials during imagined movements in individuals with chronic spinal cord injuries. Behav Brain Res 104:73-88. LaFleur K, Cassady K, Doud A, Shades K, Rogin E, He B (2013) Quadcopter control in three-dimensional space using a noninvasive motor imagery-based brain- computer interface. J Neural Eng 10:046003. Lagang M, Srinivasan L (2013) Stochastic optimal control as a theory of brain-machine interface operation. Neural Comput 25:374-417. Lanzilotto M, Ferroni CG, Livi A, Gerbella M, Maranesi M, Borra E, Passarelli L, Gamberini M, Fogassi L, Bonini L, Orban GA (2019) Anterior Intraparietal Area: A Hub in the Observed Manipulative Action Network. Cereb Cortex 29:1816-1833. Lee B, Kramer D, Armenta Salas M, Kellis S, Brown D, Dobreva T, Klaes C, Heck C, Liu C, Andersen RA (2018) Engineering Artificial Somatosensation Through Cortical Stimulation in Humans. Front Syst Neurosci 12:24. Lefebvre B, Yger P, Marre O (2016) Recent progress in multi-electrode spike sorting methods. J Physiol Paris 110:327-335. Lehew G, Nicolelis MAL (2008) State-of-the-Art Microwire Array Design for Chronic Neural Recordings in Behaving Animals. In: Methods for Neural Ensemble Recordings (nd, Nicolelis MAL, eds). Boca Raton (FL). Lehmann SJ, Scherberger H (2013) Reach and gaze representations in macaque parietal and premotor grasp areas. J Neurosci 33:7038-7049. Leo A, Handjaras G, Bianchi M, Marino H, Gabiccini M, Guidi A, Scilingo EP, Pietrini P, Bicchi A, Santello M, Ricciardi E (2016) A synergy-based hand control is encoded in human motor cortical areas. Elife 5. Leuthardt EC, Schalk G, Wolpaw JR, Ojemann JG, Moran DW (2004) A brain-computer interface using electrocorticographic signals in humans. J Neural Eng 1:63-71. Leuthardt EC, Schalk G, Roland J, Rouse A, Moran DW (2009) Evolution of brain- computer interfaces: going beyond classic motor physiology. Neurosurg Focus 27:E4. Lever J, Krzywinski M, Altman N (2017) Points of Significance: Principal component analysis. Nature Methods 14:641-642. Lewicki MS (1998) A review of methods for spike sorting: the detection and classification of neural action potentials. Network 9:R53-78. Li Z, O'Doherty JE, Lebedev MA, Nicolelis MA (2011) Adaptive decoding for brain- machine interfaces through Bayesian parameter updates. Neural Comput 23:3162-3204. Liu X, McCreery DB, Bullara LA, Agnew WF (2006) Evaluation of the stability of intracortical microelectrode arrays. IEEE Trans Neural Syst Rehabil Eng 14:91- 100. Lu CW, Patil PG, Chestek CA (2012) Current challenges to the clinical translation of brain machine interface technology. Int Rev Neurobiol 107:137-160.

223

Ludwig KA, Uram JD, Yang J, Martin DC, Kipke DR (2006) Chronic neural recordings using silicon microelectrode arrays electrochemically deposited with a poly(3,4- ethylenedioxythiophene) (PEDOT) film. J Neural Eng 3:59-70. Maier MA, Bennett KM, Hepp-Reymond MC, Lemon RN (1993) Contribution of the monkey corticomotoneuronal system to the control of force in precision grip. J Neurophysiol 69:772-785. Mallat S (1999) I - Introduction to a transient world. In: A Wavelet Tour of Signal Processing (Second Edition) (Mallat S, ed), pp 1-19. San Diego: Academic Press. Malouin F, Richards CL, Jackson PL, Lafleur MF, Durand A, Doyon J (2007) The Kinesthetic and Visual Imagery Questionnaire (KVIQ) for assessing motor imagery in persons with physical disabilities: a reliability and construct validity study. J Neurol Phys Ther 31:20-29. Maranesi M, Livi A, Bonini L (2017) Spatial and viewpoint selectivity for others' observed actions in monkey ventral premotor mirror neurons. Sci Rep 7:8231. Mason CR, Hendrix CM, Ebner TJ (2006) Purkinje cells signal hand shape and grasp force during reach-to-grasp in the monkey. J Neurophysiol 95:144-158. Mason CR, Theverapperuma LS, Hendrix CM, Ebner TJ (2004) Monkey hand postural synergies during reach-to-grasp in the absence of vision of the hand and object. J Neurophysiol 91:2826-2837. Maynard EM, Nordhausen CT, Normann RA (1997) The Utah intracortical Electrode Array: a recording structure for potential brain-computer interfaces. Electroencephalogr Clin Neurophysiol 102:228-239. Mazurek KA, Rouse AG, Schieber MH (2018) Mirror Neuron Populations Represent Sequences of Behavioral Epochs During Both Execution and Observation. J Neurosci 38:4441-4455. McCarthy PT, Rao MP, Otto KJ (2011) Simultaneous recording of rat auditory cortex and thalamus via a titanium-based, microfabricated, microelectrode device. J Neural Eng 8:046007. McDonald JW, Sadowsky C (2002) Spinal-cord injury. Lancet 359:417-425. McFarland DJ, Sarnacki WA, Wolpaw JR (2010) Electroencephalographic (EEG) control of three-dimensional movement. J Neural Eng 7:036007. McGuire LM, Sabes PN (2011) Heterogeneous representations in the superior parietal lobule are common across reaches to visual and proprioceptive targets. J Neurosci 31:6661-6673. Mehring C, Rickert J, Vaadia E, Cardosa de Oliveira S, Aertsen A, Rotter S (2003) Inference of hand movements from local field potentials in monkey motor cortex. Nat Neurosci 6:1253-1254. Menz VK, Schaffelhofer S, Scherberger H (2015) Representation of continuous hand and arm movements in macaque areas M1, F5, and AIP: a comparative decoding study. J Neural Eng 12:056016. Milan JdR, Carmena JM (2010) Invasive or Noninvasive: Understanding Brain-Machine Interface Technology [Conversations in BME]. IEEE Engineering in Medicine and Biology Magazine 29:16-22. Milekovic T, Truccolo W, Grun S, Riehle A, Brochier T (2015) Local field potentials in primate motor cortex encode grasp kinetic parameters. Neuroimage 114:338- 355. Milekovic T, Sarma AA, Bacher D, Simeral JD, Saab J, Pandarinath C, Sorice BL, Blabe C, Oakley EM, Tringale KR, Eskandar E, Cash SS, Henderson JM, Shenoy KV, Donoghue JP, Hochberg LR (2018) Stable long-term BCI-enabled

224

communication in ALS and locked-in syndrome using LFP signals. J Neurophysiol 120:343-360. Milekovic T, Bacher D, Sarma AA, Simeral JD, Saab J, Pandarinath C, Yvert B, Sorice BL, Blabe C, Oakley EM, Tringale KR, Eskandar E, Cash SS, Shenoy KV, Henderson JM, Hochberg LR, Donoghue JP (2019) Volitional control of single- electrode high gamma local field potentials by people with paralysis. J Neurophysiol 121:1428-1450. Miller KJ, Zanos S, Fetz EE, den Nijs M, Ojemann JG (2009) Decoupling the cortical power spectrum reveals real-time representation of individual finger movements in humans. J Neurosci 29:3132-3137. Miller KJ, Schalk G, Fetz EE, den Nijs M, Ojemann JG, Rao RP (2010) Cortical activity during motor execution, motor imagery, and imagery-based online feedback. Proc Natl Acad Sci U S A 107:4430-4435. Mizuguchi N, Nakamura M, Kanosue K (2017) Task-dependent engagements of the primary visual cortex during kinesthetic and visual motor imagery. Neuroscience letters 636:108-112. Moffitt MA, McIntyre CC (2005) Model-based analysis of cortical recording with silicon microelectrodes. Clin Neurophysiol 116:2240-2250. Moran D (2010) Evolution of brain-computer interface: action potentials, local field potentials and electrocorticograms. Curr Opin Neurobiol 20:741-745. Moran DW, Schwartz AB (1999) Motor cortical representation of speed and direction during reaching. J Neurophysiol 82:2676-2692. Moritz CT, Fetz EE (2011) Volitional control of single cortical neurons in a brain-machine interface. J Neural Eng 8:025017. Moritz CT, Perlmutter SI, Fetz EE (2008) Direct control of paralysed muscles by cortical neurons. Nature 456:639-642. Morrow MM, Miller LE (2003) Prediction of muscle activity by populations of sequentially recorded primary motor cortex neurons. J Neurophysiol 89:2279-2288. Mukamel R, Ekstrom AD, Kaplan J, Iacoboni M, Fried I (2010) Single-neuron responses in humans during execution and observation of actions. Curr Biol 20:750-756. Mulliken GH, Musallam S, Andersen RA (2008) Decoding trajectories from posterior parietal cortex ensembles. J Neurosci 28:12913-12926. Murata A, Gallese V, Luppino G, Kaseda M, Sakata H (2000) Selectivity for the shape, size, and orientation of objects for grasping in neurons of monkey parietal area AIP. J Neurophysiol 83:2580-2601. Murphy BA, Miller JP, Gunalan K, Ajiboye AB (2016) Contributions of Subsurface Cortical Modulations to Discrimination of Executed and Imagined Grasp Forces through Stereoelectroencephalography. PLoS One 11:e0150359. Neely KA, Coombes SA, Planetta PJ, Vaillancourt DE (2013) Segregated and overlapping neural circuits exist for the production of static and dynamic precision grip force. Hum Brain Mapp 34:698-712. Negi S, Bhandari R, Rieth L, Van Wagenen R, Solzbacher F (2010) Neural electrode degradation from continuous electrical stimulation: comparison of sputtered and activated iridium oxide. J Neurosci Methods 186:8-17. Nguyen JK, Park DJ, Skousen JL, Hess-Dunning AE, Tyler DJ, Rowan SJ, Weder C, Capadona JR (2014) Mechanically-compliant intracortical implants reduce the neuroinflammatory response. J Neural Eng 11:056014. Normann RA, Maynard EM, Rousche PJ, Warren DJ (1999) A neural interface for a cortical vision prosthesis. Vision Res 39:2577-2587. NSCISC (2020) Spinal Cord Injury Facts and Figures at a Glance. In. Birmingham, AL: University of Alabama and Birmingham.

225

Nuyujukian P, Fan JM, Gilja V, Kalanithi PS, Chestek CA, Shenoy KV (2011) Monkey models for brain-machine interfaces: the need for maintaining diversity. Conf Proc IEEE Eng Med Biol Soc 2011:1301-1305. Nuyujukian P, Albites Sanabria J, Saab J, Pandarinath C, Jarosiewicz B, Blabe CH, Franco B, Mernoff ST, Eskandar EN, Simeral JD, Hochberg LR, Shenoy KV, Henderson JM (2018) Cortical control of a tablet computer by people with paralysis. PLoS One 13:e0204566. O'Doherty JE, Lebedev MA, Li Z, Nicolelis MA (2012) Virtual active touch using randomly patterned intracortical microstimulation. IEEE Trans Neural Syst Rehabil Eng 20:85-93. O'Doherty JE, Shokur S, Medina LE, Lebedev MA, Nicolelis MAL (2019) Creating a neuroprosthesis for active tactile exploration of textures. Proc Natl Acad Sci U S A 116:21821-21827. O'Shea DJ, Shenoy KV (2018) ERAASR: an algorithm for removing electrical stimulation artifacts from multielectrode array recordings. J Neural Eng 15:026020. Obien ME, Deligkaris K, Bullmann T, Bakkum DJ, Frey U (2014) Revealing neuronal function through microelectrode array recordings. Front Neurosci 8:423. Oby ER, Ethier C, Bauman MJ, Perreault EJ, Ko JH, Miller LE (2010) Prediction of Muscle Activity from Cortical Signals to Restore Hand Grasp in Subjects with Spinal Cord Injury. In: Statistical Signal Processing for Neuroscience and , pp 369-406: Elsevier Inc. Odle BM, Lombardo LM, Audu ML, Triolo RJ (2019) Experimental Implementation of Automatic Control of Posture-Dependent Stimulation in an Implanted Standing Neuroprosthesis. Appl Bionics Biomech 2019:2639271. Paek AY, Gailey A, Parikh P, Santello M, Contreras-Vidal J (2015) Predicting hand forces from scalp electroencephalography during isometric force production and object grasping. Conf Proc IEEE Eng Med Biol Soc 2015:7570-7573. Page SJ, Levine P, Leonard A (2007) Mental practice in chronic stroke: results of a randomized, placebo-controlled trial. Stroke 38:1293-1297. Page SJ, Levine P, Sisto S, Johnston MV (2001) A randomized efficacy and feasibility study of imagery in acute stroke. Clin Rehabil 15:233-240. Pandarinath C, Ames KC, Russo AA, Farshchian A, Miller LE, Dyer EL, Kao JC (2018) Latent Factors and Dynamics in Motor Cortex and Their Application to Brain- Machine Interfaces. J Neurosci 38:9390-9401. Pandarinath C, Nuyujukian P, Blabe CH, Sorice BL, Saab J, Willett FR, Hochberg LR, Shenoy KV, Henderson JM (2017) High performance communication by people with paralysis using an intracortical brain-computer interface. Elife 6. Pandarinath C, Gilja V, Blabe CH, Nuyujukian P, Sarma AA, Sorice BL, Eskandar EN, Hochberg LR, Henderson JM, Shenoy KV (2015) Neural population dynamics in human motor cortex during movements in people with ALS. Elife 4:e07436. Paninski L, Fellows MR, Hatsopoulos NG, Donoghue JP (2004) Spatiotemporal tuning of motor cortical neurons for hand position and velocity. J Neurophysiol 91:515-532. Patel PR, Na K, Zhang H, Kozai TD, Kotov NA, Yoon E, Chestek CA (2015) Insertion of linear 8.4 mum diameter 16 channel carbon fiber electrode arrays for single unit recordings. J Neural Eng 12:046009. Patel PR, Zhang H, Robbins MT, Nofar JB, Marshall SP, Kobylarek MJ, Kozai TD, Kotov NA, Chestek CA (2016) Chronic in vivo stability assessment of carbon fiber microelectrode arrays. J Neural Eng 13:066002. Patrick E, Orazem ME, Sanchez JC, Nishida T (2011) Corrosion of tungsten microelectrodes used in neural recording applications. J Neurosci Methods 198:158-171.

226

Peckham PH (1981) Functional neuromuscular stimulation. Physics in Technology 12:114-121. Peckham PH, Knutson JS (2005) Functional electrical stimulation for neuromuscular applications. Annu Rev Biomed Eng 7:327-360. Peckham PH, Kilgore KL (2013) Challenges and opportunities in restoring function after paralysis. IEEE Trans Biomed Eng 60:602-609. Peckham PH, Keith MW, Freehafer AA (1988) Restoration of functional control by electrical stimulation in the upper extremity of the quadriplegic patient. J Bone Joint Surg Am 70:144-148. Perge JA, Zhang S, Malik WQ, Homer ML, Cash S, Friehs G, Eskandar EN, Donoghue JP, Hochberg LR (2014) Reliability of directional information in unsorted spikes and local field potentials recorded in human motor cortex. J Neural Eng 11:046007. Picard N, Smith AM (1992) Primary motor cortical responses to perturbations of prehension in the monkey. J Neurophysiol 68:1882-1894. Pistohl T, Ball T, Schulze-Bonhage A, Aertsen A, Mehring C (2008) Prediction of arm movement trajectories from ECoG-recordings in humans. J Neurosci Methods 167:105-114. Pistohl T, Schulze-Bonhage A, Aertsen A, Mehring C, Ball T (2012) Decoding natural grasp types from human ECoG. Neuroimage 59:248-260. Pohlmeyer EA, Solla SA, Perreault EJ, Miller LE (2007) Prediction of upper limb muscle activity from motor cortical discharge during reaching. J Neural Eng 4:369-379. Pohlmeyer EA, Oby ER, Perreault EJ, Solla SA, Kilgore KL, Kirsch RF, Miller LE (2009) Toward the restoration of hand use to a paralyzed monkey: brain-controlled functional electrical stimulation of forearm muscles. PLoS One 4:e5924. Porro CA, Francescato MP, Cettolo V, Diamond ME, Baraldi P, Zuiani C, Bazzocchi M, di Prampero PE (1996) Primary motor and sensory cortex activation during motor performance and motor imagery: a functional magnetic resonance imaging study. J Neurosci 16:7688-7698. Potter-Baker KA, Nguyen JK, Kovach KM, Gitomer MM, Srail TW, Stewart WG, Skousen JL, Capadona JR (2014) Development of Superoxide Dismutase Mimetic Surfaces to Reduce Accumulation of Reactive Oxygen Species for Neural Interfacing Applications. J Mater Chem B 2:2248-2258. Potter KA, Buck AC, Self WK, Callanan ME, Sunil S, Capadona JR (2013) The effect of resveratrol on neurodegeneration and blood brain barrier stability surrounding intracortical microelectrodes. Biomaterials 34:7001-7015. Prasad A, Sanchez JC (2012) Quantifying long-term microelectrode array functionality using chronic in vivo impedance testing. J Neural Eng 9:026028. Quallo MM, Price CJ, Ueno K, Asamizuya T, Cheng K, Lemon RN, Iriki A (2009) Gray and white matter changes associated with tool-use learning in macaque monkeys. Proc Natl Acad Sci U S A 106:18379-18384. Quiroga RQ, Nadasdy Z, Ben-Shaul Y (2004) Unsupervised spike detection and sorting with wavelets and superparamagnetic clustering. Neural Comput 16:1661-1687. Rajan AT, Boback JL, Dammann JF, Tenore FV, Wester BA, Otto KJ, Gaunt RA, Bensmaia SJ (2015) The effects of chronic intracortical microstimulation on neural tissue and fine motor behavior. J Neural Eng 12:066018. Ramos-Murguialday A, Schurholz M, Caggiano V, Wildgruber M, Caria A, Hammer EM, Halder S, Birbaumer N (2012) Proprioceptive feedback and brain computer interface (BCI) based neuroprostheses. PLoS One 7:e47048. Rastogi A, Vargas-Irwin CE, Willett FR, Abreu J, Crowder DC, Murphy BA, Memberg WD, Miller JP, Sweet JA, Walter BL, Cash SS, Rezaii PG, Franco B, Saab J,

227

Stavisky SD, Shenoy KV, Henderson JM, Hochberg LR, Kirsch RF, Ajiboye AB (2020) Neural Representation of Observed, Imagined, and Attempted Grasping Force in Motor Cortex of Individuals with Chronic Tetraplegia. Sci Rep 10:1429. Rearick MP, Johnston JA, Slobounov SM (2001) Feedback-dependent modulation of isometric force control: an EEG study in visuomotor integration. Brain Res Cogn Brain Res 12:117-130. Reimer J, Hatsopoulos NG (2009) The problem of parametric neural coding in the motor system. Adv Exp Med Biol 629:243-259. Rennaker RL, Street S, Ruyle AM, Sloan AM (2005) A comparison of chronic multi- channel cortical implantation techniques: manual versus mechanical insertion. J Neurosci Methods 142:169-176. Rennaker RL, Miller J, Tang H, Wilson DA (2007) Minocycline increases quality and longevity of chronic neural recordings. J Neural Eng 4:L1-5. Renshaw B, Forbes A, Morison BR (1940) ACTIVITY OF ISOCORTEX AND HIPPOCAMPUS: ELECTRICAL STUDIES WITH MICRO-ELECTRODES. Journal of Neurophysiology 3:74-105. Rey HG, Pedreira C, Quian Quiroga R (2015) Past, present and future of spike sorting techniques. Brain Res Bull 119:106-117. Rickert J, Oliveira SC, Vaadia E, Aertsen A, Rotter S, Mehring C (2005) Encoding of movement direction in different frequency ranges of motor cortical local field potentials. J Neurosci 25:8815-8824. Rizzolatti G, Craighero L (2004) The mirror-neuron system. Annu Rev Neurosci 27:169- 192. Rizzolatti G, Fadiga L, Gallese V, Fogassi L (1996) Premotor cortex and the recognition of motor actions. Brain Res Cogn Brain Res 3:131-141. Rousche PJ, Normann RA (1992) A method for pneumatically inserting an array of penetrating electrodes into cortical tissue. Ann Biomed Eng 20:413-422. Rousche PJ, Normann RA (1998) Chronic recording capability of the Utah Intracortical Electrode Array in cat sensory cortex. J Neurosci Methods 82:1-15. Rushworth MF, Krams M, Passingham RE (2001) The attentional role of the left parietal cortex: the distinct lateralization and localization of motor attention in the human brain. J Cogn Neurosci 13:698-710. Russ MO, Mack W, Grama CR, Lanfermann H, Knopf M (2003) Enactment effect in memory: evidence concerning the function of the supramarginal gyrus. Exp Brain Res 149:497-504. Rutishauser U, Aflalo T, Rosario ER, Pouratian N, Andersen RA (2018) Single-Neuron Representation of Memory Strength and Recognition Confidence in Left Human Posterior Parietal Cortex. Neuron 97:209-220 e203. Ryu SI, Shenoy KV (2009) Human cortical prostheses: lost in translation? Neurosurg Focus 27:E5. Sainburg RL, Poizner H, Ghez C (1993) Loss of proprioception produces deficits in interjoint coordination. J Neurophysiol 70:2136-2147. Sakata H, Taira M, Kusunoki M, Murata A, Tanaka Y (1997) The TINS Lecture. The parietal association cortex in depth perception and visual control of hand action. Trends Neurosci 20:350-357. Sakellaridi S, Christopoulos VN, Aflalo T, Pejsa KW, Rosario ER, Ouellette D, Pouratian N, Andersen RA (2019) Intrinsic Variable Learning for Brain-Machine Interface Control by Human Anterior Intraparietal Cortex. Neuron 102:694-705 e693. Salcman M, Bak MJ (1973) Design, fabrication, and in vivo behavior of chronic recording intracortical microelectrodes. IEEE Trans Biomed Eng 20:253-260.

228

Salcman M, Bak MJ (1976) A new chronic recording intracortical microelectrode. Med Biol Eng 14:42-50. Sanes JN, Donoghue JP (1993) Oscillations in local field potentials of the primate motor cortex during voluntary movement. Proc Natl Acad Sci U S A 90:4470-4474. Sanes JN, Donoghue JP (2000) Plasticity and primary motor cortex. Annu Rev Neurosci 23:393-415. Santhanam G, Ryu SI, Yu BM, Afshar A, Shenoy KV (2006) A high-performance brain- computer interface. Nature 442:195-198. Santhanam G, Linderman MD, Gilja V, Afshar A, Ryu SI, Meng TH, Shenoy KV (2007) HermesB: a continuous neural recording system for freely behaving primates. IEEE Trans Biomed Eng 54:2037-2050. Santucci DM, Kralik JD, Lebedev MA, Nicolelis MA (2005) Frontal and parietal cortical ensembles predict single-trial muscle activity during reaching movements in primates. Eur J Neurosci 22:1529-1540. Sburlea AI, Muller-Putz GR (2018) Exploring representations of human grasping in neural, muscle and kinematic signals. Sci Rep 8:16669. Schaffelhofer S, Agudelo-Toro A, Scherberger H (2015) Decoding a wide range of hand configurations from macaque motor, premotor, and parietal cortices. J Neurosci 35:1068-1081. Schalk G, Leuthardt EC (2011) Brain-computer interfaces using electrocorticographic signals. IEEE Rev Biomed Eng 4:140-154. Schalk G, Miller KJ, Anderson NR, Wilson JA, Smyth MD, Ojemann JG, Moran DW, Wolpaw JR, Leuthardt EC (2008) Two-dimensional movement control using electrocorticographic signals in humans. J Neural Eng 5:75-84. Schalk G, Kubanek J, Miller KJ, Anderson NR, Leuthardt EC, Ojemann JG, Limbrick D, Moran D, Gerhardt LA, Wolpaw JR (2007) Decoding two-dimensional movement trajectories using electrocorticographic signals in humans. J Neural Eng 4:264- 275. Scherberger H, Jarvis MR, Andersen RA (2005) Cortical local field potential encodes movement intentions in the posterior parietal cortex. Neuron 46:347-354. Schiefer MA, Graczyk EL, Sidik SM, Tan DW, Tyler DJ (2018) Artificial tactile and proprioceptive feedback improves performance and confidence on object identification tasks. PLoS One 13:e0207659. Schmidt EM, McIntosh JS, Bak MJ (1988) Long-term implants of Parylene-C coated microelectrodes. Med Biol Eng Comput 26:96-101. Schroeder KE, Chestek CA (2016) Intracortical Brain-Machine Interfaces Advance Sensorimotor Neuroscience. Front Neurosci 10:291. Schwartz AB, Cui XT, Weber DJ, Moran DW (2006) Brain-controlled interfaces: movement restoration with neural prosthetics. Neuron 52:205-220. Schwarz A, Ofner P, Pereira J, Sburlea AI, Muller-Putz GR (2018) Decoding natural reach-and-grasp actions from human EEG. J Neural Eng 15:016005. Sergio LE, Kalaska JF (1998) Changes in the temporal pattern of primary motor cortex activity in a directional isometric force versus limb movement task. J Neurophysiol 80:1577-1583. Sergio LE, Kalaska JF (2003) Systematic changes in motor cortex cell activity with arm posture during directional isometric force generation. J Neurophysiol 89:212-228. Sergio LE, Hamel-Paquet C, Kalaska JF (2005) Motor cortex neural correlates of output kinematics and kinetics during isometric-force and arm-reaching tasks. J Neurophysiol 94:2353-2378. Serruya MD, Hatsopoulos NG, Paninski L, Fellows MR, Donoghue JP (2002) Instant neural control of a movement signal. Nature 416:141-142.

229

Seymour JP, Kipke DR (2007) Neural probe design for reduced tissue encapsulation in CNS. Biomaterials 28:3594-3607. Seymour JP, Langhals NB, Anderson DJ, Kipke DR (2011) Novel multi-sided, microelectrode arrays for implantable neural applications. Biomed Microdevices 13:441-451. Shaikhouni A, Donoghue JP, Hochberg LR (2013) Somatosensory responses in a human motor cortex. J Neurophysiol 109:2192-2204. Shain W, Spataro L, Dilgen J, Haverstick K, Retterer S, Isaacson M, Saltzman M, Turner JN (2003) Controlling cellular reactive responses around neural prosthetic devices using peripheral and local intervention strategies. IEEE Trans Neural Syst Rehabil Eng 11:186-188. Shanechi MM, Orsborn AL, Carmena JM (2016) Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering. PLoS Comput Biol 12:e1004730. Shanechi MM, Williams ZM, Wornell GW, Hu RC, Powers M, Brown EN (2013) A real- time brain-machine interface combining motor target and trajectory intent using an optimal feedback control design. PLoS One 8:e59049. Sharma A, Rieth L, Tathireddy P, Harrison R, Solzbacher F (2010) Long term in vitro stability of fully integrated wireless neural interfaces based on Utah slant electrode array. Appl Phys Lett 96:73702. Sharma G, Annetta N, Friedenberg D, Blanco T, Vasconcelos D, Shaikhouni A, Rezai AR, Bouton C (2015) Time Stability and Coherence Analysis of Multiunit, Single- Unit and Local Field Potential Neuronal Signals in Chronically Implanted Brain Electrodes. Bioelectronic Medicine 2:63-71. Shenoy KV, Sahani M, Churchland MM (2013) Cortical control of arm movements: a dynamical systems perspective. Annu Rev Neurosci 36:337-359. Shepard RN, Chipman S (1970) Second-Order Isomorphism of Internal Representations - Shapes of States. Cognitive Psychol 1:1-17. Shin HC, Aggarwal V, Acharya S, Schieber MH, Thakor NV (2010) Neural decoding of finger movements using Skellam-based maximum-likelihood decoding. IEEE Trans Biomed Eng 57:754-760. Simeral JD, Kim SP, Black MJ, Donoghue JP, Hochberg LR (2011) Neural control of cursor trajectory and click by a human with tetraplegia 1000 days after implant of an intracortical microelectrode array. J Neural Eng 8:025027. Simpson LA, Eng JJ, Hsieh JT, Wolfe DL, Spinal Cord Injury Rehabilitation Evidence Scire Research T (2012) The health and life priorities of individuals with spinal cord injury: a systematic review. J Neurotrauma 29:1548-1555. Skomrock ND, Schwemmer MA, Ting JE, Trivedi HR, Sharma G, Bockbrader MA, Friedenberg DA (2018) A Characterization of Brain-Computer Interface Performance Trade-Offs Using Support Vector Machines and Deep Neural Networks to Decode Movement Intent. Front Neurosci 12:763. Skousen JL, Merriam SM, Srivannavit O, Perlin G, Wise KD, Tresco PA (2011) Reducing surface area while maintaining implant penetrating profile lowers the brain foreign body response to chronically implanted planar silicon microelectrode arrays. Prog Brain Res 194:167-180. Smith AM, Hepp-Reymond MC, Wyss UR (1975) Relation of activity in precentral cortical neurons to force and rate of force change during isometric contractions of finger muscles. Exp Brain Res 23:315-332. Smith B, Crish TJ, Buckett JR, Kilgore KL, Peckham PH (2005) Development of an implantable networked neuroprosthesis. In: Conference Proceedings. 2nd International IEEE EMBS Conference on Neural Engineering, 2005., pp 454-457.

230

Solstrand Dahlberg L, Becerra L, Borsook D, Linnman C (2018) Brain changes after spinal cord injury, a quantitative meta-analysis and review. Neurosci Biobehav Rev 90:272-293. Sridharan A, Rajan SD, Muthuswamy J (2013) Long-term changes in the material properties of brain tissue at the implant-tissue interface. J Neural Eng 10:066001. Sridharan A, Nguyen JK, Capadona JR, Muthuswamy J (2015) Compliant intracortical implants reduce strains and strain rates in brain tissue in vivo. J Neural Eng 12:036002. Stark E, Abeles M (2007) Predicting movement from multiunit activity. J Neurosci 27:8387-8394. Stark E, Asher I, Abeles M (2007a) Encoding of reach and grasp by single neurons in premotor cortex is independent of recording site. J Neurophysiol 97:3351-3364. Stark E, Drori R, Asher I, Ben-Shaul Y, Abeles M (2007b) Distinct movement parameters are represented by different neurons in the motor cortex. Eur J Neurosci 26:1055-1066. Stavisky SD, Rezaii P, Willett FR, Hochberg LR, Shenoy KV, Henderson JM (2018) Decoding Speech from Intracortical Multielectrode Arrays in Dorsal "Arm/Hand Areas" of Human Motor Cortex. Conf Proc IEEE Eng Med Biol Soc 2018:93-97. Stevens JA (2005) Interference effects demonstrate distinct roles for visual and motor imagery during the mental representation of human action. Cognition 95:329- 350. Stiller AM, Black BJ, Kung C, Ashok A, Cogan SF, Varner VD, Pancrazio JJ (2018) A Meta-Analysis of Intracortical Device Stiffness and Its Correlation with Histological Outcomes. Micromachines (Basel) 9. Subbaroyan J, Martin DC, Kipke DR (2005) A finite-element model of the mechanical effects of implantable microelectrodes in the cerebral cortex. J Neural Eng 2:103- 113. Suminski AJ, Willett FR, Fagg AH, Bodenhamer M, Hatsopoulos NG (2011) Continuous decoding of intended movements with a hybrid kinetic and kinematic brain machine interface. Conf Proc IEEE Eng Med Biol Soc 2011:5802-5806. Suner S, Fellows MR, Vargas-Irwin C, Nakata GK, Donoghue JP (2005) Reliability of signals from a chronically implanted, silicon-based electrode array in non-human primate primary motor cortex. IEEE Trans Neural Syst Rehabil Eng 13:524-541. Szarowski DH, Andersen MD, Retterer S, Spence AJ, Isaacson M, Craighead HG, Turner JN, Shain W (2003) Brain responses to micro-machined silicon devices. Brain Res 983:23-35. Tabot GA, Kim SS, Winberry JE, Bensmaia SJ (2015) Restoring tactile and proprioceptive sensation through a brain interface. Neurobiol Dis 83:191-198. Tabot GA, Dammann JF, Berg JA, Tenore FV, Boback JL, Vogelstein RJ, Bensmaia SJ (2013) Restoring the sense of touch with a prosthetic hand through a brain interface. Proc Natl Acad Sci U S A 110:18279-18284. Taira M, Boline J, Smyrnis N, Georgopoulos AP, Ashe J (1996) On the relations between single cell activity in the motor cortex and the direction and magnitude of three-dimensional static isometric force. Exp Brain Res 109:367-376. Tan DW, Schiefer MA, Keith MW, Anderson JR, Tyler J, Tyler DJ (2014) A neural interface provides long-term stable natural touch perception. Sci Transl Med 6:257ra138. Taylor DM, Tillery SI, Schwartz AB (2002a) Direct cortical control of 3D neuroprosthetic devices. Science 296:1829-1832. Taylor P, Esnouf J, Hobby J (2002b) The functional impact of the Freehand System on tetraplegic hand function. Clinical Results. Spinal Cord 40:560-566.

231

Thach WT (1978) Correlation of neural discharge with pattern and force of muscular activity, joint position, and direction of intended next movement in motor cortex and cerebellum. J Neurophysiol 41:654-676. Thickbroom GW, Phillips BA, Morris I, Byrnes ML, Mastaglia FL (1998) Isometric force- related activity in sensorimotor cortex measured with functional MRI. Exp Brain Res 121:59-64. Tkach D, Reimer J, Hatsopoulos NG (2007) Congruent activity during action and action observation in motor cortex. J Neurosci 27:13241-13250. Todorova S, Sadtler P, Batista A, Chase S, Ventura V (2014) To sort or not to sort: the impact of spike-sorting on neural decoding performance. J Neural Eng 11:056005. Townsend BR, Subasi E, Scherberger H (2011) Grasp movement decoding from premotor and parietal cortex. J Neurosci 31:14386-14398. Trautmann EM, Stavisky SD, Lahiri S, Ames KC, Kaufman MT, O'Shea DJ, Vyas S, Sun X, Ryu SI, Ganguli S, Shenoy KV (2019) Accurate Estimation of Neural Population Dynamics without Spike Sorting. Neuron 103:292-308 e294. Treder MS, Blankertz B (2010) (C)overt attention and visual speller design in an ERP- based brain-computer interface. Behav Brain Funct 6:28. Triolo RJ, Bailey SN, Miller ME, Rohde LM, Anderson JS, Davis JA, Jr., Abbas JJ, DiPonio LA, Forrest GP, Gater DR, Jr., Yang LJ (2012) Longitudinal performance of a surgically implanted neuroprosthesis for lower-extremity exercise, standing, and transfers after spinal cord injury. Arch Phys Med Rehabil 93:896-904. Truccolo W, Friehs GM, Donoghue JP, Hochberg LR (2008) Primary motor cortex tuning to intended movement kinematics in humans with tetraplegia. J Neurosci 28:1163-1178. Umilta MA, Kohler E, Gallese V, Fogassi L, Fadiga L, Keysers C, Rizzolatti G (2001) I know what you are doing. a neurophysiological study. Neuron 31:155-165. van der Maaten L, Hinton G (2008) Visualizing Data using t-SNE. Journal of Machine Learning Research 9:2579-2605. Vargas-Irwin CE, Brandman DM, Zimmermann JB, Donoghue JP, Black MJ (2015) Spike train SIMilarity Space (SSIMS): a framework for single neuron and ensemble data analysis. Neural Comput 27:1-31. Vargas-Irwin CE, Shakhnarovich G, Yadollahpour P, Mislow JM, Black MJ, Donoghue JP (2010) Decoding complete reach and grasp actions from local primary motor cortex populations. J Neurosci 30:9659-9669. Vargas-Irwin CE, Feldman JM, King B, Simeral JD, Sorice BL, Oakley EM, Cash SS, Eskandar EN, Friehs GM, Hochberg LR, Donoghue JP (2018) Watch, Imagine, Attempt: Motor Cortex Single-Unit Activity Reveals Context-Dependent Movement Encoding in Humans With Tetraplegia. Frontiers in Human Neuroscience 12. Velliste M, Perel S, Spalding MC, Whitford AS, Schwartz AB (2008) Cortical control of a prosthetic arm for self-feeding. Nature 453:1098-1101. Vetter RJ, Otto KJ, Marzullo TC, Kipke DR (2003) Brain-machine interfaces in rat motor cortex: neuronal operant conditioning to perform a sensory detection task. In: First International IEEE EMBS Conference on Neural Engineering, 2003. Conference Proceedings., pp 637-640. Vetter RJ, Williams JC, Hetke JF, Nunamaker EA, Kipke DR (2004) Chronic neural recording using silicon-substrate microelectrode arrays implanted in cerebral cortex. IEEE Trans Biomed Eng 51:896-904.

232

Vigneswaran G, Philipp R, Lemon RN, Kraskov A (2013) M1 corticospinal mirror neurons and their role in movement suppression during action observation. Curr Biol 23:236-243. Vinjamuri R, Weber DJ, Mao ZH, Collinger JL, Degenhart AD, Kelly JW, Boninger ML, Tyler-Kabara EC, Wang W (2011) Toward synergy-based brain-machine interfaces. IEEE Trans Inf Technol Biomed 15:726-736. Volkova K, Lebedev MA, Kaplan A, Ossadtchi A (2019) Decoding Movement From Electrocorticographic Activity: A Review. Front Neuroinform 13:74. Wang K, Wang Z, Guo Y, He F, Qi H, Xu M, Ming D (2017) A brain-computer interface driven by imagining different force loads on a single hand: an online feasibility study. J Neuroeng Rehabil 14:93. Wang W, Chan SS, Heldman DA, Moran DW (2007) Motor cortical representation of position and velocity during reaching. J Neurophysiol 97:4258-4270. Wang W, Chan SS, Heldman DA, Moran DW (2010) Motor cortical representation of hand translation and rotation during reaching. J Neurosci 30:958-962. Wang W, Collinger JL, Degenhart AD, Tyler-Kabara EC, Schwartz AB, Moran DW, Weber DJ, Wodlinger B, Vinjamuri RK, Ashmore RC, Kelly JW, Boninger ML (2013) An electrocorticographic brain interface in an individual with tetraplegia. PLoS One 8:e55344. Wang W, Degenhart AD, Collinger JL, Vinjamuri R, Sudre GP, Adelson PD, Holder DL, Leuthardt EC, Moran DW, Boninger ML, Schwartz AB, Crammond DJ, Tyler- Kabara EC, Weber DJ (2009) Human motor cortical activity recorded with Micro- ECoG electrodes, during individual finger movements. Conf Proc IEEE Eng Med Biol Soc 2009:586-589. Wannier TM, Maier MA, Hepp-Reymond MC (1991) Contrasting properties of monkey somatosensory and motor cortex neurons activated during the control of force in precision grip. J Neurophysiol 65:572-589. Ward MP, Rajdev P, Ellison C, Irazoqui PP (2009) Toward a comparison of microelectrodes for acute and chronic recordings. Brain Res 1282:183-200. Ward NS, Swayne OB, Newton JM (2008) Age-dependent changes in the neural correlates of force modulation: an fMRI study. Neurobiol Aging 29:1434-1446. Ware T, Simon D, Hearon K, Liu C, Shah S, Reeder J, Khodaparast N, Kilgard MP, Maitland DJ, Rennaker RL, 2nd, Voit WE (2012) Three-Dimensional Flexible Electronics Enabled by Shape Memory Polymer Substrates for Responsive Neural Interfaces. Macromol Mater Eng 297:1193-1202. Wark HA, Sharma R, Mathews KS, Fernandez E, Yoo J, Christensen B, Tresco P, Rieth L, Solzbacher F, Normann RA, Tathireddy P (2013) A new high-density (25 electrodes/mm(2)) penetrating microelectrode array for recording and stimulating sub-millimeter neuroanatomical structures. J Neural Eng 10:045003. Weiss JM, Gaunt RA, Franklin R, Boninger ML, Collinger JL (2019) Demonstration of a portable intracortical brain-computer interface. Brain-Computer Interfaces 6:106- 117. Wellman SM, Eles JR, Ludwig KA, Seymour JP, Michelson NJ, McFadden WE, Vazquez AL, Kozai TDY (2018) A Materials Roadmap to Functional Neural Interface Design. Adv Funct Mater 28. Wessberg J, Stambaugh CR, Kralik JD, Beck PD, Laubach M, Chapin JK, Kim J, Biggs SJ, Srinivasan MA, Nicolelis MA (2000) Real-time prediction of hand trajectory by ensembles of cortical neurons in primates. Nature 408:361-365. Westling G, Johansson RS (1984) Factors influencing the force control during precision grip. Exp Brain Res 53:277-284.

233

Wilcox RR (2017) Introdction to Robust Estimation and Hypothesis Testing, 3 Edition. Cambridge, MA: Academic Press. Willett FR, Deo DR, Avansino DT, Rezaii P, Hochberg LR, Henderson JM, Shenoy KV (2020) Hand Knob Area of Premotor Cortex Represents the Whole Body in a Compositional Way. Cell 181:396-409 e326. Willett FR, Murphy BA, Memberg WD, Blabe CH, Pandarinath C, Walter BL, Sweet JA, Miller JP, Henderson JM, Shenoy KV, Hochberg LR, Kirsch RF, Ajiboye AB (2017a) Signal-independent noise in intracortical brain-computer interfaces causes movement time properties inconsistent with Fitts' law. J Neural Eng 14:026010. Willett FR, Pandarinath C, Jarosiewicz B, Murphy BA, Memberg WD, Blabe CH, Saab J, Walter BL, Sweet JA, Miller JP, Henderson JM, Shenoy KV, Simeral JD, Hochberg LR, Kirsch RF, Ajiboye AB (2017b) Feedback control policies employed by people using intracortical brain-computer interfaces. J Neural Eng 14:016001. Willett FR, Murphy BA, Young DR, Memberg WD, Blabe CH, Pandarinath C, Franco B, Saab J, Walter BL, Sweet JA, Miller JP, Henderson JM, Shenoy KV, Simeral JD, Jarosiewicz B, Hochberg LR, Kirsch RF, Ajiboye AB (2018) A Comparison of Intention Estimation Methods for Decoder Calibration in Intracortical Brain- Computer Interfaces. IEEE Trans Biomed Eng 65:2066-2078. Willett FR, Young DR, Murphy BA, Memberg WD, Blabe CH, Pandarinath C, Stavisky SD, Rezaii P, Saab J, Walter BL, Sweet JA, Miller JP, Henderson JM, Shenoy KV, Simeral JD, Jarosiewicz B, Hochberg LR, Kirsch RF, Bolu Ajiboye A (2019) Principled BCI Decoder Design and Parameter Selection Using a Feedback Control Model. Sci Rep 9:8881. Williams JC, Rennaker RL, Kipke DR (1999) Long-term neural recording characteristics of wire microelectrode arrays implanted in cerebral cortex. Brain Res Brain Res Protoc 4:303-313. Wise KD (2005) Silicon microsystems for neuroscience and neural prostheses. IEEE Eng Med Biol Mag 24:22-29. Wise KD, Angell JB, Starr A (1970) An integrated-circuit approach to extracellular microelectrodes. IEEE Trans Biomed Eng 17:238-247. Wodlinger B, Downey JE, Tyler-Kabara EC, Schwartz AB, Boninger ML, Collinger JL (2015) Ten-dimensional anthropomorphic arm control in a human brain-machine interface: difficulties, solutions, and limitations. J Neural Eng 12:016011. Wolbarsht ML, Macnichol EF, Jr., Wagner HG (1960) Glass Insulated Platinum Microelectrode. Science 132:1309-1310. Wolpaw JR, Birbaumer N, McFarland DJ, Pfurtscheller G, Vaughan TM (2002) Brain- computer interfaces for communication and control. Clin Neurophysiol 113:767- 791. Wu W, Gao Y, Bienenstock E, Donoghue JP, Black MJ (2006) Bayesian population decoding of motor cortical activity using a Kalman filter. Neural Comput 18:80- 118. Xie T, Zhang D, Wu Z, Chen L, Zhu X (2015) Classifying multiple types of hand motions using electrocorticography during intraoperative awake craniotomy and seizure monitoring processes-case studies. Front Neurosci 9:353. Yanagisawa T, Hirata M, Saitoh Y, Goto T, Kishima H, Fukuma R, Yokoi H, Kamitani Y, Yoshimine T (2011) Real-time control of a prosthetic hand using human electrocorticography signals. J Neurosurg 114:1715-1722. Young D, Willett F, Memberg WD, Murphy B, Walter B, Sweet J, Miller J, Hochberg LR, Kirsch RF, Ajiboye AB (2018) Signal processing methods for reducing artifacts in

234

microelectrode brain recordings caused by functional electrical stimulation. J Neural Eng 15:026014. Young D, Willett F, Memberg WD, Murphy B, Rezaii P, Walter B, Sweet J, Miller J, Shenoy KV, Hochberg LR, Kirsch RF, Ajiboye AB (2019) Closed-loop cortical control of virtual reach and posture using Cartesian and joint velocity commands. J Neural Eng 16:026011. Yousry TA, Schmid UD, Alkadhi H, Schmidt D, Peraud A, Buettner A, Winkler P (1997) Localization of the motor hand area to a knob on the precentral gyrus. A new landmark. Brain 120 ( Pt 1):141-157. Zhang CY, Aflalo T, Revechkis B, Rosario ER, Ouellette D, Pouratian N, Andersen RA (2017) Partially Mixed Selectivity in Human Posterior Parietal Association Cortex. Neuron 95:697-708 e694. Zhuang J, Truccolo W, Vargas-Irwin C, Donoghue JP (2010) Decoding 3-D reach and grasp kinematics from high-frequency local field potentials in primate primary motor cortex. IEEE Trans Biomed Eng 57:1774-1784.

235