APPLYING BLOCK-BASED PROGRAMMING TO NEUROFEEDBACK APPLICATION DEVELOPMENT

By CHRIS S. CRAWFORD

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2017

© 2017 Chris S. Crawford

ACKNOWLEDGMENTS

I would like to thank my Lord and Savior Jesus Christ for carrying me to heights I never imagined. To my loving mother, Dolores Crawford, for always believing in me. To my father, Chris S. Crawford Sr., for ensuring I never strayed from the path of success. To my grandparents, Lubertha and O.D. Crawford, who taught me how to think critically and work hard. To my aunt, Ida Tyree-Hyche, for providing me with love and knowledge that truly changed my life. I would also like to express my extreme gratitude to my advisor, Dr. Juan E. Gilbert, who has guided me through my years as a Ph.D. student. I want to thank Dr. Monica Anderson for introducing me to research. I would like to thank my committee members, Dr. Kyla McMullen, Dr. Christina Gardner-McCune, and Dr. James Oliverio, for their feedback and support of my research. Lastly, I would like to thank Intel Corporation, GEM, and NSF for their financial support.

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT

1 INTRODUCTION
  1.1 Motivation
  1.2 Objective
  1.3 Challenges
  1.4 Thesis Statement
  1.5 Overview of Approach
  1.6 Contributions and Implications for Design
  1.7 Dissertation Organization

2 LITERATURE REVIEW
  2.1 BCI Software Platforms
    2.1.1 Method Focused BCI Software Platforms
    2.1.2 Application Focused BCI Software Platforms
  2.2 Block-Based Programming (BBP)
    2.2.1 Block-Based Programming Environments
    2.2.2 End-User Programming with BBP Environments

3 DESIGN AND IMPLEMENTATION OF A BLOCK-BASED NEUROFEEDBACK APPLICATION DEVELOPMENT ENVIRONMENT
  3.1 Challenges
    3.1.1 System Configuration
    3.1.2 Text-based Languages
    3.1.3 Feedback
  3.2 NeuroBlock Design Methodology
    3.2.1 EEG Apparatus
    3.2.2 EEG Data Communication
    3.2.3 Web Application
      3.2.3.1 Blocks
      3.2.3.2 Neurofeedback
  3.3 Implementing NeuroBlock
    3.3.1 EEG Data Communication
    3.3.2 Feedback

    3.3.3 Stage and Workspace Management
  3.4 Pilot Study
    3.4.1 Population and Procedure
    3.4.2 Observations
    3.4.3 Discussions
    3.4.4 Pilot Study Conclusion
  3.5 Chapter Summary
  3.6 Conclusion

4 EVALUATION OF NEUROBLOCK
  4.1 Participants
  4.2 Procedures
  4.3 Methodology
  4.4 Results
    4.4.1 Session One
      4.4.1.1 Programming efficacy
      4.4.1.2 Learning barriers
      4.4.1.3 Usability
      4.4.1.4 Self-efficacy, effectiveness, and efficiency
      4.4.1.5 Interviews
    4.4.2 Session Two
      4.4.2.1 Programming efficacy
      4.4.2.2 Learning barriers
      4.4.2.3 Self-efficacy, effectiveness, and efficiency
      4.4.2.4 Interviews
    4.4.3 Session Three
      4.4.3.1 Programming efficacy
      4.4.3.2 Learning barriers
      4.4.3.3 Self-efficacy, effectiveness, and efficiency
      4.4.3.4 Interviews
    4.4.4 Summary

5 SUMMARY AND FUTURE DIRECTIONS
  5.1 Research Questions Revisited
  5.2 Contributions
  5.3 Limitations
  5.4 Future Work
  5.5 Conclusion

APPENDIX

A STUDY PROTOCOL AND MATERIALS
  A.1 Study Procedures
  A.2 Recruitment Flyer

  A.3 Screening Questionnaire
  A.4 BCI Self-Efficacy Survey
  A.5 Programming Self-Efficacy Survey
  A.6 Session One Pre-Task Instructions
  A.7 Session One Task Instructions
  A.8 Session Two Pre-Task Instructions
  A.9 Session Two Task Instructions
  A.10 Session Three Pre-Task Instructions
  A.11 Session Three Task Instructions

B SELECTED NEUROBLOCK PROGRAMS
  B.1 Session One Pre-Task Program
  B.2 Session One Task Program
  B.3 Session Two Pre-Task Program
  B.4 Session Two Task Program
  B.5 Session Three Pre-Task Program
  B.6 Session Three Task Program

REFERENCES
BIOGRAPHICAL SKETCH

LIST OF TABLES

3-1 EEG frequency bands related to various mental states
3-2 Stage components
3-3 Block components
4-1 Learning barriers coding scheme
4-2 Programming self-efficacy questions
4-3 SUS questions
4-4 BCI self-efficacy questions
4-5 Session one selected responses related to the five main categories
4-6 Session two selected responses related to the five main categories

LIST OF FIGURES

2-1 BCI system design
2-2 Operator, data acquisition, signal processing, and user application programs, which form the core of BCI2000
2-3 Configuration menu for BCI pipeline steps
2-4 Screenshot of Simulink with rtsBCI and feedback application
2-5 BCILAB GUI panels used for visualizing EEG data, calibrating models, scripting, and modifying evaluation approaches
2-6 Pyff GUI that BCI experimenters may use to select, modify, and control feedback applications
2-7 2D visualization of signals and time-frequency dynamics
2-8 OpenViBE visual programming GUI and 3D spatial topography
2-9 Student showing BCI-based science fair project which used OpenViBE to control a robotic arm at the United States Nation's capital
2-10 Alice 3 scene editor, program preview, and block editor
2-11 Scratch interface
2-12 App Inventor designer interface
2-13 App Inventor block interface
2-14 Block-based programming environment designed for clinicians
2-15 Block-based programming environment designed for physical prototyping
2-16 A visual programming framework for wireless sensor networks in smart home applications
3-1 Interaxon Muse. A) EEG headset, B) Muse electrode positions, and C) User wearing Muse
3-2 System design
3-3 2D signal visualization
3-4 3D topographic map
3-5 EEG apparatuses. A) Neurosky Mindwave, B) Interaxon Muse, C) Emotiv Insight, D) Emotiv EPOC, E) OpenBCI Ultracortex "Mark IV", and F) g.Nautilus

3-6 User wearing Interaxon Muse EEG apparatus
3-7 User building neurofeedback application with NeuroBlock
3-8 Web application interface with affective state data viewer selected
3-9 Web application interface with sprite viewer selected
3-10 Stage figure
3-11 Example of blocks separated
3-12 Connected blocks
3-13 Motion blocks
3-14 Sound blocks
3-15 Event blocks
3-16 Control blocks
3-17 Simple script that moves an object based on a user's relaxation level
3-18 Sensing blocks
3-19 Operator blocks
3-20 Affective state line graphs
3-21 Signal quality feedback. A) Good signal quality, B) OK signal quality, C) Bad signal quality, D) Varying signal quality, and E) No signal
3-22 Javascript code snippet that passes information captured from the Interaxon Muse to the web application
3-23 Javascript code snippet that receives data sent from the server
3-24 Object management pane
3-25 Object selection menu
3-26 Pre-task effectiveness
3-27 Task effectiveness
4-1 Screenshot of the stage component during session one task
4-2 Screenshot of the stage component during session two task
4-3 Screenshot of the stage component during session three task
4-4 Session one average programming self-efficacy scores for each question

4-5 Session one barriers
4-6 Session one average SUS scores for each question
4-7 Session one effectiveness
4-8 Session one BCI self-efficacy scores for each question
4-9 The relationship between BCI self-efficacy and time on task during session one
4-10 Session two average programming self-efficacy scores for each question
4-11 Session two barriers
4-12 Session two average SUS scores for each question
4-13 Session two average BCI self-efficacy scores for each question
4-14 Session two effectiveness
4-15 The relationship between BCI self-efficacy and time on task during session two
4-16 Simple script
4-17 Session three average programming self-efficacy scores for each question
4-18 Session three barriers
4-19 Session three average SUS scores for each question
4-20 Session three effectiveness
4-21 Session three average BCI self-efficacy scores for each question
4-22 The relationship between BCI self-efficacy and time on task during session three
4-23 Total barriers encountered for each session
4-24 SUS scores for each session

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

APPLYING BLOCK-BASED PROGRAMMING TO NEUROFEEDBACK APPLICATION DEVELOPMENT

By Chris S. Crawford

August 2017

Chair: Juan E. Gilbert
Major: Human-Centered Computing

Non-medical Brain-Computer Interfaces (BCIs) are gaining popularity as neurophysiological devices capable of measuring a user's state become more affordable and effective. BCI technology has been integrated into applications for various purposes, such as device control, user-state monitoring, training and education, cognitive improvement, neurophysiological evaluation, and gaming. Although there are BCI software platforms (BCISPs) that assist experienced developers with creating applications, there are few options designed for novice BCI feedback application developers wanting to create custom applications. The current lack of novice-friendly BCI development tools may present a barrier to entry for users interested in getting started with developing neurofeedback applications.

This research explores the concept of using block-based programming (BBP) to aid novice programmers with creating BCI feedback applications. This work presents a BBP environment with electroencephalogram (EEG) sensing features that was designed and implemented based on insights gained from previous BCI and visual programming languages literature. This system was evaluated in a formal study that tasked forty-two novice programmers with developing neurofeedback applications. Three exercises were designed to investigate participants' efficiency and effectiveness with the developed BBP environment. BCI self-efficacy, end-user programming barriers, and usability were also evaluated during the study. The results of this study suggest that although

novice programmers encounter barriers related to creating block-based affective state mathematical functions, they find creating neurofeedback applications with the visual environment easy. These results also suggest that BCI self-efficacy may serve as a predictor of the amount of time it takes novice developers to create neurofeedback applications prior to being exposed to the BBP environment.

CHAPTER 1
INTRODUCTION

Brain-Computer Interface (BCI) systems are used to measure central nervous system (CNS) activity. BCI systems convert CNS activity into artificial output that is then used to replace, restore, enhance, supplement, or improve natural CNS output [1]. A BCI functions by acquiring brain signals, identifying patterns, and producing actions based on the observed patterns. This process allows users to interact with their environment without having to use their peripheral nerves and muscles [2].

Electrical signals from the brain were first captured from the cortical surface of animals in 1875 [3]. In 1929, Hans Berger reported the first successful attempt to capture brain signals from the human scalp [4]. Despite this significant accomplishment, the technology necessary to efficiently measure and process brain signals remained extremely limited during this era. As technology advanced during the 20th century, precise measurements of electrical activity became more feasible. In 1973, Jacques Vidal coined the term Brain-Computer Interface after developing a computer-based system that acquired electrical signals captured from the scalp and converted them to commands [5, 6]. Although Vidal's early work represented a breakthrough, BCI research progressed slowly between the mid-1970s and the late 1990s. During this time many research labs developed their own custom BCI systems. Unfortunately, this resulted in a lack of standardization and limited opportunities to replicate and compare published studies [2]. During the early 2000s, the computational power of personal computers began to increase, and software development environments also began to improve. These factors assisted developers with building more efficient and flexible software, and new BCI tools began to appear [7–9]. Since the introduction of general-purpose BCI systems, several open-source BCI platforms have been released [10].

These platforms have mainly been used to develop BCI technology that falls into the following categories: basic research, clinical/translational research, consumer products, and emerging applications [11]. Clinical/translational applications often provide assistive care for persons with clinical conditions. Examples of these BCI applications include BCI-controlled wheelchairs [12–14], prosthetic devices [6, 15], and virtual keyboards [16, 17]. Along with aiding communication and control, BCI is also used for motor recovery training during rehabilitation therapy [18, 19]. The main goal of these applications is to improve the quality of life of individuals who suffer from motor impairments. Recent BCI research, however, has also produced non-medical applications [20]. Applications from the non-medical domain usually fall into the consumer products or emerging applications categories. Non-medical BCI applications include drowsiness detection while driving [21], workload estimation [22, 23], gaming [24], engagement monitoring [25, 26], neurofeedback training [27], and neuromarketing [28].

Although BCI hardware, such as caps and headsets, provides the capability to acquire brain signals, BCI software is essential to the process of coordinating the actual output [29]. Without the software component, many of the applications mentioned above would not be possible. For example, consider a standard gaming system consisting of a main system unit (console) and a controller. Without software, the main system would be unable to provide intelligent responses (visual feedback, auditory feedback, etc.) to input provided by users via the controller. With no way to utilize input effectively, this game system would be practically useless. BCI systems would face the same issue if their software were nonexistent.

1.1 Motivation

Recently, researchers from various disciplines have become interested in exploring the use of BCI beyond medical applications [20]. However, the process of configuring BCI software development systems can be difficult for non-technical users. Often, prior knowledge about configuring and compiling packages is required before users can start

developing BCI applications. Many of the existing BCI software platforms depend on environments such as MATLAB, which requires additional technical knowledge. Other BCI software platforms allow users to modify pre-designed BCI applications by changing parameters. This approach restricts users to a limited set of feedback designs. Presently, users aiming to create novel feedback applications must first learn text-based languages such as Python or leverage advanced packages such as OpenGL. Requiring extensive technical knowledge may present a barrier to entry for users from non-technical disciplines.

Considerable research exists on environments that assist novice developers with building applications. These works have often investigated the effectiveness of visual programming languages [30–32]. Block-Based Programming (BBP) is a relatively new form of visual programming designed with novice programmers in mind. BBP environments usually leverage a puzzle-piece (block) metaphor that supports direct manipulation of language primitives. With these environments, novice programmers can build applications simply by snapping puzzle-styled blocks together. Each block's shape, color, and label provides visual cues to users about its potential function. Studies commonly report that novice programmers find BBP easier than text-based alternatives [33]. In its infancy, BBP was mainly used to assist young learners between the elementary and high school stages. However, recent research has begun investigating the use of visual programming for end-user programming purposes that span all ages [32]. This work aims to extend previous work by investigating the applicability of a BBP approach to neurofeedback application development.

BBP approaches have been developed for end-user programming in various domains [34–36]. However, there has been limited research investigating the use of a BBP environment for neurofeedback application development. Crawford et al. [37] presented work that discussed potential benefits of interacting with electroencephalography (EEG) signals using BBP. However, the authors only reported a preliminary investigation. Further exploration of this approach is important as interest in BCI is beginning to extend beyond

the traditional medical domain [20]. This approach could present an opportunity to extend BCI development to novice programmers and individuals from disciplines that do not prioritize programming skills. This may promote additional multidisciplinary work that leads to the discovery of novel ways to use BCI technology.

1.2 Objective

The objective of this dissertation is to address the lack of novice-friendly BCI application development tools by investigating a BBP approach to BCI. This work explores the design, implementation, and empirical evaluation of a novice-friendly BCI development environment, leveraging insights from previous BCI and visual languages research. The major goal of this research is to better understand the effectiveness and efficiency of BBP-based BCI feedback application development for novice programmers. Along with lacking programming experience, new users interested in BCI may also have minimal experience working with BCI devices. Therefore, an additional goal of this research is to determine whether novice programmers' BCI self-efficacy levels are related to reported user performance or satisfaction levels after using the implemented system. Furthermore, this research seeks to extract user needs through a user-centered design approach, providing insights for future BCI software platform designers. Specifically, the following three questions are addressed:

• (RQ1) What barriers do novice programmers face when developing neurofeedback applications using a block-based programming approach?

• (RQ2) How do novice programmers perceive the usability of a block-based neurofeedback development tool?

• (RQ3) What is the relationship between BCI self-efficacy and novice programmers' ability (efficiency and effectiveness) to develop neurofeedback applications using a visual neurofeedback development tool?

1.3 Challenges

Although BCI platforms have made much progress over the past years, challenges still exist that hinder end users [38]. One challenge is the relatively high barrier to

entry presented by many BCI platforms. Often this is due to BCI software platforms' dependency on programming languages such as C++. This approach can make it difficult for non-computer scientists to develop BCI applications [39]. Along with hindering programmers' ability to develop BCI applications efficiently, languages such as C++ may also negatively impact programmers' self-efficacy. Although the requirement of programming presents a challenge to developing BCI applications, it is unavoidable. When the goal is to create novel applications driven by a set of rules, the ability to use logical structures is essential. Previous approaches have investigated using text-based languages such as Python. However, text-based languages in general may present syntax-related barriers to novice programmers. These issues range from the difficulty of reading and composing code to the lack of memory aids. The ultimate goal of this work is to design a BCI software platform that addresses these drawbacks. To accomplish this, a visual block-based programming interface will be integrated within the BCI system.

The emergence of non-medical BCI applications has begun to produce solutions that are less domain-specific. As this trend continues, the ability to easily connect neurophysiological data with various technologies will become essential. However, the underlying structure of most BCI software platforms presents challenges to integrating new devices. Previous BCI software platform designers did not prioritize creating modular systems that take advantage of software ecosystems. As a result, the process of extending BCI-based functionality to multiple other technologies can present issues. These problems include (but are not limited to) installing and integrating multiple libraries, managing version conflicts, and other software maintenance issues. To address these issues, the BCI software platform presented in this work is designed to leverage node package manager (npm) [40], a software ecosystem that hosted more than 400,000 packages as of March 31, 2017. This software ecosystem focuses on making the process

of publishing and using packages (software libraries) easier for developers [41]. This system design approach could make it easier to develop novel BCI applications in the future.

Most existing BCI software platforms that feature a customizable feedback component restrict users to predefined visual templates. Users seeking to design custom visual applications must develop feedback applications with languages such as C++ or Java. In addition, they must also possess knowledge of communication protocols such as the Transmission Control Protocol (TCP) to send commands between the BCI software platform and the custom feedback application. This could present serious problems for non-computer scientists and novice programmers. To address this issue, the open source Scratch-VM package was integrated into the BCI software platform. This package maintains the feedback applications and enables users to use blocks to express the logical structures that drive feedback presented in a web application. This approach was chosen over the Blockly library since it provided additional useful functionality. The neurophysiological data acquired from the BCI device is also passed to this management system, which allows users to use blocks that store information about a user's affective state. This design allows users to develop neurofeedback applications combining affective state and logic structure blocks. As a result, novice programmers interested in building BCI applications can avoid many of the barriers presented by alternative text-based approaches.
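To illustrate this design, the following sketch shows how the block runtime and an affective state store could be wired together in JavaScript. This is a simplified, hypothetical sketch rather than this platform's actual source code: the scratch-vm package and its VirtualMachine interface are real and are instantiated roughly as its documentation describes, but the affective state bridge shown here is an illustrative assumption.

    // A minimal sketch (not the platform's actual source code).
    // scratch-vm is installed from the npm ecosystem: npm install scratch-vm
    const VirtualMachine = require('scratch-vm');

    const vm = new VirtualMachine();
    vm.start();      // begin the block runtime's execution loop
    vm.greenFlag();  // trigger scripts that start with the green flag event

    // Hypothetical store that custom affective state blocks would read from.
    const affectiveState = { relaxation: 0 };

    // Called whenever the EEG device reports a new relaxation estimate;
    // blocks referencing the user's relaxation level would report this value.
    function onRelaxationSample(value) {
      affectiveState.relaxation = value;
    }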

1.4 Thesis Statement

This work introduces the novel concept of a BBP approach to neurofeedback application development. This involves the novel integration of a BCI device capable of acquiring neurophysiological data and a visual BBP interface. Subsequently, users can focus on the logic of a neurofeedback application rather than the syntax of a text-based language. This merging allows novice programmers to quickly begin developing applications that are designed to adapt to a user's affective state. My thesis statement is the following: exposure to a BBP neurofeedback development environment will cause

novice programmers to become more confident in their ability to develop neurofeedback applications. Novice programmers' BCI self-efficacy will be predictive of their efficiency.

1.5 Overview of Approach

To evaluate this thesis, the following approach was applied. A BBP environment was designed and implemented for novice programmers. This system extends a previous system that uses offline EEG data to influence visualizations [37]. Although the previous system represented an important initial step, it was not appropriate for addressing the goals of this research. Explicitly, the following goals were unsupported in the previous system: allowing users to create near real-time neurofeedback applications, customizable visual feedback, and hybrid BCI support. The BBP environment presented in this work includes many features that are absent in the previous system. These features address the goals mentioned previously and are discussed in more detail in Chapter 3.

After the BBP neurofeedback application development environment was developed, a set of neurofeedback applications was designed based on Bacteria Hunt, a multimodal BCI game. Bacteria Hunt has been used previously to investigate the effects of feedback on users' ability to manipulate their mental state of relaxation [42]. The approach presented in this work can be divided into two categories: system design and feedback application design. The system design described in the Bacteria Hunt study prioritizes optimal classification performance, which requires multiple subsystems, including a BCI cap, an EEG recorder, and two PCs for feedback presentation and signal analysis. The system presented in this work prioritizes a light, flexible design that uses a single machine and a wearable BCI device. While the previous system's approach offers performance trade-offs, the system presented in this work was designed with non-technical users in mind. It was vital to design a system that reduces the technical knowledge required to start creating neurofeedback applications. To address concerns about optimal performance, a web-based approach was chosen. This will allow the system to leverage emerging cloud-based EEG

processing systems provided by companies such as Qusp in the future [43]. Chapter 3 provides additional details concerning the system design.

Although the design of the system presented in this work differs from the original Bacteria Hunt system, the feedback application rules and objects used during the studies are very similar. To accomplish this, the mathematical functions and rules featured in Bacteria Hunt were converted into instructions that aided study participants with recreating the Bacteria Hunt application using the BBP environment. This approach ensured that the applications novice programmers developed during the study were similar to previously investigated neurofeedback applications. A total of six neurofeedback applications were designed for the study. Participants built three of these applications during the pretest phase of the study. The other three applications served as tasks. Each of the applications featured concepts from the Bacteria Hunt application. Additional details about this process are provided in Chapter 4.

After the BBP environment and tasks were designed, a study was conducted. Although most previous BCI studies investigate subjects from a BCI user perspective, this work presents the concept of investigating BCI subjects from an end-user programming perspective. Studies that evaluate participants from a BCI user perspective often focus on evaluating users while they are using a BCI device. When this approach is used, researchers commonly analyze information such as completion time, errors, and neurophysiological data collected while participants use BCI applications that were developed prior to the study. In contrast, the end-user programming perspective presented in this study focuses on evaluating users when they are presented with a BCI device and asked to develop a neurofeedback application. This approach does not focus on evaluating how participants use a neurofeedback application; instead, it focuses on their experiences while trying to develop one. To accomplish this, the study investigated how novice programmers developed neurofeedback applications using a BBP environment over three sessions. During each

session, participants used the BBP environment to develop neurofeedback applications. The sessions varied in the number of objects required to successfully create the application. The number of mathematical functions and rules also varied for each session. These metrics were used as a measure of difficulty. Sessions were scheduled by difficulty in ascending order (easy, intermediate, and advanced), with the advanced session featuring the most mathematical functions, rules, and objects. One-day gaps were scheduled between sessions to investigate how participants' interaction with the BBP environment changed over time.

Behavioral data were collected using screen-capture videos of the BBP environment recorded while participants developed neurofeedback applications. Participants were also asked to think aloud while completing the tasks. Qualitative analysis of the audio-visual data was based on a learning barriers coding scheme that has been widely adopted in previous end-user programming studies [44]. Participants' time on task and effectiveness were also evaluated. Effectiveness was categorized in three levels: pass, pass with help, and fail. This approach was adopted from a previous BBP study [45]. A slightly modified version of Compeau and Higgins' validated scale was used to assess the BBP environment's influence on participants' self-efficacy [46]. The analysis process is discussed in further detail in Chapter 4.

1.6 Contributions and Implications for Design

The major goal of this research is to explore a BBP design alternative for BCI feedback application development that improves the experiences of novice programmers. The main expected contribution is a novel BCI feedback application development tool for novice programmers. Insights gathered from this research could also inform the design of future BBP BCI application development software. The proposed solution presents the possibility of combining a low-threshold entry to programming with the ability to create biofeedback applications. If effective, this development tool could open

doors for biofeedback application tinkering on a scale that is not possible with existing BCI software platforms.

1.7 Dissertation Organization

The subsequent chapters of this dissertation are organized as follows:

• Chapter 2, Literature Review, covers relevant BCI software platform and block-based programming literature.

• Chapter 3, Design and Implementation of a Block-Based Neurofeedback Application Development Environment, describes the process of designing and developing a block-based neurofeedback application development environment.

• Chapter 4, Evaluation of a Block-Based Neurofeedback Application Development Environment, describes the process of evaluating the system presented in this dissertation.

• Chapter 5, Summary and Future Directions, summarizes the insights gained from this research for future BCI software platform designers.

CHAPTER 2
LITERATURE REVIEW

The literature informing this research falls mainly into two bodies of previous work. The first covers previous work on Brain-Computer Interface (BCI) software platforms. This research primarily focuses on the design, technical specifications, and goals of software platforms used to create BCI applications. It is important to note that although several tools exist that enable offline (post hoc) analysis, these software packages do not enable the creation of a true BCI system. Only BCI software platforms that allow the development of closed-loop applications that provide visual or auditory feedback to users are reviewed extensively in this section. This definition falls in line with the perspective currently adopted by many BCI professionals [47]. The second body of literature discussed in this section is Block-Based Programming (BBP). Work that discusses BBP environments in general, comparisons of BBP with text-based alternatives, and the concept of leveraging BBP for end-user programming is reviewed.

2.1 BCI Software Platforms

BCI software platforms are software frameworks that assist developers with creating BCI systems. Their designs differ slightly across the literature. For example, Kothe et al. [48] suggest that a BCI system consists of five consecutive stages: signal acquisition, preprocessing or signal enhancement, feature extraction, classification, and control interface. However, Venthur describes a system that consists of three stages: signal acquisition, signal processing, and feedback/stimulus presentation [39]. Multiple other descriptions exist, but all designs are similar to the framework presented by Wolpaw. In the early days of general-purpose BCI systems, Wolpaw classified BCI systems into the following key phases: signal acquisition, feature extraction, feature translation, and commands/applications [1]. Figure 2-1 illustrates this general design.

Figure 2-1. BCI system design.

Signal acquisition is the first step of any BCI software platform. During this phase a BCI device is used to capture brain activity. The most common signal acquisition methods measure three types of activity: metabolic, magnetic, and electrical. Metabolic methods measure blood flow patterns in the brain. Magnetic methods measure the brain's magnetic field. Electrical methods measure electrical signals caused by neuronal firing. Compared to the magnetic and metabolic methods, electrical signal acquisition tends to be the least obtrusive and the most inexpensive and user friendly. Although other methods may offer better temporal or spatial resolution, EEG, a non-invasive electrical monitoring technique, is currently the most promising signal acquisition method for everyday use. As a result, this research also focuses on EEG-based BCI approaches.

Raw signals captured during signal acquisition offer ample information, but much of that information is noisy and difficult to interpret. To address this issue, brain signals go through multiple processing stages prior to being translated into a command. During the first stage, brain signals are preprocessed. Applying signal filtering techniques during this phase improves the signal-to-noise ratio (SNR). Examples of methods used during this process include high

pass filters, low pass filters, and squaring. Afterwards, feature extraction is performed. The main goal of this step is to decrease the dimensionality of the preprocessed data by extracting specific characteristics of the brain signals, such as amplitudes and frequencies. After brain signals are preprocessed and the desired features are extracted, translation algorithms can be used to detect and classify various brain states. Many translation methods currently exist for interpreting brain signal features, ranging from simple linear equations to complex machine learning algorithms [49]. Signal translation procedures are mainly used in online BCI systems, in which users cognitively produce commands in real time. After interpretations are made from the brain signal features, commands are produced and sent to an application or device (a schematic sketch of these stages appears at the end of this section).

BCI software platforms assist developers with creating systems that involve each of these steps. These platforms can be placed into two categories: method focused BCI software platforms and application focused BCI software platforms. The main distinction between method and application focused BCI software platforms is the degree to which they aid developers with interfacing brain signals with feedback applications. BCI feedback applications are computer programs that provide real-time visual or auditory feedback to BCI users corresponding to instructions derived from raw brain signals. BCI developers are responsible for constructing these instructions and connecting the output to a feedback application. Application focused BCI software platforms are often designed to optimize this task. In contrast, method focused BCI software platforms usually concentrate on the signal processing subtasks applied to signals before they are passed to feedback applications. Commonly used BCI software platforms that fall into both categories are discussed in the following sections.
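To make these stages concrete, the sketch below walks a handful of raw samples through a deliberately naive JavaScript rendition of the pipeline: a moving-average smoother standing in for preprocessing, squaring and averaging as a crude power feature, and a simple threshold as the translation step. This is a schematic illustration only, not code from any platform reviewed here; real systems use far more sophisticated filters and classifiers [49].

    // Schematic BCI pipeline sketch (illustration only).
    // Preprocessing: a crude moving-average smoother as a stand-in for filtering.
    function smooth(samples, windowSize) {
      return samples.map((_, i) => {
        const start = Math.max(0, i - windowSize + 1);
        const window = samples.slice(start, i + 1);
        return window.reduce((sum, s) => sum + s, 0) / window.length;
      });
    }

    // Feature extraction: squaring and averaging as a rough power estimate.
    function meanPower(samples) {
      return samples.reduce((sum, s) => sum + s * s, 0) / samples.length;
    }

    // Translation: a simple linear rule mapping the feature to a command.
    function translate(power, threshold) {
      return power > threshold ? 'MOVE_CURSOR_UP' : 'IDLE';
    }

    const raw = [3.1, -2.4, 5.0, 1.2, -0.8, 4.6]; // pretend EEG samples
    console.log(translate(meanPower(smooth(raw, 3)), 4.0));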

2.1.1 Method Focused BCI Software Platforms

Method focused BCI software platforms are mainly designed with BCI experts in mind. These platforms are mostly concerned with providing BCI developers the ability to optimize underlying processing and feature extraction methods. They are also often

used by BCI researchers to compare existing methods. Although these platforms can be used to develop systems that feature a graphical component, the underlying low-level signal processing methods are usually their focus. Most of these platforms aim to make it easier to create new designs for the BCI pipeline: the sequence of processes (shown in Fig. 2-1) applied to raw brain signals that are eventually translated into a command.

Figure 2-2. Operator, data acquisition, signal processing, and user application programs, which form the core of BCI2000.

BCI2000 is one of the earliest general-purpose BCI software platforms to apply the BCI pipeline concept [9]. The main goals of BCI2000 are to provide researchers with a flexible BCI software platform and to provide standard tools for BCI research. Prior to BCI2000, most experiments were done using highly customized systems that depended on specific BCI parameters. Unlike many of these earlier systems, BCI2000 provides a generic structure that utilizes a modular software design approach. As a result, various BCI protocol designs can be used with BCI2000 without changes to the core software modules. Although BCI2000 is very popular amongst BCI experts, it may be very difficult for novice BCI developers. BCI2000 provides only a few preconfigured BCI pipeline processes. Users may interact with these using the interfaces shown in Figs. 2-2 and 2-3.

Figure 2-3. Configuration menu for BCI pipeline steps. Source: Gerwin Schalk and Jürgen Mellinger. 2010. A Practical Guide to Brain-Computer Interfacing with BCI2000. Springer Science & Business Media.

Fig. 2-2 illustrates four programs that handle different tasks in the BCI pipeline. Each of these programs allows users to configure connection settings for the signal acquisition (red background), signal processing (white background), and user application (blue background) stages of the BCI pipeline. The operator application (gray background) allows users to modify parameters for each of the BCI pipeline steps. Fig. 2-3 shows the configuration menu that can be loaded with the operator application. The configuration menu can be used to modify BCI application parameters such as visual feedback (visualize), signal filtering methods (filter), input modality settings (joystick), and data acquisition settings (source). Although the graphical user interfaces (GUIs) shown in Figs. 2-2 and 2-3 can be used to modify feedback applications, typical out-of-the-box users cannot create custom feedback applications by changing these parameters. As a result,

non-programmers are limited to the preconfigured feedback applications that are packaged with BCI2000. Accordingly, users desiring to develop custom feedback applications that communicate with BCI2000 need C++ programming experience.
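To give a sense of what communicating with an external BCI platform involves for a custom feedback application, the sketch below listens for control values over UDP in Node.js. This is an illustrative example only: the port and the one-number-per-message text format are invented for this sketch and do not reproduce BCI2000's actual connector protocol. The point is that even this minimal listener assumes familiarity with sockets and parsing.

    // Hypothetical UDP listener for values streamed by a BCI platform.
    // The message format (one numeric value as text) is invented for
    // illustration and is not BCI2000's actual protocol.
    const dgram = require('dgram');

    const socket = dgram.createSocket('udp4');

    socket.on('message', (msg) => {
      const value = parseFloat(msg.toString());
      if (!Number.isNaN(value)) {
        // A custom feedback application would update its display here.
        console.log(`received control value: ${value}`);
      }
    });

    socket.bind(20320); // illustrative port number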

Figure 2-4. Screenshot of Simulink with rtsBCI and feedback application. Source: Alois Schlögl, et al. 2007. An Open-Source Software Library for BCI Research. Toward Brain-Computer Interfacing, (2007), 347.

BioSig is another method focused BCI software platform, released around the same time as BCI2000 [7, 50]. It was initially released as a tool for offline signal analysis, with the first releases based on MATLAB and Octave. In 2004, a package named rtsBCI was released that enabled real-time data acquisition, storage, signal processing, and visualization based on MATLAB/Simulink. The main goals of this platform are to address the lack of standardized approaches to BCI development and collaboration. BioSig also aims to improve the reproducibility of preprocessing methods, reducing the need for researchers to reinvent the wheel across projects. Similar to BCI2000, BioSig mostly caters to BCI experts and may be difficult for novice developers. This BCI

software platform consists of multiple coherent parts, which include BioSig for Octave and MATLAB (biosig4octmat), BioSig for C/C++ (biosig4c++), and rtsBCI. The biosig4octmat package is responsible for most of BioSig's core functionality. It handles tasks such as importing and exporting files, feature extraction methods, classification methods, and signal visualization. Biosig4octmat works as a toolbox for Octave and MATLAB; to leverage its functions, users must write MATLAB code. The biosig4c++ component enables data conversion for numerous formats. It also provides improved performance and flexibility compared to what is possible solely with MATLAB. This package also includes code that enables network communication of signals. As with biosig4octmat, users need programming experience to leverage biosig4c++'s features. The rtsBCI package provides support for real-time BCI applications. It is important to note that although the previously mentioned packages are vital for signal evaluation, rtsBCI is critical to developing a complete real-time closed-loop BCI system. Figure 2-4 illustrates an example of BioSig's rtsBCI components. As shown in Figure 2-4, rtsBCI depends on Simulink, a block diagram environment that is used to design a real-time model of a BCI application. RtsBCI consists of multiple modules (blocksets) that assist users with creating various BCI pipeline designs. Configuration files can be used to define BCI application settings. BioSig includes a ready-to-use application where users use a BCI to guide a falling ball into a basket (Figure 2-4). However, users are limited to this application if they do not have programming experience.

BCILAB is a more recently released method focused BCI software platform. It was released in 2010 as an extension to the Swartz Center for Computational Neuroscience (SCCN) software suite. One of the primary emphases of BCILAB is assisting BCI experts with creating new BCI approaches and implementations. In contrast to other BCI platforms, BCILAB focuses less on out-of-the-box support for acquisition hardware and stimulus presentation. Instead, it leverages these features from existing software as much as possible. BCILAB also focuses on BCI research rather than end user

Figure 2-5. BCILAB GUI panels used for visualizing EEG data, calibrating models, scripting, and modifying evaluation approaches. Source: Christian Kothe, et al. 2013. BCILAB: a platform for brain-computer interface development. Journal of Neural Engineering 10, 5 (Aug. 2013), 056014.

deployment. Therefore, it does not provide support for developing custom feedback applications without the installation of additional packages. To address this issue, BCILAB offers a framework that interfaces with other BCI software platforms, such as BCI2000 and OpenViBE, that offer additional support for feedback applications. Figure 2-5 illustrates an example of the graphical user interface (GUI) panels and the scripting environment provided by BCILAB. These panels can be used to modify various evaluation methods, machine learning parameters, and EEG visualizations. As with the BCI software platforms discussed earlier, BCILAB would not be an effective tool for novice BCI developers wanting to create custom feedback applications.

Method focused BCI software platforms tend to focus more on the low-level components of a BCI system than on the feedback application. Application focused BCI software platforms, however, often abstract these lower-level processes and lean towards a software design that favors BCI application

rapid prototyping. The following section discusses a few application focused BCI software platforms currently available.

2.1.2 Application Focused BCI Software Platforms

Application focused BCI software platforms are often designed for BCI developers interested in applied uses of BCI. Although these platforms allow users to manipulate some underlying signal processing methods, they mostly attempt to make the development of feedback applications easier. This is accomplished by abstracting lower-level graphics engine methods and signal processing tasks. Some application focused BCI software platforms also focus on easing the process of passing signals to external applications.

Figure 2-6. Pyff GUI that BCI experimenters may use to select, modify, and control feedback applications. Source: Bastian Venthur, et al. 2010. Pyff—A Pythonic Framework for Feedback Applications and Stimulus Presentation in Neuroscience. Frontiers in Neuroscience 4, (Dec. 2010), 179.

Venthur et al. [51] were among the earliest researchers to investigate approaches to easing BCI feedback application development. Their work eventually produced Venthur's BCI software platform, the first Python-based BCI software platform. This platform appeared as interest in Python began to grow within the neuroscience community. Many BCI platforms prior to this system required users to create custom BCI applications using C++, which could be a tedious process for non-programmers. Python is often considered

easier to learn than C++ and preferable for users new to programming. Venthur's BCI system aims to leverage this advantage of Python to ease the process of developing BCI applications. This BCI software platform is split into three components that can be combined to create a complete BCI software platform or used separately as toolboxes that handle tasks specific to various stages of the BCI pipeline.

Pyff, a Python-based framework for developing feedback applications and stimulus presentations, was the first component released [51]. Figure 2-6 shows the GUI presented to users while working with Pyff. From this view users may select a specific feedback application using the drop-down menu illustrated in Figure 2-6. As shown in this figure, many feedback applications commonly used for BCI research (Pong, Cursor Control, Soccer Goalie, etc.) are included out of the box. A large portion of the GUI is dedicated to feedback application variables such as frames per second (FPS), object size, object speed, and other settings that control visual/auditory elements. To address the needs of developers desiring to create custom feedback applications, Pyff offers Python base classes modeled after various types of common feedback and stimulus applications. Although developers may also create a new feedback application from scratch, using the base classes may speed up development. The base classes provided by Pyff assist with tasks such as managing the application's main loop, handling events, and interfacing with the Python Game Engine (PyGame).

Mushu, the signal acquisition component of Venthur's BCI software platform, was released shortly after Pyff [52]. Mushu's main goal is to provide a unified interface to EEG data from multiple signal acquisition devices. Many BCI systems prior to Mushu only operated on Microsoft Windows; as a result, Mushu was designed to run on all major operating systems. Similar to Pyff, Mushu can be connected to an existing BCI system pipeline or used as a standalone component. In the latter case, Mushu acts as a server capable of communicating acquired EEG data over network sockets. Wyrm, an

open source BCI toolbox, was the third component of Venthur's BCI system [53]. This component handles the signal processing responsibilities of the system. Wyrm can be used as a toolbox for analysis and visualization of neurophysiological data. It also supports online BCI experiments. Together, Pyff, Mushu, and Wyrm provide a complete, free, and open source BCI system implemented in Python. Although Mushu and Wyrm are critical to the BCI software platform, much of this functionality was previously available in other languages such as C++. The key contribution of this BCI software platform is its structured process for customizing a feedback application. By presenting a flexible approach that ranges from coding completely from scratch to simply changing input values in a GUI, Venthur's BCI software platform serves as one of the first serious attempts to bridge the gap between BCI experts and novice BCI developers. Although this BCI software platform aspires to decrease the difficulty of developing BCI feedback applications by leveraging Python, non-programmers may still find it difficult to create custom feedback applications that vary drastically from the programs provided out of the box.

Figure 2-7. 2D visualization of signals and time-frequency dynamics. Source: Yann Renard, et al. 2010. OpenViBE: An open-source software platform to design, test, and use brain-computer interfaces in real and virtual environments. Presence 19, 4 (2010), 35-53.

Figure 2-8. OpenViBE visual programming GUI and 3D spatial topography. Source: Yann Renard, et al. 2010. OpenViBE: An open-source software platform to design, test, and use brain-computer interfaces in real and virtual environments. Presence 19, 4 (2010), 35-53.

OpenViBE is another BCI software platform designed with non-programmers in mind. The goal of OpenViBE is to assist Virtual Reality (VR) developers, clinicians, and BCI researchers with designing, testing, and using BCIs [54]. OpenViBE was designed to ease the development of BCI applications. Unlike many other BCI platforms, OpenViBE allows users to create complete scenarios using a graphical language. As a result, prior programming knowledge is not required to create basic BCI applications. This was a fairly novel concept for BCI development when OpenViBE was first released; all BCI software platforms released prior to OpenViBE required some form of text-based programming to develop complete BCI applications. Figure 2-8 illustrates OpenViBE's

GUI that developers use to create BCI applications. The window with the grey background is the designer GUI. The OpenViBE designer allows users to design a BCI system by connecting boxes. Each of these boxes corresponds to a stage of the BCI processing pipeline. For example, the generic stream reader box in the top right corner of the designer GUI represents a task that would be classified under the signal acquisition stage of the pipeline. The channel preprocessing, spatial filtering, temporal filtering, and other boxes located below the generic stream reader could be grouped in the preprocessing/feature extraction stage of the BCI pipeline. After the signal processing steps are complete, the resulting information can be connected to a preconfigured visualization box that presents visualizations such as the 3D spatial topographic map shown in Figure 2-8. Boxes are also available that provide 2D visualizations like the ones shown in Figure 2-7. The visualizations shown in Figures 2-7 and 2-8 are often great for BCI experts wanting to evaluate EEG phenomena. However, these out-of-the-box visualization components provided by OpenViBE do not allow users to create their own custom feedback applications. To address this issue, OpenViBE provides a network box that is used to communicate signals over a network. Even though this is a great tool for developers, non-programmers will probably be incredibly frustrated and confused if faced with the task of capturing these signals over a network connection and developing a feedback application. As a result, the issue of not being able to create a custom feedback application without programming experience also applies to OpenViBE. Researchers such as Nijholt et al. [38] have discussed the barriers to entry presented by currently available BCI software tools. Therefore, further work is still needed to support potential future BCI experts who are currently novices in programming and BCI. OpenViBE's graphical language approach serves as a promising way to lower the barrier to entry of BCI system development. Figure 2-9 shows a student presenting a project built with OpenViBE at a 2011 science fair at the United States Nation's capital. This serves as an example of how interest in BCI software platforms is moving

Figure 2-9. Student showing BCI-based science fair project which used OpenViBE to control a robotic arm at the United States Nation's capital. Source: Jozef. 2011. Anand Srinivasan's project, using OpenViBE and Emotiv EPOC, felicitated by the US President. Retrieved March 27, 2017 from http://openvibe.inria.fr/

beyond the traditional research lab setting. As mentioned earlier, OpenViBE currently requires programming skills if users wish to create applications similar to the one shown in Figure 2-9. This is also true for creating simple custom feedback applications such as BCI-powered games. However, research from the block-based programming (BBP) community may assist with eliminating this constraint. The following section discusses BBP and concludes with how it could be used to address the issues currently faced by non-programmers working with BCI software platforms.
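As a rough illustration of the glue code such a custom feedback application requires, the sketch below connects to a signal stream over TCP in Node.js and maps each value to a crude visual property. The port and the newline-delimited number format are invented for this sketch and are not OpenViBE's actual wire protocol.

    // Illustrative TCP client for a streamed signal (invented protocol).
    const net = require('net');

    const socket = net.connect(5678, '127.0.0.1', () => {
      console.log('connected to signal stream');
    });

    socket.on('data', (chunk) => {
      // Parse newline-delimited numeric values (invented format).
      for (const line of chunk.toString().split('\n')) {
        const value = parseFloat(line);
        if (!Number.isNaN(value)) {
          render(value);
        }
      }
    });

    // Stand-in for the graphics code a developer would still have to write.
    function render(value) {
      console.log(`feedback bar height: ${Math.round(value * 100)}%`);
    }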

2.2 Block-Based Programming (BBP)

Research on languages designed to aid novice programmers can be traced back to the Logo programming language presented by Feurzeig et al. [55]. This language introduced concepts, such as controlling onscreen objects with motion commands, programming artwork, developing games, and creating interactive stories, that have inspired many BBP environments. BBP environments take a highly visual approach to programming. These

environments often leverage a primitives-as-puzzle-pieces metaphor. This puzzle-piece design aims to assist novice programmers with creating computer programs without prior knowledge of mainstream languages such as C++ or Java. Instead of using text to create a program, users interact with drag-and-drop editors, snapping instruction puzzle blocks together to build an application. Users then receive visual/audio feedback that lets them know whether the connection was valid. A block's color and shape usually provide visual clues about its function and where it may be used, and a label informs users of its purpose. Allowing novice users to program without typing code often reduces complexity and prevents frustration caused by parentheses, commas, and semicolons. Numerous recently released programming environments leverage a block-based approach.
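For a sense of how a block's shape, color, and label are specified in practice, the snippet below defines a custom block using the Blockly library (mentioned in Section 1.3 as an alternative considered for this work). The block-definition calls follow Blockly's documented JavaScript API, but the "when relaxation rises above" block itself is a hypothetical example, and the snippet assumes Blockly is already loaded in the page.

    // Defining a hypothetical custom block with Blockly.
    Blockly.Blocks['when_relaxation_rises'] = {
      init: function () {
        this.appendDummyInput()
            .appendField('when relaxation rises above')
            .appendField(new Blockly.FieldNumber(50), 'THRESHOLD');
        this.setPreviousStatement(true); // top notch: snaps below another block
        this.setNextStatement(true);     // bottom notch: accepts a block below
        this.setColour(160);             // category color cue
        this.setTooltip('Runs when the relaxation estimate exceeds the threshold.');
      }
    };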

2.2.1 Block-Based Programming Environments

One well-known BBP environment is Alice. Alice was originally introduced as a tool to assist end users lacking 3D graphics training with creating 3D interactive content [56]. Although this version offered ways to create simple animations by clicking GUI buttons, providing precise instructions to objects in an animation required coding. Alice 2, the successor to this original version, features a no-typing editing mechanism [57]. It relies on visual formatting through interaction with blocks in a drag-and-drop editor. This design aims to assist novice programmers with learning introductory object-oriented programming concepts while creating 3D movies and simple games. Alice 2 has been used in hundreds of colleges and secondary schools, including pre-CS1, CS1, pre-AP, and non-major courses [58]. Alice 3 is the latest version of the Alice lineage. This BBP environment was developed in response to a demand from educators for a more abstract version of Alice. Educators also desired a system that was more effective at assisting students with migrating from the BBP environment to production-level languages such as Java, C++, Python, and C#. These demands resulted in a set of richer primitive animations

Figure 2-10. Alice 3 scene editor, program preview, and block editor. Source: Wanda P. Dann and Stephen Cooper. 2009. Education: Alice 3: concrete to abstract. Communications of the ACM 52, 8 (2009), 27-29.

that enable programmers to design and implement animations at a higher, more abstract level. The ability to open the hood and type Java code corresponding to blocks was also added to Alice 3. Alice 3 designers aimed to incorporate these features to better prepare programmers for mainstream languages. Figure 2-10 shows an example of a program created using Alice 3. The interface shown in this figure features a scene editor (upper left), which shows the objects in the current world. The code editor is in the upper right corner of the interface. Users can drag-and-drop blocks in this editor to create various games or animations. Programmers can view the results of their code using the program preview panel located at the bottom of the interface.

Many BBP environments focus on 2D games and animations. As a result, Alice has always been attractive to users seeking to develop 3D interactive content. Many studies have investigated the use of Alice in various environments. Most of this work has consisted of using Alice in introductory college courses and CS courses for non-majors. Moskal et al. [59] presented an often-cited study that investigated the impacts of Alice on at-risk students in a collegiate environment. Results from this research reported improved performance, retention, and attitudes towards computer science. Work presented by Mullins et al. reported similar findings [60]. This research studied the effects of introducing Alice 2 as students' first language. Researchers saw a 4% increase in the number of women who enrolled, a 4% increase in the number of students who passed, and a 4% decrease in withdrawals in comparison to a traditional C++ course. Johnsgard and McDonald compared the experience of students who were exposed to Alice in a CS0 course prior to matriculating in a CS1 C++ course [61]. They reported an improved success rate (received grade of C or higher) for students who took the CS0 course before moving on to C++. Overall, most studies investigating Alice report that novice programmers using the tool grasp coding principles more quickly and find them easier than traditional text-based languages. Consequently, over 10% of US colleges and universities had adopted Alice by 2009 [62]. Scratch is another popular tool that features a BBP environment [63]. It was designed to be more tinkerable, more meaningful, and more social than tools like Alice that preceded it. It aimed to make the floor even lower for novice programmers lacking coding experience. In an attempt to achieve this goal, Scratch focuses on 2D instead of 3D interactive content. Scratch also allows users to add music to creations. To encourage more social interaction, Scratch features a website that allows users to upload, share, and remix each other's projects. This web-based approach allows users to use Scratch without having to download and install files. Instead, Scratch runs entirely in a web browser.

Figure 2-11. Scratch interface. Source: John Maloney et al. 2010. The Scratch programming language and environment. ACM Transactions on Computing Education (TOCE) 10, 4 (2010), 16.

Figure 2-11 illustrates the Scratch user interface (UI). As shown in Figure 2-11, the Scratch interface consists of four main panes. The left pane holds the command palette. Using the command palette, users can select various categories. These categories allow users to switch between blocks with various functions including animations (motion), object aesthetics (looks), sound, sensing, program logic (control), program operators, variables, and drawing functions (pen). Users can drag-and-drop the blocks from the left pane to the middle pane (scripting area) to create scripts for a selected sprite. Details about the selected sprite are shown in the upper section of the middle pane. Tabs in the middle pane allow switching between views used to modify costumes (images) and sounds associated with the selected sprite. The upper right pane holds the stage area where the results of the program are displayed. The bottom-right pane shows thumbnails of the objects currently in the project. Scratch code can be executed in small chunks. Figure 2-11 shows an example of two small chunks in the scripting area that control the movement of a sprite. Each of these small program fragments can be

executed individually. As a result, Scratch programs are more flexible, which presents fewer restrictions (lowering the floor) for novice programmers to explore. This often results in creative games and animations. Since its release, Scratch has been used in various studies. Booth and Stumpf [64] compared experiences of novice programmers using Java and an application based on Scratch. They observed that users generally perceived the BBP environment as easier than Java. Users also considered the Scratch-like system more user friendly. They also reported lower perceived workload and higher perceived success. Malan and Leitner investigated students using Scratch in a summer CS college course. They observed that Scratch helped first-time programmers gain programming experience [65]. Meerbaum-Salant et al. examined the use of Scratch in two ninth-grade classrooms [66]. Results from pre- and post-tests given to students indicated significant improvement in student knowledge of CS concepts. They also observed that students who had previously learned Scratch had fewer learning difficulties and achieved higher cognitive levels of understanding CS concepts [66]. Along with assisting novice users with learning CS, studies have also reported that students used Scratch voluntarily and more often than other available design software [67]. Overall, studies have shown that, along with being easy to learn, CS knowledge gained in Scratch translates well to higher-level languages. Scratch now has over 18 million registered users [68]. As of April 2017, it is ranked 21st on the TIOBE index, which measures programming language popularity [69]. App Inventor for Android (AIA) is another tool that features a BBP environment [70]. Instead of focusing on creating 2D or 3D environments that run on desktop systems, AIA allows users to create fully functional mobile apps for Android. With this tool users can create mobile applications incorporating social networking, location awareness, and web-based services offered through Google. The initial version, App Inventor 1 (AI1), provided great initial steps for mobile development using a BBP environment. However, users experienced issues with programming concepts such as declaring and referencing global variables. Users also had issues with event parameters and procedures. App

Figure 2-12. App Inventor designer interface.

Figure 2-13. App Inventor block interface.

Inventor 2 (AI2), the latest version of AIA, has addressed many of these issues. An additional issue with AI1 was its dependence on Java, which required AI1 to run as a Java Web Start application. AI2 uses the Blockly framework, which allows it to run completely in the web browser [71]. This results in a more seamless process during testing and eliminates the need to install additional files. AIA consists of two parts: Designer (Figure 2-12) and Block Editor (Figure 2-13). The designer view consists of four panes. The left pane provides a palette where users can add various components to a mobile application. With this palette, UI elements such as

buttons, labels, list pickers, and images can be selected and added to a project. Features such as texting, Near Field Communication (NFC), and Bluetooth are also available in this pane. The app viewer is to the right of the palette. In this pane users may preview how elements are displayed in the app. The next pane to the right of the app viewer is the components pane. This section provides a tree diagram showing all elements present in an application. Users can select an element in this pane and change its settings in the properties pane located on the far right side of AIA's interface. Buttons tied to functions such as saving projects, adding screens, and creating checkpoints are located above the four main panes. The block editor shown in Figure 2-13 is the view used to develop scripts for the mobile application. The pane on the left allows users to select scripting blocks used to implement application components such as control, logic, math, text, lists, colors, variables, and procedures. Users may also select blocks related to UI elements. The large pane to the right serves as the block editor area (Viewer) where users drag-and-drop and connect blocks to build scripts. The emulator used to preview the results of applications is illustrated on the far right of Figure 2-13. Since its initial release in 2009, students and teachers from middle schools, high schools, and universities have used AIA. It has also been successful in summer camps, road shows, and teacher workshops [72]. Wolber introduced AIA in a course focused on language-facilitated interactions in real-world scenarios [73]. During this course, students used AIA to develop mobile applications that addressed issues ranging from communication in developing countries to texting while driving. Roy used AIA in a summer camp and reported that AIA was an effective tool to introduce novice programmers to programming [74]. A slight increase in favorable disposition towards computing was also observed during this study. Wagner et al. also investigated the use of AIA in a summer camp setting [75]. In this study students initially began with App Inventor. Afterwards, block code was directly mapped to the Java equivalent in an effort to prepare novice programmers for more advanced languages. Researchers observed that most participants

preferred the block-based language due to the ease of creating GUIs within the BBP environment. AIA now has over 5 million registered users in over 190 different countries. Many other BBP environments exist that span a wide range of target audiences. These platforms include Kodu [76], Greenfoot [77], StarLogo TNG [78], and many more [79]. Topics examined with these platforms range from lawfulness to 3D game design [80]. Although these BBP environments have traditionally aimed at teaching K-12 and introductory-level undergraduate students computing concepts, recent research has also begun to investigate the applicability of BBP environments for end-user programming. The following section will discuss this approach and highlight insights gained from previous work that investigated this concept.

2.2.2 End-User Programming with BBP Environments

Figure 2-14. Block-based programming environment designed for clinicians. Source: Dave Krebs et al. 2012. Combining Visual Block Programming and Graph Manipulation for Clinical Alert Rule Building. In CHI'12 Extended Abstracts on Human Factors in Computing Systems. ACM, 2453-2458.

Traditionally, programming has mainly been a skill necessary for professional developers who are paid to create and maintain software. This skill mainly consists of planning or writing a collection of specifications that may take variable inputs and can be executed (or interpreted) by a device with computational capabilities [32]. Due to

Figure 2-15. Block-based programming environment designed for physical prototyping. Source: Amon Millner and Edward Baafi. 2011. Modkit: blending and extending approachable platforms for creating computer programs and interactive objects. In Proceedings of the 10th International Conference on Interaction Design and Children. ACM, 250-253.

Figure 2-16. A Visual Programming Framework for Wireless Sensor Networks in Smart Home Applications. Source: M. Ángeles Serna et al. 2015. Intelligent Sensors, Sensor Networks and Information Processing. IEEE, 1-6.

the growing number of domains that involve various forms of computation, end-user programming is becoming increasingly important. Unlike professional developers, end-user programmers' main goals are not centered on selling software. Instead, these users' objectives range from configuring home appliances [34] to analyzing biological data [35]. Users that fall into this category often do not possess skills in traditional text-based languages such as C++ and Java. However, the domains that they work in often benefit from programmatic processes. Recently, researchers have investigated the use of BBP environments to address the issues that end users with limited programming experience encounter. Millner and Baafi presented Modkit (Fig. 2-15), a BBP environment designed to assist novices and experienced designers with creating interactive tangible user interfaces [36]. Results from a study by Booth and Stumpf indicated that the BBP approach provided a more positive user experience, reduced perceived workload, and higher perceived success when compared to textual programming [64]. Krebs et al. demonstrated how BBP environments can be utilized in the clinical domain by designing a block-based alert programming system for healthcare experts [81] (Fig. 2-14). Clinicians who participated in the study found the system easy to learn and were able to discover errors efficiently. Researchers have also applied BBP to smart home systems. Serna et al. recently presented a web-based BBP environment framework that enables users to define rules for various smart home services [82] (Fig. 2-16).

CHAPTER 3
DESIGN AND IMPLEMENTATION OF A BLOCK-BASED NEUROFEEDBACK APPLICATION DEVELOPMENT ENVIRONMENT

BCI developers are tasked with creating applications that are influenced by information acquired from a neurophysiological device. Comprehending how to assess and use the acquired neurophysiological data is a vital step in the process of creating BCI applications. For example, to create an application that adapts to a user's level of relaxation, a developer must understand how to direct data that reflects the user's affective state into a development environment. Once the data is collected in a development environment, the developer must understand how to use logical structures to create applications that provide meaningful feedback to users based on their emotional state. The ability to achieve these tasks often depends on developers' prior experience using Application Program Interfaces (APIs) or configuring BCI software platforms. Each of these tasks can require substantial technical skill. This dependence on technical skill presents major challenges for novice programmers, and non-technical users in general, who are interested in getting started with BCI development. Even after successfully configuring the signal acquisition and development environment, developers must also be knowledgeable in languages such as C++ before creating custom neurofeedback applications. For example, consider OpenViBE, a user-friendly graphical language that uses a visual box concept to assist non-programmers with designing a real-time BCI pipeline from scratch. This approach allows developers to manipulate boxes related to various phases of the BCI pipeline including preprocessing, feature extraction, translation into a command, and feedback. Although the visual box concept eases the process of creating and modifying BCI pipelines, it assumes that the developer has signal processing experience. In addition, developers seeking to create BCI applications that feature novel types of visual feedback must also understand how to use

software development kits (SDKs) to develop external applications that use OpenViBE as a library. This requires additional basic networking knowledge. This chapter discusses the combination of EEG signal acquisition and block-based programming using modern web technology. To demonstrate this approach, this chapter presents NeuroBlock, a BCI development tool that ties together neurophysiological-based affective state information, visual feedback, and application development tools using the node.js JavaScript runtime environment. Although BCI software platforms have traditionally focused on providing users tools to manipulate the signal processing components of the BCI pipeline, NeuroBlock focuses on engaging users with the feedback component instead. This approach enables users to design feedback applications that leverage affective state data provided by the underlying BCI pipeline. Consequently, developers may leverage the dynamic nature of humans' emotional states to influence visual and auditory feedback provided by an application. To integrate EEG signal acquisition and block-based programming, scripts were created to streamline configuration of the Bluetooth connection responsible for transporting EEG data from the EEG device to the computer. After the host machine receives the EEG data, it is routed to a server that passes the information to a web application. Once the data reaches the web application, it is presented to users as line graph visualizations and puzzle-styled blocks that are used in combination with logic blocks to create feedback applications. This chapter is organized as follows. Section 3.1 discusses common challenges with existing BCI software platforms. Section 3.2 describes how NeuroBlock aims to address these challenges. Section 3.3 provides additional details about the process of developing NeuroBlock. Section 3.4 discusses insights gained from a pilot study that was used to improve NeuroBlock. The proposed system will feature a BBP environment designed to assist novice programmers with developing neurofeedback applications. Neurofeedback applications

monitor a user's mental state via EEG and provide feedback (often visual or auditory) based on varying levels of cognitive states such as relaxation or attention. The following sections discuss the proposed design of a system that enables novice programmers to create neurofeedback applications in a BBP environment. The signal acquisition section describes how signals will be acquired from the brain. The mental state identification section discusses how raw EEG data will be processed to provide affective state information. The neurofeedback component section will discuss components that will assist users with integrating neurofeedback functionality into applications. The web application section will discuss features of the web application that participants will use during the study. Fig. 3-2 provides an illustration of how these components will communicate with each other. This technical design is informed by BCI software platforms such as OpenViBE [54] and Berlin BCI [83]. The design section will discuss how Human-Computer Interaction (HCI) design principles will be applied. The last sections will cover research questions, experimental design, and research methodology.


Figure 3-1. Interaxon Muse. A) EEG headset, B) Muse electrode positions, and C) User wearing Muse.

3.1 Challenges

The work presented in this dissertation aims to address issues that may hinder novice programmers interested in creating neurofeedback applications. Prior to diving

Figure 3-2. System design.

into the technical details about how the system was implemented, the following sections will describe some of the challenges that could present barriers for non-technical users. Although various issues exist, this work highlights three themes that are common across most BCI software platforms: system configuration, text-based languages, and feedback.

3.1.1 System Configuration

Although BCI platforms have made much progress over the past years, challenges still exist that hinder end users [38]. One challenge is the relatively high barriers to entry present in many of the BCI platforms. Vital resources ranging from documentation on BCI platform options to user-friendly getting started documents are currently missing or

hard to locate online. Many of the resources provided by BCI platforms target end users with technical backgrounds (developers, neuroscientists, etc.). These documents can be very difficult to follow for someone without a strong technical background. For example, consider the following instructions provided to users interested in creating a near real-time BCI application using BCILAB. Download the code from ftp://sccn.ucsd.edu/pub/bcilab. Extract the file to some folder that is not your EEGLAB folder. Start MATLAB (2008a+ required for the GUI), and reset your path to default settings by clicking File / Set Path / Default, and then Save. If you have misc toolboxes in your path, you run the risk of creating unexpected errors (due to name conflicts), but you can add your directories back later once you know what outputs to expect. Enter in MATLAB's command line: cd your/path/to/bcilab; bcilab. The text shown above provides an example of the dependency on other platforms such as MATLAB. It also assumes users are familiar with the EEGLAB MATLAB plugin, which requires additional configuration knowledge. Users must also be familiar with managing directories used by MATLAB and basic command line knowledge. This provides a good example of the type of information users need to get started with existing BCI software platforms. Although this approach can be useful for controlled neurophysiological experiments, it can be confusing for non-technical users aiming to develop exploratory neurofeedback prototypes. Other BCI software platforms such as BCI2000 and OpenViBE do not entirely depend on platforms such as MATLAB. These BCI software platforms can be installed using basic executable files. Instructions are usually provided that get users started with common applications such as the P300 speller and 2D visualizations of processed EEG data. However, developing custom feedback applications requires additional technical knowledge. For example, OpenViBE suggests users use tools such as Ogre3D, a 3D engine written in C++, to develop external applications. Furthermore, users must understand how to connect the application to OpenViBE using communication protocols such as Virtual-Reality Peripheral Network (VRPN), TCP/IP, Lab Streaming Layer (LSL), and

Open Sound Control (OSC). Similar technical skills are required to create custom feedback applications that communicate with BCI2000. This process can be very discouraging to novice programmers without strong technical backgrounds.

3.1.2 Text-based Languages

As mentioned in the previous section, many of the existing BCI software platforms require users to use text-based languages such as C++ to create custom feedback applications. However, text-based languages may present barriers to novice programmers. Many text-based challenges are related to syntax. Most text-based languages require programmers to recall numerous keywords. Furthermore, programmers also must memorize the correct placement of characters such as semicolons, commas, and brackets. These requirements may present serious cognitive challenges for novice programmers. The cognitive demand of text-based languages is also associated with the command memorization requirement. For example, elements known as classes and methods are often necessary to provide commands or instructions to applications using text-based languages. These elements are usually mapped to unique keywords. Programmers typically need to commit numerous keywords to memory to efficiently develop programs [84]. This can lead to syntax errors, which can be frustrating for novice programmers [85]. As the complexity of a program's logic increases, organization becomes more essential. Scoping rules are used by many text-based languages to assist with this goal. These rules manage which regions of a program have access to program elements. Programmers must learn how to use techniques such as proper indentation or brackets to effectively use scoping rules. Failure to use these correctly can result in errors or bugs. Text-based languages typically provide programmers error messages to communicate issues caused by these errors. These messages have a reputation for being vague and misleading, which presents additional challenges to novice programmers. Although text-based solutions provide more flexibility than block-based programming, the alternative visual approach may be a better fit for novice programmers.
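To make the syntax burden concrete, consider the small JavaScript fragment below (JavaScript being the language NeuroBlock itself is built with). The fragment is purely illustrative; the names are invented for this example.

    // Even this small script requires recalling keywords (function, let, if),
    // matching every brace and parenthesis, and respecting scoping rules.
    // Omitting the closing brace or a semicolon produces a parser error
    // instead of the immediate visual cue a block editor would give.
    function updateSprite(relaxation) {
      let step = 0;            // `step` is scoped to this function only
      if (relaxation > 0.5) {
        step = 10;             // visible here because of lexical scoping
      }
      return step;
    }

A block editor removes each of these failure points: keywords become labeled blocks, delimiters become puzzle-piece shapes, and scope is expressed by physical nesting.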

3.1.3 Feedback

Figure 3-3. 2D signal visualization.

Figure 3-4. 3D topographic map.

The feedback component of the BCI pipeline plays a critical role in neurofeedback applications. Previous work has investigated novel visual designs for neurofeedback applications. Existing BCI software platforms often include prebuilt versions of popular

types of feedback such as 2D line graphs and 3D topographic maps (shown in Fig. 3-3 and Fig. 3-4, respectively). These feedback applications are often derived from prototypes developed using text-based languages. Currently, the use of text-based languages is required to create novel variants of these types of feedback applications. Researchers have recently addressed this issue by providing users access to parameters. This approach allows users to modify characteristics of a feedback application by updating values via a GUI application. For example, consider the parameter settings offered by Pyff, as shown in Fig. 2-6. Pyff enables users to modify parameters related to objects' speed and size. This interface also takes a visual approach to enabling users to modify a feedback application's settings related to control and presentation (e.g., frames per second). However, users seeking to add more complex logic to a feedback application may encounter issues. For example, users seeking to add custom conditions or loops to their application with Pyff would have to use a text-based language such as Python. Although this visual parameter approach is a step in the right direction, the lack of complex logic limits flexibility and could bound the number of unique applications novice developers can create. Once the feedback application is developed, it must be connected to signal processing components. This is usually done using text-based network communication libraries, which can require a moderate amount of programming skill. Exploring creative ways to present neurophysiological information in a visual-auditory form improves our understanding of ways to enhance methods such as cognitive training. However, the current process of creating these novel visualization applications can be troublesome for novice programmers. Additional details about how NeuroBlock leverages these insights will be provided in the following sections.

3.2 NeuroBlock Design Methodology

NeuroBlock combines a block-based programming environment with neurophysiological data acquired from an EEG apparatus. The design process of this system can be organized into the following categories: EEG apparatus, EEG data communication,

signal processing, block-based programming, neurofeedback, and file handling. This section describes how each of these components intertwine by following EEG data as it is transported through a BCI pipeline.

3.2.1 EEG Apparatus



Figure 3-5. EEG apparatuses. A) NeuroSky MindWave, B) Interaxon Muse, C) Emotiv Insight, D) Emotiv EPOC, E) OpenBCI Ultracortex “Mark IV”, and F) g.Nautilus.

EEG data must first be captured from a user's brain using an EEG apparatus prior to being provided to a computer. This device measures electrical activity from the brain using sensors. As shown in Fig. 3-5, multiple EEG apparatuses exist. These devices range from medical grade devices that are mainly used for clinical translational research (similar to the g.Nautilus shown in Fig. 3-5) to consumer grade wearable devices that are often

Figure 3-6. User wearing Interaxon Muse EEG apparatus.

used for less critical applications. Although consumer grade devices can be less accurate than their medical counterparts, they tend to be more affordable. For example, the devices shown in Fig. 3-5(A-E) cost a few hundred dollars. However, more precise devices similar to the one shown in Fig. 3-5(F) can cost thousands of dollars. The recent emergence of affordable BCI devices is a vital step towards making BCI technology accessible to the general population. The work discussed in this dissertation aims to leverage this momentum by presenting accessible and easy-to-use feedback development software for EEG apparatuses. To communicate the importance of the relationship between BCI software and hardware, consider conventional input devices such as a keyboard and mouse. Now imagine users needing to learn an entirely new language to efficiently complete word processing tasks. Although this may not be a problem for some users, it would present a clear challenge for others. Fortunately, this is not the state of word processing technology. However, this is a relatively accurate depiction of the current state of BCI software in reference to neurofeedback application development tasks. It is important to also note that word processing technology reached its current state through exploration of novel ways to

leverage input modalities. In an effort to apply this concept to BCI, the Interaxon Muse device shown in Fig. 3-5(B) was used as an EEG-based input modality during the study presented in this dissertation. However, any BCI device capable of communicating with a computer can be integrated into NeuroBlock. Brain signals acquired with this device are represented in microvolts (µV), which provide information about the brain's electrical activity. As illustrated in Fig. 3-1(B), this device consists of four channels (TP9, AF7, AF8, TP10) and one reference (Fpz) based on the international 10-20 electrode positioning system [86]. The reference electrode is used as a reference for measurement by the other electrodes. The Muse is designed to be mounted on the forehead as shown in Fig. 3-6. This allows it to be easily mounted without hair causing significant signal quality issues. In addition, this area is related to measurements of engagement and attention, which are associated with the states featured in NeuroBlock [87]. Participants used the Interaxon Muse EEG headset to capture attention levels (or relaxation levels), which play a vital role in driving feedback in the neurofeedback applications. Specific details concerning the communication protocol used to send EEG data from the BCI device to the computer are discussed in the following section.

3.2.2 EEG Data Communication

Figure 3-7. User building neurofeedback application with NeuroBlock.

As shown in Fig. 3-7, EEG data was communicated wirelessly using a Bluetooth 2.0 connection between the EEG apparatus and the computer. Specifically, a Dell Latitude E6530 laptop with a quad-core 2.9 GHz Intel i7 CPU was used during the study. The Muse device performs at a sampling rate of 220 Hz. A notch filter is applied at 60 Hz to remove artifacts such as power line interference. The first step in establishing a connection to the BCI device is pairing it to the computer via Bluetooth. This consists of initiating the computer's Bluetooth discovery mode and making the Muse discoverable by holding the Muse's power button down for five seconds. Afterwards, the Muse device appears as a nearby Bluetooth device. Once the EEG device is paired via Bluetooth to the computer, raw EEG signals are acquired on the computer using MuseIO, a research client application provided by Interaxon. Given that this work is focused more on the general user experience, MuseIO was considered sufficient for the initial prototype presented in this dissertation. It is important to note that prior to making this decision the author confirmed that EEG data collected from the Muse could be passed to other programs such as OpenViBE if further processing is required in the future. To accomplish this, data is passed from MuseIO to OpenViBE using the Lab Streaming Layer (LSL) [88] communication protocol. Either of these approaches can be used to create new affective state blocks for novice programmers in the future. Furthermore, NeuroBlock can support alternative BCI hardware that provides OSC or LSL communication options.

Table 3-1. EEG frequency bands related to various mental states.
Band name    Frequency (Hz)    Description
Delta        0 - 4             Deep sleep
Theta        4 - 8             Creativity, drifting thoughts, dream sleep
Alpha        8 - 13            Relaxation, calmness, abstract thinking
Beta         13 - 30           Focused, high alertness

For this study, the EEG data was transported from MuseIO to a server application developed using the node.js JavaScript runtime environment. Components known as “Muse Elements” were used to access data relating to users’ affective states. Muse

Elements are a collection of algorithms and signal processing methods. These tools are used to convert EEG signals collected from the Muse device into more useful information such as EEG frequency bands. EEG frequency bands are commonly associated with various affective states [89]. NeuroBlock leverages the band power session scores element to capture alpha, beta, and theta frequency band data. Band power session scores are scaled between 0 and 1 using a linear function. This linear function returns 0 if the most recent band power value is less than the 10th percentile of the distribution of band powers, returns 1 if the current value is equal to or higher than the 90th percentile of that distribution, and interpolates linearly in between [90]. This approach assists with representing gradual shifts of affective states. It also assists with mapping these values to control signals intended for visual objects. Converting values derived from volatile trends to control signals could lead to unintended erratic visual feedback. This would create additional work for the developer, which this work aims to avoid. The band power scores are communicated to the server application at 10 Hz using the Open Sound Control (OSC) protocol [88]. Information about the EEG device's channel quality is also communicated to the web application using the OSC protocol. OSC is a communication protocol designed for communication between devices such as sound synthesizers and computers. However, it can also be used to transport EEG data. Once the server receives these scores they are passed to a client web application via WebSockets. The following section provides additional details about how the web application uses this information.
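As a concrete illustration, the scaling just described can be re-implemented in a few lines of JavaScript. This is a sketch under the stated assumption of 10th/90th percentile endpoints; the function and variable names are hypothetical and not part of MuseIO.

    // Hypothetical re-implementation of the band power session score scaling
    // described above. p10 and p90 are the 10th and 90th percentiles of the
    // recent band power distribution (assumed to be tracked elsewhere).
    function sessionScore(bandPower, p10, p90) {
      if (bandPower <= p10) return 0;          // below 10th percentile
      if (bandPower >= p90) return 1;          // at or above 90th percentile
      return (bandPower - p10) / (p90 - p10);  // linear in between
    }

    // Example: with p10 = 0.2 and p90 = 0.8, a band power of 0.5 scores 0.5.
    console.log(sessionScore(0.5, 0.2, 0.8)); // -> 0.5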

3.2.3 Web Application

The web application component of NeuroBlock was also informed by previous BBP environments such as Scratch [91] and OpenBlocks [92]. The following sections will discuss how these established design principles are supported in the BBP neurofeedback development tool. The web interface uses a single-window, multi-pane design to make

Figure 3-8. Web application interface with affective state data viewer selected.

Figure 3-9. Web application interface with sprite viewer selected.

locating features and navigating the interface easy for users. This approach ensures that core components of the system are always visible. Fig. 3-8 and 3-9 show the single-window interface, which has four panes. The view shown in Fig. 3-8 reflects the interface when a user selects the affective state data viewer. Fig. 3-9 reflects the interface when the sprite viewer is selected. Interface controls are clearly marked using labels and icons related to each control's function. For example, controls associated with the blocks used to develop applications have labels such as motion, sound, data, events, control, sensing,

and operators. The functions of each of these controls will be discussed in the block-based programming section. Integrating constraints into the neurofeedback application development tool assists with making the development process as simple as possible. In many existing BCI software platforms, users can easily cause a system to enter an invalid state. For example, a user desiring to develop a neurofeedback application driven by relaxation levels is allowed to enter the wrong frequency range. This would cause the application to function improperly and may cause a novice developer to become frustrated. Anecdotal observations by the author have shown that people interested in learning BCI often get overwhelmed by the numerous signal processing options available in existing tools. Constraining the available options to specific frequency band ranges related to affective states (ex. Alpha/Relaxation (8-13 Hz), Beta/Attention (13-30 Hz)) can prevent users from experiencing this issue. This design principle was utilized as much as possible (without limiting users too much) in an effort to simplify the process of integrating mental state levels into applications. The physical puzzle-styled characteristics of the blocks shown in Fig. 3-8 and 3-9 also provide constraints. For example, if users try to join blocks that cannot be syntactically joined, the two blocks will not physically connect. This prevents users from introducing syntax errors into their programs. This also allows users to focus more on debugging logic instead of syntax errors. The neurofeedback development tool presented in this work aims to be consistent both internally and across other applications that users may use. For example, all block types (sprite, data, mental state, etc.) have the same operations to achieve goals such as adding, deleting, and modifying blocks. This tool also provides copy and paste shortcut keys for block manipulation that are consistent with common applications such as Microsoft Word. The stage component (Table 3-2), located towards the top left corner of the interface, contains programmable objects (sprites) that can be designed to respond to a user's affective state. Users can add and remove sprites from this stage area. Fig. 3-10 shows

Figure 3-10. Stage figure.

an example of shrimp, amoeba, and bacteria objects that have been added to the stage area. The stage component was implemented using the open-source Scratch-VM library. This library assists with maintaining the state of the block-based application. Examples of this include managing the location of objects and the current value of variables. This allows users to add logic that can be used to provide instructions to objects featured in the application.

Table 3-2. Stage components.
Name              Description
Input Handling    Handles mouse and keyboard input
Scene Manager     Assists with organizing game sprites and settings (ex. background color)
Sprites           2D bitmap images that can be integrated into scenes

3.2.3.1 Blocks

Table 3-3. Block components.
Category          Description
Control           Add control logic such as loops and conditional statements
Data              Create / modify variables
Events            Capture events such as a mouse click or keypress; handles communication between sprites
Operators         Mathematical operators
Affective State   Blocks reflecting mental state levels (ex. engagement) and frequency band power (ex. alpha, beta)
Sound             Plays audio feedback
Sensing           Handles functions such as object collision or cursor position

Figure 3-11. Example of blocks separated.

Figure 3-12. Connected blocks.

The block interface component provides block elements that are used to create neurofeedback applications. This component draws inspiration from Scratch [91]. It features a command palette that is used to switch between block categories. As shown in Table 3-3, seven block categories are available in the system: motion, sound, data, events, control, sensing, and operators. These block components were implemented using the Scratch-VM library discussed earlier. This library was selected based on its ability to effectively maintain and support various states of a web-based feedback application. To use these blocks, users move blocks from the block toolbox section to the block workspace by performing a drag-and-drop operation. Afterwards, blocks are combined by connecting

the puzzle-styled blocks together. For example, Fig. 3-11 shows a group of blocks prior to being assembled. Fig. 3-12 shows these blocks after the user has assembled them. In the above example the user created instructions that move an object at a speed relative to the current alpha band power session score. However, only a few types of blocks are shown in this example.

Figure 3-13. Motion blocks.

Fig. 3-13 shows other blocks that belong to the motion category. As shown in the previous example, the motion block category contains blocks that allow users to apply movement instructions to objects in the stage area. For example, direct motion commands can be provided to objects using the go to x: y: block. The white circular openings in these blocks serve as parameters. In this example, parameters allow users to create specific location-based instructions for objects in the stage area. The “go to random position” block can be used to move objects to a random position in the stage area. Users seeking to progressively move an object can use the change x by and change y by blocks. This comes in handy when developers wish to show more gradual visual changes. The set x and set y blocks can be used to move a stage object to an exact x and y location. An example of this is directly mapping a user's affective state to the y or x axis. An “if on edge, bounce”

block is also included in the motion block category. This block detects if an object is touching the edge of the stage area. It assists users with making sure objects do not disappear from the visible stage area. The x position and y position blocks store the current position of stage objects.
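To make the axis-mapping idea concrete, the following JavaScript sketch shows one way a set y instruction could bind an affective state score to the vertical axis. All names (sprite, stageHeight, the score argument) are hypothetical stand-ins for NeuroBlock's internal state, not its actual API.

    // Illustrative only: map a 0..1 affective state score onto the stage's
    // y axis, mirroring a "set y" block combined with an alpha block.
    const stageHeight = 360;               // hypothetical stage height in units
    const sprite = { y: 0 };               // minimal sprite model

    function setYFromScore(target, score) {
      // score 0 -> bottom of the stage, score 1 -> top of the stage
      target.y = (score - 0.5) * stageHeight;
    }

    setYFromScore(sprite, 0.75);           // a fairly relaxed user -> upper half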

Figure 3-14. Sound blocks.

Figure 3-15. Event blocks.

The sound block category contains blocks responsible for adding sound to a neurofeedback application. The sound category currently features the “play sound” block

which can be used to trigger sounds (Fig. 3-14). Sounds can also be stopped using the stop all sounds block. During the study the system supported a splash sound. However, the system can be expanded to support multiple sounds. The data block category enables users to define and manage variables. This category features a create variable button which can be used to define new variables. Once a variable is created, this block category presents users with user-defined blocks responsible for setting and changing the variable's value. As shown in Fig. 3-8, the values of user-defined variables are displayed at the top of the interface. Users can also use the show and hide variable blocks to control a variable's visibility. Blocks in the event block category are used to capture various types of triggered events. Currently, users can leverage the when “green flag” clicked block. As shown in Fig. 3-15, the blocks can also be used to detect when keyboard keys are pressed. These blocks can be combined with other blocks to trigger custom instructions. Fig. 3-12 shows an example of event blocks being used to detect when arrow keys are pressed. Furthermore, motion and alpha sensing blocks are combined with the event block to add basic hybrid BCI control functionality to the application. The control block category contains blocks associated with the logical structure of the program. These instructions enable developers to determine when, how, and under what conditions instructions are executed. For example, the repeat block shown in Fig. 3-16 can be used to repeatedly execute instructions. By default, this block is set to repeat 10 consecutive times. However, this value can be modified by the user. Users seeking to consecutively repeat a set of instructions indefinitely may use the forever block. Along with specifying iteration, control blocks can be used to execute instructions under specific conditions. For example, consider the script shown in Fig. 3-17. The if block is used in this script along with a greater than operator block to create a condition that checks whether the alpha band power session score is greater than 0.5. The change x by motion block is also used to move the object 10 units whenever this condition is true.

Figure 3-16. Control blocks.

Figure 3-17. Simple script that moves an object based on a user’s relaxation level.

This example translates to the object moving whenever the BCI device detects mental states related to high relaxation levels. The else block expands the if block functionality by providing a section that can be used to apply instructions when the condition evaluates as false. The wait block pauses a script for a user-defined number of seconds. This block can be used to delay the execution of instructions. The control block category also provides special blocks associated with duplicating stage objects. Users can manage these duplicate objects using the “when I start as a clone” and “delete this clone” blocks. The “when I start as a clone” block can be used to execute instructions whenever a new clone is created. This block is also used to provide instructions for the newly created duplicates. For example, developers wishing to move stage object duplicates can connect the “go to random position” block from the motion category to the “when I start as a clone” block.
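For readers more familiar with text-based code, the script in Fig. 3-17 corresponds roughly to the following JavaScript sketch. The bindings (alphaSessionScore, sprite, the frame loop) are hypothetical stand-ins for NeuroBlock's runtime, not its actual API.

    // Hypothetical stand-ins for NeuroBlock's runtime state:
    let alphaSessionScore = 0.6;          // would be updated from the EEG stream
    const sprite = { x: 0 };              // minimal sprite model

    // Rough text equivalent of the forever / if script in Fig. 3-17:
    // forever { if (alpha > 0.5) { change x by 10 } }
    function step() {
      if (alphaSessionScore > 0.5) {      // greater-than operator block
        sprite.x += 10;                   // change x by 10 motion block
      }
      requestAnimationFrame(step);        // the forever block's implicit loop
    }
    requestAnimationFrame(step);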

Figure 3-18. Sensing blocks.

The sensing block category holds the affective state blocks and blocks associated with the mouse. The affective state blocks include alpha, beta, and engagement blocks. The alpha and beta blocks shown in Fig. 3-18 hold values ranging from 0 to 1. These values are derived from the EEG frequency band processing discussed in the EEG data communication section. The engagement block holds values that also range from 0 to 1 and are calculated using a commonly used formula [93]. The touching

block is a Boolean block that handles basic collision functions and can be used to test whether two objects are touching each other. For example, this block holds the value true if a stage object is touching the mouse pointer or another object. Objects can be selected using the drop-down menu triggered by clicking the downward-facing arrow displayed on the block. Once the drop-down menu is opened, all stage objects that have been added to the application are displayed.
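The "commonly used formula" behind the engagement block is not spelled out above; a widely cited candidate is the beta / (alpha + theta) engagement index attributed to Pope et al. The sketch below is written under that assumption, and the clamp and function names are illustrative.

    // Assumed engagement index: beta / (alpha + theta), clamped to 0..1.
    // Inputs are the band power session scores described earlier.
    function engagementScore(alpha, beta, theta) {
      const raw = beta / (alpha + theta);
      return Math.min(1, Math.max(0, raw)); // keep the block's value in 0..1
    }

    console.log(engagementScore(0.3, 0.6, 0.2)); // 0.6 / 0.5 = 1.2 -> clamped to 1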

Figure 3-19. Operator blocks.

The operators block category holds blocks associated with mathematical operations including addition, subtraction, multiplication, and division. Greater than, less than, and equal to comparison operators are also included in the operator block category. These blocks can be used in conjunction with control blocks to execute instructions under specific conditions. The operator block category also contains a “pick random” block. This block returns a random value that falls between two user-defined values. For example, the default values are 1 and 10 as shown in Fig. 3-19. In this case the pick random block will return a random value between 1 and 10 each time it is executed.
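A pick random block with bounds 1 and 10 can be mirrored in JavaScript as follows; this is an illustrative sketch, not NeuroBlock's implementation.

    // Illustrative equivalent of the "pick random 1 to 10" block: returns an
    // integer drawn uniformly from the inclusive range [min, max].
    function pickRandom(min, max) {
      return Math.floor(Math.random() * (max - min + 1)) + min;
    }

    console.log(pickRandom(1, 10)); // e.g. 7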

3.2.3.2 Neurofeedback

Data collected from the EEG apparatus influences multiple feedback components in the interface. This is common in various types of BCI software platforms. It is also common in commercial EEG software such as the software accompanying the Emotiv EPOC. These platforms often include some form of affective state and channel quality feedback. The affective state viewer is

Figure 3-20. Affective state line graphs.

a component that provides users feedback about their current affective state. As shown in Fig. 3-20, affective state data is presented as line graphs. These graphs display the recent band power session score values passed from the server as discussed in the EEG data communication section. This feedback serves as a way to check how affective state levels influence the neurofeedback application. Line graph visualizations are provided for calculations of engagement. These visualizations are also provided for the alpha and beta EEG frequency bands.


Figure 3-21. Signal quality feedback. A) Good signal quality, B) OK signal quality, C) Bad signal quality, D) Varying signal quality, and E) No Signal.

The channel quality viewer assists users with making sure the BCI device is mounted properly. This primarily consists of ensuring the device has proper contact with a user's forehead. The state of contact is organized into three levels: bad, ok, and good. Each of these levels is presented to users visually as red, yellow, and green indicators respectively. As shown in Fig. 3-21, these indicators are positioned over a top-down view of a head. This image is used to assist users with mapping the sensor indicators shown in the interface to the physical sensors on the EEG apparatus. Channel quality information is passed to the web application using the same pipeline discussed in the EEG data communication section. Fig. 3-21(A) shows an example of when all channels have good signal quality. The channels in Fig. 3-21(B) have moderately good signal quality. The red indicators shown in Fig. 3-21(C) indicate poor signal quality. Channels' signal quality can also vary as shown in Fig. 3-21(D). When no signal is detected the channel quality feedback is black as shown in Fig. 3-21(E).

3.3 Implementing NeuroBlock

This section discusses the engineering challenges confronted during the development of NeuroBlock. The key challenges are organized into the following categories: EEG data communication, feedback, stage management, object management, and workspace management. This section describes the approaches used during the implementation of NeuroBlock to address challenges associated with each of these categories. NeuroBlock is designed to ease the process of creating the feedback component of a BCI system. Accordingly, the lower-level approaches discussed in the following sections were implemented to allow users to leverage lower-level functions via visual elements presented in the interface.

3.3.1 EEG Data Communication

BCI device manufacturers often offer software development kits (SDKs) for EEG apparatuses. Although these kits offer many tools, they usually do not include ways to send EEG data directly to a web application. Instead, communication protocols

Figure 3-22. JavaScript code snippet that passes information captured from the Interaxon Muse to the web application.

such as OSC are commonly used. Although multiple approaches could be used to address this constraint, this work also aims to design a hardware-agnostic system architecture. To address this challenge, a server capable of receiving OSC messages was developed. This was accomplished by using the osc.js package [94]. OSC's message-based design was leveraged to route data from the EEG apparatus to the appropriate server-side function. This message-based design features an address pattern which consists of a combination of strings and forward slashes. For example, consider the address pattern shown in Fig. 3-22. This illustration shows how the address pattern associated with alpha

band power session scores is used to access band power scores related to the alpha EEG frequency band. Specifically, a switch case is used to test whether an address pattern matches an address containing data utilized by NeuroBlock. The “touching forehead” case is used to detect whether the BCI device is mounted. The “horseshoe” case manages channel signal quality information. The code snippet shown in Fig. 3-22 also sends the captured band power session scores to the web application.
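For readers without access to the figure, the following sketch shows roughly how such a server can be assembled with osc.js. The port number, the MuseIO element address strings, and the sendToClients helper are assumptions made for illustration and should be checked against Interaxon's documentation; this is not the exact code from Fig. 3-22.

    const osc = require("osc");

    // Hypothetical forwarding helper (wired to socket.io in the next section).
    function sendToClients(eventName, args) {
      // placeholder: broadcast the values to connected web clients
    }

    const udpPort = new osc.UDPPort({
      localAddress: "127.0.0.1",
      localPort: 5000                            // assumed MuseIO OSC output port
    });

    udpPort.on("message", (oscMsg) => {
      switch (oscMsg.address) {                  // route by OSC address pattern
        case "/muse/elements/alpha_session_score":
          sendToClients("alpha", oscMsg.args);   // alpha band power session scores
          break;
        case "/muse/elements/touching_forehead": // is the device mounted?
          sendToClients("forehead", oscMsg.args);
          break;
        case "/muse/elements/horseshoe":         // per-channel signal quality
          sendToClients("quality", oscMsg.args);
          break;
      }
    });

    udpPort.open();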

Figure 3-23. JavaScript code snippet that receives data sent from the server.

Once the server receives the necessary data, routing the received message to the appropriate client-side function presents an additional challenge. Ideally, the server should also be capable of handling communication between multiple clients. This would be useful in scenarios where the server handles transport of EEG data between multiple EEG apparatuses and multiple web applications. To address these challenges, the socket.io

package was utilized. Socket.io primarily uses the WebSocket protocol, which allows it to support real-time bidirectional event-based communication. It has client-side and server-side libraries which are used to transport band power session scores between the server and the web application. Similar to OSC, socket.io allows developers to assign unique identifiers to messages passed between the server and client. One key difference is that socket.io considers these identifiers event names. Consequently, each time the server receives an OSC message, it triggers an event using socket.io's emit method. Each time this event is triggered, a client-side event listener receives band power session scores. Fig. 3-23 shows a snippet of client-side JavaScript code that receives data sent from the server-side code shown in Fig. 3-22. Once signal quality and band power session scores are received, they are passed to functions that update the affective state visualizations, the current state of the affective state blocks, and the signal quality visualizations.
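A hedged sketch of this emit/listen pairing is shown below; the port, the "alpha" event name, and the update functions are illustrative placeholders rather than NeuroBlock's actual identifiers.

    // Server side: attach socket.io to an HTTP server (port is an assumption)
    // and forward each received OSC value as a named event.
    const http = require("http").createServer();
    const io = require("socket.io")(http);
    http.listen(3000);

    function sendToClients(eventName, args) {
      io.emit(eventName, args[0]);        // broadcast the score to every client
    }

    // --- Client side (runs in the browser, as in Fig. 3-23) ---
    // const socket = io();               // connect back to the serving host
    // socket.on("alpha", (score) => {
    //   updateAlphaGraph(score);         // hypothetical: refresh the line graph
    //   setAlphaBlockValue(score);       // hypothetical: update the alpha block
    // });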

3.3.2 Feedback

Presenting data communicated from the EEG apparatus visually was an additional challenge encountered while developing NeuroBlock. EEG time series data is commonly presented as line graphs. However, NeuroBlock's multi-pane design presented the challenge of integrating this visualization alongside the stage and workspace areas. Flot, a JavaScript plotting library for jQuery, was used to address this challenge. This library offers dynamic line charts, which were used to provide visual feedback of engagement levels and band power session scores. An additional challenge related to feedback is channel quality visualization. This information is often presented visually as shown in Fig. 3-21. This design inspired the channel quality feedback featured in NeuroBlock. This feedback design was implemented by creating black, red, yellow, and green circle images. These circles are placed over the top-down image of the head based on the channel quality status information. To appropriately update the channel quality visualization, two parameters are important: channel ID and quality level. Channel IDs are used to make sure the appropriate

channels' quality indicators are updated. For example, as shown in Fig. 3-21, each channel corresponds to the locations illustrated in Fig. 3-1 (TP9, AF7, AF8, TP10). Accordingly, each circle is linked to a channel ID and updates when changes are communicated to the web application as event messages from the server. The body of each message contains channel quality information which is used to determine which circle graphic to load.
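The channel-ID-to-indicator mapping described above can be sketched as follows; the element IDs, image paths, and the numeric quality encoding are assumptions for illustration, not NeuroBlock's actual resources.

    // Illustrative channel-quality update: map each Muse channel to an indicator
    // image. Quality values are assumed to arrive as 1 (good), 2 (ok), 3 (bad),
    // with anything else treated as no signal.
    const CHANNELS = ["TP9", "AF7", "AF8", "TP10"];
    const QUALITY_IMAGES = { 1: "img/green.png", 2: "img/yellow.png", 3: "img/red.png" };

    function updateChannelQuality(qualities) {   // e.g. [1, 2, 1, 3]
      CHANNELS.forEach((channelId, i) => {
        const img = document.getElementById("quality-" + channelId); // hypothetical IDs
        img.src = QUALITY_IMAGES[qualities[i]] || "img/black.png";   // black = no signal
      });
    }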

3.3.3 Stage and Workspace Management

Figure 3-24. Object management pane.

Figure 3-25. Object selection menu.

Presently, the Scratch-VM library is designed to work as an extension of the Scratch project. Although this is sufficient for developers looking to work on projects that were built using Scratch, it could present challenges for those aiming to implement a platform independent of Scratch. One specific challenge was loading local graphics into the stage environment. This issue was addressed by loading documents from a load directory instead

of fetching them from a server hosting Scratch projects. Based on observations during the early testing phases of NeuroBlock, it was clear that it is natural for users to interact with objects in the stage environment via the mouse. However, once a new object was selected the workspace originally did not dynamically change. To address this challenge, input event listener methods offered by the Scratch-VM were leveraged to capture mouse events in the stage area. Once these events were captured, additional Scratch-VM methods were used to change the active script displayed in the workspace. It was also necessary to implement features that assist users with managing objects in the stage area. This was addressed by creating a sprite pane area below the stage area as shown in Fig. 3-24. Users may add objects by clicking the green plus sign in the sprite pane. Once the green plus sign is clicked, the menu shown in Fig. 3-25 presents users with options to add objects to the application. Users may click on an object to add it to the stage area. Objects can also be removed by clicking the red X button in the top right corner of the object icon. Clicking on an object icon in the sprite pane will also change the scripts displayed in the workspace. When an object is selected, its background is set to green to provide feedback about which object is active.

3.4 Pilot Study

A pilot study was conducted to investigate how users interacted with NeuroBlock. The main goal of this investigation was to explore the usability of NeuroBlock and identify any major issues with the system.

3.4.1 Population and Procedure

Seven students from the University of Florida’s Department of Computer and Information Science and Engineering participated in the pilot studies. Although each of the participants had programming experience, none of the students had experience building a BCI application. There was a total of 5 females and 2 males. The average age of participants was 25. Each of the participants completed a total of 3 sessions over the span of 7 days. Due to the length of each neurofeedback development task,

76 participants completed 1 session per day. After the first session, the participants returned for their second session two days later. The third session was completed 3 days after the second session to complete the full week cycle. Each session began with a pre-session questionnaire that collected general demographic and programming self-efficacy information. The questionnaire also gathered self-efficacy information based on a modified version of Compeau and Higgins’ validated scale [46]. The modifications made the questionnaire task-specific to neurofeedback application development. Participants answered the same questionnaire while completing the post-session questionnaire. Once the pre-session questionnaire was complete participants watched a 13-minute tutorial video that explained how to use NeuroBlock’s core features. This tutorial only focused on basic examples and did not imply any strategies concerning the best way to use NeuroBlock’s features. Prior to the task, participants completed a pre-task exercise. The 13-minute tutorial video was only shown during session one. During session two participants watched an approximately 2-minute tutorial that introduced the concept of collision. A 2-minute video tutorial was shown during session three that discussed how to create clones. Participants had 20 minutes to complete the pre-task. During the pre-task, participants were instructed to build a neurofeedback application. Each session pre-tasks were designed to ensure participants were proficient enough to start the session task. During the session task, participants were instructed to build an additional neurofeedback application. Participants had 45 minutes to complete the session task. This featured a different application with more instructions and objects. Additional information about the session pre-task and task will be provided in the following sections. Once participants completed the session task, they were given a post-session questionnaire followed by a semi-structured interview to learn more about their experiences with NeuroBlock. The post questionnaire survey included a post self-efficacy and the System Usability Score (SUS) survey [95]. During the interviews the researcher sat alongside the participant as

77 they both faced a computer running NeuroBlock. Audio recordings were generated of each interview for post-experiment analysis. Each of the tasks used during the pilot study were informed by the Bacteria Hunt neurofeedback study [96]. The goal of the session one pre-task was to create a neurofeedback application featuring an amoeba sprite that moves upward as relaxation levels (alpha band power session scores) increased. Participants completed the task once they built an application that moved the amoeba to the top of the stage based on the user’s relaxation level. During the session one task, participants were asked to create a hybrid neurofeedback application featuring an amoeba sprite. This consisted of designing an application that leverages both the keyboard and the EEG apparatus to control the amoeba sprite. Participants were instructed to design an application that boosts the speed of the amoeba sprite when high levels of engagement were detected. They were also asked to reduce the speed when high levels of relaxation (alpha band power session scores) were detected. Additional instructions included adding jittery motions to the amoeba when high levels of relaxation or low levels of engagement are detected. While completing session two’s pre-task, participants attempted to create a hybrid neurofeedback application featuring an amoeba and a shrimp that leverage a BCI device and keyboard. The goal of the application was to move the amoeba to the left side of the stage using the keyboard. Furthermore, participants were asked to positively map the amoeba’s speed to engagement levels provided by the EEG apparatus. They were also instructed to add a collision detection functionality for the amoeba and shrimp. The instructions provided also mentioned that the amoeba should move to a random position whenever two sprites collided. During the session two task participants created a hybrid neurofeedback application that featured an amoeba and bacteria sprite. This also consisted of supporting interactions via keyboard and the EEG apparatus to control the amoeba sprite. Participants were instructed to develop an application that detected a collision between the amoeba and bacteria sprite. Instructions to increase a score variable

78 each time a collision occurred was also provided. Participants were also instructed to set sprites back to their starting positions when the collision occurred. Participants were asked to design the application so that speed was positively influenced when the BCI device detected high relaxation levels. Additional instructions included moving the bacteria towards the center of the stage when high levels of relaxation were detected and positively mapping beta band power session scores to jitter motion. During the session three pre-task, participants created a hybrid neurofeedback application featuring an amoeba and multiple bacteria sprites. Participants were asked to design the applications so that it supported control from both the BCI device and keyboard. The goal of the application was to use the keyboard to move the amoeba to the right and left sides of the stage and attempt to catch falling bacteria. Participants were instructed to design the application so that amoeba moved faster during high levels of relaxation. They were also instructed to make the bacteria fall faster during high levels of relaxation. Participants were asked to create a score variable that increased when collisions between the bacteria and amoeba occurred. During the session two task participants created a hybrid neurofeedback application featuring an amoeba, multiple bacteria characters, and multiple shrimp sprites. The objective of the neurofeedback application was to eat as many bacteria sprites as possible while avoiding the shrimp sprites. To create this application, participants were asked to design features that supported control via the EEG apparatus and keyboard. Participants were instructed to positively map the speed of the amoeba to engagement levels. They were also instructed to create duplicates of the shrimp sprites when high levels of relaxation were detected. The instructions also asked the participants to make the shrimp clones disappear when high beta band power session scores were sustained. Other instructions included reducing points when the amoeba collided with a shrimp sprite, returning the amoeba to the center of the stage when all points were lost, and increasing points when the amoeba collided with bacteria.

79 3.4.2 Observations

Figure 3-26. Pre-task effectiveness.

Figure 3-27. Task effectiveness.

Overall, the preliminary observations from this pilot study were promising. Although no significant relationships were observed between efficacy and efficiency, all participants were able to successfully develop neurofeedback applications using NeuroBlock. According to results from the session one SUS survey, participants rated the system above average (M=73.21, SD=8.255). The SUS survey results for session two were above average

80 (M=75.35, SD=6.68). Similar to the first two sessions, the final session SUS scores were above average (M=77.5, SD=11.54). As shown in Fig. 3-26 there was nearly an even split between participants who were able to complete the pre-task with or without help during session one. During session two all but one participant completed the pre-task without assistance. Participants effectiveness during session three resembled session one with nearly an even split between participants who were able to complete the pre-task with or without assistance. One of the most interesting observation is associated with psychophysiological self-regulation. This is common in traditional biofeedback therapy processes were patients learn techniques to manage emotional and mental states. During the study, NeuroBlock seemed to encourage similar behaviors. While trying to debug their neurofeedback application, a participant stated: “I am trying to relax by counting and breathing.” This occurred during session one while they were testing whether their applications moved the amoeba object based on alpha band power session scores. Another participant stated: “I tried to look at my Alpha levels to help me relax and then the amoeba.” Participants even made statements such as: “I am trying to think about nothing while testing the application.” These kinds of responses may suggest that BBP approaches to neurofeedback applications may assist self-regulation training. The combination of neurophysiological sensing devices and customizable immediate feedback seemed to encourage the participants to explore ways to stimulate various internal states. Although these assumptions require further investigation prior to being validated, additional exploration should be encouraged. Only one of the participants of the study had experience actively working with a BCI device. Assumptions were originally made that BCI would present a novelty effect. Consequently, the original study was designed to observe of how participants interacted with the BCI device unsupervised. What was observed instead was disregard of the EEG apparatus. Participants instead focused on following the instructions and building the

81 application. This mainly consisted of selecting and connecting the appropriate blocks. This observation suggest that novice users needed further detailed instructions regarding the interaction between the BCI device and NeuroBlock. During session one multiple participants expressed their fascination with the system. Statements such as “Cool”, “That was pretty cool”, and “This is pretty tight” suggest that participants were interested in the system during their first encounter. As the novelty of the system wore off during session two and three this fascination evolved into curiosity. Some participants asked to tinker with additional blocks after they completed the session task. Other participants discovered blocks that were not mentioned during training to complete tasks. During the semi-structured interviews, participants frequently commented on NeuroBlock’s ease of use. As one participant stated, “Overall I thought it was easy to use”. Overall, based on the participant interviews, the analysis of Neuroblock can be organized into two categories: simplicity and selection. Participants favored the simple multi-pane design of the application. Many participants gave responses such as “I liked that it is simple to use” and “Building the [neurofeedback application] was straight-forward”. Participants also commented on the ease of discovering and selecting blocks which may shed light on why they perceived the system as simple. As one participant said, “It’s easy to know where things were”. During session two and three these points were reiterated with statements such as “Especially now since I know where most of the blocks are”, “I felt like I knew where everything was better this time”, and “When it was not working I knew exactly which block to go to”. One participant stated: “I pretty much knew where things were this time which made it more enjoyable”. Participants also seemed to enjoy the game-like mechanics of the neurofeedback applications they created. As one participant said: “I enjoyed the game aspect of it”. A similar point was made by another participant who said: “It was more of a game which I liked”. Participants stated they enjoyed the neurophysiological

82 feedback provided in the interface. Responses such as “I enjoyed that I could actually see my brain activity as I was playing the game” and “I like that you are able to see the engagement and attention along with the other measures”. These responses seemed to speak more about the perception of the feedback in general. Participants also seemed to think the feedback was beneficial specifically during the applications development process. This point was iterated with statements such as “I really like seeing the visual feedback while testing” and “I like the immediate feedback when you mount the device and see engagement.”. One participant even went to the extent of explaining how the feedback helped rid skepticism regarding the technology prior to the study. The participant stated: “It actually was like the real first time it was a concrete connection between the BCI and what was happening on the screen”. Another participant made a similar statement saying, “ you could really see how the BCI impacts stuff” in reference to the application they created. Similar comments were made concerning the validity of the created application. Participants stated: “I built a legit game” and “It felt like a real game”. Additionally, one participant stated: “Seeing the shrimp appear made me feel accomplished” when ask how creating the neurofeedback application made them feel. During the interviews participants made statements related to the novelty of the system. These responses included responses such as, “It’s so weird because it’s different from everything else”. Other participants made similar points with statements such as “I have never seen [EEG data] set up in a way that did not seem intimidating” and “I like actually being able to see my brain waves this was my first time seeing that”. Some participants did however have negative comments about the BCI hardware. One participant stated, “I don’t like the way it feels on my head”. An additional participant stated: “I can’t wear my glasses while having the BCI device on”. Participants also reported that it was difficult to keep track of the affective state levels in two different tabs. One participant summed this up saying: “You had to go back and forth to see [engagement and theta]”. Another participants described it as “I was trying to look at

83 two things at once”. Although explanations of EEG data were provided in the tutorial videos one participant stated: “Remembering the EEG stuff was difficult”. This point was reiterated when one participant stated: “I wasn’t sure what engagement was or attention”. Participants also mentioned that the colors of two of the block categories were similar which may have also added to confusion while using the system. These comments were used to improve the system and design of the discussion. This process will be discussed further in the following section. 3.4.3 Discussions

This pilot study investigated how participants used a BBP environment for neurofeedback development. Similar BBP environment have been used for introducing younger audiences to programming. Lack of programming skills has also been a potential barrier to neurofeedback application development [39]. Although the pilot study participants had programming experience, none of them had experience building BCI applications. It is important to explore how work in the field of visual programming languages and block-based programming can be leveraged to extend the reach of BCI technology. As the observations from the study showed, participants enjoyed working with the technology. Although a novelty effect is expected with technology of this kind, participant fascination with the tool persisted for all 3 sessions. This may suggest that users may enjoy building neurofeedback application with approaches like NeuroBlock. Ease of use has been a core goal of many visual programming languages. This is an additional feature that may greatly benefit BCI software platform designers in the future. Many of the participants’ responses during the semi-structured interviews suggest that ease of use is also a feature of NeuroBlock. This is further supported by the above average SUS scores observed for each session. According to statements expressed during the interviews, one of the most captivating and useful features was the neurophysiological feedback. Participants stated that the feedback was not only enjoyable but it was also useful when debugging the neurofeedback application. The pilot study also shed light on

84 drawbacks of neurofeedback application development. Participants expressed negative sentiments towards the physical device. Although this is not within the scope of this work it is important to note the general feelings towards this component of the system. A common issue was the difficulty of switching between two different affective state panes. Since nearly all participants had the same concern this issue was considered a critical drawback. Consequently, the application was designed to focus on a smaller set of affective states. Further work is needed to investigate ways to seamlessly add additional affective states into the interface. Given that this work focuses on the introductory phase of BCI development starting with a smaller set of affective state was considered valid. To cut down on confusion the block categories were also made more distinct. 3.4.4 Pilot Study Conclusion

The pilot study suggest that NeuroBlock is a easy to use tool for neurofeedback application development. Participants seemed interested in the feedback offered by the system and stated that the feedback was useful for neurofeedback application development. Issues related to potential cognitive overload were addressed after the pilot study and before the study discussed in Chapter 4. Although this pilot study provided important insights, all the participants had programming experience. The study discussed in the following chapter investigates how this tool supports users with less experience programming. 3.5 Chapter Summary

This chapter presented an approach to neurofeedback application development using BBP. Users who are interested in investigating BCI technology that lack a programming background may benefit from the approach presented in this chapter. Current alternatives all involve text based programming which could be troublesome for novice programmers. The block-based approach presented in this chapter aims to avoid many of the pitfalls such as syntax, scoping, and other language specific rules. This chapter also described technical

85 architecture which was designed to be able to extend to new types of devices in the future. This was made possible largely due the node.js JavaScript runtime environment. 3.6 Conclusion

Visual programming languages have been used for years to introduce younger populations to programming. However, there has been limited work investigating the applicability of using this approach to lower the current barrier to entry to BCI. The presented approach aims to address this challenge. The work discussed in this chapter presents description of the system design and a preliminary user study. The study suggested that participants enjoyed using the system and thought that it was easy to use. However, these individuals had programming experience so further exploration with a novice population is necessary to understand how this tool may support less technical individuals.

86 CHAPTER 4 EVALUATION OF NEUROBLOCK To explore the BBP approach to neurofeedback application development presented in this work, a user study was conducted. One goal of this study was to gather preliminary insights on the usability of NeuroBlock. This study also aimed to provide qualitative feedback from users that may assist future BCI software platform designers. Specifically, this work investigates the following questions.

• (RQ1) What barriers do novice programmers face when developing neurofeedback applications using a block-based programming approach?

• (RQ2) How do novice programmers perceive the usability of a block-based neurofeedback development tool?

• (RQ3) What is the relationship between BCI self-efficacy and novice programmers ability (efficiency and effectiveness) to develop neurofeedback applications using a visual neurofeedback development tool? 4.1 Participants

40 student participants were recruited from an introductory programming course at the University of Florida. The participants age ranged from 18 to 30. 90% of the participants were between 18-21 years of age. A total of 14 females and 26 males participated in the study. All participants had limited programming experience with Java which was covered in the introductory programming course. Student participants had a wide variety of majors which included computer science, computer engineering, mechanical engineering, electrical engineering, digital arts and sciences, statistics, criminology, and mathematics. Participants were screened to ensure they did not have experience creating neurofeedback applications or using BCI devices. Only two participants had previous experience using a block-based programming environment. In both cases participants reported only faintly remembering using visual programming languages such as scratch while in high school.

87 4.2 Procedures

Each of the participants completed a total of 3 sessions over the span of 7 days. Due to the length of each Neurofeedback development task, participants completed 1 session per day. Each session had a different level of difficulty. Session difficulty was determined based on the number of sprites and scripts. The complexity of scripts was measured using the McCabe cyclomatic complexity metric which counts the number of decision points (if and if else blocks) [97, 98]. Participants completed each session in order of increasing difficulty. Participant received a total of $75 via a gift card for completing all three sessions. Partial compensation was awarded after each session. Participants received $10 for completing the first session. After completing the second session participants received $25. Once participants completed the third session they were awarded $40. After the first session, participants return for their second session two days later. The third session was completed three days after the second session to complete the full week cycle. Each session began with a pre-session questionnaire that collected general demographic and programming self-efficacy information. The questionnaire also gathered self-efficacy information based on a modified version of Compeau and Higgins validated scale [46]. The modifications made the questionnaire task-specific to neurofeedback application development. Participants answered the same questionnaire while completing the post-session questionnaire. Once the pre-session questionnaire was complete participants watched a 13-minute tutorial video, which explained how to use NeuroBlocks core features. This tutorial only focused on basic examples and did not imply any strategies concerning the best way to use NeuroBlocks features. After the system tutorial video, participants watched a think-aloud example video. This video featured someone using the think-aloud protocol while navigating a website. Prior to the task, participants completed a pre-task exercise. The 13-minute tutorial video was only shown during session one. During session two participants watched an approximately 2-minute tutorial that introduced the concept of

88 collision. A 2-minute video tutorial was shown during session three that discussed how to create clones. Participants had 20 minutes to complete the pre-task. During the pre-task, participants were instructed to build a neurofeedback application. Each session pre-tasks were designed to ensure participants were proficient enough to start the session task. During the session task, participants were instructed to build an additional neurofeedback application. Participants had 45 minutes to complete the session task. This featured a different application with more instructions and objects. Screen recordings were captured during each task using the RecordMyDesktop software. Additional information about the pre-task and task will be provided in the following sections. Once participants completed the session task, they were given a post-session questionnaire followed by a semi-structured interview to learn more about their experiences with NeuroBlock. The post questionnaire survey included a post self-efficacy and the System Usability Score (SUS) survey [95]. During the interviews, the researcher sat alongside the participant as they both faced a computer running NeuroBlock. The protocol for the interviews began with questions concerning positive experiences with the system. Afterwards, participants addressed questions concerning bothersome experiences with NeuroBlock during a semi-structured interview. The interview concluded with participants discussing what they would like to change. Audio recordings were generated of each interview for post-experiment analysis. The interviews were used to gain a better understanding of the end-user programming barriers participants encountered during each session. These interviews were analyzed using Grounded Theory [99] to identify concept and major themes related to end-user programming barriers and the general experiences of participants. This approach was also used to generate a list of concepts and categories related to participants’ perceptions of a block-based programming approach to neurofeedback application development. The first step of this analysis featured an open coding phase. During this phase, transcripts of participants interviews were analyzed. A basic inductive theory approach was used to organize participant feedback based on common concepts related to pain points

89 participants encountered while using NeuroBlock. The first phase was primarily focused on gathering positive and negative insights into categories that provide a general description of participants perceptions of the implemented system. Afterwards, axial coding was used to identify patterns in participant’ feedback. A constant comparison approach was used across participants to identify similar and different patterns.

Figure 4-1. Screenshot of the stage component during session one task.

Each of the tasks used during the study were informed by the Bacteria Hunt neurofeedback study [96]. The goal of session ones pre-task was to create a neurofeedback application featuring an amoeba sprite that moves upward as relaxation levels (alpha band power session scores) increased. Participants completed the task once they built an application that moved the amoeba to the top of the stage based on the users relaxation level. During the session one task participants were asked to create a hybrid neurofeedback application featuring an amoeba sprite as show in figure 4-1. This consisted of designing an application that leverages both the keyboard and the EEG apparatus to control the amoeba object. Participants were instructed to design an application that boosts the speed of the amoeba sprite when high levels of engagement are detected. They were also asked to reduce the speed when high levels of relaxation (alpha band power session scores) were detected. Additional instructions included adding jittery motions to the amoeba

90 when high levels of relaxation or low levels of engagement were detected. This program featured one sprite, three scripts, and one decision point.

Figure 4-2. Screenshot of the stage component during session two task.

While completing session two’s pre-task, participants attempted to create a hybrid neurofeedback application featuring an amoeba and a shrimp object that leverage the BCI device and keyboard. The goal of the application was to move the amoeba to the left side of the stage using the keyboard. Furthermore, participants were asked to positively map the amoebas speed to engagement levels provided by the EEG apparatus. They were also instructed to add collision detection functionality for the amoeba and shrimp. The instructions provided also mentioned that the amoeba should move to a random position whenever two objects collided. During the session two task, participants created a hybrid neurofeedback application that featured an amoeba and bacteria object as shown in figure 4-2. This also consisted of supporting interactions via the keyboard and EEG apparatus to control the amoeba sprite. Participants were instructed to develop an application that detected a collision between the amoeba and bacteria sprite. Instructions to increase a score variable each time a collision occurred were also provided. They were also instructed to move the objects back to their starting positions when the collision occurred. Participants were asked to design the application so that speed was positively

91 influenced when the BCI device detected high relaxation levels. Additional instructions included moving the bacteria towards the center of the stage when high levels of relaxation were detected and positively mapping beta band power session scores to jitter motion. Beta EEG frequency bands have been associated with attention and alertness [100]. This program featured two sprites, six scripts, and three decision points.

Figure 4-3. Screenshot of the stage component during session three task.

During the session three pre-task participants created a hybrid neurofeedback application featuring an amoeba and multiple bacteria objects. Participants were asked to design the applications so that it supported control from both the BCI device and keyboard. The goal of the application was to use the keyboard to move the amoeba to the right and left sides of the stage to catch falling bacteria. Participants were instructed to design the application so that amoeba moved faster during high levels of relaxation. They were also instructed to make the bacteria fall faster during high levels of relaxation. Participants were asked to create a score variable that increased when collisions between the bacteria and amoeba occurred. During the session three task, participants created a hybrid neurofeedback application featuring an amoeba, multiple bacteria characters and multiple shrimp objects as shown in figure 4-3. The objective of the neurofeedback application was to eat as many bacteria objects as possible while avoiding the shrimp

92 objects. To create this applications participants were asked to design features that supported control via the EEG apparatus and keyboard. Participants were instructed to positively map the speed of the amoeba to engagement levels. They were also instructed to create duplicates of the shrimp objects when high levels of relaxation were detected. The instruction also asked the participants to make the shrimp clones disappear when high beta band power session scores were sustained. Other instructions included reducing points when the amoeba collided with a shrimp sprite, returning the amoeba to the center of the stage when all points were lost, and increasing points when the amoeba collided with bacteria. This program featured three sprites, nine scripts, and five decision points. 4.3 Methodology

Table 4-1. Learning barriers coding scheme. Code Description Design User does not know what they want NeuroBlock to do Selection User knows what they want to do but does not know what to use Coordination User knows what blocks to use but does not know how to make them work together Use User knows what blocks to use but does not know how to use them Understanding User thinks they know how to use components together but the system did not do what was expected Information User has an idea of why their program did not do what they expected, but does not know how to check

To gain a preliminary understanding of NeuroBlock from an end-user programming perspective a commonly used learning barriers coding scheme was used to code audio recordings of the think-aloud sessions [101] (Table I). The Atlas.ti software was used to code screen recordings captured during each session. Using this approach allowed researchers to analyze participants’ verbalizations along with visual information from the NeuroBlock interface. For example, think-aloud verbalizations related to barriers often occurred soon after the mouse was idle for an extended period. Two researchers coded small sections of the screen recordings and compared their results to reach an agreement. Afterwards one coder finished coding.

93 4.4 Results

This work aims to inform future designers and developers of novice-friendly BCI software platforms. To assist with accomplishing this, a user study was conducted. Information related to programming self-efficacy, BCI self-efficacy, and usability was collected via surveys during the study. Efficiency data (time on task) was collected via a timer controlled by a study facilitator. Effectiveness was determined via post analysis of the block code. The R statistical analysis software package was used to analyze the collected quantitative data. The following sections provide details on the results of the evaluation of NeuroBlock. 4.4.1 Session One

4.4.1.1 Programming efficacy

Table 4-2. Programming self-efficacy questions. Q1 I can use variables, constants and data types. Q2 I can use logic structures: sequence Q3 I can use logic structures: decision (IF) Q4 I can use logic structures: iteration (loop) Q5 I can use methods or procedures Q6 I can use arrays Q7 I can use encapsulation, inheritance and polymorphism

To better understand participants’ programming background, their programming self-efficacy was evaluated using seven questions (Table 4-2). These seven questions measure their confidence of core programming concepts such as variables, logic structures (e.g. sequence, IF statements, loops), methods, arrays, and encapsulation. Each question was rated on a seven point Likert scale. According to the programming self-efficacy assessment, participants were most confident with using concepts related to variables, constants, and data types (M=5.67, SD=1.28) Fig. 4-4. Participants were least confident with concepts related to encapsulation, inheritance, and polymorphism (M=2.37, SD=1.83). Across all seven questions participants reported an average programming score of 4.35 (SD=1.77).

94 Figure 4-4. Session one average programming self-efficacy scores for each question.

4.4.1.2 Learning barriers

Figure 4-5. Session one barriers.

Learning barriers faced by participants while using NeuroBlock were also investigated. As shown in figure 4-5, understanding barriers were the most common barriers during session one. The number reported reflects the total number of barriers that occurred

95 for all participants. This barrier was responsible for 42% (21) of all session one barriers. Selection barriers also occurred frequently. Selection barriers were accountable for 34% (17) of all session one barriers. The third most frequent barriers were coordination barriers. These barriers were liable for approximately 14% (7) of session one barriers. The least frequent barriers during session one were use and information barriers. These results suggest that when participants are initially introduced to the system they have issues evaluating the program’s behavior. This type of barrier was often related to user-generated code errors that caused the participants’ program to exhibit unexpected behavior. For example, after an order of operation error one participant stated: Nothing is happening and I am pressing the arrows ... I am thinking I did the program wrong. In other cases, understanding barriers were related to participants misunderstanding how to use affective state feedback to debug motion commands that are linked to affective state information acquired from the BCI device. For instance, one participant did not realize low levels of engagement were stopping the amoeba from moving and stated: I think I followed the instructions, but I am not sure why it is not doing anything. The results also suggest that some users had issues selecting blocks related to more abstract concepts relative to basic block operations. This often occurred when participants were attempting to add a random function to their program. NeuroBlock has a random block that returns a random number between two user-defined integers. For example, while trying to accomplish this, one participant stated: Do I type random? ... It’s probably here somewhere ... Where are you random? 4.4.1.3 Usability

An additional goal of this work was to investigate the usability and learnability of NeuroBlock. To assist with this investigation, the System Usability Scale (SUS) was adopted. The SUS is a commonly used usability measure that consists of a 10-item

96 Table 4-3. SUS questions Q1 I think that I would like to use this system frequently Q2 I found the system unnecessarily complex Q3 I thought the system was easy to use Q4 I think that I would need the support of a technical person to be able to use this system Q5 I found the various functions in this system were well integrated Q6 I thought there was too much inconsistency in this system Q7 I would imagine that most people would learn to use this system very quickly Q8 I found the system very cumbersome to use Q9 I felt very confident using the system Q10 I needed to learn a lot of things before I could get going with this system

Figure 4-6. Session one average SUS scores for each question.

Likert scale questionnaire (Table 4-3). Each of the items are rated on a 5-point scale. After applying the conversion steps discussed in [95], SUS scores are expressed on a 100-point scale. The reliability of the scale has been demonstrated in numerous studies [95]. According to a previous study that computed SUS scores of over 2,000 studies, an average SUS score is 69.69. Furthermore, scores greater than 70 are considered above average. Participants reported SUS scores after each session. Participants reported an

97 average SUS score of 75.50 (SD = 12.07, min = 45.0, max = 92.5) during session one. This score ranks as above average [45]. Fig. 4-6 shows the average score for each of the SUS questions. This figure shows the SUS results after scores are converted to range from 0-4 as discussed in [95]. 4.4.1.4 Self-Efficacy, effectiveness, and efficiency

Figure 4-7. Session one effectiveness.

Table 4-4. BCI self-efficacy questions Q1 ...if there was no one around to tell me what to do as I go. Q2 ...if I had never used software like it before. Q3 ...if I had only the software manuals for reference. Q4 ...if I had seen someone else using it before trying it myself. Q5 ...if I could call someone for help if I got stuck. Q6 ...if someone else had helped me get started. Q7 ...if I had a lot of time to complete the job for which the software was provided. Q8 ...if I had just the built-in help facility for assistance. Q9 ...if someone showed me how to do it first. Q10 ...if I had used similar software before this one to do the same job.

Self-efficacy is an individual’s belief in his or her ability to perform a task [102]. Previous research has suggested that a person’s belief in their ability to perform a task may impact their performance [103]. One specific type of self-efficacy related to technology is computer self-efficacy. Computer self-efficacy focuses on a persons belief in their

98 Figure 4-8. Session one BCI self-efficacy scores for each question.

Figure 4-9. The relationship between BCI self-efficacy and time on task during session one. capability to use a computer. This work aims to leverage previous computer self-efficacy research to investigate users’ belief in their ability to create applications that leverage brain signals. To accomplish this, a slightly modified version Compeau and Higgins validated scale was used (Appendix A.4). The modifications made the questions specific to BCI applications. Participants answered these questions on a seven-point scale. The

99 questions shown in Table 4-4 were preceded by, “I could develop computer applications that read and interpret users brain signals”. Participants reported numerically different scores before (M = 42.9, SD = 8.97) and after (M = 55.21, SD = 7.78) using NeuroBlock. A paired t-test also showed that there was a significant difference between pre and post BCI self-efficacy score (t = 8.03, p<0.001). The results to each of these questions are shown in Fig. 4-8 . These results suggest that NeuroBlock may have positively impacted participants perception of their ability to create neurofeedback applications. The session one task was completed in 781.1 seconds on average (SD = 239.88, min = 466, max = 1519). As shown in Fig. 4-7, 27.5% of participants completed the task without assistance and 30% completed the task with assistance from the facilitator. During session one 42.5% of participants were unable to complete the task. This research also investigated the relationship between pre BCI self-efficacy and the time participants spent completing the neurofeedback application development task. As Fig. 4-8 shows, pre BCI self-efficacy was a significant predictor of the amount of time participants took to complete the session one task (linear regression: F(1, 21) = 6.148, β = -10.206, R2 = 0.226, p=0.021). This suggests that pre BCI self-efficacy had key implications for the amount of time it took participants to complete the session one task. It is important to note that the time spent addressing participants questions is not included in this analysis. Therefore, the results only reflect the time the participants spent working on the task without assistance. 4.4.1.5 Interviews

Participants responses during post-session interviews were used to better understand the end-user programing barriers observed during the screen recording analysis. The session one interviews gave insight on participants first impressions of NeuroBlock. It also provided details on how the participants initially perceived BCI. Five categories emerged from the session one interviews: self-regulation,interacting with blocks, ease of use, affective-based control and motion, and visual appeal of affective feedback. Neurofeedback

100 Table 4-5. Session one selected responses related to the five main categories. Category Participant quote Self-Regulation “Being able to see alpha and beta signals and engagement signals ... I guess it was kind of difficult to try to manipulate them myself.” Interacting with blocks “Honestly, it was, sometimes I would try to move a block, and I’d move an individual part within the” block.” Ease of Use “The system is very easy to use. The program, it’s very, very straightforward.” Affective-Based Control and Motion “I thought it was really cool how you could use your brain to make it move.” Visual Appeal of Affective feedback “It was cool seeing the level of brainwaves and stuff interact with the computer.” applications often require users to voluntarily manipulate their mental state to achieve a goal. This process can also be defined as the voluntary self-regulation of signals from the central nervous system [44]. Many participants identified self-regulation as challenging during the session one interviews. For example, one participants stated: I guess it was kind of difficult to try to manipulate them myself. I would think I am relaxed but all of a sudden the signals would be all over the place.. So umm it was amazing to see but also kind of frustrating. Although participants were familiar with basic concepts related to programming logic, none of the participants had experience using BCI devices. Consequently, participants often commented on the novelty of self-regulation during session one. One participant stated: Yeah, I thought it was pretty cool... In your everyday life, you’re not really forced to channel your relaxation like that in a way to accomplish a goal ... It was a challenge because you had to suppress anxiety while also staying relaxed so you that you could increase the score and eventually win the game. So I thought that the challenge was pretty enjoyable. Most of the interview questions were focused on identifying drawbacks of the system. However, participants often focused on self-regulation. When asked about aspects of the system that were bothersome one participant responded:

101 I’d say not really the system, it’s more like trying to control your brain, I guess. If it’s doing something, you can’t really control your level of engagement, I mean consciously. But other than that, there wasn’t anything else bothersome or anything. No specific instructions were given to influence how participants approached self-regulation. However, many participants attempted to develop various strategies to assist them with their self-regulation goals. One participants mentioned: When I was trying to move it up, I would look at the words. I’m like, Okay, how do you spell this? I try to concentrate or, Why is the amoeba going down whenever I’m trying to move it up? So I try to think of questions and I try to answer them. A second participant stated, “Like changing to relax more, like a couple deep breaths”. Although self-regulation may not directly impact the kinds of end-user programing barriers that novice programmers face, it is important to investigate how it may influence novice users interacting with a neurofeedback application development platform. Leveraging self-regulation with an interactive and user-friendly environment may engage new audiences with BCI. This approach could also support recent research on ways to improve the self-regulation of children who have suffered from multiple traumas [104]. Many of the responses provided by participants were related to their interactions with the block components provided in the system. The responses can be separated into two categories: Search and Utilization. Since session one was the first time many participants interacted with a system similar to NeuroBlock, many participants faced challenges while trying to find blocks. Participants mentioned these challenges even after watching the tutorial video and completing the pretest. One participant stated: I couldn’t remember where everything was like under the I dont remember what its call the control panel like with all the motion and events but I mean I can just click through it and eventually I would remember where everything was. A second participant reiterated this point stating:

102 The instructions would ask you to use blocks that you didn’t know for sure that were there, but it was nice, you knew that they were gonna be there, kind of thing. Although the participants stated they had issues recalling where blocks were, they seemed comfortable addressing this issue by browsing through the block categories. This observation is also supported by previous studies investigating visual programming environments [33]. Participants that seemed more comfortable with text-based languages also expressed frustrations with searching for blocks. Many of the responses were related to text-based entry. For example, one participant stated: Having your operators on a different page, instead of being able to just enter them on in the keyboard, so you have to navigate then to a different area then to get that. So having to always navigate to get to different variable names and stuff, instead of just being able to type them. This response is also supported by previous research investigating hybrid approaches to visual programming [105]. In general, participants expressed that they needed more practice before being able to efficiently locate all of the blocks. One participant particularly stated: Just that I would have to just gain familiarity with the various where all the things are located, the categories like the variables, the motion, the operators and everything like that. I just needed to get the feel of where everything was. But besides that, no, it was very intuitive. Responses related to block usage in the workspace area were also frequently shared. In this context block utilization focuses on the process of manipulating blocks in the workspace. Participants often mentioned how visual features of the blocks such as color and shape assisted them with creating their applications. For example, one participant shared: While using it umm it actually helps even if you don’t know what you doing and the way the block are shaped I guess in a way you know which goes where Its a block in a diamond shape you know its a boolean so you know if you use that .. Color coding helps to ... it helps you distinguish and even if you dont know what you’re doing you know what you’re doing.

103 Another participant seemed to be fond of their ability to modify the program once they realized they had made a mistake. Because I ran into something when I was doing the task, and I had forgotten to put a Forever Block, and I was going through all the stuff, and I was like, Oh, that was the one thing that I had forgotten. And I just had to move one thing, and it was that simple to fix. Not everyone reported the same experience related to using the blocks. A few participants reported having issues connecting blocks together. This often occurred when participants were building mathematical formulas. One participant stated: Sometimes when I was trying to put the multiplication thing when I was writing the program, there was a couple times where it went to where I didn’t want it to. Although the block-based design supported novice programmers, participants mentioned drawbacks that align with findings found in previous literature. Numerous participants commented that NeuroBlock was easy to use. While discussing what contributed to the ease of use participants mentioned a variety of features. Participants often compared the session one task to their experiences using Java. One participant stated: I don’t know, it was visually appealing, and I’m used to programming just blank, kind of boring pages of code. But this made it look like it’s easier to use, and it’s a little bit more simple and fun, I guess I’d say. A second participant stated: It was pretty easy, it is cool. I don’t know, it was a lot easier than programming ’cause all the statements were kind of the same as programming but a lot simpler. So that was cool, that was nice. One observation was that most automatically compared the system to text-based languages without being asked to. When this occurred, participants mostly mentioned the system was easy to use even with the issues discussed in the interacting with blocks section. Participants also shared their feelings about working with the Muse headband. One participant mentioned: I think it was pretty easy. I’d say not like when you think of a BCI, you think of a big helmet with a bunch of crazy wires, but it was kind of just like a

104 headband or something, which was pretty easy, and easy to turn it on by a button on the side, and you don’t really need a big set up. In general, participants commonly shared the ease of using both the hardware and software components of NeuroBlock. Statements related to participants sentiments towards affective-based control and motion were shared frequently. Specifically, these responses focused on participants’ perceptions of being able to influence objects using voluntary self-regulation. Responses related to affective-based motion, on the other hand, focused on how participants felt about the motion caused by self-regulation. These two types of responses were coded to better understand how participants perceived the input (BCI control) and output (visual feedback, motion) components of the BCI system. When asked about what they enjoyed the most about the session one task one participants stated: Well I never used anything like it before so I thought it was pretty interesting to use like a device like that and control you know something on the screen with just your brain. A second participant stated: It was pretty cool having to focus, trying to move it up and stuff, and it was also interesting seeing what I had to think of in order to get it to move up. Most of these responses were caused by the novelty of using a BCI device. Participants often used words such as cool, interesting, and fun to describe their first interactions with the system. Another participant shared: I just thought it was super cool that I could make it move based on how concentrated I was, that was a really neat experience. It made myself feel more involved in it, the amoeba would go up. That was pretty neat. Yeah. A different participant simply stated: I thought it was really cool how you could use your brain to make it move. Participants also reported enjoying the ability to use affective state blocks to drive the program. For example, one participant mentioned: Just having the alpha and beta waves inputted into the code, and be able to use those values in moving the amoeba.

105 Along with affective-based control, participants were also interested in the affective state feedback provided in the system. Many comments were related to the experience of seeing visualization reflecting their mental state for the first time. When asked about what she enjoyed about the task one participant mentioned: It was really cool seeing the levels of my brain as it was recording it, it was pretty cool seeing the levels of it. Trying to think, make it different by thinking, it’s pretty cool. One participant even mentioned additional ways he would like to use affective feedback stating: It was pretty cool having to monitor how concentrated I was ’cause I think that stuff is really cool, getting to see what I’m thinking or if I’m focused or not. That would probably be useful probably when I’m studying for a test or something. A second participant shared: I thought it was really cool just seeing the levels like that, seeing them change. How you were talking, I could see the differences in real time. That was pretty cool. One interesting observation is that participants seemed to be solely intrigued by the novelty of seeing a visualization mapped to their mental state. None of the participants commented on visual features specific to the design of the waveforms such as color or shape. As discussed in the following section, this interest would shift during the following sessions. 4.4.2 Session Two

4.4.2.1 Programming efficacy

Two participants were unable to attend their scheduled session two session. During session two participants reported the highest confidence for concepts such as variables (M = 5.92, SD = 1.12) 4-10. Participants reported the lowest confidence scores for concepts related to encapsulation, inheritance, and polymorphism (M = 2.82, SD = 1.79). Participants reported an average programming score of 4.9 (SD = 1.48). This accounts for all seven questions.

106 Figure 4-10. Session two average programming self-efficacy scores for each question.

4.4.2.2 Learning barriers

Figure 4-11. Session two barriers.

Similar to session one, understanding barriers were the most frequent barrier during the second session (Fig. 4-11). These barriers made up 73% (22) of all session two barriers. Coordination and information barriers were the least frequently encountered

107 barriers during session two. The re-occurrence of understanding barriers in session two suggest that issues with unexpected program behavior persisted. During session two these barriers were often related to participants forgetting to click the green flag to start testing the program. Prior to realizing this one participant stated: If the alpha is greater than 0.5 which right now it is it should move [user changes threshold values] ... It is not moving at all.

Figure 4-12. Session two average SUS scores for each question.

Although understanding barriers occurred participants reported a mean SUS score of 76.38 (SD = 11.38, min = 55.0, max = 97.50) which is also above average. Figure 4-12 shows the average score for each SUS question. 4.4.2.3 Self-Efficacy, effectiveness, and efficiency

BCI self-efficacy was also evaluated during session two. Participants reported different pre (M = 49.89, SD = 8.96) and post (M = 52.45, SD = 9.53) self-efficacy scores during this session. A paired t-test also showed a significant difference between these scores (t = 3.82, p<0.001). The result for each of the self-efficacy questions are shown in Fig. 4-13.

108 Figure 4-13. Session two average BCI self-efficacy scores for each question.

Figure 4-14. Session two effectiveness.

Similar to session one, these results suggest that NeuroBlock may have positively impacted participants’ belief in their ability to create neurofeedback applications. Session two tasks were completed on average in 797.6 seconds (SD = 219.66, min = 481, max = 1466). As shown in Fig. 4-14, 50% of participants completed the task with no

assistance from the researcher, 23.6% of participants required assistance to complete the task, and 26% of participants failed the task. The relationship between pre BCI self-efficacy and the time it took participants to build a neurofeedback application was also evaluated during session two. An initial evaluation showed that BCI self-efficacy was not a significant predictor of the amount of time it took participants to create the neurofeedback application (linear regression: F(1, 26) = 0.8584, β = -4.748, R2 = 0.226, p = 0.362). However, after screening the data for extreme time-on-task values, BCI self-efficacy was a significant predictor of the time it took participants to complete the task (linear regression: F(1, 23) = 4.415, β = -6.819, R2 = 0.161, p = 0.047). Fig. 4-15 illustrates these results. These findings suggest that pre BCI self-efficacy may have influenced the time it took participants to complete the session two task. Although these results complement observations from session one, further analysis is needed to confirm these findings.

Figure 4-15. The relationship between BCI self-efficacy and time on task during session two.
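The screening-then-regression analysis described above can be illustrated with a short sketch. The values below are hypothetical stand-ins for the study's measures, not the actual data.

    import numpy as np
    from scipy import stats

    # Hypothetical pre-task BCI self-efficacy scores and time on task (seconds).
    efficacy = np.array([42, 45, 48, 50, 52, 55, 58, 60, 63, 66])
    seconds = np.array([1100, 980, 1020, 900, 870, 820, 760, 700, 1900, 640])

    # Screen out extreme time-on-task values (here, beyond 2 SD of the mean)
    # before fitting, mirroring the screening step described above.
    keep = np.abs(seconds - seconds.mean()) <= 2 * seconds.std()
    fit = stats.linregress(efficacy[keep], seconds[keep])
    print(f"beta = {fit.slope:.2f}, R^2 = {fit.rvalue ** 2:.3f}, p = {fit.pvalue:.3f}")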

4.4.2.4 Interviews

As shown in Fig. 4-6, three new categories emerged from the session two interviews: mounting the BCI device, familiarity, and user-defined factors and thresholds.

Table 4-6. Session two selected responses related to the five main categories.

Mounting the BCI Device: “The BCI headset somewhat took a little bit of adjustment to work out enough to the point where I’d rather not take it off than to readjust it.”

Familiarity: “This time also it was easier compared to the one before it because initially I wasn’t familiar with the system.”

User-Defined Factors and Thresholds: “Since alpha is only between zero and one, I’m not gonna do a huge number ’cause then it does nothing.”

The session two task was a bit more involved, as previously discussed in the pre-task and task section. In addition, the novelty of the BCI aspect of the system began to have less of an impact. Instead, participants seemed to focus more on the functional aspects of the system. One example of this was that participants began to notice the sensor quality feedback provided in the interface. Based on notes collected while observing participants and the analysis of the screen recordings, participants often disregarded sensor quality feedback during session one and instead focused on how cool the affective state visualizations and object motion were. During session two participants seemed to begin viewing the system more as a tool than a cool toy. The goals of the task were slightly more difficult, which resulted in the need for more self-regulation while testing the neurofeedback applications. Participants often checked the sensor quality feedback while attempting to debug their applications. Consequently, participants expressed concerns with mounting the BCI device. For example, one participant stated: I thought that the BCI device is sometimes fickle, and it’s kind of hard to mount. And I couldn’t get the sensors to be green quite all the time. Participants also commented on how this issue influenced their effectiveness. One participant shared the following in reference to time consumption: Well, I never really knew because the signals would change a lot of times and show up green, or go up black, and then show up yellow. I never really knew if it was completely positioned correctly. That took a while each time for me to try to position it.

A common trend was participants blaming themselves for the issue rather than the BCI device. One participant stated: So that was a little annoying because I wasn’t sure if it was a fault on my part, if it was the BCI just not working correctly, picking up signals. I always just assumed it was on my part, I just would always try to reposition it. These self-blaming types of responses were more common with females, which supports observations reported in previous BCI literature [106]. Although the BCI hardware is beyond the scope of this work, it is important to note that this observation supports the goal of providing participants feedback about the current state of the BCI apparatus. Otherwise, end-user developers may run into invisible hardware issues, which may hinder the overall experience of novice users building neurofeedback applications. Along with paying closer attention to the sensor quality feedback, participants were also more familiar with the interface. As discussed previously, participants expressed issues interacting with blocks in the interface during session one. During the session two interviews participants began to express sentiments related to familiarity. One participant stated: I think it’s just because I’ve done it before, like last time. And last time I forgot where some of the stuff were, so usually I spend a lot of time trying to figure it out what does what, and where to find the ’If Bounce’ thing. But now I knew exactly where it was, so it made it quicker this time. A second participant stated: Well, I was already familiar with the system. So the second I sat down at the computer, I already knew what I was gonna do. Participants reported that their familiarity with the system made them more effective. For example, one participant stated: Like I was a lot more fluent with it. So I didn’t have to ask as many questions. Other statements seemed to reflect participants’ trust in the system. For example, one participant shared: Especially now that I have more experience with it, and kind of understanding exactly what each level will do to it, and kind of what to expect from it.

In general, participants’ familiarity with NeuroBlock during session two suggests that they had a positive perception of NeuroBlock’s learnability. This is also supported by the drop in observed selection barriers during session two.

Figure 4-16. Simple script.

One of the session two subtasks required participants to define factors and thresholds related to relaxation and attention. Factors were used to manage the intensity of an affective state’s influence. Fig. 4-16 shows an example of a simple mathematical formula created using blocks. In this example, the user created a factor of -1 and multiplied it by the value of engagement. This value greatly influences how shifts in the participant’s affective state affect the object’s motion. While completing the neurofeedback application, participants often adjusted these factors until a desired effect was acquired, as illustrated in the sketch below.
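To make the role of a factor concrete, the following sketch (hypothetical Python, not the NeuroBlock runtime) mirrors the Fig. 4-16 formula: a user-defined factor scales, and with a negative value inverts, the influence of the engagement reading on a sprite's position.

    FACTOR = -1  # user-defined; -1 reverses the direction of the effect

    def update_y(y, engagement):
        """Shift the sprite by FACTOR * engagement each frame (engagement in [0, 1])."""
        return y + FACTOR * engagement

    y = 0.0
    for engagement in (0.2, 0.5, 0.8):  # simulated affective state readings
        y = update_y(y, engagement)
    print(y)  # -1.5: rising engagement pushed the sprite steadily downward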

During the session two interviews participants often described this process. One participant stated: It wanted me to select a value that was high, and it didn’t say what specifically is high or low. So I originally put it, I was thinking like maybe 0.75 something like that, but I decided ultimately to go with 0.6. And I left it as that, and when I went to try the program, I found it was kind of... I mean it wasn’t easy to get it to 0.6, like I set it to. So I decided to change it to 0.5, and that seemed to work. A second participant stated: I liked the freedom that I got with it. When it was high, I can make it move however fast I wanted, and I can determine what was high and what was low for all the levels of activity. The ability to design the influence seemed to engage the participants. This observation supports the goal of allowing users to create custom feedback applications that leverage data provided by the BCI device. Users also shared their experience of controlling the direction of the objects using factors. For example, one participant stated: It was fun figuring out how to, making it move to the left, I think I pretty much figured out, there was more than one way to approach it. How I did it, since it’s the alpha was changing the positive, I just made it negative so it would go to the left. One important point shown in the response above is the connection the participant made with the affective state data. Although the nature of the data provided by the BCI device is different from traditional input modalities such as mice and keyboards, NeuroBlock allowed the participant to use the BCI data to generate similar output. The participants also did not express much trouble figuring out how to create custom factors and thresholds to produce the desired effect. Users also shared how they used the objects’ motion as a way to confirm the impact of factors and thresholds. One participant stated: Once I changed it and I could visually see them moving in larger directions, I guess I, I don’t wanna say it made me happier, but, it made me feel like I was actually on the right track. The participant continued by expressing how the feedback boosted her confidence, stating: And so I was like, Okay, I do know what I’m doing. Although user-defined factors and thresholds are basic components of neurofeedback applications, they are fundamental to getting started with making applications that leverage information about the user’s affective state. According to the session two interviews, NeuroBlock supported participants in accomplishing subtasks involving these components.

4.4.3 Session Three

4.4.3.1 Programming efficacy

Three participants were unable to attend session three. The two participants who were not able to attend session two were also excluded from this session. During session three, participants reported the highest ratings for concepts related to variables, constants, and data types (M = 6.03, SD = 1.15) (Fig. 4-17). Similar to sessions one and two, concepts related to encapsulation, inheritance, and polymorphism received the lowest

scores (M = 2.67, SD = 1.88). Across all seven questions, participants reported an average score of 5.0 (SD = 1.56).

Figure 4-17. Session three average programming self-efficacy scores for each question.

4.4.3.2 Learning barriers

Figure 4-18. Session three barriers.

As shown in Fig. 4-18, understanding barriers were also the most frequently encountered barriers during session three. These barriers were responsible for 54% (12) of barriers during session three. Following understanding barriers were selection barriers, which accounted for 27% (6) of session three barriers. Coordination and information barriers were the least common barriers during session three. Similar to sessions one and two, the understanding barriers observed in session three were related to participants observing unexpected behavior related to objects’ movements. After using an incorrect motion block, one participant stated: For some reason it won’t go past the center point [of the stage]. Although the participant needed blocks in the motion category, they had an issue finding other required blocks, which were in different categories. The reemergence of selection barriers was related to issues finding blocks that cause the amoeba to exhibit a jittery motion. While attempting to implement this feature, one participant stated: It has to be in the motion category but for some reason I can’t remember any of these being a thing. However, these instances were rare. Although learning barriers occurred, participants reported an above-average SUS score of 81.43 (SD = 9.59, min = 55.0, max = 97.50) after completing session three. Figure 4-19 shows the average score for each SUS question.

4.4.3.3 Self-Efficacy, effectiveness, and efficiency

During session three, participants’ pre (M = 52.06, SD = 9.14) and post (M = 53.8, SD = 10.2) BCI self-efficacy scores differed. A paired t-test showed that there was a significant difference between these two scores (t = 2.813, p = 0.008). The results for each of the BCI self-efficacy questions are shown in Fig. 4-21. Similar to sessions one and two, these results suggest that further exposure to NeuroBlock positively impacted participants’ belief in their ability to create neurofeedback applications.

Figure 4-19. Session three average SUS scores for each question.

Figure 4-20. Session three effectiveness.

The session three task was completed in 1,018 seconds on average (SD = 261.73, min = 691, max = 1845). As shown in Fig. 4-20, 46.7% of participants successfully completed the task with no assistance and 20% of participants completed the task with assistance. The remaining 33.3% of participants did not complete the task successfully.

Figure 4-21. Session three average BCI self-efficacy scores for each question.

Figure 4-22. The relationship between BCI self-efficacy and time on task during session three.

As shown in Fig. 4-22, BCI self-efficacy was not a predictor of the amount of time participants took to complete the session three task (linear regression: F(1, 20) = 0.2318, β = 2.794, R2 = 0.011, p = 0.64). This may suggest that after extensive exposure to NeuroBlock, the relationship between participants’ self-efficacy and performance diverged from

what was observed during the previous sessions. Numerous previous studies investigating the relationship between self-efficacy and performance report results from only a single session. Further research is needed to better understand how continued exposure to an environment influences the relationship between self-efficacy and performance.

4.4.3.4 Interviews

Many concepts shared during the previous two sessions did not appear during session three. However, mounting issues were frequently shared during this session. One participant stated: I thought sometimes, taking on and off the BCI device, sometimes it was difficult to get the sensors adjusted just right, or have to wait a couple seconds for it [to] connect to the computer. A second participant stated: So I was trying to focus but it didn’t really kind of work. So I didn’t know if it was my part or the device. As mentioned previously, the hardware component is beyond the scope of this work but should be investigated further as researchers investigate ways to extend BCI to the general public.

4.4.4 Summary

Figure 4-23. Total barriers encountered for each session.

Figure 4-24. SUS scores for each session.

In total, the 40 study participants encountered 101 barriers across all three sessions. Participants encountered an average of 3 barriers during the study. The total number of barriers experienced across all sessions ranged from 0 to 11 with a standard deviation of 1.96. As shown in Fig. 4-23, the most common barriers across all sessions were understanding barriers. These barriers made up 54% of all barriers experienced throughout the entire study. Selection barriers were the second most common, responsible for 24% of all barriers encountered throughout the study. The results in Fig. 4-24 suggest that NeuroBlock was perceived as usable during all three sessions.

CHAPTER 5
SUMMARY AND FUTURE DIRECTIONS

The goal of this work was to address the lack of novice-friendly Brain-Computer Interface (BCI) application development tools by investigating a block-based programming (BBP) approach to BCI. Specifically, this work focuses on the design, implementation, and evaluation of a system that allows novice programmers to build basic neurofeedback applications. The system design was inspired by previous literature in the areas of visual programming languages and BCI. It was also influenced by common approaches used by manufacturers of consumer-grade BCI devices. The system also uses open-source web libraries that allow users to build neurofeedback applications within a web browser. The modular design covered in Chapter 3 allows the system to be easily extended to support additional functionality in the future. To better understand novice programmers’ interaction with the system, this work focused on an evaluation of learning barriers, usability, and the relationship between self-efficacy, effectiveness, and efficiency (Chapter 4). This evaluation consisted of observation of participants while they built neurofeedback applications during three separate sessions. Along with reporting quantitative results, this evaluation also provides a qualitative analysis of participants via post-session interviews. This analysis used Grounded Theory to identify factors that may have influenced how participants interacted with the system. This chapter revisits the research questions presented in Chapter 1. Results from the evaluation reflecting the six learning barriers in end-user programming systems were used as evidence to address the first research question. The second research question, concerning usability, was addressed using the SUS. The third question, involving the relationship between self-efficacy, effectiveness, and efficiency, was addressed using a combination of descriptive statistics and linear regression analysis. The qualitative analysis of post-session interviews provides further evidence that may inform future designers of novice-friendly BCI platforms. In addition

to discussing how these results address the three research questions, this chapter discusses the contributions of this work. Limitations and future work are also examined in this chapter.

5.1 Research Questions Revisited

What barriers do novice programmers face when developing neurofeedback applications using a block-based programming approach: Selection and use barriers have been observed in previous investigations of visual programming environments. Although our investigation presents a similar observation of selection barriers during session one, these barriers seemed to fade during the following sessions. Many participants spent time clicking through each category with no regard to the names on the labels in the command palette during session one. Often, this behavior caused participants to completely overlook the targeted block even after selecting the correct category. This sequence of events commonly led to selection barriers during session one. The qualitative analysis of the session two barriers demonstrates that participants had gained familiarity with the system by session two. The reduction of selection barriers also supports this idea. To address selection barriers, additional features should be integrated into the system. One example of these features would be the option to search for blocks. Often, participants knew what block they wanted to select but had trouble finding it. This evaluation presented only a limited number of blocks for building a basic neurofeedback application. However, future systems that are better equipped to build various types of BCI applications will likely have many more blocks. Addressing selection barriers will be vital going forward to ensure that future BCI development systems designed for novices provide users with the needed support. Although use barriers were not observed often, understanding barriers were. Understanding barriers were often caused by basic misunderstandings of how to place blocks together to create components such as mathematical formulas. Although participants knew how to use the blocks, making them work together correctly to produce the desired output was sometimes troubling. The

“interacting with blocks” responses that emerged from analysis of the session one interviews support this suggestion. These responses seem to suggest that basic block manipulation tasks, such as placing blocks within each other, could sometimes be troubling. This may have resulted in participants being less likely to continue debugging. Consequently, understanding barriers were eventually encountered. Mathematical formulas that leverage affective state data to generate output used by a neurofeedback application can become extremely complicated. In addition, participants often defaulted to attempting to type out commands when they could not find a desired block. The identification of this barrier suggests that a hybrid design that uses text for certain tasks, such as creating mathematical formulas, may be better for BCI development environments targeting novices. How do novice programmers perceive the usability of a block-based neurofeedback development tool: The SUS was used to assess the usability of the block-based neurofeedback development tool presented in this work. The SUS results show that the usability of the system was rated above average. The ease-of-use category that emerged from the session one interviews supports the SUS results. Although participants sometimes reported having difficulty interacting with blocks, these issues seemed mainly related to formula manipulation. Participants seemed to not have much trouble interacting with other components of the system. Many participants made comments about the colors and shapes of the blocks assisting them with learning which blocks can work with each other. The system also received an above-average rating for usability during session two. The familiarity category that emerged after analyzing the post-session interviews also supports this finding. Participants seemed to be more comfortable finding blocks during session two. Consequently, selection barriers that were present during session one were reduced during session two. It is important to note that this reduction occurred even though the difficulty of the task increased between sessions one and two; this suggests that the learnability of the system was positive. Learnability was also analyzed using specific

learnability questions from the SUS. Results from this analysis show that NeuroBlock received an average score on the learnability-specific SUS questions. The usability of the system was also rated above average during session three. Although participants reported issues with the hardware devices, they seemed to have a positive perception of the usability of the software. However, the design of the software allows NeuroBlock to be extended to various hardware devices in the future. One key insight concerning usability is the need for a hybrid approach that gives users the ability to type certain components. This is especially true when it comes to mathematical formula manipulation. With this hybrid approach, it would be possible for users to simply type out an equation instead of having to find and connect multiple blocks. Not only could this increase efficiency, it may improve the overall experience of the system. This would also be helpful in the case of searching for blocks. Having a search interface where users can find blocks by simply typing the desired label may remove some of the selection barriers that occurred. This approach could also have a positive impact on usability. What is the relationship between BCI self-efficacy and novice programmers’ ability (efficiency and effectiveness) to develop neurofeedback applications using a visual neurofeedback development tool: Previous research has shown that there is often a relationship between self-efficacy and performance. This work aims to investigate whether this is also true for BCI self-efficacy. Efficiency was measured by investigating the amount of time participants took to complete tasks during each of the three sessions. An analysis of these measures shows that BCI self-efficacy was a predictor of efficiency during session one. The relationship between BCI self-efficacy and efficiency shows that as confidence in their ability to build applications that leverage neurophysiological data increased, the time spent on the task decreased. Similar findings are presented in previous studies investigating self-efficacy [107, 108]. The relationship between BCI self-efficacy and efficiency was also observed during session two. Results from the first two sessions seemed to align with what has been observed previously. However,

the same relationship was not observed during session three. Multiple factors could have influenced this shift. One observation that appeared occasionally during the post-interviews was self-doubt. Although this theme did not occur often, it could have impacted the session three analysis. One example of self-doubt can be observed in the following statement from a participant: It was definitely easy to use umm it was while using it umm it actually helps even if you dont know what you doing and the way the block are shaped I guess in a way you know which goes where Its a block in a diamond shape you know its a boolean so you know if you use that .. Color coding helps to it helps you distinguish and even if you dont know what youre doing you know what youre doing. In this quote the participant expresses that the system is easy to use. The participant also identifies helpful aspects of the system’s visual design. Interestingly, the participant also stated “even if you dont know what you doing you know what youre doing”. Statements like this suggest that even though participants were not confident in their ability to build neurofeedback applications, they were still able to complete the task efficiently. Further investigation is needed to understand whether this behavior influences the relationship between BCI self-efficacy and efficiency. Many of the previous investigations of self-efficacy and performance include only a single-session analysis. However, observations presented by this study suggest that repeated exposure to a tool may influence this relationship. Further investigation is needed to better understand how repeated exposure may impact this relationship.

5.2 Contributions

The main contributions of this work are:

• The design and implementation of the first and only BBP environment for neurofeedback application development. This work presents an approach that allows users to create dynamic neurofeedback applications without worrying about syntactical rules. It also provides an alternative to traditional BCI software platforms for researchers interested in investigating neurofeedback application development with users that lack strong technical backgrounds.

• An evaluation of a BBP neurofeedback development environment. Findings from this evaluation showed that novice programmers were able to create neurofeedback applications with functions similar to programs featured in previous BCI literature [42, 96].

• The evaluation also found that the mathematical formulas used to create these neurofeedback applications often led to learning barriers. This finding suggests that a hybrid approach that integrates text entry may assist users with building out formulas featured in neurofeedback applications (a sketch of such a hybrid formula evaluator follows this list).

• The identification of frustration caused by the BCI device while building the neurofeedback applications suggests that emulated neurophysiological data may be useful for similar systems in the future.
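As a toy illustration of the hybrid text-entry approach suggested above (an assumed design sketch, not an existing NeuroBlock feature), a typed formula such as “(y + 0.2) * alpha” could be parsed and evaluated safely against the current sprite and BCI readings:

    import ast
    import operator as op

    OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul,
           ast.Div: op.truediv, ast.USub: op.neg}

    def eval_formula(text, env):
        """Evaluate a small arithmetic formula against named readings in env."""
        def ev(node):
            if isinstance(node, ast.BinOp):
                return OPS[type(node.op)](ev(node.left), ev(node.right))
            if isinstance(node, ast.UnaryOp):
                return OPS[type(node.op)](ev(node.operand))
            if isinstance(node, ast.Constant):  # a numeric literal
                return node.value
            if isinstance(node, ast.Name):      # a variable such as y or alpha
                return env[node.id]
            raise ValueError("unsupported expression")
        return ev(ast.parse(text, mode="eval").body)

    # A typed formula replacing several nested blocks:
    print(eval_formula("(y + 0.2) * alpha", {"y": 1.0, "alpha": 0.5}))  # 0.6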

5.3 Limitations

The findings presented in this work are derived from a multi-session study with 40 participants. However, a comparative study that features NeuroBlock and alternative BCI software platforms is not included in this work. This is mainly due to other platforms’ focus on the signal processing aspect of the BCI pipeline. Given that there are few BCI software platforms similar to NeuroBlock, this work takes an exploratory approach. Currently the system only supports three affective states. Going forward, it will be vital to implement ways to support several readings from the BCI device. Additionally, this work does not focus on the role that gender may play in BCI self-efficacy. However, further investigation is needed, as previous investigations of self-efficacy have observed gender differences. Although this work presents a novel approach to neurofeedback application development, further investigation is needed to confirm the appropriateness of this approach for more complex applications.

Despite these positive results, the screen capture analysis showed that formula manipulation was a pain point for participants. Going forward, a hybrid formula manipulation feature, which may address the observed understanding barriers, will be integrated. To address the issue of participants disregarding affective state feedback, methods to add the feedback to the stage area will be investigated. This may ensure

that novice programmers gain a better understanding of how affective state information is influencing their application. The main goal of this work is to expand the reach of BCI technology. To assist with this, an online version of this tool will be deployed that follows a model similar to tools such as Scratch and Online Python Tutor [109].

5.5 Conclusion

This work presents NeuroBlock, a block-based programming approach to neurofeedback application development. It discusses the NeuroBlock system design and presents a study that investigates learning barriers and the system’s usability. The study shows that novice programmers are initially prone to understanding and selection learning barriers. However, selection barriers decreased after the initial session. Further, participants rated the usability of NeuroBlock above average. Responses related to familiarity that emerged from a qualitative analysis of post-session interviews also support the decline of selection barriers. Responses related to interactions with the blocks provide insight into how understanding barriers occurred. In general, this initial evaluation demonstrates how block-based programming may be leveraged to lower the barrier of entry to BCI technology and provides insights to designers of similar systems in the future.

APPENDIX A
STUDY PROTOCOL AND MATERIALS

This appendix presents the procedures and materials used during the study.

A.1 Study Procedures

A.2 Recruitment Flyer

A.3 Screening Questionnaire

A.4 BCI Self-Efficacy Survey

A.5 Programming Self-Efficacy Survey

A.6 Session One Pre-Task Instructions

The goal of this task is to create a neurofeedback application featuring an amoeba sprite that moves upward as you become more relaxed. The alpha EEG frequency has been linked to mental states of relaxation in previous research. We will leverage this frequency band to assist us with creating our neurofeedback application. The game is won when the player successfully moves the amoeba to the top of the stage by self-regulating their relaxation level. The instructions below will assist you with creating this neurofeedback application. If you are confused at any point during the task, please raise your hand and the study facilitator will assist you.

Amoeba

1. Add an “amoeba” sprite.

2. Create a “when the green flag is clicked” event.

3. Create a variable named “health” for the “amoeba” sprite.

4. Create a “forever block” that starts when the green flag is clicked.

5. Create an “If on edge, bounce” block within the “forever block” created in step 4.

6. Create an “If block” that checks whether Alpha is greater than 0.5.

7. Place the “If block” created in step 6 inside of the “forever block” created in step 4.

8. Inside of the “If block” created in step 6 do the following:

(a) Set y to “(y + 0.2) * 1”. (b) Set the “health” variable to 25.

9. Mount the BCI device and test the application by trying to move the amoeba to the top of the stage using your level of relaxation.
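For readers unfamiliar with the block notation, the following Python sketch (an editorial illustration, not part of the original study materials or the NeuroBlock runtime) summarizes the script assembled in steps 4-8: every frame, if the alpha reading exceeds the 0.5 threshold, the amoeba drifts upward and its health is reset.

    ALPHA_THRESHOLD = 0.5
    y, health = 0.0, 0

    def read_alpha():
        """Placeholder for the alpha-band reading supplied by the BCI device."""
        return 0.7

    for _ in range(10):              # stands in for the "forever" block
        if read_alpha() > ALPHA_THRESHOLD:
            y = (y + 0.2) * 1        # step 8a: drift upward while relaxed
            health = 25              # step 8b
    print(y, health)                 # y reaches about 2.0 after ten frames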

A.7 Session One Task Instructions

The goal of this task is to create a hybrid neurofeedback application featuring an amoeba sprite. In this application, users will be able to use the keyboard and a BCI device to control the amoeba sprite. The goal of the game is to use the keyboard to move the amoeba sprite to the top of the stage. While using the application, users will be provided a boost by sustaining high engagement levels; however, users will be penalized for high relaxation levels. High relaxation levels and low engagement levels will cause the amoeba sprite to exhibit jittery motions. The instructions below will assist you with creating this neurofeedback application. If you are confused at any point during the task, please raise your hand and the study facilitator will assist you.

Amoeba

1. Add an “amoeba” sprite.

2. Create a variable named “speed” for the “amoeba” sprite.

3. Set y to “y + (random(0, speed) * Engagement)” when the up arrow key is pressed.

4. Set y to “y - (random(0, speed) * Engagement)” when the down arrow key is pressed.

5. Create a “when the green flag is clicked” event.

6. Set the speed variable to 2 when the green flag is clicked.

7. Mount the BCI device and test the application by moving the amoeba using the up and down arrow keys. Test whether engagement influences the amoeba’s motion.

8. Move the “amoeba” sprite to (0,-160) when the green flag is clicked.

9. Create a “forever block” that starts when the green flag is clicked.

10. Create an “If on edge, bounce” block within the “forever block” created in step 9.

11. Set x to “x + (random(-5, speed) * Alpha)” within the “forever block” created in step 9.

12. Set y to “y + (random(-5, speed) * Alpha)” within the “forever” block created in step 9.

13. Mount the BCI device and test the application by moving the amoeba using the up and down arrow keys. Check whether the random jittery motion responds to alpha EEG frequency band.

14. Create an “If block” that checks whether Engagement is less than 0.15.

15. Place the “If block” created in step 14 inside of the “forever block” created in step 9.

16. Inside of the “If block” created in step 14 do the following:

(a) Set y to “y - speed”. (b) Add a wait block with the value of 3s.

17. Mount the BCI device and test the application by moving the amoeba using the up and down arrow keys. Check whether the random jittery motion responds to Alpha levels. Test whether engagement influences the application.
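The following sketch (again an editorial illustration in plain Python, not study material) condenses the logic built in this task: key presses scale movement by the engagement reading, while a per-frame jitter term grows with the alpha reading.

    import random

    speed, x, y = 2, 0.0, 0.0

    def on_up_arrow(engagement):
        """Step 3: move up by a random amount scaled by engagement."""
        global y
        y += random.uniform(0, speed) * engagement

    def per_frame(alpha):
        """Steps 11-12: jitter that strengthens as the alpha reading rises."""
        global x, y
        x += random.uniform(-5, speed) * alpha
        y += random.uniform(-5, speed) * alpha

    on_up_arrow(engagement=0.9)   # simulated high engagement
    per_frame(alpha=0.3)          # simulated moderate relaxation
    print(x, y)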

A.8 Session Two Pre-Task Instructions

The goal of this task is to create a hybrid neurofeedback application featuring an amoeba and a shrimp. In this application, users will be able to use the keyboard and a BCI device to control the amoeba. The goal of the game is to use the keyboard to move

the amoeba to the left side of the stage. An additional goal is to map the amoeba’s speed to the engagement levels provided by the BCI system. Once the application is successfully developed, the amoeba will move faster to the left of the stage when engagement is higher. The game should also support collision. When the amoeba and shrimp sprites collide, the amoeba should move to a random location on the stage. The instructions below will assist you with creating this neurofeedback application. If you are confused at any point during the task, please raise your hand and the study facilitator will assist you.

Amoeba

1. Add an “amoeba” sprite.

2. Add a “shrimp” sprite.

3. Select the “amoeba” sprite.

4. Add blocks that move the amoeba sprite to the left when the left arrow key is pressed. Connect the speed of the amoeba to the level of engagement. For example, if engagement is low, when the left arrow key is pressed the amoeba should move slowly to the left. However, if engagement is high, the amoeba should move quickly to the left when the left arrow key is pressed

5. Mount the BCI device and test the application by trying to move the amoeba to the left side of the stage using the left key. Test whether engagement influences the amoeba’s movement.

6. Create a “forever block” that starts when the green flag is clicked.

7. Create an “If on edge, bounce” block within the “forever block” created in step 6.

8. Create an “If block” that checks whether the “amoeba” sprite is touching a “shrimp” sprite.

9. Place the “If block” created in step 8 inside of the “forever block” created in step 6.

10. Inside of the “If block” created in step 8 do the following:

(a) Add a wait block with the value of 1s. (b) Move the “amoeba” sprite to a random position.

11. Test the application by dragging the shrimp sprite over the amoeba sprite and confirming that the amoeba moves to a random location.

Shrimp

1. Select the “shrimp” sprite.

2. Create a variable named “hit” for the “shrimp” sprite.

3. Create a “forever block” that starts when the green flag is clicked.

4. Create an “If block” that checks whether the “shrimp” sprite is touching the “amoeba” sprite.

5. Place the “If block” created in step 4 inside of the “forever block” created in step 3.

6. Inside of the “If block” created in step 4 do the following:

(a) Increase the “hit” variable.

7. Test the application by dragging the shrimp sprite over the amoeba sprite and confirming that the “hit” variable updates. Test whether the amoeba sprite moves to a random position when it collides with the shrimp sprite.

A.9 Session Two Task Instructions

The goal of this task is to create a hybrid neurofeedback application featuring an amoeba and a bacteria character. In this application, users will be able to use the keyboard and a BCI device to control the amoeba. The goal of the application is to cause a collision between the amoeba and bacteria sprites. Along with the keyboard, the application will also use your level of relaxation. Once the application is complete, you will be able to move faster towards the bacteria when your relaxation levels increase. The bacteria will also move closer towards the center of the stage as your level of relaxation increases. As Beta increases, the amoeba should exhibit a jittery motion, causing it to be harder to control. The game should also support collision. When the amoeba and bacteria sprites collide, the bacteria should move back to its starting location. Your score should also increase. The instructions below will assist you with creating this neurofeedback application. If you are confused at any point during the task, please raise your hand and the study facilitator will assist you.

Amoeba

1. Add an “amoeba” sprite.

2. Add a “bacteria” sprite.

3. Select the “amoeba” sprite.

4. When the right arrow key is pressed, move the amoeba sprite to the right at a speed that is influenced by the level of relaxation.

5. When the left arrow key is pressed, move the amoeba sprite to the left at a speed that is influenced by the level of relaxation.

6. When the up arrow key is pressed, move the amoeba sprite up at a speed that is influenced by the level of relaxation.

7. When the down key is pressed, move the amoeba sprite down at a speed that is influenced by the level of relaxation.

8. Mount the BCI device and test the application by moving the amoeba using the up, down, left, and right arrow keys. Test whether relaxation levels influence the amoeba’s speed.

9. Create a “when the green flag is clicked” event.

10. Move the “amoeba” sprite to (0,0) when the green flag is clicked.

11. Create a “forever block” that starts when the green flag is clicked.

12. Create an “If on edge, bounce” block within the “forever block” created in step 11.

13. Set x to “x + (random(-5, 5) * Beta)” within the “forever block” created in step 11.

14. Set y to “y + (random(-5, 5) * Beta)” within the “forever block” created in step 11.

15. Test the application by moving the amoeba using the up and down arrow keys. Check whether the random jittery motion responds to the value of Beta.

16. Create an “If block” that checks whether the “amoeba” sprite is touching a “bacteria” sprite.

17. Place the “If block” created in step 16 inside of the “forever block” created in step 11.

18. Create a variable named “score” for the “amoeba” sprite.

19. Inside of the “If block” created in step 16 do the following:

(a) Increase the score variable.

(b) Move the “amoeba” sprite to (0,0).

20. Test the application by dragging the bacteria sprite over the amoeba sprite and confirming that the amoeba moves to (0, 0).

Bacteria

1. Select the “bacteria” sprite.

2. Create a “when the green flag is clicked” event.

3. Move the “bacteria” sprite to (-230, -160) when the green flag is clicked.

4. Create a “forever block” that starts when the green flag is clicked.

5. Create an “If block” that checks if relaxation is high.

6. Place the “If block” created in step 5 inside of the forever block created in step 4.

7. Inside of the “If block” created in step 5 do the following:

(a) Change the “x position” of the “bacteria” sprite by 5. (b) Change the “y position” of the “bacteria” sprite by 5. (c) Add a wait block with the value of 5s.

8. Test the application by checking whether the bacteria sprite moves when your relaxation level increases.

9. Create an “If block” that checks whether a “bacteria” sprite is touching the “amoeba” sprite.

10. Place the “If block” created in step 9 inside of the “forever block” created in step 4. Place this block after the “If block” created in step 5.

11. Inside of the “If block” created in step 9 do the following:

(a) Move the “bacteria” sprite to (-230, -160)

12. Test the application by trying to collide the amoeba sprite with the bacteria sprite. Test whether your relaxation (Alpha) level influences the motion of the amoeba and bacteria sprites. Test whether the score increases when the amoeba and bacteria collide.
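A compact sketch of the collision logic in this task (an editorial illustration, not study material; the touching test is a crude stand-in for the corresponding block):

    score = 0
    BACTERIA_HOME = (-230.0, -160.0)
    bacteria = list(BACTERIA_HOME)
    amoeba = [-225.0, -158.0]  # assume the player has steered next to the bacteria

    def touching(a, b, radius=20.0):
        """Crude overlap test standing in for the 'touching' block."""
        return abs(a[0] - b[0]) < radius and abs(a[1] - b[1]) < radius

    # One pass of the 'forever' loops on both sprites:
    if touching(amoeba, bacteria):
        score += 1                      # amoeba: increase the score variable
        amoeba[:] = [0.0, 0.0]          # amoeba returns to the center
        bacteria[:] = BACTERIA_HOME     # bacteria returns to its start position
    print(score, amoeba, bacteria)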

A.10 Session Three Pre-Task Instructions

The goal of this task is to create a hybrid neurofeedback application featuring an amoeba and multiple bacteria characters. In this application, you will use the keyboard and a BCI device to control the amoeba. The goal of the game is to use the keyboard to move the amoeba to the right and left sides of the stage and attempt to catch falling bacteria. The amoeba will move faster as your level of relaxation increases. The bacteria will fall faster as your level of engagement increases. When the amoeba collides with bacteria sprites, the player’s points should increase. The instructions below will assist you with creating this neurofeedback application. If you are confused at any point during the task, please raise your hand and the study facilitator will assist you.

Amoeba

1. Add an “amoeba” sprite.

2. Add a “bacteria” sprite.

3. Select the “amoeba” sprite.

4. When the right arrow key is pressed, move the amoeba sprite to the right at a speed that is influenced by the level of relaxation.

5. When the left arrow key is pressed, move the amoeba sprite to the left at a speed that is influenced by the level of relaxation.

6. Mount the BCI device and test the application by trying to move the amoeba to the left and right using the left and right arrow keys. Test whether alpha influences the amoeba’s movement.

7. Create a variable named “points” for the “amoeba” sprite.

8. Set y to -150 when the green flag is clicked.

9. Create a “forever block” that starts when the green flag is clicked.

10. Create an “If block” that checks whether the “amoeba” sprite is touching a “bacteria” sprite.

11. Place the “If block” created in step 10 inside of the “forever block” created in step 9.

12. Inside of the “If block” created in step 10 do the following:

(a) Change the points variable by 1. (b) Play the “splash” sound. (c) Add a wait block with the value of 1s.

Bacteria

1. Select the “bacteria” sprite.

2. Create a “when the green flag is clicked” event.

3. Create a “repeat block” that starts when the green flag is clicked.

4. Set the value of the “repeat block” created in step 3 to 4.

5. Create a clone of the “bacteria” sprite inside of the “repeat block” created in step 3.

6. Create an event that starts when a clone of the “bacteria” sprite is created.

7. Create a variable named “points” for the “amoeba” sprite.

8. Set x to random(-220, 220) when a clone of the “bacteria” sprite is created.

9. Create a “forever block” that starts when the “bacteria” clone is started.

10. Inside of the “forever block” created in step 9, make the bacteria clone fall down at a speed that is influenced by anxiety levels (caused by the game). For example, if anxiety levels are high, the bacteria should fall faster than when anxiety levels are low.

11. Create an “If block” that checks whether the “y position” of the bacteria sprite is less than -150.

12. Place the “If block” created in step 11 inside of the forever block created in step 9.

13. Inside of the “If block” created in step 11 do the following:

(a) Set y to 165 (b) Set x to random(-220, 220)

14. Test the application by trying to catch the bacteria sprites as they fall. Use the left and right arrow keys to move the amoeba horizontally. Investigate how the amoeba’s movement changes as your level of relaxation fluctuates. Also test whether the speed of the falling bacteria is influenced by anxiety levels (caused by the game).
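The cloning-and-falling behavior in steps 3-13 can be summarized in a few lines of Python (an editorial sketch, not study material; the anxiety reading and fall rate are placeholders):

    import random

    class BacteriaClone:
        def __init__(self):
            self.x = random.uniform(-220, 220)  # step 8: random horizontal start
            self.y = 165.0

        def fall(self, anxiety):
            """Step 10: fall faster as the game-induced anxiety level rises."""
            self.y -= 2 * anxiety
            if self.y < -150:                   # steps 11-13: wrap back to the top
                self.y = 165.0
                self.x = random.uniform(-220, 220)

    clones = [BacteriaClone() for _ in range(4)]  # steps 3-5: repeat 4, create clones
    for clone in clones:
        clone.fall(anxiety=0.8)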

A.11 Session Three Task Instructions

The goal of this task is to create a hybrid neurofeedback application featuring an amoeba, multiple bacteria characters, and multiple shrimp sprites. The objective of the application is to “eat” as many bacteria sprites as possible while avoiding the shrimp sprites. In this application, users will be able to use the keyboard and a BCI device to control the amoeba. The amoeba will move faster as your level of engagement increases. When the player’s level of relaxation is high, clones of the shrimp sprite will appear. To make the shrimp clones disappear, players will have to maintain a high level of attention. Colliding with these sprites will cause the player to lose points. When players lose all of their points, the amoeba sprite will return to the center of the stage. Colliding with the bacteria sprite will cause the player to gain points. As engagement drops, it should become harder to control the amoeba using the keyboard. The instructions below will assist you with creating this neurofeedback application. If you are confused at any point during the task, please raise your hand and the study facilitator will assist you.

Amoeba

1. Add an “amoeba” sprite.

2. Add a “shrimp” sprite.

3. Add a “bacteria” sprite.

4. Select the “amoeba” sprite.

5. Create a variable named “score” for the “amoeba” sprite.

6. When the right arrow key is pressed, move the amoeba sprite to the right at a random speed that is influenced by the level of engagement.

7. When the left arrow key is pressed, move the amoeba sprite to the left at a random speed that is influenced by the level of engagement.

8. When the up arrow key is pressed, move the amoeba sprite up at a random speed that is influenced by the level of engagement.

9. When the down arrow key is pressed, move the amoeba sprite down at a random speed that is influenced by the level of engagement.

10. Move the “amoeba” sprite to (0,0) when the green flag is clicked.

11. Mount the BCI device and test the application by trying to move the amoeba using the left, right, up, and down arrow keys. Test whether engagement influences the amoeba’s movement.

12. Create a “forever block” that starts when the green flag is clicked.

13. Create an “If on edge, bounce” block within the “forever block” created in step 12.

14. Move the amoeba slightly in a random direction at a speed influenced by alpha within the “forever block” created in step 12. For example, if alpha is high the amoeba should move rapidly in random directions. However, if alpha is low the amoeba should move slowly in random directions (or not at all).

15. Test the application by moving the amoeba using the arrow keys. Check whether the random jittery motion responds to the value of alpha.

16. Create an “If block” that checks whether the “amoeba” sprite is touching a “shrimp” sprite.

17. Place the “If block” created in step 16 inside of the “forever block” created in step 12.

18. Inside of the “If block” created in step 16 do the following:

(a) Change the score variable by -1. (b) Add a wait block with the value of 1s.

19. Create an “If block” that checks whether the score variable is less than 1.

20. Place the “If block” created in step 19 inside of the “forever block” created in step 12 (after the previous “If block”).

21. Inside of the “If block” created in step 19 do the following:

(a) Move the “amoeba” sprite to (0,0). (b) Set the “score” variable to 5.

Bacteria

1. Select the “bacteria” sprite.

2. Create a “when the green flag is clicked” event.

3. Create a variable named “score” for the “bacteria” sprite.

4. Set the “score” variable to 5 when the green flag is clicked.

5. Move the “bacteria” sprite to (-240,-170) when the green flag is clicked.

6. Create a “repeat block” that starts when the green flag is clicked.

7. Set the value of the “repeat block” created in step 6 to 10.

8. Create a clone of the “bacteria” sprite inside of the “repeat block” created in step 6.

9. Create an event that starts when a clone of the “bacteria” sprite is created.

10. Move the “bacteria” clone to a random position when it is started.

11. Test the application by clicking the green flag and making sure the bacteria clones appear.

12. Create a “forever block” that starts when the “bacteria” clone is started.

13. Create an “If block” that checks whether a “bacteria” sprite is touching the “amoeba” sprite.

14. Place the “If block” created in step 13 inside of the forever block created in step 12.

15. Inside of the “If block” created in step 13 do the following:

(a) Move the “bacteria” sprite to a random position. (b) Change the score by 1. (c) Play the “splash” sound.

16. Test the application by using the arrow keys to collide the amoeba with bacteria sprites. Confirm that decreasing alpha levels reduces jittery motion and makes controlling the amoeba easier.

Shrimp

1. Select the “shrimp” sprite.

2. Create a “when the green flag is clicked” event.

3. Move the “shrimp” sprite to a random position when the green flag is clicked.

4. Create a “forever block” that starts when the green flag is clicked.

5. Create an “If block” that checks whether relaxation levels are high.

6. Place this “If block” inside of the forever block created in step 4.

7. Inside of the “If block” created in step 5 do the following:

(a) Create a clone of the “shrimp” sprite. (b) Add a wait block with the value of 10s.

8. Create an event that starts when a clone of the “shrimp” sprite is created.

9. Move the “shrimp” clone to a random position when it is started.

10. Test the application by confirming that shrimp sprites appear when your relaxation level is very high.

11. Create a “forever block” that starts when the “shrimp” clone is started.

12. Create an “If block” that checks whether relaxation levels are low.

13. Place the “If block” created in step 12 inside of the forever block created in step 11.

14. Inside of the “If block” created in step 12 do the following:

(a) Delete the “shrimp” clone.

15. Test the application by trying to increase points while also dodging the shrimp sprite. Confirm that your engagement level influences the game and that your relaxation level causes the shrimp sprite to appear and disappear.
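Finally, the shrimp behavior in this task reduces to spawning clones while relaxation is high and deleting them once it drops. The sketch below (editorial, with made-up thresholds, not study material) captures that logic:

    shrimp_clones = []

    def update_shrimp(relaxation, high=0.7, low=0.3):
        """Clones appear while relaxation is high and vanish once it is low."""
        if relaxation > high:
            shrimp_clones.append("shrimp clone")   # steps 7a and 8-9
        if relaxation < low:
            shrimp_clones.clear()                  # steps 12-14: delete clones
        return len(shrimp_clones)

    print(update_shrimp(0.8))  # 1: a clone appears while relaxed
    print(update_shrimp(0.2))  # 0: clones are deleted once relaxation drops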

APPENDIX B
SELECTED NEUROBLOCK PROGRAMS

This appendix presents examples of neurofeedback programs created by participants.

B.1 Session One Pre-Task Program

Amoeba

B.2 Session One Task Program

Amoeba

B.3 Session Two Pre-Task Program

Amoeba

Shrimp

B.4 Session Two Task Program

Amoeba

Bacteria

B.5 Session Three Pre-Task Program

Amoeba

Bacteria

B.6 Session Three Task Program

Amoeba

Bacteria

Shrimp

REFERENCES

[1] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, “Brain-computer interfaces for communication and control,” Clinical Neurophysiology, vol. 113, no. 6, pp. 767–791, 2002.

[2] S. G. Mason and G. E. Birch, “A general framework for brain-computer interface design,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 11, no. 1, pp. 70–85, 2003.

[3] R. Caton, “Electrical currents of the brain,” The Journal of Nervous and Mental Disease, vol. 2, no. 4, p. 610, 1875.

[4] H. Berger, “Über das Elektrenkephalogramm des Menschen,” European Archives of Psychiatry and Clinical Neuroscience, vol. 87, no. 1, pp. 527–570, 1929.

[5] J. J. Vidal, “Toward direct brain-computer communication,” Annual Review of Biophysics and Bioengineering, vol. 2, no. 1, pp. 157–180, 1973.

[6] ——, “Real-time detection of brain events in EEG,” Proceedings of the IEEE, vol. 65, no. 5, pp. 633–641, 1977.

[7] C. Guger, A. Schlögl, C. Neuper, D. Walterspacher, T. Strein, and G. Pfurtscheller, “Rapid prototyping of an EEG-based brain-computer interface (BCI),” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 9, no. 1, pp. 49–58, 2001.

[8] A. Delorme and S. Makeig, “EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis,” Journal of Neuroscience Methods, vol. 134, no. 1, pp. 9–21, 2004.

[9] G. Schalk, D. J. McFarland, T. Hinterberger, N. Birbaumer, and J. R. Wolpaw, “BCI2000: a general-purpose brain-computer interface (BCI) system,” IEEE Transactions on Biomedical Engineering, vol. 51, no. 6, pp. 1034–1043, 2004.

[10] C. Brunner, G. Andreoni, L. Bianchi, B. Blankertz, C. Breitwieser, S. Kanoh, C. A. Kothe, A. Lécuyer, S. Makeig, and J. Mellinger, “BCI software platforms,” in Towards Practical Brain-Computer Interfaces. Springer, 2012, pp. 303–331.

[11] P. Brunner, L. Bianchi, C. Guger, F. Cincotti, and G. Schalk, “Current trends in hardware and software for brain-computer interfaces (BCIs),” Journal of Neural Engineering, vol. 8, no. 2, p. 025001, 2011.

[12] T. Carlson and J. del R. Millán, “Brain-controlled wheelchairs: a robotic architecture,” IEEE Robotics & Automation Magazine, vol. 20, no. 1, pp. 65–73, 2013.

[13] I. Iturrate, J. M. Antelis, A. Kübler, and J. Minguez, “A noninvasive brain-actuated wheelchair based on a P300 neurophysiological protocol and automated navigation,” IEEE Transactions on Robotics, vol. 25, no. 3, pp. 614–627, 2009.

[14] F. Galán, M. Nuttin, E. Lew, P. W. Ferrez, G. Vanacker, J. Philips, and J. del R. Millán, “A brain-actuated wheelchair: asynchronous and non-invasive brain-computer interfaces for continuous control of robots,” Clinical Neurophysiology, vol. 119, no. 9, pp. 2159–2169, 2008.

[15] G. R. Müller-Putz, R. Scherer, G. Pfurtscheller, and R. Rupp, “EEG-based neuroprosthesis control: a step towards clinical practice,” Neuroscience Letters, vol. 382, no. 1, pp. 169–174, 2005.

[16] N. Birbaumer, N. Ghanayim, T. Hinterberger, I. Iversen, B. Kotchoubey, A. Kübler, J. Perelmouter, E. Taub, and H. Flor, “A spelling device for the paralysed,” Nature, vol. 398, no. 6725, pp. 297–298, 1999.

[17] J. Williamson, R. Murray-Smith, B. Blankertz, M. Krauledat, and K.-R. Müller, “Designing for uncertain, asymmetric control: Interaction design for brain-computer interfaces,” International Journal of Human-Computer Studies, vol. 67, no. 10, pp. 827–841, 2009.

[18] G. Pfurtscheller, C. Guger, G. Müller, G. Krausz, and C. Neuper, “Brain oscillations control hand orthosis in a tetraplegic,” Neuroscience Letters, vol. 292, no. 3, pp. 211–214, 2000.

[19] E. Buch, C. Weber, L. G. Cohen, C. Braun, M. A. Dimyan, T. Ard, J. Mellinger, A. Caria, S. Soekadar, A. Fourkas, and N. Birbaumer, “Think to move: a neuromagnetic brain-computer interface (BCI) system for chronic stroke,” Stroke, vol. 39, no. 3, pp. 910–917, 2008.

[20] J. van Erp, F. Lotte, and M. Tangermann, “Brain-computer interfaces: beyond medical applications,” Computer, vol. 45, no. 4, pp. 26–34, 2012.

[21] C.-T. Lin, C.-J. Chang, B.-S. Lin, S.-H. Hung, C.-F. Chao, and I.-J. Wang, “A real-time wireless brain-computer interface system for drowsiness detection,” IEEE Transactions on Biomedical Circuits and Systems, vol. 4, no. 4, pp. 214–222, 2010.

[22] E. Cutrell and D. Tan, “BCI for passive input in HCI,” in Proceedings of CHI, vol. 8. Citeseer, 2008, pp. 1–3.

[23] D. Grimes, D. S. Tan, S. E. Hudson, P. Shenoy, and R. P. Rao, “Feasibility and pragmatics of classifying working memory load with an electroencephalograph,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2008, pp. 835–844.

[24] G. Chanel, C. Rebetez, M. Bétrancourt, and T. Pun, “Emotion assessment from physiological signals for adaptation of game difficulty,” IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, vol. 41, no. 6, pp. 1052–1063, 2011.

[25] D. Szafir and B. Mutlu, “Pay attention!: designing adaptive agents that monitor and improve user engagement,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2012, pp. 11–20.

[26] C. S. Crawford, M. Andujar, F. Jackson, S. Remy, and J. E. Gilbert, “User experience evaluation towards cooperative brain-robot interaction,” in International Conference on Human-Computer Interaction. Springer, 2015, pp. 184–193.

[27] J. Reis, A. Portugal, M. Pereira, and N. Dias, “Alpha and theta intensive neurofeedback protocol for age-related cognitive deficits,” in Neural Engineering (NER), 2015 7th International IEEE/EMBS Conference on. IEEE, 2015, pp. 715–718.

[28] D. Ariely and G. S. Berns, “Neuromarketing: the hope and hype of neuroimaging in business,” Nature Reviews Neuroscience, vol. 11, no. 4, pp. 284–292, 2010.

[29] J. A. Wilson, C. Guger, and G. Schalk, “BCI hardware and software,” in Brain-Computer Interfaces: Principles and Practice, pp. 165–188, 2012.

[30] M. Boshernitsan and M. S. Downes, Visual Programming Languages: A Survey. Citeseer, 2004.

[31] C. Kelleher and R. Pausch, “Lowering the barriers to programming: A taxonomy of programming environments and languages for novice programmers,” ACM Computing Surveys (CSUR), vol. 37, no. 2, pp. 83–137, 2005.

[32] A. J. Ko, R. Abraham, L. Beckwith, A. Blackwell, M. Burnett, M. Erwig, C. Scaffidi, J. Lawrance, H. Lieberman, and B. Myers, “The state of the art in end-user software engineering,” ACM Computing Surveys (CSUR), vol. 43, no. 3, p. 21, 2011.

[33] D. Weintrop and U. Wilensky, “To block or not to block, that is the question: students’ perceptions of blocks-based programming,” in Proceedings of the 14th International Conference on Interaction Design and Children. ACM, 2015, pp. 199–208.

[34] A. F. Blackwell and R. Hague, “AutoHAN: An architecture for programming the home,” in Human-Centric Computing Languages and Environments, 2001. Proceedings IEEE Symposia on. IEEE, 2001, pp. 150–157.

[35] C. Letondal, “Participatory programming: Developing programmable bioinformatics tools for end-users,” in End User Development. Springer, 2006, pp. 207–242.

[36] A. Millner and E. Baafi, "Modkit: blending and extending approachable platforms for creating computer programs and interactive objects," in Proceedings of the 10th International Conference on Interaction Design and Children. ACM, 2011, pp. 250–253.
[37] C. S. Crawford, M. Andujar, F. Jackson, I. Applyrs, and J. E. Gilbert, "Using a visual programing language to interact with visualizations of electroencephalogram signals," 2016.
[38] A. Nijholt, B. Z. Allison, and R. J. Jacob, "Brain-computer interaction: can multimodality help?" in Proceedings of the 13th International Conference on Multimodal Interfaces. ACM, 2011, pp. 35–40.
[39] B. Venthur, "Design and implementation of a brain computer interface system," 2015.
[40] S. Tilkov and S. Vinoski, "Node.js: using JavaScript to build high-performance network programs," IEEE Internet Computing, vol. 14, no. 6, pp. 80–83, 2010.
[41] C. Bogart, C. Kästner, J. Herbsleb, and F. Thung, "How to break an API: cost negotiation and community values in three software ecosystems," in Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering. ACM, 2016, pp. 109–120.
[42] C. Mühl, H. Gürkök, D. P.-O. Bos, M. E. Thurlings, L. Scherffig, M. Duvinage, A. A. Elbakyan, S. Kang, M. Poel, and D. Heylen, "Bacteria Hunt: a multimodal, multiparadigm BCI game," 2010.
[43] N. Chu, "How should the state of the brain be described?: a call to standardize descriptions of brain states for data collection and research," IEEE Consumer Electronics Magazine, vol. 5, no. 3, pp. 55–59, 2016.
[44] F. Nijboer, F. O. Morin, S. P. Carmien, R. A. Koene, E. Leon, and U. Hoffmann, "Affective brain-computer interfaces: psychophysiological markers of emotion in healthy persons and in persons with amyotrophic lateral sclerosis," in Affective Computing and Intelligent Interaction and Workshops, 2009. ACII 2009. 3rd International Conference on. IEEE, 2009, pp. 1–11.
[45] A. Bangor, P. T. Kortum, and J. T. Miller, "An empirical evaluation of the System Usability Scale," International Journal of Human-Computer Interaction, vol. 24, no. 6, pp. 574–594, 2008.
[46] D. R. Compeau and C. A. Higgins, "Computer self-efficacy: development of a measure and initial test," MIS Quarterly, pp. 189–211, 1995.
[47] B. Allison, J. Huggins, and J. Pineda, "Survey results regarding BCI terminology," in Sixth International Brain-Computer Interface Conference 2014, 2014.

[48] C. A. Kothe and S. Makeig, "BCILAB: a platform for brain-computer interface development," Journal of Neural Engineering, vol. 10, no. 5, p. 056014, 2013.
[49] J. R. Wolpaw and E. W. Wolpaw, "Brain-computer interfaces: something new under the sun," in Brain-Computer Interfaces: Principles and Practice, pp. 3–12, 2012.
[50] A. Schlögl, C. Brunner, R. Scherer, and A. Glatz, "BioSig: an open-source software library for BCI research," in Toward Brain-Computer Interfacing, p. 347, 2007.
[51] B. Venthur, S. Scholler, J. Williamson, S. Dähne, M. S. Treder, M. T. Kramarek, K.-R. Müller, and B. Blankertz, "Pyff—a Pythonic framework for feedback applications and stimulus presentation in neuroscience," Frontiers in Neuroscience, vol. 4, p. 179, 2010.
[52] B. Venthur and B. Blankertz, "Mushu, a free and open source BCI signal acquisition, written in Python," in Engineering in Medicine and Biology Society (EMBC), 2012 Annual International Conference of the IEEE. IEEE, 2012, pp. 1786–1788.
[53] B. Venthur, S. Dähne, J. Höhne, H. Heller, and B. Blankertz, "Wyrm: a brain-computer interface toolbox in Python," Neuroinformatics, vol. 13, no. 4, pp. 471–486, 2015.
[54] Y. Renard, F. Lotte, G. Gibert, M. Congedo, E. Maby, V. Delannoy, O. Bertrand, and A. Lécuyer, "OpenViBE: an open-source software platform to design, test, and use brain-computer interfaces in real and virtual environments," Presence: Teleoperators and Virtual Environments, vol. 19, no. 1, pp. 35–53, 2010.
[55] W. Feurzeig, "Programming-languages as a conceptual framework for teaching mathematics. Final report on the first fifteen months of the Logo project," 1969.
[56] M. Conway, S. Audia, T. Burnette, D. Cosgrove, and K. Christiansen, "Alice: lessons learned from building a 3D system for novices," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2000, pp. 486–493.
[57] W. P. Dann, S. Cooper, and R. Pausch, Learning to Program with Alice (w/CD ROM). Prentice Hall Press, 2011.
[58] S. Fincher, S. Cooper, M. Kölling, and J. Maloney, "Comparing Alice, Greenfoot & Scratch," in Proceedings of the 41st ACM Technical Symposium on Computer Science Education. ACM, 2010, pp. 192–193.
[59] B. Moskal, D. Lurie, and S. Cooper, "Evaluating the effectiveness of a new instructional approach," ACM SIGCSE Bulletin, vol. 36, no. 1, pp. 75–79, 2004.
[60] P. Mullins, D. Whitfield, and M. Conlon, "Using Alice 2.0 as a first language," Journal of Computing Sciences in Colleges, vol. 24, no. 3, pp. 136–143, 2009.

[61] K. Johnsgard and J. McDonald, "Using Alice in overview courses to improve success rates in Programming I," in Software Engineering Education and Training, 2008. CSEET'08. IEEE 21st Conference on. IEEE, 2008, pp. 129–136.
[62] W. Dann and S. Cooper, "Education: Alice 3: concrete to abstract," Communications of the ACM, vol. 52, no. 8, pp. 27–29, 2009.
[63] M. Resnick, J. Maloney, A. Monroy-Hernández, N. Rusk, E. Eastmond, K. Brennan, A. Millner, E. Rosenbaum, J. Silver, and B. Silverman, "Scratch: programming for all," Communications of the ACM, vol. 52, no. 11, pp. 60–67, 2009.
[64] T. Booth and S. Stumpf, "End-user experiences of visual and textual programming environments for Arduino," in International Symposium on End User Development. Springer, 2013, pp. 25–39.
[65] D. J. Malan and H. H. Leitner, "Scratch for budding computer scientists," ACM SIGCSE Bulletin, vol. 39, no. 1, pp. 223–227, 2007.
[66] M. Armoni, O. Meerbaum-Salant, and M. Ben-Ari, "From Scratch to real programming," ACM Transactions on Computing Education (TOCE), vol. 14, no. 4, p. 25, 2015.
[67] J. H. Maloney, K. Peppler, Y. Kafai, M. Resnick, and N. Rusk, Programming by Choice: Urban Youth Learning Programming with Scratch. ACM, 2008, vol. 40, no. 1.
[68] Lifelong Kindergarten Group at the MIT Media Lab. Scratch - imagine, program, share. [Online]. Available: https://scratch.mit.edu/statistics/
[69] TIOBE Software BV. TIOBE index - the software quality company. [Online]. Available: https://www.tiobe.com/tiobe-index/
[70] S. C. Pokress and J. J. D. Veiga, "MIT App Inventor: enabling personal mobile computing," arXiv preprint arXiv:1310.2830, 2013.
[71] N. Fraser, "Blockly: a visual programming editor," Google, 2013.
[72] J. Gray, H. Abelson, D. Wolber, and M. Friend, "Teaching CS Principles with App Inventor," in Proceedings of the 50th Annual Southeast Regional Conference. ACM, 2012, pp. 405–406.
[73] D. Wolber, "App Inventor and real-world motivation," in Proceedings of the 42nd ACM Technical Symposium on Computer Science Education. ACM, 2011, pp. 601–606.
[74] K. Roy, "App Inventor for Android: report from a summer camp," in Proceedings of the 43rd ACM Technical Symposium on Computer Science Education. ACM, 2012, pp. 283–288.

[75] A. Wagner, J. Gray, J. Corley, and D. Wolber, "Using App Inventor in a K-12 summer camp," in Proceedings of the 44th ACM Technical Symposium on Computer Science Education. ACM, 2013, pp. 621–626.
[76] M. MacLaurin, "Kodu: end-user programming and design for games," in Proceedings of the 4th International Conference on Foundations of Digital Games. ACM, 2009, p. 2.
[77] P. Henriksen and M. Kölling, "Greenfoot: combining object visualisation with interaction," in Companion to the 19th Annual ACM SIGPLAN Conference on Object-Oriented Programming Systems, Languages, and Applications. ACM, 2004, pp. 73–82.
[78] A. Begel and E. Klopfer, "StarLogo TNG: an introduction to game development," Journal of E-Learning, 2007.
[79] C. Duncan, T. Bell, and S. Tanimoto, "Should your 8-year-old learn coding?" in Proceedings of the 9th Workshop in Primary and Secondary Computing Education. ACM, 2014, pp. 60–69.
[80] K. Wang, C. McCaffrey, D. Wendel, and E. Klopfer, "3D game design with programming blocks in StarLogo TNG," in Proceedings of the 7th International Conference on Learning Sciences. International Society of the Learning Sciences, 2006, pp. 1008–1009.
[81] D. Krebs, A. Conrad, and J. Wang, "Combining visual block programming and graph manipulation for clinical alert rule building," in CHI'12 Extended Abstracts on Human Factors in Computing Systems. ACM, 2012, pp. 2453–2458.
[82] M. Ángeles Serna, C. J. Sreenan, and S. Fedor, "A visual programming framework for wireless sensor networks in smart home applications," in Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on. IEEE, 2015, pp. 1–6.
[83] R. Krepki, B. Blankertz, G. Curio, and K.-R. Müller, "The Berlin brain-computer interface (BBCI): towards a new communication channel for online control in gaming applications," Multimedia Tools and Applications, vol. 33, no. 1, pp. 73–90, 2007.
[84] T. W. Price and T. Barnes, "Comparing textual and block interfaces in a novice programming environment," in Proceedings of the Eleventh Annual International Conference on International Computing Education Research. ACM, 2015, pp. 91–99.
[85] C. Kelleher and R. Pausch, "Lowering the barriers to programming: a taxonomy of programming environments and languages for novice programmers," ACM Computing Surveys (CSUR), vol. 37, no. 2, pp. 83–137, 2005.

[86] R. T. Pivik, R. J. Broughton, R. Coppola, R. J. Davidson, N. Fox, and M. R. Nuwer, "Guidelines for the recording and quantitative analysis of electroencephalographic activity in research contexts," Psychophysiology, vol. 30, no. 6, pp. 547–558, 1993.
[87] M. A. Lebedev, A. Messinger, J. D. Kralik, and S. P. Wise, "Representation of attended versus remembered locations in prefrontal cortex," PLoS Biology, vol. 2, no. 11, p. e365, 2004.
[88] C. Kothe, "Lab streaming layer (LSL)," 2014. [Online]. Available: https://github.com/sccn/labstreaminglayer. Accessed October 26, 2015.
[89] C. Mühl, B. Allison, A. Nijholt, and G. Chanel, "A survey of affective brain computer interfaces: principles, state-of-the-art, and challenges," Brain-Computer Interfaces, vol. 1, no. 2, pp. 66–84, 2014.
[90] InteraXon. Available data - Muse developers. [Online]. Available: http://developer.choosemuse.com/research-tools/available-data
[91] J. Maloney, M. Resnick, N. Rusk, B. Silverman, and E. Eastmond, "The Scratch programming language and environment," ACM Transactions on Computing Education (TOCE), vol. 10, no. 4, p. 16, 2010.
[92] R. V. Roque, OpenBlocks: An Extendable Framework for Graphical Block Programming Systems, 2007.
[93] A. T. Pope, E. H. Bogart, and D. S. Bartolome, "Biocybernetic system evaluates indices of operator engagement in automated task," Biological Psychology, vol. 40, no. 1, pp. 187–195, 1995.
[94] A. Dzialocha. osc-js. [Online]. Available: https://www.npmjs.com/package/osc-js
[95] J. Brooke, "SUS: a quick and dirty usability scale," in Usability Evaluation in Industry, vol. 189, no. 194, pp. 4–7, 1996.
[96] C. Mühl, H. Gürkök, D. P.-O. Bos, M. E. Thurlings, L. Scherffig, M. Duvinage, A. A. Elbakyan, S. Kang, M. Poel, and D. Heylen, "Bacteria Hunt," Journal on Multimodal User Interfaces, vol. 4, no. 1, pp. 11–25, 2010.
[97] T. J. McCabe, "A complexity measure," IEEE Transactions on Software Engineering, no. 4, pp. 308–320, 1976.
[98] E. Aivaloglou and F. Hermans, "How kids code and how we know: an exploratory study on the Scratch repository," in Proceedings of the 2016 ACM Conference on International Computing Education Research. ACM, 2016, pp. 53–61.
[99] J. Corbin and A. Strauss, "Basics of qualitative research: techniques and procedures for developing grounded theory," 2008.

[100] B. H. Cho, J.-M. Lee, J. Ku, D. P. Jang, J. Kim, I.-Y. Kim, J.-H. Lee, and S. I. Kim, "Attention enhancement system using virtual reality and EEG biofeedback," in Virtual Reality, 2002. Proceedings. IEEE. IEEE, 2002, pp. 156–163.
[101] A. J. Ko, B. A. Myers, and H. H. Aung, "Six learning barriers in end-user programming systems," in Visual Languages and Human Centric Computing, 2004 IEEE Symposium on. IEEE, 2004, pp. 199–206.
[102] A. Bandura, Social Foundations of Thought and Action: A Social Cognitive Theory. Englewood Cliffs, NJ: Prentice-Hall, 1986.
[103] M. B. Kinzie, M. A. Delcourt, and S. M. Powers, "Computer technologies: attitudes and self-efficacy across undergraduate disciplines," Research in Higher Education, vol. 35, no. 6, pp. 745–768, 1994.
[104] A. N. Antle, L. Chesick, A. Levisohn, S. K. Sridharan, and P. Tan, "Using neurofeedback to teach self-regulation to children living in poverty," in Proceedings of the 14th International Conference on Interaction Design and Children. ACM, 2015, pp. 119–128.
[105] R. Koitz and W. Slany, "Empirical comparison of visual to hybrid formula manipulation in educational programming languages for teenagers," in Proceedings of the 5th Workshop on Evaluation and Usability of Programming Languages and Tools. ACM, 2014, pp. 21–30.
[106] S. I. Hjelm and C. Browall, "Brainball: using brain activity for cool competition," in Proceedings of NordiCHI, vol. 7, 2000.
[107] L. Beckwith, D. Inman, K. Rector, and M. Burnett, "On to the real world: gender and self-efficacy in Excel," in Visual Languages and Human-Centric Computing, 2007. VL/HCC 2007. IEEE Symposium on. IEEE, 2007, pp. 119–126.
[108] L. Beckwith, M. Burnett, S. Wiedenbeck, C. Cook, S. Sorte, and M. Hastings, "Effectiveness of end-user debugging software features: are there gender issues?" in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2005, pp. 869–878.
[109] P. J. Guo, "Online Python Tutor: embeddable web-based program visualization for CS education," in Proceedings of the 44th ACM Technical Symposium on Computer Science Education. ACM, 2013, pp. 579–584.

BIOGRAPHICAL SKETCH
Chris S. Crawford was born in 1990 in Tuscaloosa, Alabama. He grew up in Knoxville, Alabama, and graduated as salutatorian of Greene County High School. He attended the University of Alabama and graduated cum laude with a Bachelor of Science in computer science in 2012. In 2017, he received his Doctor of Philosophy in human-centered computing from the University of Florida under the supervision of Dr. Juan E. Gilbert.
